Sherlock Holmes as Cyborg and the Future of Retail

Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.

Part of the genius of the series is that the characters around Sherlock, especially Watson, are constantly trying to press upon him the fact that he is indeed human, much in the same way Bones is the emotional foil to Spock.

Sherlock uses an ingenious device to display Holmes’ famous powers of deduction. When Sherlock focuses his attention on a character, words float around them displaying some relevant piece of information: say, the price of a character’s shoes, what kind of razor they used, or what they did the night before. The first couple of times I watched the series I had the eerie feeling that I’d seen this device before, but I couldn’t put my finger on it. And then it hit me: I’d seen it in a piece of design fiction called Sight that I’d written about a while back.

In Sight the male character is equipped with contact lenses that act as an advanced form of Google Glass. This allows him to surreptitiously access information such as the social profile and real-time emotional reactions of a woman he is out on a date with. The whole thing is downright creepy and appears to end with the woman’s rape and perhaps even her murder; the viewer is left to guess the outcome.

It’s not only the style of heads-up display, crowded with intimate personal detail, that put me in mind of the BBC’s Sherlock, whose hero has these cyborg capabilities built into his very nature rather than supplied by technology; it’s also the idea that a sea of potentially useful information is just sitting there on someone’s face.

In the future, it seems, anyone who wants to will have Sherlock Holmes-style powers of deduction, but that got me thinking: who in the world would want to? I mean, it’s not as if we’re not already bombarded with streams of useless data we are unable to process. Access to Sight-level amounts of information about everyone we came into contact with and didn’t personally know would squeeze our mental bandwidth down to dial-up speed.

I think the not-knowing part is important, because you’d really only want a narrow stream of new information about a person you were already well acquainted with. Something like the information people now put on their Facebook wall. You know, it’s so-and-so’s niece’s damned birthday, and your monster-truck driving, barrel-necked cousin Tony just bagged something that looks like a mastodon on his hunting trip to North Dakota.

Certainly the ability to scan a person like a QR code would come in handy for police or spooks, and will likely be used by even more sophisticated criminals and creeps than the ones we have now. These groups work in a very narrow gap with people they don’t know all the time. Aside from some medical professionals, there’s one other large group I can think of that works in this same narrow gap, and that would regularly benefit from having the maximum amount of information available about the individual standing in front of them: salespeople.

Think about a high-end retailer such as a jeweler, or perhaps more commonly an electronics outlet such as the Apple Store. It would certainly be of benefit to a salesperson to be able to instantly gather details about what an unknown person who walked into the store did for a living, their marital status, number of family members, and even their criminal record. There is also the matter of making the sale itself, and here the kinds of feedback data seen in Sight would come in handy. Such feedback data is already possible across multiple technologies and only needs to be combined. All one would need is a name, or perhaps even just a picture.

Imagine it this way: you walk into a store to potentially purchase a high-end product. The salesperson wears the equivalent of Google Glass. They ask for your name and you give it to them. The salesperson is able, without you ever knowing, to gather up everything publicly available about you on the web, after which they can buy your profile, purchasing, and browser history, again surreptitiously, perhaps by just blinking, from a big data company like Acxiom, and tailor their pitch to you. This is similar to what happens now when you are solicited through targeted ads while browsing the Internet, and perhaps the real future of such targeted advertising, as the success of Facebook in mobile shows, lies in the physical rather than the virtual world.
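The pipeline imagined in this scenario is simple enough to caricature in code. Here is a purely hypothetical Python sketch; the profile records, the stand-in “data broker,” and the build_pitch function are all invented for illustration, not real data sources or APIs:

```python
# Hypothetical sketch: name -> public profile -> purchased history -> tailored pitch.
# The dictionaries below stand in for a web scrape and a paid broker record.

PUBLIC_WEB = {
    "jane doe": {"occupation": "architect", "employer": "Acme Design"},
}

DATA_BROKER = {  # stand-in for a commercial data broker's paid records
    "jane doe": {
        "recent_purchases": ["tablet", "noise-cancelling headphones"],
        "browsing_interests": ["photography", "travel"],
    },
}

def build_pitch(name: str) -> str:
    """Assemble a tailored sales pitch from whatever is findable by name."""
    key = name.lower()
    profile = PUBLIC_WEB.get(key, {})
    history = DATA_BROKER.get(key, {})
    interests = history.get("browsing_interests", [])
    if interests:
        # Lead with the customer's top browsing interest, framed for their occupation.
        return (f"Pitch the new {interests[0]} accessories "
                f"to the {profile.get('occupation', 'customer')}.")
    return "No data found; fall back to the generic pitch."

print(build_pitch("Jane Doe"))
```

The point of the toy is how little input is needed: a name alone joins the public data to the purchased history and selects the pitch.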

In the Apple Store example, the salesperson would be able to know what products you owned and your use patterns, perhaps getting some of this information directly from your phone, and therefore be able to pitch to you the accessories and upgrades most likely to make a sale.

The next layer, reading your emotional reactions, is a little trickier, but again much of the technology already exists or is in development. We’ve all heard those annoying messages when dealing with customer service over the phone that bleat at us, “This call may be monitored….” One might think this recording is done as insurance against lawsuits, and that is certainly one of the reasons. But, as Christopher Steiner points out in his Automate This, another major reason for these recordings is to refine algorithms that help customer service representatives filter customers by emotional type and interact with customers according to such types.
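The kind of filtering Steiner describes can be caricatured in a few lines. This is a toy, rule-based sketch; real systems are trained statistical models over acoustic features, and every threshold and label here is invented for illustration:

```python
# Toy sketch of sorting callers into coarse emotional types from crude
# call features. The feature set, thresholds, and labels are all invented.

def classify_caller(words_per_minute: float, interruptions: int,
                    negative_words: int) -> str:
    """Bucket a caller into a rough emotional type from crude call features."""
    if negative_words > 3 or interruptions > 2:
        return "frustrated"   # route to a calm, apologetic script
    if words_per_minute > 180:
        return "hurried"      # keep the interaction short and direct
    return "neutral"          # standard script

print(classify_caller(words_per_minute=150, interruptions=4, negative_words=1))
```

A trained model would replace the hand-set thresholds, but the output would serve the same purpose: a coarse emotional bucket that picks which script the representative follows.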

These types of algorithms will only get better, and work on social robots, used largely for medical care and emotional therapy, is moving the rapid-fire algorithmic gauging of and response to human emotions from the audio to the visual realm.

If this use of Sherlock Holmes-type powers by those with a power or wealth asymmetry over you makes you uncomfortable, I’m right there with you. But when it gets you down, you might try laughing about it. One thing we need to keep in mind, both for sanity’s sake and so that we can have a more accurate gauge of how people in the future might preserve elements of their humanity in the face of technological capacities we today find new and often alien, is that it would have to be a very dark future indeed for there not to be a lot to laugh at in it.

Writers of utopia can be deliberately funny; Thomas More’s Utopia is meant to crack the reader up. Dystopian visions, whether of the fictional or nonfictional sort, avoid humor for a reason. The whole point of their work is to get us to avoid such a future in the first place, not, as is the role of humor, to make almost unlivable situations more human, or to undermine power by making it ridiculous, to point out the emperor has no clothes, and to defeat the devil by laughing at him. Dystopias are a laughless affair.

Just as in Sherlock, it’s a tough trick to pull off, but human beings of the future, as long as there actually are still human beings, will still find the world funny. In terms of technology used as a tool of power, the powerless, as they always have, are likely to get their kicks subverting it, twisting it to the point of breaking its underlying assumptions.

Laughs will be had at epic fails and the sheer ridiculousness of control freaks trying to squish an unruly, messy world into frozen and pristine lines of code. Life in the future will still sometimes feel like a sitcom, even if the sitcoms of that time are pumped directly into our brains through nano-scale neural implants.

Utopias and dystopias emerging from technology are two sides of a crystal-clear future which, because of their very clarity, cannot come to pass. What makes ethical judgement of the technologies discussed above, indeed all technology, difficult is their damned ambiguity, an ambiguity that largely stems from dual use.

Persons suffering from autism really would benefit from a technology that allowed them to accurately gauge and guide their responses to the emotional cues of others. An EMT really would be empowered if, at the scene of a bad accident, they could instantly access such a stream of information just from getting a person’s name or even looking at their face, especially when such data contains relevant medical information, as would a person working in child protective services and myriad forms of counseling.

Without doubt, such technology will be used by stalkers and creeps, but it might also be used to help restore trust and emotional rapport to a couple headed for divorce.

I think Sherry Turkle is essentially right that the more we turn to technology to meet our emotional needs, the less we turn to each other. Still, the real issue isn’t technology itself but how we choose to use it, because, cliché as it is to say, technology by itself is devoid of any morality and meaning. Using technology to create a more emotionally supportive and connected world is a good thing.

As Louise Aronson said in a recent article for The New York Times on social robots to care for the elderly and disabled:

But the biggest argument for robot caregivers is that we need them. We do not have anywhere near enough human caregivers for the growing number of older Americans. Robots could help solve this work-force crisis by strategically supplementing human care. Equally important, robots could decrease high rates of neglect and abuse of older adults by assisting overwhelmed human caregivers and replacing those who are guilty of intentional negligence or mistreatment.

Our Sisyphean condition is that any gain in our capacity to do good seems also to increase our capacity to do ill. The key, I think, lies in finding ways to contain and control the ill effects. I wouldn’t mind our versions of Sherlock Holmes using the cyborg-like powers we are creating, but I think we should be more than a little careful they don’t also fall into the hands of our world’s far too many Moriarties, though no matter how devilish these characters might be, we will still be able to mock them as buffoons.

What’s Wrong With Borgdom?

Lately, I’ve had superorganisms on the brain. My initial plan for my post this week was to do a piece comparing earthly superorganisms like the leafcutter ant with the most well known of science-fiction superorganisms: the Borg. Two things happened that diverted me from this path. First, the blogger and minimalist beekeeper James Cross shared with me an article he wrote that largely said what I wanted to say about the Borg and insects as well as or better than I could have. Second, during the course of my research I ran into a recent article that stuck in my craw, which I felt compelled to address.

Someday I will return to the Borg, but for now, that article I mentioned. The article, by Travis James Leland, I found on the Institute for Ethics and Emerging Technologies website; it is entitled “We Are the Borg, And That is a Good Thing.”

Leland’s point in the article is that many of the positive changes technology has brought to our lives in recent decades were first imagined in science-fiction, and one of the most powerful of these visions has been that of Star Trek.

But while Star Trek has this incredibly powerful vision of a hopeful technological world, it also has a “dark side” in the form of the Borg. This darker view of technology, Leland thinks, leads to neo-luddism: the phrase “resistance is futile,” with which the Borg “introduce” themselves to species they are about to assimilate, now greets every new technology and frankly scares people into seeing our technological future in a much more dismal light than should be the case. Leland hopes to cast doubt on this pessimistic vision:

But is it such a bad thing for humanity to want to become a collective? Isn’t one of the main selling points of the internet, social media, etc. the fact that we are all now closer than ever? What I write on my Twitter account can be read by hundreds or even thousands of people instantly. They know what I am thinking, and I can see what is on the mind of all the people I follow. Facebook allows me to share videos, photos, music, status updates and more (although I rarely use Facebook anymore, but I plan on returning to it). Foursquare, Google+, LinkdIn, Skype, and all the other apps and social media are being used to keep us constantly “plugged in” to our peers, our favorite celebrities, causes, politicians, businesses and anyone or anything else we want.

Leland completely embraces the cyborg-transhumanist ideal of the merger of human beings with their technology. In other words, he wants to be a cyborg:

I have been keeping up on the research into Google Glasses, Augmented Reality, implanted microchips, prosthetic limbs, brain uploading and more and I have to say “bring it on!”

For Leland, opposition to, or fear of, cyborg-transhumanism isn’t a matter of ethical, egalitarian, or spiritual objections; it’s all about bad “PR,” and we have Star Trek’s Borg partially to blame. If opposition to cyborg-transhumanism is all about spin, then what is needed to undo such opposition is spin in the opposite direction, to make cyborg-transhumanism “sexier.” He states:

This one will all come down to advertising, I think.

Leland then shares two videos that he thinks make the positive case for cyborgs. The first one is of a young woman, deaf from birth, who has been granted the ability to hear via a cochlear implant. It is a very touching video. It may be a little too harsh to call Leland’s posting of it as an advertisement for cyborg-transhumanism despicable, but it was certainly disturbing enough to inspire me to write this post.

The second video is, ahem, an advertisement for Corning Glass that pictures a near future in which a family is blissfully connected to one another via technology, all embedded in glass, of course.

Let me first tackle the issue of social media. The best counter to Leland’s Corning Glass video, with its happy family and their augmented-reality bubble, is a dark and frankly brilliant piece of design fiction called Sight that was brought to my attention by the Atlantic blogger Kasia Cieplak-Mayr von Baldegg (that’s a mouthful).

Sight is a very short film that shows us the potential dark side of a world of ubiquitous augmented reality and social profiles, a world in many ways scarier than the Borg because it seems so possible. In this film, which I really encourage you to check out for yourself, a tech-savvy hotshot seduces, and we are led to believe probably rapes, a young woman using a “dating app” that gives him access to almost everything about her.

Sight gets to the root of the potential problem with social media, which isn’t the ability to interconnect and communicate with others, which it undoubtedly provides, but the very real potential that it could also be used as a tool of manipulation and control.

Sight is powerful because it shows this manipulation and control person to person, but on a more collective level manipulation and control is the actual objective of advertising. It is the bread and butter of social media itself. It is quite right of us to wish to limit the ways our personal information can be used, and to ask questions such as: Does using a social media website mean that all my “friends” are bombarded with advertisements for product X because I recently bought X? Do social media companies have the right to sell to third parties my “social map,” revealing my friends, my interests, my location? What the advertisement Leland shared from Corning Glass showed was a world of ubiquitous video displays against the backdrop of loving families, almost without advertisements; the just as, if not more, likely scenario is the kind of nightmare of manipulation found in Sight and a world of almost constant bombardment by personalized advertisements based on our location, what we are doing, and our “social map.”

Corporate abuse of social media goes well beyond targeted advertising. Upwards of 37% of hiring managers use social media to prescreen applicants. Even if one would argue that such screening is merely due diligence on the part of corporations, one must be aware of the extremely negative knock-on effects this might be having on American democracy. I can’t tell you how many bloggers I have run into who are afraid to reveal their own names out of fear that what they might say could have a negative impact on their “employment prospects.” This leads me to wonder how many people are dissuaded from blogging, or engaging in other forms of publicly accessible speech, at all. What this means is that, on some level, people are becoming afraid of actually saying what they think, which means social media, which should have the effect of facilitating democracy, is instead sucking the very lifeblood out of it.

If individuals or corporations with such deep information about us make some of our skin crawl, governments in possession of such knowledge are just as creepy, or creepier. As an article in the Economist recently pointed out, wiretapping laws have not kept pace with social media. What this means is that governments essentially have open access not merely to our publicly accessible profiles, but to all the background information such as where we went, and who we called, emailed, or Skyped with, all without need of a warrant.

Contra Leland, being wary of the potential for abuse of social media, and fighting to short-circuit such abuse, is not neo-luddism, but a prerequisite for freedom in a world in which social media is here to stay, for good and for ill.

Leland engages in the very sort of manipulation that seems to underlie the dark side of social media when he essentially appropriates the private, even if shared, experience of the young deaf woman regaining her hearing for his own ends.

His posting of this video as a form of “advertising” for the kind of cyborg-transhumanism he wishes for the future raises a whole set of questions: Are we supposed to think that questions, concerns, or even opposition toward cyborg-transhumanism would mean keeping people deaf and blind? Why is this video so moving for viewers, and why does the experience of regaining her hearing bring such joy to the woman in the first place? Is it not that she has been given the capacity to do what most of us take for granted? Would our reaction not be different if, instead of being brought into the human world of sound, the woman had been given an implant that let her hear the high-frequency sounds heard by bats or dogs? I think it most certainly would be.

One thing is clear: pressures are building among medical technology companies to extend the sort of technology that has brought the joy and wonder of sound to this young woman well beyond bringing people within the range of normal human experience or curing debilitating diseases. But first, the good news.

As a recent article in the Financial Times, “Health Care: Into the Cortex,” by Clive Cookson points out, at least some in the pharmaceutical industry think we are on the cusp of a bioelectronics revolution. Moncef Slaoui, head of research for the health care juggernaut GlaxoSmithKline, believes, according to Cookson, that:

…this is a moment comparable to the birth of the modern pharmaceuticals industry at the end of the 19th and beginning of the 20th centuries, when chemical companies began to realize that they could design drugs with compound effects.

Here are some of the wonders of bioelectronics that, according to Cookson, have either already come down the pike or are in process:

  • Paralyzed persons with bioelectronic implants being able to control robots with just their thoughts.
  • Bioelectronic communication with persons in a comatose state.
  • Robotic “suits” that give tetraplegics the ability to walk.
  • “Smart-Skin” implanted under the skin of epileptics to diminish/prevent seizures.
  • Deep brain stimulation to reduce the symptoms of Parkinson’s Disease and, although Cookson doesn’t mention this, clinical depression that is not responsive to existing treatments.
  • Stimulation of the peripheral nervous system, outside of the brain, to treat conditions such as obesity.

All of these actual and potential inventions promise to reduce or end human suffering, and in that respect are certainly a good thing. The danger, I think, will come from the fact that the sheer economies of scale necessary to sustain medical bioelectronics are likely, as has happened in pharmaceuticals and cosmetic surgery (the latter of which began as a way to repair the horrendous scars of war suffered by injured soldiers, and to help ease the emotional burden of women undergoing mastectomies), to demand the creation of a mass market for bioelectronics, which will mean taking it beyond the treatment of disease.

I can imagine it working something like “plug-and-play” devices do now, with people able to essentially dock their brains into computerized systems to access networks instantly, communicate “telepathically,” or gain instant access to a new skill. I can see these products being aggressively marketed to quite healthy people, much as pharmaceuticals are aggressively marketed today. I can imagine all kinds of intense social pressure to use these products in order to compete effectively in the corporate world or the dating game. I cringe at the thought of how such hyper-expensive “enhancements” will further exacerbate an already pernicious social and economic inequality. And I can lament, like the words in the Arcade Fire song We Used to Wait, “that I hope that something pure can last.”

The ethical questions this type of cyborg-transhumanism will open up are real and likely to be legion: questions about equity and individual rights and forced competition and peer pressure and morality and who decides what kind of world we live in. The field of bioethics, which currently deals with the ethical dilemmas posed by biological technology, will probably need to spawn a whole new subfield, or even a new field of ethics itself, to deal with the specific problems posed by bioelectronics. Let’s hope that the forums for discussing, debating, and deciding such issues are more open and democratic than the current ones for bioethics, which tend to be locked up in universities and exclusive publications. And let’s be fast in creating such forums, because it seems very likely that we are about to plunge ourselves headlong into the dreams and nightmares of Leland’s borgdom.