Lately, I’ve been enjoying reruns of the relatively new BBC series Sherlock, starring Benedict Cumberbatch, which imagines Arthur Conan Doyle’s famous detective in our 21st-century world. The thing I really enjoy about the show is that it’s the first time I can recall anyone managing to make Sherlock Holmes funny without at the same time undermining the whole premise of a character whose purely logical style of thinking makes him seem more a robot than a human being.
Part of the genius of the series is that the characters around Sherlock, especially Watson, are constantly trying to press upon him the fact that he is indeed human, kind of in the same way Bones is the emotional foil to Spock.
Sherlock uses an ingenious device to display Holmes’ famed powers of deduction. When Sherlock is focusing his attention on a character, words will float around them displaying some relevant piece of information: say, the price of a character’s shoes, what kind of razor they used, or what they did the night before. The first couple of times I watched the series I had the eerie feeling that I’d seen this device before, but I couldn’t put my finger on it. And then it hit me: I’d seen it in a piece of design fiction called Sight that I’d written about a while back.
In Sight the male character is equipped with contact lenses that act as an advanced form of Google Glass. This allows him to surreptitiously access information such as the social profile and real-time emotional reactions of a woman he is out on a date with. The whole thing is downright creepy and appears to end with the woman’s rape and perhaps even her murder; the viewer is left to guess the outcome.
It’s not only the style of heads-up display containing intimate personal detail that put me in mind of the BBC’s Sherlock, whose hero has these cyborg capabilities built into his very nature rather than acquired through technology; it’s also the idea that there is a sea of potentially useful information just sitting there on someone’s face.
In the future, it seems, anyone who wants to will have Sherlock Holmes’ kind of deductive powers, but that got me thinking: who in the world would want to? I mean, it’s not as if we aren’t already bombarded with streams of useless data we are unable to process. Access to Sight-level amounts of information about everyone we came into contact with but didn’t personally know would squeeze our mental bandwidth down to dial-up speed.
I think the not-knowing part is important, because you’d really only want a narrow stream of new information about a person you were already well acquainted with. Something like the information people now put on their Facebook wall. You know, it’s so-and-so’s niece’s damned birthday, and your monster-truck-driving, barrel-necked cousin Tony just bagged something that looks like a mastodon on his hunting trip to North Dakota.
Certainly the ability to scan a person like a QR code would come in handy for police or spooks, and will likely be used by even more sophisticated criminals and creeps than the ones we have now. These groups work in a very narrow gap with people they don’t know all the time. Apart from some medical professionals, there’s one other large group I can think of that works in this same narrow gap, for whom it would regularly be a benefit, rather than an inefficiency, to have the maximum amount of information available about the individual standing in front of them: salespeople.
Think about a high-end retailer such as a jeweler, or perhaps more commonly an electronics outlet such as the Apple Store. It would certainly be of benefit to a salesperson to be able to instantly gather details about what an unknown person who walked into the store did for a living, their marital status, number of family members, and even their criminal record. There is also the matter of making the sale itself, and here the kinds of feedback data seen in Sight would come in handy. Such feedback data is already possible across multiple technologies and only needs to be combined. All one would need is a name, or perhaps even just a picture.
Imagine it this way: you walk into a store to potentially purchase a high-end product. The salesperson wears the equivalent of Google Glass. They ask for your name and you give it to them. The salesperson is able, without you ever knowing, to gather up everything publicly available about you on the web, after which they can buy your profile, purchasing history, and browser history (again surreptitiously, perhaps by just blinking) from a big data company like Acxiom and tailor their pitch to you. This is similar to what happens now when you are solicited through targeted ads while browsing the Internet, and perhaps the real future of such targeted advertising, as Facebook’s success in mobile shows, lies in the physical rather than the virtual world.
In the Apple Store example, the salesperson would be able to know what products you owned and your use patterns, perhaps getting some of this information directly from your phone, and therefore be able to pitch to you the accessories and upgrades most likely to make a sale.
The next layer, reading your emotional reactions, is a little trickier, but again much of the technology already exists or is in development. We’ve all heard those annoying messages when dealing with customer service over the phone that bleat at us, “This call may be monitored….” One might think this recording is done as insurance against lawsuits, and that is certainly one of the reasons. But, as Christopher Steiner points out in his Automate This, another major reason for these recordings is to refine algorithms that help customer service representatives filter customers by emotional type and interact with them according to such types.
These types of algorithms will only get better, and work on social robots used largely for medical care and emotional therapy is moving the rapid-fire algorithmic gauging of, and response to, human emotions from the audio to the visual realm.
If this use of Sherlock Holmes-type powers by those with a power or wealth asymmetry over you makes you uncomfortable, I’m right there with you. But when it gets you down you might try laughing about it. One thing we need to keep in mind, both for sanity’s sake and so that we can more accurately gauge how people in the future might preserve elements of their humanity in the face of technological capacities we today find new and often alien, is that it would have to be a very dark future indeed for there not to be a lot to laugh at in it.
Writers of utopia can be deliberately funny; Thomas More’s Utopia is meant to crack the reader up. Dystopian visions, whether of the fictional or nonfictional sort, avoid humor for a reason. The whole point of their work is to get us to avoid such a future in the first place, not, as is the role of humor, to make almost unlivable situations more human, or to undermine power by making it ridiculous, to point out that the emperor has no clothes and to defeat the devil by laughing at him. Dystopias are a laughless affair.
Just like in Sherlock, it’s a tough trick to pull off, but human beings of the future, as long as there actually are still human beings, will still find the world funny. In terms of technology used as a tool of power, the powerless, as they always have, are likely to get their kicks subverting it, twisting it to the point of breaking its underlying assumptions.
Laughs will be had at epic fails and the sheer ridiculousness of control freaks trying to squish an unruly, messy world into frozen and pristine lines of code. Life in the future will still sometimes feel like a sitcom, even if the sitcoms of that time are pumped directly into our brains through nano-scale neural implants.
Utopias and dystopias emerging from technology are two sides of a crystal-clear future which, because of its very clarity, cannot come to pass. What makes ethical judgement of the technologies discussed above, indeed of all technology, difficult is their damned ambiguity, an ambiguity that largely stems from dual use.
Persons suffering from autism really would benefit from a technology that allowed them to accurately gauge, and guide their response to, the emotional cues of others. An EMT really would be empowered if, at the scene of a bad accident, they could instantly access such a stream of information just by getting a victim’s name or even looking at their face, especially when such data contains relevant medical information, as would a person working in child protective services or any of myriad forms of counseling.
Without doubt, such technology will be used by stalkers and creeps, but it might also be used to help restore trust and emotional rapport to a couple headed for divorce.
I think Sherry Turkle is essentially right that the more we turn to technology to meet our emotional needs, the less we turn to each other. Still, the real issue isn’t technology itself but how we are choosing to use it, because technology by itself is devoid of any morality and meaning, even if it’s a cliché to say so. Using technology to create a more emotionally supportive and connected world is a good thing.
As Louise Aronson said in a recent article for The New York Times on social robots to care for the elderly and disabled:
But the biggest argument for robot caregivers is that we need them. We do not have anywhere near enough human caregivers for the growing number of older Americans. Robots could help solve this work-force crisis by strategically supplementing human care. Equally important, robots could decrease high rates of neglect and abuse of older adults by assisting overwhelmed human caregivers and replacing those who are guilty of intentional negligence or mistreatment.
Our Sisyphean condition is that any gain in our capacity to do good seems also to increase our capacity to do ill. The key, I think, lies in finding ways to contain and control the ill effects. I wouldn’t mind our versions of Sherlock Holmes using the cyborg-like powers we are creating, but I think we should be more than a little careful that they don’t also fall into the hands of our world’s far too many Moriarties, though no matter how devilish these characters might be, we will still be able to mock them as buffoons.