
Telemarketing Calls and the Blurring Human-Computer Divide

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
December 3, 2012

I’ve written before about how talking on the phone to a telemarketer or customer-service agent is often more like dealing with a computer than a human being. Even though the person on the other end is human, their discretion is tightly circumscribed by the computer in front of them, right down to the words they say, which are often confined to computer-generated scripts. I got a political telemarketing call recently that reshaped my understanding of this dynamic in very interesting ways and raised some new questions in my mind.

I answered the phone and the woman on the other end asked for my wife. Loosely paraphrased/reconstructed from my memory, the conversation went something like this:

ME: I’m sorry, she’s not here right now.

HER: That’s okay, I really just wanted to talk to any adult in the household. I’m calling with [Family-something-or-other group], and I’d like to give you a short poll with just four questions. Would you like to participate?

ME [After hesitating a moment—it sounded like a right-wing push-poll of the kind we seem to get sometimes, but I was also intrigued]: Four questions? Sure, okay.

HER: Great. The first question is, do you think that the amount of material on television, cable, and the internet that is inappropriate for children is increasing?

ME: Ummm—yeah, probably.

HER: [basically the same question phrased differently]

ME: Yes.

HER: Do you as a parent worry about your kids’ exposure to a lot of the material that is available through the media today?

ME: Yes.

HER: Thank you. Now the last question: movies in the theater are currently subject to ratings and content-restriction standards. Would you support the creation of similar standards for television, cable, and internet content?

ME: No.

HER: Okay, thank you for taking the poll. I can’t be biased or anything when I give the questions, but since you’re someone who agrees that the media today is very troublesome for today’s youth I would like to point out [here followed a political pitch for new content restriction laws].

ME [Interrupting, suspicious that my negative answer to the final question was being ignored]: Can I ask you a question?

HER: Sure, go ahead.

ME: Are you a bot?

HER: [Laughs] No, of course not!

ME: Can I ask you, how long have you been doing these polls?

HER: Sorry, I didn’t understand the question.

ME: Have you been doing this for a long time?

HER: [Exact same laugh sound as before] No, of course not!

Oh my god! I threw a few more curveball questions at her, and it became crystal clear: I had been interacting with a computer for several minutes and hadn’t even known it.

This bot was very smoothly constructed; I’ve never heard anything like it, and when it finally dawned on me I was very surprised. Of course there is a fairly standard pattern to a call like this, which helps the programmers create the illusion of a lifelike interaction, but it was still very slickly done. I’ve gotten plenty of robo-calls (living in a swing state during the recent election guaranteed that), but this was different; everything about this call, including the tone of the woman’s voice, was designed to make it as natural a facsimile of a real conversation as possible. I don’t know whether the poll’s designers wrestled with the question of whether a bot can be too good at simulating conversation (whether there is some audio-AI version of the uncanny valley into which they ought not dip), but it does raise some questions.

The conduct of a poll is perhaps a perfect application of this kind of technology (putting aside the fact that this was almost certainly not a real poll). Polls, after all, aim at neutrally, even robotically, collecting yes/no/maybe data, and it is actually expected that a pollster will read from a script. In fact, part of the illusion in this call may have come from the contrast between the stiff, rote delivery of the usual poll questioner and the cheerful, chatty informality of this robo-pollster. Still, I must say that I felt a bit deceived. I’m not sure whether I was just abashed at my lack of savvy in not recognizing what I was dealing with, or whether I’m intuiting some genuine ethical problem buried in there somewhere.

This call made me realize that voice-recognition technology will find many applications beyond Siri and customer-service menus in the coming years, especially since it’s likely to improve rapidly. And it made me wonder about the implications. Will it become increasingly common to mistake robo-interactions for human ones? Will people devise their own little personal Turing tests, as I did, to ferret out the difference? Or will robo-deployments become so pervasive that people will simply assume they are dealing with a robot within a certain universe of interactions, unless otherwise indicated? Will people stop caring?

Basically, we should expect that anywhere a robot can be used, a robot will be used. Not just every phone interaction that is routine in any way, but also fast-food drive-throughs, box-office ticket windows, and receptionists of all kinds. And it is not just speech synthesis and speech recognition that will get better, but also the branching decision-tree and/or deep-learning algorithms that back them up. That raises the possibility that even less-routine interactions could be automated, such as those with auto mechanics, triage nurses, doctors, and other experts. In college I had a job answering the help line for a computer service and, as is so often the case, a 90/10 rule was definitely in effect: 90% of the callers had the same 10% of problems over and over. By the end of that summer, I could have written a computer program myself to handle those calls.
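For what it’s worth, the simplest version of such a bot needs no sophisticated AI at all: a hand-built decision tree of recorded prompts, a keyword matcher to pick the next branch, and a canned fallback (the “Sorry, I didn’t understand the question” I heard) when nothing matches. Here is a minimal sketch in Python; every prompt, keyword, and branch name is invented for illustration, and a real system would play recorded audio and feed speech recognition into the matcher rather than reading typed text:

```python
# Hypothetical sketch of a branching decision-tree call bot.
# All prompts, keywords, and node names are invented for illustration.

POLL_TREE = {
    "intro": {
        "prompt": "I'd like to give you a short poll with just four "
                  "questions. Would you like to participate?",
        "branches": {("yes", "sure", "okay"): "q1",
                     ("no", "not"): "goodbye"},
    },
    "q1": {
        "prompt": "Do you think the amount of inappropriate material "
                  "on TV and the internet is increasing?",
        "branches": {("yes", "probably"): "pitch",
                     ("no",): "pitch"},   # the pitch plays either way
    },
    "pitch":   {"prompt": "[political pitch plays here]", "branches": {}},
    "goodbye": {"prompt": "Thank you for your time.",     "branches": {}},
}

FALLBACK = "Sorry, I didn't understand the question."

def run_poll(tree, start="intro"):
    node = start
    while True:
        print("BOT:", tree[node]["prompt"])
        branches = tree[node]["branches"]
        if not branches:            # leaf node: the script is finished
            return
        reply = input("YOU: ").lower()
        for keywords, next_node in branches.items():
            if any(k in reply for k in keywords):
                node = next_node
                break
        else:
            # No keyword matched: play the canned fallback, then loop
            # back and repeat the same prompt.
            print("BOT:", FALLBACK)

if __name__ == "__main__":
    run_poll(POLL_TREE)
```

The telling details are the two tricks I ran into on my call: the q1 node routes “no” to the same pitch as “yes,” and anything the matcher can’t place just triggers the identical fallback recording, which is exactly the tell that exposed the bot I talked to.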

As robot interlocutors improve and become both more seamless and more intelligent, should people expect some kind of notice that they are not talking to a human? It’s hard to answer that question until we see the full implications of the technology, but my initial gut reaction is that there is no need for that. It might evolve into a good idea as a matter of etiquette, but as I have argued elsewhere, the important question is not whether one is communicating with a computer or a human, but rather how those communications are handled and how they may affect a person down the line.

In any case, the next time a conservative group selling censorship schemes calls me, I will be on guard, ready to whip out a little Turing test right at the start (“excuse me, hold on a second, I need to let my dog in. Do you have a dog?”). That way, whether or not I decide to listen to the rest of the human- or computer-delivered script, at least I’ll know what I’m dealing with. I'm not totally sure why, but I still want to know.

But perhaps I should be comforted that at least my privacy hasn’t been invaded too much yet—these groups wouldn’t bother calling me if they knew where I worked.

Update:

A followup to this post with additional thoughts about robot-human interactions is here.
