
Instagram, Jetliners, and Human Computation Engines (Friday links)

Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
January 18, 2013

Instagram has lost half its daily users in just one month as a result of all the bad publicity over its new terms of service, according to a story in the International Business Times. That is a stunning report—perhaps the most surprising indication of mass rebellion over an online policy issue since the defeat of SOPA. Perhaps I am overly conditioned to thinking that these kinds of seemingly obscure issues about the distribution of power on the internet—privacy, openness, intellectual property, etc.—are the province of geeks and policy nerds and reporters looking for stories. But losing half its daily users in one month? I think that’s a reminder that for all the assaults on our privacy by internet advertisers and others, people do still want and demand a sense of control when it comes to their online lives. That is especially true of services people have made a part of their daily existence—services they feel they have a relationship with. Many privacy and other internet issues seem abstract and removed, and may not trigger a passionate backlash, but sometimes (as with this story, SOPA, and Facebook Beacon) they do.

“There were so many innovations on this plane that it is hard to fathom how it got approved so quickly.” That was the comment of an aviation consultant on the troubles faced by Boeing’s 787 Dreamliner in this Guardian piece on how the plane was a “nightmare waiting to happen.” It’s funny to hear rapid innovation framed as a negative in that way—but on reflection it makes sense. A modern jetliner, with more than 2 million parts, is a highly complex piece of machinery. Other highly complex things—software, for example—often require a period of “seasoning” and extensive testing by large numbers of people. Another highly complex thing, of course, is our legal system, which has evolved gradually, tempered by experience and seasoned through years of common-law development. And as with jetliners, the rapid pace of innovation—particularly innovation in surveillance technologies—is overstraining the system’s ability to digest that innovation and integrate it with our values.

I’ve written several times about the interesting—and fast-changing—boundary between humans and machines. For example, when one is dealing with a customer-service agent who has no discretion, one is really dealing with a computer program that happens to have a human interface. Or a fake human interface. I’ve also argued that when it comes to privacy, it makes no difference whether one is dealing with a human or a machine. The latest wrinkle on this front comes from Twitter, which has announced (via this blog post) that it is building what it calls a “real-time human computation engine” to help it identify the meaning behind new trending search queries and use that understanding to serve ads. Twitter’s systems automatically identify popular new queries and send them to human beings, who answer questions about them—humans, unlike computers, can parse what a tag such as #bindersfullofwomen or #bigbird likely refers to in that day’s news. Then Twitter’s automatic systems, having been informed by the humans that these queries are related to politics, use that knowledge to serve relevant ads. It’s fascinating to think how such hybrid human-computer systems for creating meaning out of data could evolve—and how, in many applications, they will make the computer-human distinction even less relevant when it comes to privacy.
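The loop described in Twitter’s blog post—automated systems flag novel trending queries, humans label what they mean, and the labels feed back into automated ad serving—can be sketched in a few lines of Python. This is only an illustration of the general human-in-the-loop pattern; every function name, the popularity threshold, and the label values here are my own assumptions, not anything from Twitter’s actual (non-public) system.

```python
import queue

# Hypothetical human-in-the-loop "human computation" pipeline.
# Machines detect what is newly popular; humans supply the meaning;
# machines then act on that human-supplied meaning at scale.

human_task_queue = queue.Queue()
labels = {}  # query -> category, as judged by human annotators

def detect_trending(query_counts, threshold=1000):
    """Automated step: flag queries popular enough to need human review."""
    return [q for q, count in query_counts.items()
            if count >= threshold and q not in labels]

def human_annotate(query, category):
    """Human step: a person judges what the query refers to."""
    labels[query] = category

def serve_ad(query):
    """Automated step: use the human-supplied label to pick an ad."""
    category = labels.get(query, "generic")
    return f"ad:{category}"

# Example flow: one tag surges, gets routed to a human, and the
# resulting label drives ad selection from then on.
counts = {"#bindersfullofwomen": 5000, "#cats": 40}
for q in detect_trending(counts):
    human_task_queue.put(q)

while not human_task_queue.empty():
    human_annotate(human_task_queue.get(), "politics")

print(serve_ad("#bindersfullofwomen"))  # ad:politics
print(serve_ad("#cats"))                # ad:generic
```

The division of labor is the point: the cheap, scalable steps (counting queries, serving ads) stay automated, while the one step computers do badly—grasping what a brand-new tag means in that day’s news—is routed to people.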
