TSA Response to Universal Criticism of Behavior Detection: More Behavior Detection
The Transportation Security Administration is turning to video technology to double down on its embattled effort to figure out our thoughts based on our behavior.
In a Privacy Impact Assessment released last week, TSA revealed that it is field-testing (heads-up if you’ll be traveling through the airport in Providence, Rhode Island) the “Centralized Hostile Intent” program, which will assess “whether behavioral indicators of malicious intent” can be observed on a live video feed by TSA officers in remote locations.
The program is part of TSA’s larger Behavior Detection and Analysis program—formerly Screening Passengers by Observation Techniques, or “SPOT”—through which thousands of “behavior detection officers” in airports across the country scrutinize travelers for signs of “mal-intent.” According to leaked documents, those signs can include conduct as menacing as being late for your flight, yawning, or having body odor (we need you now more than ever, Dr. Armpit).
I hesitate to call the TSA’s behavior detection program controversial, because that implies that it has at least some meaningful support. Virtually everyone outside the TSA who has reviewed the program—government auditors, members of Congress from both parties, independent experts—has concluded that it is flawed and wasteful. We’ve long been critical of the program not only as divorced from science, but also as encouraging discriminatory racial profiling. In March we filed a lawsuit for more information about the program—and perhaps any insight into why TSA continues to fund it.
So we were confused and disoriented (those are also among TSA’s signs of deception!) when we learned of the Centralized Hostile Intent experiment, which uses techniques that the TSA says would allow it to “expand the scale of its behavior detection program.” To test the program, TSA is sending volunteer actors into airport screening areas, filming them while they “mimic passengers who exhibit suspicious behaviors with hostile intent,” and then seeing if behavior detection officers watching the video can detect the suspicious behaviors.
I’m not a social scientist, but trying to detect volunteer actors pretending to be suspicious hardly seems like a bulletproof validation method. The results are likely to reflect the acting (or over-acting) ability of the volunteers as much as anything real. More troubling, however, is that TSA still seems oblivious to the fundamental problems with behavior detection: even if officers can detect these behaviors reliably, there’s no indication that the behaviors actually reflect deception or “mal-intent,” as opposed to everyday innocent conduct. That being the case, it’s difficult to see how these programs amount to anything more than what a former behavior detection officer called a “license to harass,” and another called “a racial profiling program.”
The picture gets darker still. The privacy impact assessment also stated that “video data” from the project will be used to develop “tracking algorithms for multi-camera person and object detection to determine a person’s path or possible associates in an operational environment.” So if TSA officers think you’re too fidgety, too sweaty, too harried—and this is at the airport, remember—they’ll use video technology to track you, identify your family and friends, and track them, too.
On the same day that the TSA disclosed the Centralized Hostile Intent program, the Department of Homeland Security’s Inspector General issued a scathing report to Congress on TSA’s “lack of stewardship of taxpayer dollars,” “questionable investment in security,” and “failure to understand the gravity of the situation.” Those are apt descriptions of the mind-reading and surveillance schemes that make up the TSA’s behavior detection programs.