How is One of America's Biggest Spy Agencies Using AI? We're Suing to Find Out.
AI is nearly impossible for us to escape these days. Social media companies, schools, workplaces, and even dating apps are all trying to harness AI to remake their services and platforms, and AI can impact our lives in ways large and small. While many of these efforts are just getting underway — and often raise significant civil rights issues — you might be surprised to learn that America’s most prolific spy agency has for years been one of AI’s biggest adopters.
The National Security Agency (NSA) is the self-described leader among U.S. intelligence agencies racing to develop and deploy AI. It’s also the agency that sweeps up vast quantities of our phone calls, text messages, and internet communications as it conducts mass surveillance around the world. In recent years, AI has transformed many of the NSA’s daily operations: the agency uses AI tools to help gather information on foreign governments, augment human language processing, comb through networks for cybersecurity threats, and even monitor its own analysts as they do their jobs.
Unfortunately, that’s about all we know. As the NSA integrates AI into some of its most profound decisions, it’s left us in the dark about how it uses AI and what safeguards, if any, are in place to protect everyday Americans and others around the globe whose privacy hangs in the balance.
That’s why we’re suing to find out what the NSA is hiding. Today, the ACLU filed a lawsuit under the Freedom of Information Act to compel the release of recently completed studies, roadmaps, and reports that explain how the NSA is using AI and what impact it is having on people’s civil rights and civil liberties. Indeed, although much of the NSA’s surveillance is aimed at people overseas, those activities increasingly ensnare the sensitive communications and data of people in the United States as well.
Behind closed doors, the NSA has been studying the effects of AI on its operations for several years. A year and a half ago, the Inspectors General at the Department of Defense and the NSA issued a joint report examining how the NSA has integrated AI into its operations. NSA officials have also publicly lauded the completion of studies, roadmaps, and congressionally mandated plans on the agency’s use of novel technologies like generative AI in its surveillance activities. But despite transparency pledges, none of those documents have been released to the public, not even in redacted form.
The government’s secrecy flies in the face of its own public commitments to transparency when it comes to AI. The Office of the Director of National Intelligence, which oversees the NSA and more than a dozen other intelligence agencies, has touted transparency as a core principle in its Artificial Intelligence Ethics Framework for the Intelligence Community. And administrations from both parties have reiterated that AI must be used in a manner that builds public confidence while also advancing principles of equity and justice. By failing to disclose the kinds of critical information sought in our lawsuit, the government is failing its own ethical standards: it is rapidly deploying powerful AI systems without public accountability or oversight.
The government’s lack of transparency is especially concerning given the dangers that AI systems pose for people’s civil rights and civil liberties. As we’ve already seen in areas like law enforcement and employment, using algorithmic systems to gather and analyze intelligence can compound privacy intrusions and perpetuate discrimination. AI systems may amplify biases already embedded in training data or rely on flawed algorithms, and they may have higher error rates when applied to people of color and marginalized communities. For example, built-in bias or flawed intelligence algorithms may lead to additional surveillance and investigation of individuals, exposing their lives to wide-ranging government scrutiny. In the most extreme cases, bad tips could be passed along to agencies like the Department of Homeland Security or the FBI, leading to immigration consequences or even wrongful arrests.
AI tools have the potential to expand the NSA’s surveillance dragnet more than ever before, expose private facts about our lives through vast data-mining activities, and automate decisions that once relied on human expertise and judgment. These are dangerous, powerful tools, as the NSA’s own ethical principles recognize. The public deserves to know how the government is using them.