As part of its ongoing Women in AI series, which highlights the achievements of women in the field, TechCrunch interviewed Lakshmi Raman, the director of AI at the CIA. The discussion covered her path to the role, the CIA’s use of AI, and the balance between embracing new technology and deploying it responsibly.
Raman has a long history in intelligence. She joined the CIA in 2002 as a software developer after earning her bachelor’s degree from the University of Illinois Urbana-Champaign and her master’s degree in computer science from the University of Chicago. Several years later, she moved into management roles, eventually leading the CIA’s enterprise data science efforts.
Raman acknowledges the importance of having women role models and predecessors at the CIA, especially in the historically male-dominated field of intelligence.
“I still have people who I can look to for advice and guidance about what the next level of leadership looks like,” she said. “Navigating a career as a woman comes with its own set of challenges.”
AI as an Intelligence Tool
In her role as director, Raman orchestrates and drives AI activities across the CIA. “We see AI as a support tool for our mission,” she said. “It’s about humans and machines working together at the forefront of our AI usage.”
The CIA has been exploring AI applications since around 2000, particularly in natural language processing, computer vision, and video analytics. The agency keeps abreast of newer trends like generative AI, with a roadmap informed by both industry and academia.
“When considering the vast amounts of data we handle, content triage is an area where generative AI can significantly help,” Raman said. “We’re looking at search and discovery aids, ideation tools, and generating counterarguments to help counter analytic biases.”
Urgency is mounting within the U.S. intelligence community to deploy any tools that might help the CIA address growing geopolitical tensions, from terrorism to disinformation campaigns by foreign actors. The Special Competitive Studies Project, an advisory group focused on AI in national security, has set a two-year timeline for domestic intelligence services to adopt generative AI at scale.
One such tool, Osiris, resembles OpenAI’s ChatGPT but is tailored for intelligence use. It summarizes data (for now, only unclassified, publicly or commercially available data) and lets analysts dig deeper by asking follow-up questions in plain English.
Osiris is used by thousands of analysts across the 18 U.S. intelligence agencies. Raman did not disclose whether it was developed in-house or with third-party technology but mentioned partnerships with well-known vendors.
“We leverage commercial services and also employ AI tools for translation and alerting analysts to important developments during off hours,” Raman said. “Collaboration with private industry is essential for providing both broad services and niche solutions from non-traditional vendors.”
A Fraught Technology
There is considerable skepticism and concern regarding the CIA’s use of AI.
In February 2022, Senators Ron Wyden (D-OR) and Martin Heinrich (D-NM) revealed that the CIA maintains a secret, previously undisclosed data repository containing information on U.S. citizens, despite the agency being generally barred from investigating Americans. An Office of the Director of National Intelligence report also showed that U.S. intelligence agencies buy data on Americans from brokers with little oversight.
If the CIA were to use AI to analyze this data, given the technology’s known limitations, it could infringe on civil liberties and produce unjust outcomes.
Several studies have shown that predictive crime algorithms are biased and tend to disproportionately flag Black communities. Other studies indicate that facial recognition technology misidentifies people of color more often than white people.
Even the best AI today can hallucinate, or invent facts and figures, which could be problematic in intelligence work where accuracy is crucial.
Raman emphasized that the CIA complies with U.S. law and follows ethical guidelines to use AI responsibly.
“We take a thoughtful approach to AI,” she said. “Our users must understand the AI systems they use. Building responsible AI involves all stakeholders, including AI developers and our privacy and civil liberties office.”
Raman’s point is that AI system designers must make clear where the system might fall short. A recent study found that AI tools used by police were often poorly understood by those using them, leading to potential misuse.
“Any AI-generated output should be clearly understood by users, with clear labeling and explanations of how the systems work,” Raman said. “We adhere to legal requirements and ensure our users, partners, and stakeholders are aware of all relevant laws and guidelines governing our AI systems.”
This careful approach is aimed at ensuring the responsible and ethical use of AI in intelligence work.