As AI continues to weave itself into the fabric of our daily lives, the need to identify and manage the risks it presents has never been more urgent. For individuals, companies, and governments alike, the challenge lies in understanding the wide array of risks associated with AI systems and crafting effective rules to govern their use. From AI systems controlling critical infrastructure, where the stakes include human safety, to algorithms scoring exams, sorting resumes, or verifying travel documents, the potential risks are diverse and significant.
However, determining which risks to address in AI regulation has proven to be a complex and contentious issue. Policymakers, as seen with the EU AI Act and California’s SB 1047, have struggled to reach a consensus on the scope of risks that should be covered. Recognizing this challenge, researchers at MIT have developed a groundbreaking solution: a comprehensive “AI Risk Repository.” This database aims to serve as a crucial guide for policymakers, industry stakeholders, and academia, offering a meticulously curated and categorized collection of AI risks.
Peter Slattery, a researcher at MIT’s FutureTech group and the lead on the AI Risk Repository project, explained the motivation behind this initiative. “Our goal was to create a publicly accessible, comprehensive, extensible, and categorized risk database that anyone can copy and use,” Slattery said in an interview with TechCrunch. “We realized that such a resource was not only needed for our own project but also by many others in the field.”
The AI Risk Repository, which includes over 700 distinct AI risks, categorizes these risks by causal factors (e.g., intentionality), domains (e.g., discrimination), and subdomains (e.g., disinformation and cyberattacks). Slattery and his team embarked on this ambitious project to better understand the overlaps and gaps in existing AI safety research. Although other risk frameworks exist, Slattery notes that they often cover only a fraction of the risks identified in the MIT repository. These omissions, he warns, could have serious implications for AI development, usage, and policymaking.
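A structure like this, with risks tagged by cause, domain, and subdomain, lends itself to programmatic filtering. As a rough, hypothetical sketch only (the field names below are illustrative and not the repository's actual schema), an entry might be modeled and queried like this:

```python
from dataclasses import dataclass

# Hypothetical model of a repository entry; field names are illustrative,
# not the actual columns used in the MIT AI Risk Repository.
@dataclass
class RiskEntry:
    description: str     # short summary of the risk
    entity: str          # who or what causes it, e.g. "human" or "AI"
    intentionality: str  # "intentional" or "unintentional"
    domain: str          # e.g. "Discrimination & toxicity"
    subdomain: str       # e.g. "Disinformation"
    source: str          # the framework or paper the entry was drawn from

risks = [
    RiskEntry("AI-generated disinformation at scale", "AI", "intentional",
              "Misinformation", "Disinformation", "Example et al. 2023"),
    RiskEntry("Biased resume screening", "AI", "unintentional",
              "Discrimination & toxicity", "Unfair discrimination", "Example 2022"),
]

# The kind of query a policymaker or auditor might run: all entries in one subdomain.
disinfo = [r for r in risks if r.subdomain == "Disinformation"]
print(f"{len(disinfo)} disinformation-related risk(s) found")
```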
“There’s a common assumption that there’s a consensus on AI risks, but our findings suggest otherwise,” Slattery emphasized. “We discovered that the average framework mentioned only 34% of the 23 risk subdomains we identified. Nearly a quarter of the frameworks covered less than 20%, and none addressed all 23 subdomains. The most comprehensive framework we found covered just 70%. This fragmentation in the literature indicates that we aren’t all on the same page when it comes to AI risks.”
To build the repository, the MIT researchers collaborated with colleagues from the University of Queensland, the nonprofit Future of Life Institute, KU Leuven, and AI startup Harmony Intelligence. Together, they scoured academic databases, retrieving thousands of documents related to AI risk evaluations.
Their findings revealed a significant disparity in how different frameworks prioritize risks. For example, over 70% of the frameworks addressed the privacy and security implications of AI, but only 44% covered misinformation. While more than half discussed the potential for AI to perpetuate discrimination and misrepresentation, only 12% considered the “pollution of the information ecosystem”—the growing problem of AI-generated spam and disinformation.
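Figures like these are, in essence, per-subdomain coverage counts across the surveyed frameworks. A minimal sketch of that calculation follows; the framework names and subdomain mappings are invented for illustration, not data from the MIT study:

```python
# For each subdomain, compute the share of frameworks that mention it.
# All names and mappings here are made up for demonstration purposes.
frameworks = {
    "Framework A": {"Privacy & security", "Misinformation"},
    "Framework B": {"Privacy & security", "Discrimination"},
    "Framework C": {"Privacy & security"},
}

subdomains = {"Privacy & security", "Misinformation", "Discrimination",
              "Pollution of the information ecosystem"}

for sub in sorted(subdomains):
    covering = sum(1 for covered in frameworks.values() if sub in covered)
    share = covering / len(frameworks)
    print(f"{sub}: covered by {share:.0%} of frameworks")
```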
Slattery believes the repository will be invaluable for researchers, policymakers, and anyone working on AI risks. “This database provides a foundation for more specific work,” he said. “Before, people had two options: spend a lot of time reviewing scattered literature to develop a comprehensive overview or rely on a limited number of existing frameworks that might miss important risks. Now, with our repository, they can save time and increase oversight.”
However, the question remains: will this repository be widely adopted, and can it actually influence AI regulation? The current regulatory landscape is a patchwork of differing approaches with varied goals, and it is hard to say whether a repository like MIT’s, had it existed earlier, would have changed how those rules took shape.
Another critical consideration is whether simply agreeing on the risks AI poses is enough to drive effective regulation. Many safety evaluations for AI systems still have significant limitations, and a comprehensive database of risks, while an essential step forward, is not a panacea on its own.
Looking ahead, the MIT researchers plan to use the repository to evaluate how well different AI risks are being addressed. Neil Thompson, head of the FutureTech lab, shared the team’s vision for the next phase of their research. “Our repository will help us assess how effectively different risks are being managed,” Thompson explained. “We aim to identify shortcomings in organizational responses. For instance, if there is an overemphasis on one type of risk while other equally important risks are overlooked, that’s a gap we need to address.”
As AI continues to evolve and its influence expands, the MIT AI Risk Repository represents a crucial tool for those seeking to navigate the complex landscape of AI risks. By providing a comprehensive, well-organized, and up-to-date resource, it offers a foundation upon which better, more informed decisions can be made—decisions that will shape the future of AI regulation and its impact on society.