As the tech industry floods the banks of the Hudson River with shiny new AI investments, international leaders gather just across Manhattan, on the East River, for the annual UN General Assembly. Among global discussions on sustainability, conflict resolution, and climate change, one issue looms large: artificial intelligence (AI).
Against this critical backdrop, Neil Sahota, CEO of AI research firm ACSILabs and long-time UN AI advisor, adds his voice to a growing chorus of concern. Sahota has been at the forefront of AI research for two decades, witnessing firsthand its rapid evolution. He was even part of IBM’s secret team that developed the famous Jeopardy!-winning AI, Watson. Today, he remains a key player in the UN’s AI for Good initiative, giving him a unique vantage point on the rise of global AI tools—and the increasing urgency to regulate them.
But according to Sahota, the clock is ticking. “People are starting to realize we’re running out of time—or maybe we’ve already run out of time—to figure these things out,” he told Mashable. “We’re living in a time of hyper-change. We’re seeing a century’s worth of transformation in just ten years, and we don’t have the luxury of reacting anymore.”
The AI Arms Race
Across the globe, nations are scrambling to outpace each other in AI investment, triggering what the AI Now Institute calls an “AI arms race.” This competition has spawned “AI nationalism,” where AI becomes not just a technological asset but a core industrial and political resource. The U.S. and China, in particular, are locked in a race for AI supremacy, each seeking technological sovereignty.
The rapid development of AI has left many governments playing catch-up. While the UN has been discussing AI regulation since 2017, national efforts have been piecemeal at best. In March, the UN General Assembly took a major step, adopting a resolution to steer AI use for the “global good,” a landmark moment as world leaders grapple with AI’s existential challenges. “We must govern this technology rather than have it govern us,” said U.S. representatives in support of the resolution.
Yet despite these efforts, the question remains: How should AI be regulated, and who is best equipped to lead that charge? For Sahota, the answer is clear. “The UN is perhaps the only globally-trusted institution with the credibility to lead this effort,” he said. “It can help member nations—and the world—understand and create a new mindset around AI.”
Can the UN Lead the Way?
The UN has certainly tried to position itself at the forefront of AI governance. In 2023, the high-level Advisory Body on Artificial Intelligence was formed to develop a global framework for AI regulation. This year, the body published the “Governing AI for Humanity” report—a sobering but hopeful document that outlines both the risks and opportunities AI presents.
The report calls for the establishment of a new independent scientific panel to monitor AI’s development, opportunities, risks, and uncertainties. It also advocates for AI standards-sharing, the creation of an AI governance network, and a global AI fund to promote equitable investment in the technology.
“Fast, opaque, and autonomous AI systems challenge traditional regulatory structures,” the report warns. “More powerful systems could disrupt the labor market. The use of AI in weapons and public security raises profound legal and humanitarian concerns.” The document points to a global governance gap, where existing norms are incomplete, and accountability is largely absent.
Sahota contributed insights to the report but did not sit on the drafting committee. Having witnessed the political compromises needed to formalize a report of this size, he acknowledges the challenges ahead. Some regulatory suggestions were diluted, while others gained strength. Still, he insists that without a dedicated UN office for AI and technology oversight, progress will stall.
Why AI Regulation Is So Complicated
The urgency of regulating AI is clear, but the path forward is fraught with challenges. One of the biggest obstacles is the lack of coordination between different regulatory bodies. While nations like the U.S. and blocs like the European Union have made strides in AI regulation—the EU’s AI Act, for instance, aims to protect citizens from high-risk AI—global efforts remain fragmented.
For Sahota, the diffuse nature of AI makes international collaboration essential. “One of my concerns is that we’re building things we don’t fully understand,” he explained. “As technologists, we often focus solely on achieving an outcome, without considering the ripple effects or unintended consequences.”
AI is also uniquely difficult to regulate because of its amorphous nature. Unlike nuclear energy, which involves a specific set of materials and processes, AI spans a vast array of applications, industries, and countries. It’s not something that can be contained or easily regulated by nation-states alone.
That’s why, Sahota argues, the UN must take the lead in setting international standards for AI. He has long advocated for the creation of an AI oversight body within the UN Secretariat, a recommendation echoed in the new report. This office, he says, could act as a central hub, bringing together working groups, committees, and projects while providing visibility and oversight for international regulatory efforts.
The UN’s ‘Lead by Example’ Approach
While much of the conversation around AI regulation focuses on mitigating risks, Sahota sees an opportunity to model AI’s positive potential. “Regulation shouldn’t just be about limiting risks or reducing legal liability,” he said. “There’s also the possibility to create good.”
The UN’s new AI report, while not a regulatory framework per se, is a roadmap for co-investment. It calls on international powers to support initiatives like a shared data trust, a global AI investment fund, and a development network to convene experts and resources. But for Sahota, the lack of visible buy-in from major member states is a missed opportunity. “It would have been nice to see next steps laid out,” he said. “It would show this is more than just talk. It would lend credibility.”
Still, the fact that the UN is devoting high-level meetings to ethical AI discussions at all is a significant achievement, especially as AI nationalism rises. But technology evolves faster than political processes, Sahota noted, and time is running out. “There are more and more people who realize that this window is rapidly closing,” he said. “We only have as much time as we think we do.”
The Future of AI Governance: A Call for Urgency
As AI continues to evolve, its impact on global economies, societies, and political systems will only grow. Sahota warns that without a coordinated global response, the world risks fragmented AI governance, where national policies conflict, and bad actors exploit regulatory gaps.
The UN is not without its flaws—Sahota acknowledges that progress can be slow, and consensus-building among 193 member nations is a Herculean task. But he remains hopeful that the international body is uniquely positioned to lead the world toward responsible AI development. “We need to define what ethical AI use looks like, and there’s no better forum than the UN.”
In an era of hyper-change, the stakes couldn’t be higher. AI’s influence will touch every aspect of human life, from jobs and security to ethics and governance. The question is no longer whether to regulate AI, but how—and whether we can do so before it’s too late.
“The clock is ticking,” Sahota said. “We can either rise to the occasion and shape the future of AI, or we can let it shape us.”
At this critical juncture, Sahota’s call is clear: the time for deliberation is over. What comes next will define not just the future of AI, but the future of global collaboration in the digital age.