Microsoft AI CEO Mustafa Suleyman thinks that developing artificial general intelligence could take longer than predicted — maybe up to 10 years.
Artificial General Intelligence (AGI) — the idea of machines achieving human-level intelligence across a wide array of tasks — has long been a cornerstone of AI ambitions. Yet how close we are to realizing this vision is hotly debated, and two of the most influential figures in AI are at odds over both the timeline and the technical feasibility.
In a recent Reddit AMA, OpenAI CEO Sam Altman made waves by suggesting that AGI could be achieved using today’s hardware. However, Microsoft AI CEO Mustafa Suleyman, in a candid conversation with Nilay Patel on The Verge’s Decoder podcast, pushed back against this assertion, stating that while AGI is plausible, it may be as far as 10 years away — requiring multiple hardware generations to get there.
The Hardware Debate: Is Today’s Tech Enough for AGI?
Altman’s optimistic stance hinges on leveraging existing hardware innovations to achieve AGI. In contrast, Suleyman took a more cautious tone, arguing that significant advancements in hardware are essential before AGI becomes feasible.
“I don’t think it can be done on [Nvidia] GB200s,” Suleyman remarked, referring to current-generation GPUs. “I do think it is going to be plausible at some point in the next two to five generations. I don’t want to say I think it’s a high probability that it’s two years away, but I think within the next five to seven years.”
Given that each generation of hardware takes 18 to 24 months to develop, Suleyman projects that it could take up to a decade to bridge the gap. He emphasized the inherent uncertainty in predicting AGI’s timeline: “The uncertainty around this is so high that any categorical declarations just feel sort of ungrounded to me and over the top.”
Defining AGI: Not the Singularity, but a Big Leap
Part of the confusion surrounding AGI stems from differing definitions of what it entails. Suleyman sought to clarify the distinction between AGI and the so-called “singularity.”
“The singularity is an exponentially recursive, self-improving system that very rapidly accelerates far beyond anything that might look like human intelligence,” he explained. “AGI, on the other hand, is a general-purpose learning system that can perform well across all human-level training environments — from knowledge work to physical labor.”
Suleyman expressed skepticism about the near-term realization of AGI, particularly given the complexity of robotics and its integration into general-purpose systems. However, he acknowledged that within five to ten years, AI systems could perform most human knowledge work without requiring extensive manual intervention or prompting.
Crucially, he distanced the concept of AGI from the theoretical pursuit of superintelligence, emphasizing the practical applications of AI: “The challenge with AGI is that it’s become so dramatized. We end up not focusing on the specific capabilities of what the system can do. My motivation is to create systems that are accountable and useful to humans, rather than chasing the theoretical superintelligence quest.”
Sam Altman’s Vision: Lowering the Bar for AGI
Interestingly, Sam Altman has recently dialed back his own expectations for AGI. Speaking at The New York Times DealBook Summit, Altman described a less transformative version of AGI than what he’s outlined in the past.
“AGI will arrive sooner than most people think, and it will matter much less,” Altman stated. He added that the more daunting safety concerns often associated with superintelligence likely won’t emerge at the AGI stage. Instead, he suggested a “long continuation” from AGI to superintelligence, during which the world would experience incremental changes rather than revolutionary upheaval.
A Strained Relationship Between Partners?
The disagreement over AGI is playing out against the backdrop of a complex partnership between Microsoft and OpenAI. Just a year ago, Microsoft played a pivotal role in reinstating Altman as OpenAI’s CEO following a dramatic leadership shakeup. Yet, the companies’ paths appear to be diverging as both pursue their own AI ambitions.
Microsoft recently confirmed that it’s developing its own frontier AI model, capable of competing with OpenAI’s GPT-4 and beyond. Suleyman acknowledged the natural tensions in their relationship: “Every partnership has tension. It’s healthy and natural. They’re a completely different business to us. They operate independently, and partnerships evolve over time.”
While he stopped short of predicting how the relationship will evolve, Suleyman hinted at a period of adaptation: “Partnerships have to adapt to what works at the time, so we’ll see how that changes over the next few years.”
Looking Ahead: The Road to AGI
The road to AGI is fraught with technical, ethical, and philosophical challenges. Suleyman’s caution contrasts sharply with Altman’s optimism, but both perspectives underscore the complexity of defining and achieving AGI.
For now, the debate serves as a reminder of just how much uncertainty surrounds the future of AI. Whether AGI arrives in two years, in ten, or remains an elusive goal, the field’s progress will continue to transform industries and societies in ways we are only beginning to comprehend.
The key takeaway? While the dream of AGI might be on the horizon, the journey to get there will likely be shaped by incremental advancements, bold experimentation, and ongoing dialogue between the brightest minds in AI.