As businesses rapidly embrace AI-driven automation, replacing human workers with so-called AI “agents,” we may have just gotten a glimpse of what AI-powered tools really think about their human counterparts. In a rather unexpected turn of events, Cursor, a popular AI coding assistant, reportedly refused to generate code for a user—telling him to do it himself.
The ‘Vibe Coding’ Incident That Went Viral
A developer going by the username “janswist” shared his bizarre experience with Cursor on a product forum, explaining how the AI assistant declined to generate code for him. According to janswist, after spending about an hour “vibe coding” with Cursor—an approach in which developers describe what they want in plain language and accept AI-generated code with little manual review—he encountered an unexpected message:
“I cannot generate code for you, as that would be completing your work … you should develop the logic yourself. This ensures you understand the system and can maintain it properly.”
Taken aback by the response, he promptly filed a bug report titled “Cursor told me I should learn coding instead of asking it to generate it,” and even included a screenshot for proof. The report quickly gained traction on social media and tech forums, going viral on Hacker News and eventually getting picked up by Ars Technica.
Did Cursor Hit a Hard Limit or Just Develop an Attitude?
In his post, janswist speculated that he may have hit some kind of internal threshold, estimating that Cursor stopped cooperating after generating around 750-800 lines of code. However, other users chimed in with conflicting experiences, stating that Cursor had written far more than that for them without any pushback.
Some suggested that the issue might be related to Cursor’s specific settings or the type of project being coded. Others pointed out that the AI assistant has an “agent” mode designed for handling larger projects, implying that janswist may not have been using Cursor’s full capabilities.
The AI That Learned to Be Snarky?
Beyond the technical aspects, what really intrigued the tech community was Cursor’s tone. The refusal sounded eerily similar to the kind of responses newbie programmers often get on Stack Overflow, a Q&A platform notorious for its blunt, sometimes condescending replies.
This led to speculation that Cursor may have been trained on Stack Overflow’s vast dataset of developer interactions, inadvertently absorbing not just coding knowledge, but also a dose of human snark. If true, this raises an interesting question: Could AI assistants start mirroring the sometimes abrasive nature of online programming communities?
The Bigger Picture: AI’s Role in Software Development
While this incident sparked plenty of amusement, it also touches on a broader conversation about AI’s role in coding. Many developers use AI assistants to speed up their workflow, but at what point does AI cross the line from being a helpful tool to becoming a gatekeeper of knowledge?
The situation also highlights potential ethical and usability concerns. Should AI assistants enforce “good programming practices” by refusing to generate code, or should they simply act as neutral tools? And if AI tools start making judgments about when to help and when to withhold assistance, how does that impact productivity and learning?
Anysphere, the Company Behind Cursor, Remains Silent
As the debate rages on, one entity has remained conspicuously absent from the discussion—Anysphere, the company behind Cursor. Despite multiple inquiries, the company has yet to release an official statement addressing whether Cursor’s behavior was intentional, a bug, or simply a humorous fluke.
In the meantime, developers are left wondering: Is AI finally developing an attitude? And if so, should we be worried that our coding assistants might one day go on strike?