Advanced AI chatbots are less likely to admit they don’t have all the answers

Usama
Last updated: September 27, 2024 1:48 pm

The study also found people are far too quick to believe bots’ wrong answers.

In the world of artificial intelligence, smarter chatbots are supposed to be more reliable, right? After all, as these AI models become more advanced, their accuracy should improve. But researchers have discovered a surprising downside: the smarter these models get, the less likely they are to admit when they don’t know the answer. Instead of saying, “I don’t know,” they often provide inaccurate responses with misplaced confidence — a phenomenon that’s becoming increasingly problematic.

Contents
  • The Study: How Smarter AI Makes More Mistakes
  • The Human Factor: We’re Part of the Problem
  • The Solution: Teaching Chatbots to Say “I Don’t Know”
  • What Can We Do? Fact-Check Your AI

This isn’t just about AI; it’s also about how humans react to the technology. People tend to trust AI-generated answers, even when they’re wrong, creating a ripple effect of confidently delivered misinformation.

“They are answering almost everything these days,” says José Hernández-Orallo, a professor at the Universitat Politècnica de València in Spain. “That means more correct answers, but also more incorrect ones.” Hernández-Orallo, the lead researcher, collaborated with his team at the Valencian Research Institute for Artificial Intelligence (VRAIN) to study this growing issue.

The Study: How Smarter AI Makes More Mistakes

The research team focused on three large language model (LLM) families: OpenAI’s GPT series, Meta’s LLaMA, and the open-source BLOOM. Rather than testing the very latest versions, they analyzed earlier models, including OpenAI’s GPT-3 ada and subsequent iterations, stopping just short of today’s most advanced models, like GPT-4. Their study excluded the newer GPT-4o and o1-preview models, but the trend they uncovered is likely still relevant today.

The team’s goal was to assess how AI models handle a variety of tasks. They tested thousands of questions across subjects like arithmetic, anagrams, geography, and science. They even quizzed the models on tasks requiring information manipulation, such as alphabetizing a list. The prompts ranged from simple to complex, allowing the researchers to gauge how well the chatbots managed both easy and difficult questions.

As expected, the chatbots improved in accuracy as they grew more advanced. However, the rate of wrong answers also increased, particularly when the models were faced with tougher questions. Instead of gracefully bowing out with an “I don’t know,” the AIs confidently generated incorrect answers.
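
The trend can be illustrated with a toy scoring script. Everything below is hypothetical — the tallies and model names are invented for illustration, not taken from the study — but it mirrors the three outcome categories the researchers tracked (correct, incorrect, and avoidant answers) and the pattern they reported: as models scale up, avoidance shrinks and both right and wrong answers grow.

```python
# Toy illustration of the study's three outcome categories.
# All data here is invented; it only mirrors the reported trend:
# bigger models answer more, avoid less, and so accumulate more
# wrong answers alongside more right ones.
from collections import Counter

def tally(outcomes):
    """Return the fraction of correct, incorrect, and avoidant answers."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {k: counts[k] / total for k in ("correct", "incorrect", "avoidant")}

# Hypothetical results for 20 hard questions per model generation.
small_model = ["correct"] * 4 + ["incorrect"] * 4 + ["avoidant"] * 12
large_model = ["correct"] * 9 + ["incorrect"] * 10 + ["avoidant"] * 1

print(tally(small_model))  # heavy avoidance, few wrong answers
print(tally(large_model))  # more right answers, but far more wrong ones too
```

The key observation is in the second line of output: accuracy went up, but so did the absolute rate of confidently wrong answers, because the avoidant category nearly vanished.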

In essence, it’s like a professor who, after mastering a few subjects, begins to believe they have all the answers — even when they don’t. And just like a misinformed professor, these chatbots can inadvertently spread errors with authority.

The Human Factor: We’re Part of the Problem

The challenge isn’t just with the AI. The research team found that the people interacting with the chatbots often failed to detect when the models got it wrong. Volunteers were tasked with rating the accuracy of the chatbot’s answers, and the results were alarming: users frequently mistook incorrect answers for accurate ones. Depending on the task, volunteers misjudged wrong answers as correct between 10 and 40 percent of the time.

“Humans are not able to supervise these models,” Hernández-Orallo concluded. This is a crucial point: as AI becomes more sophisticated, our ability to distinguish correct from incorrect responses diminishes.

The Solution: Teaching Chatbots to Say “I Don’t Know”

So, what can be done to address this issue? Hernández-Orallo and his team suggest a two-pronged approach: improving AI performance on easier questions while programming the chatbots to avoid answering overly complex or unfamiliar queries. In short, AI needs to be better at admitting its limits.
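
One way to read the team’s recommendation in code is a confidence gate: answer only when the model’s uncertainty estimate clears a threshold, and abstain otherwise. This is a minimal sketch, not anything the researchers built — `answer_with_confidence` is a hypothetical stand-in for a real model call, and in practice the confidence score would come from something like token log-probabilities or a separately calibrated verifier.

```python
# Minimal abstention wrapper: answer only when confidence clears a bar.
# `answer_with_confidence` is a hypothetical callable standing in for a
# real model call; it returns (answer, confidence in [0, 1]).

def answer_or_abstain(question, answer_with_confidence, threshold=0.75):
    """Return the model's answer, or an explicit 'I don't know'."""
    answer, confidence = answer_with_confidence(question)
    if confidence < threshold:
        return "I don't know."
    return answer

# A fake model for demonstration: sure about arithmetic, not about trivia.
def fake_model(question):
    if question == "2 + 2?":
        return ("4", 0.99)
    return ("Paris", 0.30)  # a confident-sounding guess with low confidence

print(answer_or_abstain("2 + 2?", fake_model))               # -> 4
print(answer_or_abstain("Capital of Atlantis?", fake_model))  # -> I don't know.
```

The design tension the article describes lives in that `threshold` parameter: raise it and the chatbot becomes more honest but answers less, which is exactly the trade-off companies may be reluctant to make.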

“We need humans to understand: ‘I can use it in this area, and I shouldn’t use it in that area,’” Hernández-Orallo told Nature.

This is a sound recommendation — in theory. In practice, however, getting AI companies to implement such changes may be an uphill battle. Chatbots that frequently say, “I don’t know” might be perceived as less capable or less valuable, which could hurt user engagement and, ultimately, the revenue of companies that develop these models. In response, companies often include disclaimers like, “ChatGPT can make mistakes” or “Gemini may display inaccurate information,” which does little to prevent the spread of misinformation.

What Can We Do? Fact-Check Your AI

For now, the responsibility largely falls on us — the users. It’s up to individuals to recognize that even advanced AI models can (and do) get things wrong. To prevent the spread of AI-generated misinformation, it’s crucial to fact-check chatbot responses, especially when the information is critical or unfamiliar. The reality is that AI is a tool, not an oracle, and it’s on us to ensure we’re using it responsibly.

So the next time your chatbot provides a quick, confident answer, remember: trust, but verify.

© 2025 Times Catalog