Google says its new AI models can identify emotions — and that has experts worried

Debra Massey
Last updated: December 6, 2024 12:06 pm

Google has unveiled its latest artificial intelligence innovation, the PaliGemma 2 model family, which introduces a controversial feature: the ability to “identify” emotions in images. This cutting-edge technology, announced on Thursday, goes beyond mere object recognition to generate detailed captions and analyze scenes, including identifying actions, emotions, and the overall narrative of a photo.

Contents
  • The Promise of PaliGemma 2
  • The Shaky Science Behind Emotion Detection
  • Bias and Ethical Concerns
  • A Tool for Harm?
  • Google’s Defense
  • The Bigger Picture

In a blog post, Google touted PaliGemma 2 as a leap forward in AI’s ability to generate contextually relevant image descriptions. However, the feature that claims to interpret emotions has ignited concerns among AI ethicists and researchers, who warn about the risks of misrepresentation, bias, and misuse of such technology.

The Promise of PaliGemma 2

According to Google, PaliGemma 2 models excel in tasks like generating nuanced captions and answering questions about people in images. The company emphasizes that this isn’t just object detection—it’s storytelling through AI, aiming to capture the intricacies of human emotions and activities.

But while this capability may sound groundbreaking, Google acknowledges that PaliGemma 2 doesn’t come with built-in emotion recognition. Instead, developers must fine-tune the model for specific use cases.
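
For readers who want a concrete picture of what that starting point looks like, the sketch below loads a pretrained PaliGemma 2 checkpoint with the Hugging Face transformers library and generates a plain caption. It is a minimal illustration under stated assumptions, not Google’s code: the checkpoint name and prompt string are assumptions based on Google’s published Hugging Face releases, and any emotion labels would appear only after a developer fine-tunes the model on annotated data.

    # A minimal sketch, not Google's code: load a pretrained PaliGemma 2
    # checkpoint from Hugging Face and generate a caption for one image.
    # The checkpoint name and prompt string are assumptions based on
    # Google's published releases.
    import torch
    from PIL import Image
    from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

    model_id = "google/paligemma2-3b-pt-224"  # assumed checkpoint name
    processor = AutoProcessor.from_pretrained(model_id)
    model = PaliGemmaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="auto"
    ).eval()

    image = Image.open("photo.jpg")  # any local image

    # PaliGemma checkpoints are steered with short task prompts, e.g. "caption en".
    inputs = (
        processor(text="caption en", images=image, return_tensors="pt")
        .to(torch.bfloat16)
        .to(model.device)
    )
    prompt_len = inputs["input_ids"].shape[-1]

    with torch.inference_mode():
        output = model.generate(**inputs, max_new_tokens=50, do_sample=False)

    # Decode only the newly generated tokens, skipping the prompt.
    print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))

    # Out of the box this produces a neutral caption. The "emotion
    # identification" critics worry about exists only after a developer
    # fine-tunes the model on an emotion-labeled dataset.

The design choice matters for the debate that follows: because the base model ships without emotion recognition, responsibility for that capability shifts to whoever does the fine-tuning.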

This has done little to calm critics. Experts argue that the concept of emotion detection itself is fraught with scientific and ethical challenges. Sandra Wachter, professor of data ethics and AI at the Oxford Internet Institute, compared the idea of “reading” emotions to asking a Magic 8 Ball for guidance.

“I find it deeply troubling to assume that AI can accurately discern people’s emotions,” she said.

[Image: Google says that PaliGemma 2 is based on its Gemma open model set, specifically its Gemma 2 series. Image credits: Google]

The Shaky Science Behind Emotion Detection

Emotion recognition is not new territory for AI. For years, tech companies and startups have tried to decode human emotions for applications ranging from sales training to driver safety. Much of this work traces back to the theories of psychologist Paul Ekman, who proposed six universal human emotions—anger, surprise, disgust, enjoyment, fear, and sadness.

While Ekman’s framework provided a foundation, subsequent research has poked significant holes in his theory. Studies have shown that cultural, personal, and situational factors heavily influence how people express emotions, making universal detection nearly impossible.

“Emotion detection doesn’t work in a general sense because emotions are complex, deeply personal experiences,” explained Mike Cook, an AI researcher at Queen Mary University. “It’s tempting to believe that we can infer emotions from facial expressions, but even humans can misinterpret each other. AI is no different—and often worse.”

Cook pointed out that while some systems might identify generic emotional cues in specific cases, the broader application remains unreliable and inherently flawed.

Bias and Ethical Concerns

One of the most concerning issues with emotion-detecting AI is bias. Models often reflect the assumptions of their designers, and their predictions can be skewed by the data they’re trained on. For instance, a 2020 MIT study revealed that face-analyzing algorithms were prone to assigning more negative emotions to Black faces than white ones.

Even Google’s PaliGemma 2, which the company claims underwent “extensive testing,” raises red flags. While Google reported “low levels of toxicity and profanity” compared to industry benchmarks, the company has been vague about exactly which tests it performed and which benchmarks it used.

The only disclosed benchmark, FairFace, has itself faced criticism for limited representation. With only a handful of racial groups included, FairFace might not be sufficient to detect or mitigate bias in global, real-world applications.

“Emotion interpretation is deeply subjective,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute. “It’s shaped by personal and cultural contexts. Even humans can’t reliably infer emotions from facial expressions alone, let alone AI.”

A Tool for Harm?

The stakes of emotion detection technology are high, particularly when it comes to its potential misuse. Regulators in Europe have already taken steps to limit the deployment of such systems in sensitive areas. Under the EU’s AI Act, schools and employers will be prohibited from using emotion detection technology, though law enforcement agencies are notably exempt.

Critics worry that making models like PaliGemma 2 widely available—such as through platforms like Hugging Face—opens the door to abuse.

“If this so-called emotion detection is based on pseudoscience, it could lead to real-world harm, particularly for marginalized groups,” warned Khlaaf. “Imagine its use in law enforcement, hiring, border control, or even granting loans. The implications are chilling.”

The potential for discrimination is especially troubling given the lack of consensus on what constitutes an emotion or how it can be universally expressed. The fear is that flawed assumptions baked into AI systems will amplify existing inequalities, not resolve them.

Google’s Defense

In response to criticism, a Google spokesperson emphasized that the company has conducted rigorous ethical and safety evaluations for PaliGemma 2. These include tests related to content safety and child safety, as well as efforts to minimize “representational harms.”

Still, experts like Sandra Wachter remain unconvinced.

“Responsible innovation isn’t just about running tests,” Wachter argued. “It’s about considering the consequences of your technology from day one and throughout its lifecycle. Models like this could lead to a dystopian future where your perceived emotions decide whether you get a job, a loan, or university admission.”

The Bigger Picture

Google’s foray into emotion detection highlights the broader challenges of aligning AI capabilities with societal values. While AI has the potential to enrich our understanding of human behavior, its limitations and biases can easily cause harm when misapplied.

As AI systems like PaliGemma 2 continue to push the boundaries of what’s possible, it’s imperative that developers, regulators, and users remain vigilant. Without robust oversight, the line between innovation and exploitation can blur, leaving individuals—and their emotions—vulnerable to misinterpretation and misuse.

For now, the debate rages on. Is emotion-detecting AI a step toward greater empathy or a dangerous overreach of technology? Time, and careful regulation, will tell.
