In the world of technology, few topics generate as much buzz as AI. But what happens when the conversation turns to Gen Z, a generation that’s always under the microscope of mainstream media? Recent surveys reveal that Gen Z has a complex and, at times, contradictory relationship with AI.
A new study from Samsung, which surveyed over 5,000 Gen Zers across France, Germany, Korea, the U.K., and the U.S., paints a picture of a generation deeply intertwined with AI. Nearly 70% of respondents consider AI an indispensable tool for a wide range of activities—from work-related tasks like summarizing documents and conducting research to personal pursuits such as brainstorming and finding inspiration.
However, this enthusiasm is tempered by significant reservations. An earlier report from EduBirdie, a professional essay-writing service, reveals that more than a third of Gen Zers who use AI tools like OpenAI’s ChatGPT at work experience feelings of guilt. Their concerns? That relying on AI could dull their critical thinking skills and stifle creativity.
Of course, these findings should be viewed with a healthy dose of skepticism. Samsung, a major player in AI development and sales, has a vested interest in promoting a positive image of the technology. Similarly, EduBirdie, whose core business competes with AI writing tools, might benefit from fueling anxieties about AI’s impact. After all, AI platforms like ChatGPT are becoming direct competitors in the academic space.
Yet, there’s something to be said for Gen Z’s cautious approach to AI. Unlike previous generations, they appear more aware of the potential downsides of technology. A separate study by the National Society of High School Scholars found that 55% of Gen Zers believe AI will have a more negative than positive impact on society over the next decade. A similar percentage expressed concerns about AI’s potential to erode personal privacy.
And Gen Z’s opinions are not to be taken lightly. According to a report from NielsenIQ, this generation is on track to become the wealthiest ever, with their spending power projected to reach $12 trillion by 2030, surpassing that of baby boomers by 2029.
For AI startups, which often spend as much as 50% of their revenue on hosting, computing, and software, winning over Gen Z could be crucial. Addressing their concerns might be a smart move—if it’s even possible. Given the myriad technical, ethical, and legal challenges AI presents, it remains to be seen whether their fears can truly be alleviated. But one thing’s for sure: it doesn’t hurt to try.
AI News Roundup
OpenAI Teams Up with Condé Nast: OpenAI has struck a deal with Condé Nast, the publisher of iconic outlets like The New Yorker, Vogue, and Wired. The agreement will allow OpenAI to feature stories from these publications in its AI-powered chatbot platform, ChatGPT, and its experimental search engine, SearchGPT. Additionally, Condé Nast’s content will be used to train OpenAI’s AI models.
AI’s Thirst for Water: The AI boom is driving up demand for data centers, which in turn is leading to increased water consumption. In Virginia, home to the world’s largest concentration of data centers, water usage surged by nearly two-thirds between 2019 and 2023, according to the Financial Times.
Reviews are in for Gemini Live and Advanced Voice Mode: Google and OpenAI have both rolled out new AI-powered, voice-centric chat experiences this month—Google’s Gemini Live and OpenAI’s Advanced Voice Mode. These features offer realistic voices and allow users to interrupt the bot at any time, providing a more natural conversation flow.
Trump Shares AI-Generated Taylor Swift Memes: In a controversial move, former President Donald Trump posted a series of AI-generated memes on Truth Social that depicted Taylor Swift and her fans endorsing his candidacy. As legislation surrounding AI-generated content in political campaigns gains traction, this incident highlights the growing concerns over the misuse of AI in politics.
SB 1047 Sparks Debate: The California bill known as SB 1047, designed to preempt AI-related disasters, continues to stir controversy. Congresswoman Nancy Pelosi recently criticized the bill, describing it as “well-intentioned” but “ill-informed,” adding to the ongoing debate over the best way to regulate AI.
Research Paper of the Week
The transformer model, first introduced by Google researchers in 2017, has become the backbone of modern generative AI. This architecture powers some of the most advanced AI models, including OpenAI’s video-generating Sora, the latest Stable Diffusion model, and Flux. It also forms the foundation of text-generating models like Anthropic’s Claude and Meta’s Llama.
In a recent blog post, Google Research revealed that it’s now using transformers to enhance YouTube Music recommendations. The system analyzes user actions, such as track interruptions and playback duration, along with other metadata, to suggest related tracks. According to Google, this approach has significantly reduced the music skip rate and increased the time users spend listening—a clear win for the tech giant, at least by its own metrics.
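To make the idea concrete, here is a toy sketch of the general technique—scoring candidate tracks with a single self-attention layer over a user's recent listening history. Every name, shape, and signal here (the `play_fraction` weights, the random embeddings) is illustrative and assumed; Google has not published the details of its actual system.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                   # embedding dimension (toy-sized)
history = rng.normal(size=(5, d))       # embeddings of 5 recently played tracks
candidates = rng.normal(size=(20, d))   # embeddings of 20 candidate tracks

# Signals like skips and playback duration can be folded in as per-track
# weights before attention: 1.0 = played in full, 0.2 = skipped early.
play_fraction = np.array([1.0, 0.2, 0.9, 1.0, 0.1])
weighted = history * play_fraction[:, None]

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Single-head self-attention: each history track attends to all the others.
Wq = rng.normal(size=(d, d))
Wk = rng.normal(size=(d, d))
Wv = rng.normal(size=(d, d))
Q, K, V = weighted @ Wq, weighted @ Wk, weighted @ Wv
attn = softmax(Q @ K.T / np.sqrt(d))    # (5, 5) attention weights
context = attn @ V                      # (5, d) contextualized history

user_vector = context.mean(axis=0)      # pool into a single taste vector
scores = candidates @ user_vector       # dot-product relevance per candidate
recommended = np.argsort(scores)[::-1][:3]  # indices of the top-3 tracks
print(recommended)
```

A production system would learn the projection matrices from engagement data and use many layers and heads, but the core move—letting attention decide which past listens matter for the next recommendation—is the same.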
Model of the Week
This week’s standout AI model is OpenAI’s GPT-4o, which now offers the ability to be fine-tuned on custom datasets. On Tuesday, OpenAI announced that developers can now tailor the model to follow domain-specific instructions and adjust the tone of its responses. While fine-tuning isn’t a cure-all, OpenAI’s blog post emphasizes the substantial impact it can have on model performance.
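For developers curious what that looks like in practice, fine-tuning starts with a training file in OpenAI's documented chat-format JSONL—one conversation per line. The example rows below (a legal-tone assistant) are hypothetical, and the commented-out API calls assume the official `openai` Python package and a valid API key.

```python
import json

# Hypothetical training examples showing the chat-format JSONL that
# OpenAI's fine-tuning endpoint expects: one {"messages": [...]} per line.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Answer in a formal legal tone."},
            {"role": "user", "content": "Can I break a lease early?"},
            {"role": "assistant", "content": "Early termination is generally governed by the lease's break clause and applicable local law."},
        ]
    },
    # ...more examples; a few dozen at minimum is typical.
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# With the openai package and an API key, launching the job would look like:
#   from openai import OpenAI
#   client = OpenAI()
#   file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   client.fine_tuning.jobs.create(training_file=file.id, model="gpt-4o-2024-08-06")
```

The fine-tuned model that comes back is then queried like any other model, just under its new model ID.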
Grab Bag
In the latest chapter of the ongoing legal battle over AI, Anthropic faces a new class-action lawsuit. A group of authors and journalists filed a complaint in federal court this week, accusing the company of training its AI chatbot Claude on pirated e-books and articles, which they claim amounts to “large-scale theft.”
The lawsuit alleges that Anthropic used The Pile, a dataset that includes a vast collection of pirated e-books known as Books3, to train Claude. The plaintiffs are seeking damages and a permanent injunction to prevent Anthropic from using their copyrighted works without permission.
As the debate over AI’s use of copyrighted material intensifies, this case underscores the broader challenges of balancing innovation with intellectual property rights.