The problem was caused by an error that Meta says it is now fixing, a spokesperson confirmed.
In a surprising and unsettling development, Meta has confirmed that Instagram is grappling with a technical glitch that has been inundating users’ feeds with violent and sexually explicit Reels. The unexpected surge of inappropriate content has left many users disturbed, prompting the tech giant to issue an apology and assure the public that it is working to resolve the issue.
Meta Responds to Widespread Concerns
In a statement to CNBC, a Meta spokesperson acknowledged the issue, saying: “We are fixing an error that caused some users to see content in their Instagram Reels feed that should not have been recommended. We apologize for the mistake.” While the company did not disclose the exact nature of the error, it has sparked significant backlash across social media platforms.
Users have flooded sites like Reddit and X (formerly Twitter) to share their experiences and ask whether others were facing the same problem. Reports describe Reels feeds overtaken by content showing school shootings, stabbings, beheadings, violent assaults, and explicit sexual material; some users even reported encountering videos depicting sexual assault.
Sensitive Content Controls Failing to Shield Users
One of the most alarming aspects of this glitch is that Instagram’s Sensitive Content Control, designed to filter out distressing and explicit material, appeared to be ineffective. Even users who had enabled the strictest content settings reported seeing back-to-back videos of gore, nudity, and extreme violence. What’s more, users claimed they continued to encounter this content even after repeatedly selecting the “Not Interested” option on such Reels.
Social media algorithms typically personalize content based on users’ interactions, showing them videos similar to what they like, comment on, or watch. However, this glitch seemingly bypassed those patterns, pushing highly disturbing content to users who had never engaged with anything remotely similar. For many, it felt like Instagram’s safeguards were completely disabled, exposing them to content that should never have made it onto the platform in the first place.
Violation of Instagram’s Community Guidelines
Instagram’s policies explicitly prohibit certain types of content to maintain a safe online environment. According to its guidelines, Meta removes the most extreme graphic content and places warning labels on other sensitive material so users can decide whether they want to view it. The platform’s rules also ban real photographs and videos of nudity, sexual activity, and severe violence.
Given the nature of the content surfacing during this glitch, it appears that not only were Instagram’s recommendation algorithms malfunctioning, but its content moderation systems may have also failed to catch and remove harmful videos before they reached users’ feeds.
The Bigger Picture: Trust and Platform Safety
This incident raises broader concerns about content moderation and platform safety. Instagram, with over 2 billion monthly active users, plays a massive role in the digital landscape. People use the platform to connect with friends, discover new interests, and build communities — but when the algorithm malfunctions, it can expose users to traumatic content that harms mental well-being.
For creators and brands, the glitch could also undermine trust in Instagram as a reliable space for content sharing and marketing. Influencers, small businesses, and everyday users alike rely on Instagram’s ability to curate a safe and positive user experience. When that trust is broken, it can have lasting repercussions for the platform’s reputation.
Meta’s Path to Redemption
While Meta has apologized and committed to fixing the error, many users are calling for greater transparency and stronger protections against similar incidents in the future. They want to know exactly what went wrong, how the glitch occurred, and what measures Meta is putting in place to prevent it from happening again.
Addressing this issue may require not just a quick technical fix but a broader reassessment of Instagram’s content moderation systems, algorithmic oversight, and user reporting tools. Platforms of Instagram’s scale must prioritize user safety, balancing content discovery with robust safeguards against harmful material.
In the meantime, users are advised to double-check their content settings, make use of Instagram’s reporting features, and take breaks from the platform if they encounter distressing content. Meta, for its part, must act swiftly and decisively — not only to correct this glitch but to restore user confidence in Instagram as a safe space for digital expression and community building.
Let’s hope this serves as a wake-up call for the tech industry to reinforce content moderation practices and prioritize user safety above all else.
What do you think about this incident? Have you experienced inappropriate Reels on your feed? Let’s discuss it in the comments.