Meta’s latest AI-powered Ray-Ban smart glasses come with a discreet camera that can take photos not only when prompted by the wearer but also when activated by specific AI trigger words, like “look.” This raises significant privacy concerns, since the glasses can capture a vast array of images, both intended and inadvertent. More troubling still, Meta refuses to clarify how these images are used, specifically whether they are leveraged to train the company’s AI models.
When asked directly if Meta intends to use photos from Ray-Ban Meta glasses to train its AI, the company offered no definitive answer.
“That’s not something we typically share externally,” said Meta spokesperson Mimi Huggins. When pressed for further clarification, Huggins responded, “we’re not saying either way.”
This lack of transparency is particularly alarming, given the passive nature of the Ray-Ban Meta’s new AI-powered features. These glasses are equipped with the capability to capture numerous photos without the user necessarily being aware, raising substantial questions about privacy. Last week, TechCrunch reported that Meta is preparing to roll out a real-time video feature for the smart glasses, enabling them to stream a series of images—or effectively live video—directly into a multimodal AI model. This allows the AI to answer questions about the user’s surroundings in real time, with minimal delay.
Photos You May Not Know You’re Taking
The problem is not just the volume of images being taken, but that users may not even be fully conscious of capturing them. For instance, imagine asking the smart glasses to help you choose an outfit by scanning your closet. The glasses would take numerous photos of your entire room, including everything in it, and upload them to a cloud-based AI model for processing. This happens with little to no user intervention, raising a critical question: What happens to those images after they’ve served their purpose?
On this question, Meta’s silence has been deafening.
Given the sensitivity of this information, you would expect the tech giant to reassure users by stating outright that these photos would remain private, stored locally on the device, or only temporarily used for specific functions. Instead, Meta’s non-committal responses leave open the unsettling possibility that these personal, and potentially intimate, images might be used in ways that users did not anticipate.
A Troubling History with User Data
Meta has already made it clear that it trains its AI models using publicly available data from platforms like Instagram and Facebook. The company has taken a broad approach to defining “publicly available data,” asserting that anything posted on these platforms can be used for AI training purposes. While that policy has raised eyebrows, it’s something that users of social media might grudgingly accept, given the public nature of their posts.
However, what you see through your Ray-Ban smart glasses is not “publicly available” in the same sense. The images captured by these devices often reflect your personal surroundings, your home, or other private spaces—and should warrant a higher standard of privacy.
Despite these concerns, Meta is not providing clear answers. The company has stopped short of confirming that it uses Ray-Ban Meta camera footage for AI training, but its refusal to deny it outright leaves consumers in a precarious position.
A Contrast in Data Policies
Interestingly, other companies working in the AI space have set more definitive boundaries regarding user data. Anthropic, for instance, states that it never trains its AI models on a customer’s inputs or outputs. Similarly, OpenAI, the organization behind ChatGPT, has committed to not using API inputs or outputs for AI training without user consent.
This contrast only serves to highlight Meta’s opacity. While companies like Anthropic and OpenAI are establishing firm policies that prioritize user privacy, Meta seems content with keeping users in the dark.
The Larger Implications
Wearing the Ray-Ban Meta glasses means wearing a camera that can constantly take pictures—whether or not you’re actively aware of it. As the public saw with Google Glass, this kind of surveillance creates discomfort, not only for the wearer but also for people in their vicinity. It’s reasonable to expect Meta to provide clear assurances that the images captured by these smart glasses will remain private and won’t be repurposed for AI training without explicit user consent.
But that’s not the reassurance Meta is offering. Instead, the company’s vague responses do little to quell privacy concerns.
While the advent of AI-powered wearables like Ray-Ban Meta opens exciting possibilities for integrating technology into daily life, it also ushers in complex questions about privacy, data ownership, and consent. With a camera on your face and AI behind the scenes, the lines between public and private spaces are blurring more than ever.
Until Meta clarifies its stance on how these personal images are used, the uneasy question remains: Are we unwittingly feeding the AI of tomorrow with the snapshots of our lives today?
We’ve reached out to Meta for further comment and will update this story if the company provides additional clarification.