Meta Changes Its Approach To AI Labels On Photos After Backlash

Written by Matt Milano

Meta announced it is changing its approach to its “Made with AI” labels on photos after it incorrectly identified photos taken by photographers as AI-generated.

Labeling AI content has become a growing concern for online platforms and regulators alike, as AI-generated content has become realistic enough to be used to create false narratives. In April, Meta announced plans to mark AI content with a “Made with AI” label. Unfortunately, its algorithm for identifying AI content had issues, improperly labeling photos taken by human photographers.

The company says it has made changes to address the issue. In its announcement, Meta explained:

We want people to know when they see posts that have been made with AI. Earlier this year, we announced a new approach for labeling AI-generated content. An important part of this approach relies on industry standard indicators that other companies include in content created using their tools, which help us assess whether something is created using AI.

Like others across the industry, we’ve found that our labels based on these indicators weren’t always aligned with people’s expectations and didn’t always provide enough context. For example, some content that included minor modifications using AI, such as retouching tools, included industry standard indicators that were then labeled “Made with AI.” While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the “Made with AI” label to “AI info” across our apps, which people can click for more information.

According to CNET, photographer Pete Souza said cropping tools appear to be one of the culprits. Because such tools write additional metadata into an image when it is saved, Meta’s algorithm was reading that metadata as an indication the images were AI-generated.
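To make that failure mode concrete, here is a minimal Python sketch of what metadata-based detection looks like: it simply scans a file’s raw bytes for known provenance markers. The marker strings are real identifiers from the IPTC DigitalSourceType vocabulary and the C2PA standard, but the scanning logic itself is a hypothetical simplification for illustration, not Meta’s actual system. The weakness is immediately visible: any editor that writes one of these markers during a routine crop or retouch will trip the check.

```python
# Minimal sketch of metadata-based AI detection (illustrative only, not
# Meta's algorithm). Platforms look for industry-standard provenance
# indicators embedded in a file's metadata; if an editor writes one of
# these during an ordinary crop or retouch, the image gets flagged.

# Real marker strings from the IPTC DigitalSourceType vocabulary and the
# C2PA standard; the byte-scan approach is a hypothetical simplification.
AI_INDICATORS = [
    b"trainedAlgorithmicMedia",               # fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",  # human media edited with AI tools
    b"c2pa",                                  # C2PA content-credentials manifest
]

def looks_ai_generated(path: str) -> bool:
    """Naive check: scan the raw file bytes for known provenance markers."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in AI_INDICATORS)

if __name__ == "__main__":
    import sys
    for image_path in sys.argv[1:]:
        verdict = "AI indicators found" if looks_ai_generated(image_path) else "no indicators"
        print(f"{image_path}: {verdict}")
```

A check like this cannot tell whether the AI touched every pixel or merely assisted a one-click retouch, which is exactly the gap Meta says its new “AI info” label is meant to paper over while the underlying standards improve.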

The entire issue demonstrates the growing challenges associated with correctly identifying AI-generated content. For years, experts have warned about the potential havoc deepfakes could cause, impacting everything from people’s personal lives to business to politics.

Interestingly, OpenAI shuttered its own AI-content detection tool in 2023, saying at the time that such tools don’t work:

While some (including OpenAI) have released tools that purport to detect AI-generated content, none of these have proven to reliably distinguish between AI-generated and human-generated content.

It remains to be seen if Meta will be able to reliably identify AI-generated images, or if it will suffer the same issues that led OpenAI to throw in the towel.
