In the AI Era, Transparency Is the New Currency on Social Media

Navigating the rise of AI-generated content will take radical honesty.

EXPERT OPINION BY JASON MITCHELL, CEO, MOVEMENT STRATEGY @MOVEMENTS

Whether you like it or not, AI-generated content is inevitable. The benefits for content production and efficiency are too great for cost-conscious brands to ignore. So the focus should be on how to use it more thoughtfully, especially since, as we're discovering in real time, AI-generated content also comes with some serious downsides.

For one, if AI becomes the default for everything, feeds will feel more synthetic, mirroring the overly manicured tone and aesthetic that have become AI’s hallmark. When too much content feels machine-generated, the platforms start to lose texture. Social media only works because it feels alive. If it starts to feel automated at scale, people will disengage or shift toward smaller, more private spaces where content feels more human. The health of the platforms depends on human energy. If that erodes, so does the value.

But also, trust will become more fragile. Users already question what's real. The more AI-generated content floods feeds, the more skeptical people become. That doesn't mean engagement disappears, but it becomes more transactional: less emotional, with weaker loyalty.

Brands should take this seriously. Those that protect trust will come out on top every single time. Those that treat AI like a volume machine will burn out their audience. It's that straightforward.

That’s why brands have a responsibility to disclose when they are using AI-generated content on social, especially when any content could reasonably be mistaken for something real. If you’re creating AI humans, AI testimonials, AI environments that look like lived experiences, that needs context. The more realistic AI gets, the more disclosure matters. So, brands need to lead the way in embracing a little radical transparency—protecting the trust that they’ve worked so hard to build. 

Now, not every use of AI needs a huge disclaimer. If AI is being used behind the scenes for editing, brainstorming, or production support, that’s different. But when AI changes the reality of what’s being presented, brands should say so. To most brands, this probably sounds like a risk, but transparency comes with its own benefits.

For example, H&M's recent use of AI is noteworthy because the company didn't sneak it in, hoping audiences wouldn't notice. Instead, it has been very public about its AI journey, working with models and agencies to build "digital twins" of about 30 real models for use in social and advertising campaigns. Just as important, it did so with consent and labeling so audiences aren't misled, and it worked out compensation deals that let models retain ownership of their likenesses. Not everyone liked it, but it was a step toward ethical transparency.


© Inc.com