Friday, 16 August 2024

AI disclaimers: What marketing leaders need to know

AI-generated content has become pervasive on social media in a relatively short time—creating a lot of gray area when it comes to brands using AI technology responsibly.

Some platforms, like Meta, have rolled out AI content disclaimers. In May 2024, the company began labeling posts they detected were AI-generated with a “made with AI” tag. Considering a recent Q2 2024 Sprout Pulse Survey found that 94% of consumers believe all AI content should be disclosed, this AI disclaimer seemed like an apt solution.

[Stat call-out card: 94% of consumers believe all AI-generated content should be disclosed]

But there were unexpected roadblocks. Artists and creators claimed the label misidentified their original work as AI-generated. Marketers who used Photoshop’s AI tools only for light retouching said the label was misleading. Meta eventually clarified the use case of AI disclaimers and created more nuanced, creator-selected labels.

Key questions still hang in the air. Who is responsible for enforcing the ethical use of AI? Do platforms or marketers bear the responsibility of consumer transparency?

In this guide, we weigh in on the growing debate around AI disclaimers, and break down how platforms and brands currently approach them.

The growing debate around AI disclaimers

While almost all consumers agree AI content should be disclosed, they’re split on who should do the disclosing. The Q2 2024 Sprout Pulse Survey found that 33% believe it’s brands’ responsibility while 29% believe it’s up to social networks. Another 17% think brands, networks and social media management platforms are all responsible.

According to digital marketing consultant Evangeline Sarney, this divide is caused by the relative infancy of AI-generated content and the ambiguity surrounding it. “First, we need to consider what we are defining as AI content. If Adobe Generative Fill was used to add water droplets to an existing image, is disclosure necessary? With the backlash that many companies have faced from AI-generated campaigns, it’s easy to see why they’d hesitate to disclose. AI content isn’t the norm, and there aren’t clear guidelines. There isn’t a one-size-fits-all approach to labeling that will work for every scenario.”

What governing bodies say

Sarney’s point is underscored by the fact that the US Federal Communications Commission (FCC) has issued AI disclosure requirements for certain advertisements, but has yet to release guidance for AI-generated content on social media. Some states have introduced their own legislation to protect consumer privacy in the absence of federal regulation.

Abroad, it’s a different story. The EU AI Act formally entered into force in August 2024; it aims to stop the spread of misinformation and calls on those deploying generative AI systems to introduce disclosures.

The act says: “Deployers of generative AI systems that generate or manipulate image, audio or video content constituting deep fakes must visibly disclose that the content has been artificially generated or manipulated. Deployers of an AI system that generates or manipulates text published with the purpose of informing the public on matters of public interest must also disclose that the text has been artificially generated or manipulated.”

However, the AI Act stipulates that content which has undergone human review, and for which a human holds editorial responsibility, does not need to be disclosed. The act also categorizes AI content by risk, focusing most heavily on “unacceptable” and “high-risk” scenarios (e.g., exploitation, threats to people’s safety and privacy, individual policing).

While this act could be a step toward universal AI disclosure standards, it still leaves a lot of room for interpretation and needs further clarification—especially for marketers and brands.

Consumers’ ethical concerns

Where legislation falls short, consumer expectations (and concerns) can guide brand content creation. For example, the Q2 2024 Sprout Pulse Survey found that 80% of consumers agree AI-generated content will lead to misinformation on social, and 46% say they’re less likely to buy from a brand that posts AI content. These two stats could be correlated, according to Sarney.

[Stat call-out card: 46% of consumers are less likely to buy from a brand that posts AI content]

“Consumers don’t want to feel they are being lied to, or like a brand is trying to hide something. If an image is generated with AI—and clearly looks like it—but isn’t disclosed, a consumer may question it. To maintain trust and authenticity, brands should build out frameworks for what needs to be disclosed and when.”

She also urges marketers to think critically about why they’re using AI. Is it to further their creative capabilities and speed up manual processes?

Sarney recalled a recent incident where a lifestyle magazine that had previously been criticized for their lack of diversity created an AI-generated BIPOC staff member. “Their Instagram account was flooded with negative feedback questioning why the company couldn’t just hire a real POC. Commenters called out the shrinking number of jobs for the BIPOC community within the fashion industry and many wondered why—instead of making a fake fashion editor—the company didn’t just hire one.”

There are many use cases that fit under the AI-generated content umbrella, and what makes sense to disclose will vary depending on your brand, industry and risk to the public. But, in general, brands should steer clear of creating AI-generated humans (especially to represent children, the BIPOC community and disabled people) without specifically disclosing that they’ve done so and why. They should almost always avoid creating AI content about current events, or content heavily inspired by others’ intellectual property. These areas pose the greatest AI risks to brand health and, more importantly, to public safety.
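To make that kind of framework concrete, here’s a minimal sketch of how a team might encode these rules as a pre-publish check. The categories, thresholds and return strings are illustrative assumptions, not an industry standard; tune them to your own brand guidelines.

```python
from dataclasses import dataclass

@dataclass
class AIContentReview:
    """Pre-publish review of a piece of AI-assisted content."""
    depicts_realistic_humans: bool  # AI-generated people, voices or likenesses
    covers_current_events: bool     # news, elections, health, finance
    uses_others_ip: bool            # heavily inspired by third-party work
    minor_retouching_only: bool     # e.g., generative fill for light cleanup

def disclosure_decision(review: AIContentReview) -> str:
    """Map a content review to a disclosure action, mirroring the guidance above."""
    if review.covers_current_events or review.uses_others_ip:
        return "avoid: greatest risk to brand health and public safety"
    if review.depicts_realistic_humans:
        return "disclose: label the content and explain its purpose"
    if review.minor_retouching_only:
        return "optional: light enhancement rarely needs a label"
    return "disclose: default to transparency when in doubt"

# Example: a fully AI-generated image of a person
print(disclosure_decision(AIContentReview(
    depicts_realistic_humans=True,
    covers_current_events=False,
    uses_others_ip=False,
    minor_retouching_only=False,
)))
```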

How different networks handle AI disclaimers

Amid the growing debate about AI disclaimers and the surge of AI-generated content overall, social networks are taking steps to stifle the spread of misinformation and maintain trust in their platforms, primarily by making it easier for creators to clearly label their content as AI-altered. Here are the ways each network currently tackles AI disclaimers, and what that means for brands.

Meta

As mentioned, Meta changed their AI disclaimer label in July 2024 to better align with expectations of consumers and brands alike. They describe their new “AI info” label in their blog post: “While we work with companies across the industry to improve the process so our labeling approach better matches our intent, we’re updating the ‘Made with AI’ label to ‘AI info’ across our apps, which people can click for more information.”

The company has begun adding these labels to content when they detect industry-standard AI image indicators or when people disclose that they’re uploading AI-generated content. When users click the label, they can see how AI may have been used to create the image or video.

[Image: the “AI info” disclaimer at the top of an AI-generated Facebook image]
Source: Meta
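One such industry-standard indicator is IPTC’s “Digital Source Type” metadata field, whose value “trainedAlgorithmicMedia” marks fully AI-generated media. Because that value sits in an image’s XMP packet as plain text, a rough presence check needs nothing beyond the standard library. This is a heuristic sketch for auditing your own assets (the file name is a placeholder), not a description of how Meta’s detection actually works:

```python
def declares_ai_generated(path: str) -> bool:
    """Heuristically check for IPTC's AI-generated digital source type marker.

    Tools that follow the IPTC standard embed the value
    "trainedAlgorithmicMedia" in the image's XMP metadata packet,
    which is stored inside the file as plain UTF-8 text.
    """
    with open(path, "rb") as f:
        return b"trainedAlgorithmicMedia" in f.read()

print(declares_ai_generated("generated_post.png"))  # placeholder file name
```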

YouTube

YouTube unveiled a tool in their Creator Studio that makes it easy for creators to self-select when their video has been meaningfully altered with generative AI, or is synthetic content that seems real. Creators are required to disclose AI-generated content when it’s so realistic a person could easily mistake it for a real person, place or event, according to YouTube’s Community Guidelines.

As YouTube describes, “Labels will appear within the video description, and if content is related to sensitive topics like health, news, elections or finance, we will also display a label on the video itself in the player window.”

[Image: an “Altered or synthetic content” AI disclaimer on top of a YouTube Short]
Source: YouTube

While YouTube mandates that creators self-disclose altered or synthetic content in their videos, the platform may also apply the label in cases where this disclosure hasn’t occurred, especially when the content touches on the sensitive topics mentioned above.

TikTok

TikTok’s creator label for AI content allows users to disclose when posts are completely AI-generated or significantly AI-edited. The label makes it easier for creators to comply with TikTok’s Community Guidelines’ synthetic media policy, which they introduced in 2023.

The policy requires people to label AI-generated posts that contain realistic images, audio or video, in order to help viewers contextualize the video and prevent the potential spread of misleading content.

[Image: TikTok’s AI content labeling guidance alongside a TikTok video with a “Creator labeled as AI-generated” disclaimer]
Source: TikTok

If creators don’t self-disclose AI-generated content, TikTok may automatically apply an “AI-generated” label to content the platform suspects was edited or created with AI.

LinkedIn

In May 2024, LinkedIn partnered with the Coalition for Content Provenance and Authenticity (C2PA) to develop technical standards for clarifying the origins of digital content, including AI-generated content. Rather than strictly labeling content as AI-generated—like most platforms have done—LinkedIn’s approach labels the provenance of any cryptographically signed content, AI-generated or not.

The platform explains, “Image and video content that is cryptographically signed using C2PA Content Credentials will be noted with the C2PA icon. Clicking on this label will display the content credential and available metadata, such as content source (e.g., camera model noted or AI tool noted to have been used to generate all or part of the image), and issued by, to and on information.”

Note that this verification only works if your content already contains C2PA credentials. If it doesn’t, it’s best to disclose AI-generated content in your caption, if that aligns with your brand guidelines.
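For teams that want to check whether an image already carries Content Credentials before uploading, here’s a minimal sketch that detects the presence of a C2PA manifest in a JPEG (manifests travel in APP11 marker segments as JUMBF boxes). It only checks for presence; cryptographically verifying the credential requires a full C2PA SDK, and the file name is a placeholder:

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Detect (but do not verify) an embedded C2PA manifest in a JPEG.

    C2PA Content Credentials in JPEG files are carried in APP11
    (0xFFEB) marker segments as JUMBF boxes labeled "c2pa".
    """
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":  # missing SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xFF:   # fill byte; keep scanning for the real marker
            i += 1
            continue
        if marker == 0xDA:   # start of scan: header segments are done
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        segment = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

print(has_c2pa_manifest("campaign_image.jpg"))  # placeholder file name
```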

AI disclaimer examples from 3 brands

With most platforms starting to offer AI disclaimer labels, how you disclose AI-generated content matters less than that you do, whether through a platform’s label, a line in the caption or a watermark on an image or video. Disclosure keeps you compliant with community guidelines (preventing your content from being flagged or deleted) and maintains trust with your followers.

Here are three brands that create AI-generated content, and how they opt to disclose it.

Meta

On Instagram, Meta identifies their AI-generated images and videos by including the hashtag #ImaginedwithAI in the caption and an “Imagined with AI” watermark in the lower left corner of their photos.

The company also tells a story about the use of AI in their captions, and encourages their followers to try specific prompts in their Meta AI platform (like “culinary mashups,” pictured in this post).

[Image: an Instagram post by Meta of AI-generated scenes interpreting food names literally, like bread boats]

MANGO

The Spanish fashion retailer MANGO unveiled their first completely AI-generated campaign on LinkedIn. Their statement was less disclosure-focused, instead emphasizing the technological advancements that made the campaign possible. In their post caption, the brand explained why they decided to create an entirely AI-generated campaign, and how it impacts their business strategy.

[Image: a LinkedIn post from MANGO explaining their entirely AI-generated campaign, alongside a photo from the campaign]

Toys“R”Us

Toy store Toys“R”Us recently unveiled a one-minute, entirely AI-generated video about the company’s origin story. The brand claims the video is the first-ever brand film created with OpenAI’s Sora technology, which they explained in their YouTube caption and press release.

[Video: the AI-generated brand film about Toys“R”Us’ origin story]

Since the film’s launch at the Venice Film Festival, Toys“R”Us has promoted its AI origins—proving that disclosures can be potent opportunities for creating brand buzz. Even if AI-generated content stirs up negative sentiment, Toys“R”Us is proof that (sometimes) all press is good press.

Disclose at your audience’s discretion

As AI-generated content becomes more prevalent on social media, brands need to navigate the balance between innovation and transparency. That includes creating brand guidelines that define when AI disclaimers are necessary. While platforms are implementing individual policies and some governing agencies are stepping in, the bulk of the responsibility still falls on brands.

When deciding when it’s appropriate for your brand to make AI disclosures, think of your audience. Disclosures are essential for maintaining credibility when AI significantly manipulates reality or involves sensitive topics. However, minor enhancements may not require explicit labeling.

By understanding these nuances, you can use AI responsibly and in a way that extends your team’s bandwidth and creativity (rather than creating a brand crisis).

Looking for more ways you can ethically weave AI into your team’s workflows? Read how CMOs are using AI in their marketing strategies.
