Tuesday, 26 September 2023

How to craft an effective AI use policy for marketing

Technology, like art, stirs emotions and sparks ideas and discussions. The emergence of artificial intelligence (AI) in marketing is no exception. While millions are enthusiastic about embracing AI to achieve greater speed and agility within their organizations, others remain skeptical, a common pattern in the early phases of tech adoption cycles.

In fact, the pattern mirrors the early days of cloud computing, when the technology felt like uncharted territory. Most companies were uncertain about the groundbreaking tech, concerned over data security and compliance requirements. Others jumped on the bandwagon without truly understanding migration complexities or associated costs. Yet today, cloud computing is ubiquitous. It has evolved into a transformative force, from facilitating remote work to streaming entertainment.

As technology advances at breakneck speed and leaders recognize AI’s value for business innovation and competitiveness, crafting an organization-wide AI use policy has become essential. In this article, we shed light on why time is of the essence for establishing a well-defined internal AI usage framework and the key elements leaders should factor into it.

Please note: The information provided in this article does not, and is not intended to, constitute formal legal advice. Please review our full disclaimer before reading any further.

Why organizations need an AI use policy

Marketers are already investing in AI to increase efficiency. In fact, The State of Social Report 2023 shows 96% of leaders believe AI and machine learning (ML) capabilities can help them significantly improve decision-making processes. Another 93% aim to increase AI investments to scale customer care functions over the next three years. Brands that actively adopt AI tools are likely to gain an advantage over those that hesitate.

[Image: A data visualization call-out card stating that 96% of business leaders believe artificial intelligence and machine learning can significantly improve decision-making.]

Given this steep upward trajectory in AI adoption, it is equally necessary to address the risks brands face when no clear internal AI use guidelines are set. To effectively manage these risks, a company’s AI use policy should center on three key elements:

Vendor risks

Before integrating any AI vendors into your workflow, it is important for your company’s IT and legal compliance teams to conduct a thorough vetting process. This is to ensure vendors adhere to stringent regulations, comply with open-source licenses and appropriately maintain their technology.

Sprout’s Director and Associate General Counsel, Michael Rispin, provides his insights on the subject. “Whenever a company says they have an AI feature, you must ask them: How are you powering that? What is the foundational layer?”

It’s also crucial to pay careful attention to the terms and conditions (T&C) as the situation is unique in the case of AI vendors. “You will need to take a close look at not only the terms and conditions of your AI vendor but also any third-party AI they are using to power their solution because you’ll be subject to the T&Cs of both of them. For example, Zoom uses OpenAI to help power its AI capabilities,” he adds.

Mitigate these risks by ensuring close collaboration among legal teams, functional managers and IT teams, so they choose the appropriate AI tools for employees and vet vendors closely.

AI input risks

Generative AI tools accelerate several functions such as copywriting, design and even coding. Many employees are already using free AI tools as collaborators to create more impactful content or to work more efficiently. Yet, one of the biggest threats to intellectual property (IP) rights arises from inputting data into AI tools without realizing the consequences, as a Samsung employee realized only too late.

“They (Samsung) might have lost a major legal protection for that piece of information,” Rispin says regarding Samsung’s recent data leak. “When you put something into ChatGPT, you’re sending the data outside the company. Doing that means it’s technically not a secret anymore and this can endanger a company’s intellectual property rights,” he cautions.

Educating employees about the associated risks and clearly defining approved use cases for AI-generated content helps alleviate this problem. It also improves operational efficiency across the organization without compromising security.

AI output risks

Similar to input risks, outputs from AI tools pose a serious threat if they are used without being checked for accuracy or plagiarism.

To gain a deeper understanding of this issue, it is important to delve into the mechanics of AI tools powered by generative pre-trained transformer (GPT) models. These tools rely on large language models (LLMs) that are typically trained on publicly available internet content, including books, dissertations and artwork. In some cases, this means they’ve accessed proprietary data or potentially illegal sources on the dark web.

These AI models learn and generate content by analyzing patterns in the vast amounts of data they were trained on, which makes it likely that their output is not entirely original. If an employee publishes that output without checking it for plagiarism, the brand risks reputational damage and legal consequences.
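To see why generated text can echo its training data, consider a deliberately tiny sketch in Python. This is not how production LLMs work (they use transformer networks trained on massive corpora), but the underlying point carries over: generation recombines patterns observed during training, so fragments of the training text can resurface verbatim. The corpus and function names here are hypothetical, for illustration only.

```python
# Toy illustration (not a production LLM): a bigram model "trained" on a
# tiny corpus. Because it can only recombine word-to-word patterns it has
# seen, its output often reproduces stretches of the training text verbatim.
import random
from collections import defaultdict

corpus = "the quick brown fox jumps over the lazy dog the quick brown fox sleeps".split()

# Count which word follows each word in the training data.
next_words = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Sample a continuation by following observed word-to-word transitions."""
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the quick brown fox jumps over the lazy dog"
```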

In fact, there is an active lawsuit filed by Sarah Silverman against OpenAI, the maker of ChatGPT, for ingesting and providing summaries of her book even though it is not freely available to the public. Other well-known authors, including George R.R. Martin and John Grisham, are also suing OpenAI over copyright infringement. Considering these instances and future repercussions, the U.S. Federal Trade Commission has set a precedent by forcing companies to delete AI data gathered through unscrupulous means.

Another major problem with generative AI tools like ChatGPT is that they rely on older training data, which can lead to inaccurate output. If there has been a recent change in an area you’re researching with AI, the tool has likely missed it, because retraining models on new information takes time. Plausible-but-stale output like this is harder to detect than something wholly inaccurate.

To meet these challenges, you should have an internal AI use framework that specifies the scenarios where plagiarism and accuracy checks are necessary when using generative AI. This approach is especially helpful when scaling AI use and integrating it into the larger organization.

As with all things innovative, risks exist. But they can be navigated safely through a thoughtful, intentional approach.

What marketing leaders should advocate for in an AI use policy

As AI tools evolve and become more intuitive, a comprehensive AI use policy will ensure accountability and responsibility across the board. Even the Federal Trade Commission (FTC) has minced no words, cautioning AI vendors to practice ethical marketing in a bid to stop them from overpromising capabilities.

Now is the time for leaders to initiate a foundational framework for strategically integrating AI into their tech stack. Here are some practical factors to consider.

[Image: A data visualization card listing what marketing leaders should advocate for in an AI use policy: accountability and governance, planned implementation, clear use cases, intellectual property rights and disclosure details.]

Accountability and governance

Your corporate AI use policy must clearly describe the roles and responsibilities of the individuals or teams entrusted with AI governance and accountability in the company. Responsibilities should include implementing regular audits to ensure AI systems are compliant with all licenses and deliver on their intended objectives. It’s also important to revisit the policy frequently so it stays up to date with new developments in the industry, including legislation and laws that may apply.

The AI policy should also serve as a guide to educate employees, explaining the risks of inputting personal, confidential or proprietary information into an AI tool. It should also discuss the risks of using AI outputs unwisely, such as publishing AI outputs verbatim, relying on AI for advice on complex topics or failing to sufficiently review AI outputs for plagiarism.

Planned implementation

A smart way to mitigate data privacy and copyright risks is to introduce AI tools across the organization in a phased manner. As Rispin puts it, “We need to be more intentional, more careful about how we use AI. You want to make sure when you do roll it out, you do it periodically in a limited fashion and observe what you’re trying to do.” Implementing AI gradually in a controlled environment enables you to monitor usage and proactively manage hiccups, enabling a smoother implementation on a wider scale later on.

This is especially important as AI tools also provide brand insights vital for cross-organizational teams like customer experience and product marketing. By introducing AI strategically, you can extend its efficiencies to these multi-functional teams safely while addressing roadblocks more effectively.

Clear use cases

Your internal AI use policy should list all the licensed AI tools approved for use. Clearly define the purpose and scope of using them, citing specific use cases. For example, document which tasks are low risk, which are high risk and which should be avoided entirely, as sketched in the example at the end of this section.

Low-risk tasks that are unlikely to harm your brand might include the social media team using generative AI to draft more engaging posts or captions, or customer service teams using AI-assisted copy for more personalized responses.

In a similar vein, the AI use policy should specify high-risk examples where the use of generative AI should be restricted, such as giving legal or marketing advice, handling client communications, creating product presentations or producing marketing assets that contain confidential information.

“You want to think twice about rolling it out to people whose job is to deal with information that you could never share externally, like your client team or engineering team. But you shouldn’t just do all or nothing. That’s a waste because marketing teams, even legal teams and success teams, a lot of back office functions basically—their productivity can be accelerated by using AI tools like ChatGPT,” Rispin explains.
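One way to make these use cases enforceable rather than aspirational is to keep the approved-tool list and risk tiers in a simple, machine-readable registry that internal tooling (a request form, a chatbot, a pre-publish check) can consult. The Python sketch below is purely illustrative; every tool name, task label and tier shown is hypothetical and should be replaced with your own policy’s decisions.

```python
# Illustrative only: a hypothetical registry of approved AI tools and risk
# tiers. The tool names, task labels and rules are examples, not guidance.
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    name: str
    approved_uses: set[str]          # low-risk tasks the policy allows
    restricted_uses: set[str]        # high-risk tasks that are banned or need sign-off
    requires_plagiarism_check: bool  # per the accuracy/plagiarism guidance above
    requires_human_review: bool = True

REGISTRY = {
    "example-llm": ApprovedTool(
        name="example-llm",
        approved_uses={"social_post_draft", "caption_ideas", "support_reply_draft"},
        restricted_uses={"legal_advice", "client_communication", "confidential_assets"},
        requires_plagiarism_check=True,
    ),
}

def is_use_allowed(tool: str, task: str) -> bool:
    """Return True only if the tool is approved and the task is an approved use."""
    entry = REGISTRY.get(tool)
    return entry is not None and task in entry.approved_uses and task not in entry.restricted_uses

if __name__ == "__main__":
    print(is_use_allowed("example-llm", "social_post_draft"))  # True
    print(is_use_allowed("example-llm", "legal_advice"))       # False
```

Even if you never automate the check, writing the policy down in this structured form forces the boundary between low-risk and high-risk work to be explicit.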

Intellectual property rights

Considering the growing capacity of generative AI and the need to produce complex content quickly, your company’s AI use policy should clearly address the threat to intellectual property rights. This is critical because the use of generative AI to develop external-facing material, such as reports and inventions, may mean the assets cannot be copyrighted or patented.

“Let’s say you’ve published a valuable industry report for three consecutive years and in the fourth year decide to produce the report using generative AI. In such a scenario, you have no scope of having a copyright on that new report because it’s been produced without any major human involvement. The same would be true for AI-generated art or software code,” Rispin notes.

Another consideration is using enterprise-level generative AI accounts with the company as the admin and the employees as users. This lets the company control important privacy and information-sharing settings that decrease legal risk. For example, disabling certain types of information sharing with ChatGPT will decrease the risk of losing valuable intellectual property rights.
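As a concrete illustration of the admin-controlled approach, employee usage can be routed through a company-managed API key instead of personal consumer logins, so the company, not the individual, decides which vendor, model and data-sharing settings apply. This is a minimal sketch, assuming the OpenAI Python SDK (v1+); the environment variable name, helper function and model are placeholders, and you should verify your vendor’s current data-usage terms with your legal team.

```python
# Minimal sketch: route employee prompts through a company-managed account
# rather than personal consumer logins. Assumes the OpenAI Python SDK (v1+);
# the model name and environment variable are placeholders.
import os
from openai import OpenAI

# The key lives in company-controlled configuration, not with individual
# users, so admins decide which vendor, settings and sharing terms apply.
client = OpenAI(api_key=os.environ["COMPANY_OPENAI_API_KEY"])

def company_completion(prompt: str) -> str:
    """Send a prompt via the company account; pair with the policy's review steps."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model your policy approves
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```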

Disclosure details

Similarly, your AI use policy must ensure marketers disclose when they’re using AI-generated content with external audiences. The European Commission considers this a very important aspect of the responsible and ethical use of generative AI. In the US, the proposed AI Disclosure Act of 2023 would further cement this requirement by mandating that any output generated by AI include a disclaimer. The bill would task the FTC with enforcement.

Social media platforms like Instagram are already implementing ways to inform users of AI-generated content through labels and watermarks. Google’s generative AI model, Imagen, now embeds digital watermarks in AI-generated images using SynthID. The technology embeds watermarks directly into image pixels, making them detectable for identification but imperceptible to the human eye. This means the watermark remains detectable even after filters are added or colors are altered.

Integrate AI strategically and safely

The growing adoption of AI in marketing is undeniable, as are the potential risks and brand safety concerns that arise in the absence of well-defined guidelines. Use these practical tips to build an effective AI use policy that enables you to strategically and securely harness the benefits of AI tools for smarter workflows and intelligent decision-making.

Learn more about how marketing leaders worldwide are approaching AI and ML to drive business impact.

 

DISCLAIMER

The information provided in this article does not, and is not intended to, constitute formal legal advice; all information, content, points and materials are for general informational purposes. Information on this website may not constitute the most up-to-date legal or other information. Incorporation of any guidelines provided in this article does not guarantee that your legal risk is reduced. Readers of this article should contact their legal team or attorney to obtain advice with respect to any particular legal matter and should refrain from acting on the basis of information in this article without first seeking independent legal advice. Use of, and access to, this article or any of the links or resources contained within the site do not create an attorney-client relationship between the reader, user or browser and any contributors. The views expressed by any contributors to this article are their own and do not reflect the views of Sprout Social. All liability with respect to actions taken or not taken based on the contents of this article is hereby expressly disclaimed.

The post How to craft an effective AI use policy for marketing appeared first on Sprout Social.


