Tuesday 6th June 2023

Navigating the legal complexities of generative AI

How can PR professionals responsibly manage the legal risks of artificial intelligence tools?

In the evolving landscape of generative AI tools, PR professionals must adapt professionally and navigate a number of legal risks. Understanding how to responsibly manage these risks is crucial for any professional considering using AI. With this in mind, here are some key points to consider: 

1. Uncertainty in regulatory frameworks

We must keep in mind that the legislative and judicial response to AI remains a grey area. Regulation will struggle to keep pace with technological advances, but action will come eventually: discussions are ongoing, and the UK is set to host the first major global summit on AI safety. For now, however, the situation is one of uncertainty.

2. User liability

One of the main challenges PR professionals will face in their use of generative AI is the potential for copyright infringement. Because models are trained on massive datasets that include copyright-protected images and text, publishing AI-generated content could breach copyright law. Indeed, AI platforms do not guarantee that generated content will not infringe third-party intellectual property rights.

Practitioners may also need to follow specific legal developments in their own jurisdictions, as foundational copyright frameworks differ between the UK, EU and US. Examples include the ongoing discussions around the EU’s Artificial Intelligence Act and the outcome of the Andersen v. Stability AI case.

3. The minefield of personal data

The General Data Protection Regulation (GDPR) sets out strict rules for the protection of personal data, presenting another challenge for PR professionals. AI platforms may reproduce personal data drawn from their training datasets, which could expose professional users to GDPR breaches when they use AI-generated content.

4. Built-in biases and limitations

As OpenAI’s CEO Sam Altman has acknowledged, AI models such as GPT-3.5 (which powers the free version of ChatGPT) contain inherent biases that can affect the nature and reliability of generated content. It is also crucial to remember that many of these models have knowledge cut-offs that limit the accuracy of their output: GPT-3.5’s cut-off is September 2021, meaning it has no information on events after that date. Add to this their known propensity to “hallucinate” false information, and professionals could face legal claims from corporates for publishing misleading, reputation-damaging content.

AI undoubtedly presents the industry with many opportunities as well as a number of challenges. Underlying both, however, is a legal and regulatory landscape that is still very much in flux. PR professionals must use AI-generated content with caution and with a clear understanding of this.

CIPR members can call the CIPR Business and Legal Helpline for further advice. 

Yanis Fekar is currently pursuing a BSc in politics and international relations at UCL. He is also vice president of the UCL United Nations Association and an intern at the CIPR.
