Deepfakes and synthetic media

The advent of artificial intelligence (AI) has brought profound advancements to numerous industries, from healthcare to entertainment. However, with rapid innovation comes a darker undercurrent: deepfakes.

These AI-generated synthetic media creations mimic real individuals with startling accuracy. While they can be entertaining and beneficial in controlled contexts, their misuse presents significant legal and ethical challenges.

Understanding the impact of deepfakes

Deepfake technology has been a boon for the entertainment industry, enabling stunning cinematic feats. Films like Avatar and Indiana Jones and the Dial of Destiny owe much of their success to this and similar technologies. Indeed, without them, Carrie Fisher would not have thrilled audiences in Star Wars: The Rise of Skywalker, released three years after her death.

However, as the tech has become more sophisticated and more readily available, deepfakes have also become tools for misinformation, harassment, and fraud.

In high-profile instances, deepfakes have been used to spread political disinformation, as occurred in the 2023 Slovakian elections, where AI-generated audio clips falsely implicated a candidate in election fraud.

Another very serious concern is the misuse of deepfakes to harass individuals, particularly through image-based sexual abuse, in which victims’ likenesses are superimposed onto explicit content without their consent.

Seeking redress

Victims of deepfakes often face difficulties when pursuing legal action. Civil claims in the UK can be hindered by the absence of explicit image or personality rights. Remedies may instead have to rely on indirect avenues such as data protection laws, privacy claims, or defamation.

Data Protection and Privacy Claims

Under the GDPR and the UK Data Protection Act 2018, personal data, including images and voices, is protected. However, the technical nature of deepfakes can complicate legal claims. If personal data is not directly used to create the deepfake, a victim may struggle to establish a breach. Exemptions for purely personal or household activities may also shield individuals who produce non-commercial deepfakes. Nevertheless, case law in both the UK and EU suggests limits to such shields and a willingness by the courts to narrow exemptions to ensure that data protection laws remain relevant and effective. In Fairhurst v Woodard (County Court (Oxford), 12 October 2021), for example, the court found that a homeowner’s use of video surveillance could still fall within the scope of the GDPR.

Misuse of private information offers another potential remedy. In McKennitt v Ash [2006] EWCA Civ 1714, the Court of Appeal clarified that the question in a misuse of private information case is whether the information is private, not whether it is true or false. This may enable a claim over material which, by its very nature, is fake. Nonetheless, where a deepfake portrays entirely fictional events, establishing its private nature may prove challenging.

Defamation and Malicious Falsehood

Deepfakes that damage a victim’s reputation may be actionable as defamation, provided the harm is serious. Alternatively, claims of malicious falsehood may apply where a victim suffers financial loss from false statements. Copyright and passing off laws may also be relevant, depending on how the deepfake was created and the content it portrays.

What quickly becomes clear when considering all these options is the need for a proper technical understanding of the deepfake material concerned. Where a legal claim succeeds, victims may obtain remedies such as takedown orders, injunctions, or damages.

Criminal Protections

In the UK, criminal laws may provide additional recourse for some victims, although these laws generally rely on authorities taking proactive action.

The Protection from Harassment Act 1997 and Online Safety Act 2023 criminalise behaviours intended to intimidate or distress victims. The latter specifically prohibits sharing, or threatening to share, intimate deepfakes. However, the law primarily addresses dissemination, leaving gaps in preventive measures for the creation of harmful deepfakes.

Beyond sexual content, malicious deepfakes intended to cause psychological, emotional, or reputational harm may be actionable under section 179 of the Online Safety Act, which replaced provisions in the Malicious Communications Act 1988. This prohibits the sending of false communications intended to cause non-trivial psychological or physical harm.

Proactive EU Regulation

The EU has adopted a forward-looking approach to deepfake regulation.

Regulation (EU) 2024/1689 (the EU AI Act) requires transparency in AI-generated content under Article 50. Providers of generative AI systems must mark synthetic outputs in a machine-readable format, while deployers of deepfake systems must disclose that the content has been artificially generated or manipulated. Non-compliance can result in steep penalties, with fines for transparency breaches reaching €15 million or 3% of global annual turnover.
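
To make the labelling duty concrete, the sketch below shows one way a provider might attach a machine-readable flag to a synthetic image. It is a minimal, hypothetical Python example using PNG text metadata via the Pillow library; the Act does not prescribe any particular format, and production systems typically use recognised standards such as C2PA Content Credentials or IPTC digital-source metadata. The key names and generator string here are illustrative assumptions.

    # Minimal sketch: attach a machine-readable "AI-generated" label to a
    # PNG via its text metadata. Illustrative only; the AI Act does not
    # prescribe this particular format.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_synthetic_image(src_path: str, dst_path: str) -> None:
        """Re-save an image with metadata flagging it as AI-generated."""
        image = Image.open(src_path)
        metadata = PngInfo()
        metadata.add_text("ai_generated", "true")           # assumed key name
        metadata.add_text("generator", "example-model-v1")  # hypothetical model ID
        image.save(dst_path, pnginfo=metadata)

    def read_label(path: str) -> dict:
        """Return the PNG text chunks so downstream tools can check the flag."""
        return dict(Image.open(path).text)

    if __name__ == "__main__":
        label_synthetic_image("output.png", "labelled.png")
        print(read_label("labelled.png"))

A metadata label of this kind is easily stripped when a file is re-encoded, which is why labelling is often paired in practice with more robust techniques such as invisible watermarking.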

Regulation (EU) 2022/2065 (the Digital Services Act) complements the AI Act by holding online platforms accountable for hosting harmful deepfake content. It mandates prompt removal of illegal content and subjects very large online platforms to enhanced oversight.

Additionally, the EU’s Code of Practice on Disinformation encourages voluntary measures, such as detection and labelling of deepfakes.

Recommendations for victims and industry

While attempts are being made in the UK and the EU to require platforms to act proactively on harmful deepfakes, both victims and AI system providers must still respond quickly in order to address deepfakes effectively.

Among the considerations for victims are:

  1. Engaging with platforms: If a harmful deepfake appears online, it may be reported to the social media platform or content-hosting site concerned for swift removal. Many such sites have takedown mechanisms for harmful or illegal content.
  2. Exploring legal avenues: Depending on the harm, civil claims or criminal complaints under harassment, data protection, or defamation laws might be pursued. Complaints to the police or other regulators may also be appropriate.
  3. Acting quickly: While it may be embarrassing or distressing to talk about, prompt action can limit harm and restrict content dissemination.

Providers, deployers and hosts of AI systems and deepfake content should:

  1. Ensure compliance: Adhere to transparency and governance requirements under the EU AI Act, GDPR and related instruments.
  2. Incorporate safeguards: Use watermarking or AI-detection tools to label and identify potentially harmful synthetic content, as sketched below.
  3. Adopt ethical practices: Prioritise fair data processing, data minimisation, and consent.
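
To illustrate the watermarking safeguard in point 2, the sketch below embeds a short, recoverable mark in an image’s least significant bits. It assumes a lossless format such as PNG and uses numpy and Pillow; the payload string is an arbitrary assumption, and production watermarking schemes are far more robust to compression, cropping and re-encoding than this toy demonstration.

    # Toy sketch of invisible watermarking via least-significant-bit (LSB)
    # embedding. Assumes lossless storage (PNG); real schemes survive
    # re-encoding and editing, which this does not.
    import numpy as np
    from PIL import Image

    MARK = "AI-GENERATED"  # hypothetical payload

    def embed(src: str, dst: str, payload: str = MARK) -> None:
        pixels = np.array(Image.open(src).convert("RGB"), dtype=np.uint8)
        bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
        flat = pixels.reshape(-1)  # view over every channel byte
        if bits.size > flat.size:
            raise ValueError("image too small for payload")
        flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
        Image.fromarray(pixels).save(dst)

    def extract(path: str, length: int = len(MARK)) -> str:
        flat = np.array(Image.open(path).convert("RGB")).reshape(-1)
        bits = flat[:length * 8] & 1  # read the LSBs back
        return np.packbits(bits).tobytes().decode(errors="replace")

    if __name__ == "__main__":
        embed("deepfake.png", "deepfake_marked.png")
        print(extract("deepfake_marked.png"))  # prints "AI-GENERATED"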

Conclusion

Deepfakes exemplify the dual-edged nature of AI. While they enable creative and commercial breakthroughs, they also highlight the urgent need for ethical and legal boundaries.

Robust laws like the EU AI Act and Digital Services Act set a high standard for addressing these challenges, but individual actions and corporate responsibility remain critical.

For victims, swift engagement with platforms and legal remedies can mitigate harm, while organisations must ensure compliance and adopt ethical AI practices to build a safer digital landscape.


In the fourth video in my 6-part series, “Artificial Intelligence: Navigating the Legal Frontier”, I consider the legal and ethical challenges posed by deepfake technology, explore remedies for victims and discuss regulatory approaches to deepfakes in the UK and EU. Join me for a tour of deepfake tech.

Paul Schwartfeger on 13 November 2024