AI Video Generation and Deepfakes: The Real Ethical and Legal Issues in 2024
A grounded look at deepfake legislation, the controversy over training data consent, synthetic media disclosure laws, and what creators actually need to know about legally using AI video tools.
The ethical conversation around AI video generation often conflates two very different problems: the niche but serious harm caused by non-consensual deepfakes, and the broader concern about AI-generated content affecting employment and authenticity norms. Both matter, but they require different frameworks and carry different legal implications for creators using these tools. Here's a clear-eyed breakdown of where the real issues are.
The Deepfake Problem Is Specific and Well-Documented
The term “deepfake” now covers a spectrum from photorealistic impersonations of public figures published as misinformation, to non-consensual intimate imagery (NCII) — synthetic explicit content using a real person’s likeness without consent. The latter is the most harmful application and the one that has driven legislative action.
In 2024, legislation moved faster than at any point in the short history of synthetic media. In the US, the DEFIANCE Act (Disrupt Explicit Forged Images and Non-Consensual Edits Act) passed the Senate unanimously in July 2024; it would create a federal civil cause of action for victims of non-consensual intimate deepfake images. A separate federal bill, the NO FAKES Act, would establish rights regarding digital replicas of individuals' voices and likenesses, drawing on the right-of-publicity laws that have long existed at the state level.
At the state level, California passed AB 2602 in 2024, requiring written contractual consent specifically for AI-generated digital replicas of actors, and AB 1836, which restricts posthumous use of a digital likeness without estate approval. These laws carry enforceable consequences for commercial productions in California, home to much of the US entertainment industry.
The UK criminalised sharing synthetic intimate images in the Online Safety Act 2023, and the EU's AI Act (which entered into force in August 2024, with obligations phasing in through 2026) includes transparency and labelling requirements for AI-generated content.
What Training Data Consent Actually Means
One of the most contested ethical questions in AI video is whether models trained on copyrighted video have legal exposure. Several ongoing lawsuits in the US are directly relevant:
Getty Images v. Stability AI (filed January 2023) alleges that Stability AI used millions of Getty's licensed images without permission to train its diffusion models. The case has not been resolved, but it has already shaped how subsequent model developers think about dataset provenance.
The New York Times v. OpenAI (filed December 2023) alleges copyright infringement in text model training, but the legal theory being tested, whether training a commercial model on unlicensed copyrighted works qualifies as fair use, applies directly to video model training as well.
The position of major AI video companies differs. Runway has stated its models are trained on licensed or public domain content, though the specifics of those licenses have not been published. Pika and most smaller companies have disclosed less. OpenAI has said Sora was trained on “licensed and publicly available video.”
For practical purposes: using AI video tools to generate original content doesn’t expose individual creators to these training data disputes, which exist between model companies and rights holders. You are a downstream user, not a party to those disputes.
The Disclosure Question
Should you disclose that a video you publish is AI-generated? This is currently a mix of legal requirement, platform policy, and ethical norm.
Platform policies:
- YouTube requires creators to disclose AI-generated content using realistic depictions of real events, fictional footage that could be mistaken for real events, or depictions of real people saying or doing things they haven’t done. Non-disclosure can result in removal.
- TikTok applies a similar policy and reserves the right to add labels automatically to AI-generated content it detects.
- Meta (Facebook, Instagram) reads C2PA (Content Credentials) and IPTC metadata to detect and label AI-generated images, and has expanded this labelling to video.
Content Credentials (C2PA): The Coalition for Content Provenance and Authenticity (C2PA) is an open technical standard for embedding metadata into content files indicating creation origin. Adobe’s Content Authenticity Initiative (CAI) and Microsoft are among the major backers. Runway was the first major AI video platform to support C2PA metadata in generated clips, starting in 2024 — clips generated in Runway contain embedded credentials that identify Runway as the generation source.
This matters because the industry is moving toward a model where authenticity is signaled through metadata rather than visual examination, and early-adopter platforms are embedding this standard now.
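To make the metadata-based model concrete, here is a minimal Python sketch that checks whether a media file contains the ASCII marker C2PA manifests carry inside their JUMBF container. The function name and file paths are illustrative, not from any SDK; this is a crude presence check only, not cryptographic verification, which requires proper C2PA tooling such as the open-source c2pa-rs project and its bindings.

```python
def has_c2pa_marker(path: str) -> bool:
    """Crude presence check for embedded C2PA (Content Credentials) data.

    C2PA manifests are serialised into JUMBF boxes whose labels include
    the ASCII string "c2pa". Scanning the raw bytes for that string
    detects most embedded manifests, but it does NOT validate signatures
    or tell you who produced the content.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data
```

A positive result only means a manifest-like marker is present; establishing who signed the credentials, and whether they are intact, is a job for a dedicated C2PA verifier.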
The Employment Question
The most economically significant concern in AI video is the displacement of lower-end production roles. The evidence through 2024 is mixed: there has been clear displacement of some stock footage use-cases (clients who previously licensed stock video now generate it), and some entry-level B-roll and social media production work has shifted. At the same time, AI video has also expanded the market — individual creators who previously couldn’t afford any video production can now produce reasonable quality content.
The Writers Guild of America's 2023 contract negotiations with the AMPTP produced provisions on AI use in screenwriting: AI-generated material cannot be used to undermine a writer's credit or compensation, writers cannot be required to use AI, and studios must disclose when material given to a writer was AI-generated. SAG-AFTRA reached a parallel agreement with a different structure, centred on consent and compensation requirements for digital replicas of performers.
These agreements establish precedent in unionised entertainment but don’t govern the broader creator economy.
Practical Guidance for Creators
Don’t generate videos using real people’s likenesses without consent. The legal risk is real and growing. Even where current laws don’t clearly prohibit it, the trajectory of legislation is toward stronger protections, and commercial platforms (Runway, Pika, Sora) actively filter prompts involving real named individuals.
Disclose AI generation when required by the platform you’re publishing on. The standards are still forming, but YouTube and TikTok have enforceable disclosure requirements for certain categories of AI-generated content. Err on the side of disclosure.
For commercial clients, clarify licensing. Most AI video platforms grant commercial rights at paid tiers, but check the specific platform’s terms. Runway’s terms, for example, grant users a broad commercial license for generated content, but retain the right to use your generations for platform improvement. For high-value commercial work, read the relevant section of the terms of service.
Be aware of C2PA metadata. If you’re distributing content generated by Runway or another C2PA-compliant platform, the file may contain embedded metadata identifying its origin. This is generally beneficial for transparency but may matter in contexts where you don’t want the generation tool publicly associated with the final product.
The Harder Questions
The questions that don’t have clean answers yet:
What constitutes consent for an AI avatar? If a public figure has recorded content for commercial use, does that implicitly allow training an AI model on their likeness for future content? Current right-of-publicity law generally says no, but coverage varies by jurisdiction and enforcement is inconsistent.
At what fidelity does generated content require disclosure? A clearly stylised animation doesn’t need a disclaimer. Photorealistic footage of an event that didn’t happen probably does. The middle ground is genuinely unclear.
Who is liable when AI-generated misinformation causes harm? The person who generates it? The platform that hosts it? The model company that made the tool? These questions will likely be resolved through litigation rather than pre-emptive regulation.
For individual creators using commercial tools legitimately for creative and commercial content, the immediate practical risks are limited. The ethical obligations — around disclosure, consent, and honesty about what you’ve produced — are clearer than the legal requirements in most jurisdictions.