
Regulation • Updated April 26, 2026

EU AI Act and C2PA: What Article 50 Requires for AI Content

Deadline: Article 50 of the EU AI Act becomes enforceable on August 2, 2026. The European Commission Code of Practice on Transparency, which interprets the technical detail of the marking obligation, was still being finalized as of April 2026. This article reflects the Regulation text and the published draft Code as of that date.

Quick Reference

| Question | Short answer |
|---|---|
| Does the EU AI Act mandate C2PA? | No, but C2PA is the named example in the Code of Practice |
| Effective date | August 2, 2026 |
| Maximum fine | 15M EUR or 3% of global turnover, whichever is higher |
| Who is in scope | Any AI provider or deployer whose output reaches EU users |
| What gets marked | Synthetic audio, image, video, and text outputs |
| Marking standard | Machine-readable, effective, interoperable, robust, reliable |

The EU AI Act, formally Regulation (EU) 2024/1689, is the European Union's law governing artificial intelligence. Article 50 sets transparency obligations that take effect on August 2, 2026, and one of those obligations is specific to AI-generated content: outputs from generative AI systems must be marked in a machine-readable format and be detectable as artificially generated. The technical mechanism the European Commission's draft Code of Practice on Transparency names by example is C2PA Content Credentials. Penalties for non-compliance reach 15 million EUR or 3 percent of worldwide annual turnover, whichever is higher.

What Article 50 of the EU AI Act actually requires

Article 50 imposes transparency obligations on providers and deployers of certain AI systems. The clause that matters most for content provenance is Article 50(2): providers of generative AI systems must ensure that synthetic outputs (audio, image, video, or text) are marked in a machine-readable format and detectable as artificially generated or manipulated. The marking must be effective, interoperable, robust, and reliable, as far as is technically feasible.

Those four adjectives carry specific meaning under the Code of Practice on Transparency that the European Commission has been developing alongside the Regulation:

  • Effective: the marking must actually identify the content as AI-generated to a verifying party
  • Interoperable: any compliant verifier must be able to read the marking, not only the provider's own tool
  • Robust: the marking should resist common transformations such as format conversion or minor edits
  • Reliable: the marking should be tamper-evident, so that a verifier can detect whether it has been altered or forged

Article 50 also covers other transparency duties: deployers of emotion recognition systems must inform users, deployers generating deepfakes must disclose them, and providers of chatbot-style AI must make the artificial nature of the system clear to the person interacting with it. The marking obligation in 50(2) is the one with direct technical consequences for content credentials.

Does the EU AI Act mandate C2PA specifically?

No. The EU AI Act is technology-neutral and does not name C2PA in its operative articles. The Regulation describes the marking standard in functional terms (machine-readable, effective, interoperable, robust, reliable) and leaves the technical implementation to standards bodies and to the Commission's Code of Practice.

In practice, C2PA Content Credentials are the marking technology favored by the Commission's draft Code of Practice on Transparency. The Code lists C2PA as an example of a technical solution that satisfies all four criteria, alongside complementary signals like Google's SynthID watermarking. C2PA is an open standard already deployed by Adobe, OpenAI, and Google, with cryptographic signatures that make tampering detectable.

In short: the law does not require C2PA, but the regulatory ecosystem points firmly at it. Providers who adopt C2PA match the most concrete official guidance available on what compliant marking looks like.

Why C2PA, not just any watermark

Plain pixel watermarks (visible logos or invisible perceptual marks) are not interoperable in the Article 50 sense. Each provider would invent its own watermark, no third party could verify a competitor's mark, and there is no cryptographic chain of trust. C2PA solves this by defining a shared file format (JUMBF), a shared claim schema (JSON-LD), and a shared signature format (COSE). One verifier can read all of them. See what is inside a C2PA manifest for the technical detail.
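To make the layering concrete, here is a schematic sketch of the three shared pieces described above, expressed as a Python dict. This is illustrative only: real manifests are binary JUMBF boxes read through a C2PA SDK, not raw JSON, and the field names here are simplified for readability. The generator name is hypothetical.

```python
# Schematic sketch of the C2PA layering: a shared container format,
# a shared claim schema, and a shared signature format.
# Illustrative only -- NOT the real binary JUMBF encoding.

manifest_sketch = {
    "container": "JUMBF",  # shared file format (JPEG Universal Metadata Box Format)
    "claim": {             # shared claim schema (JSON-LD in spec terms)
        "claim_generator": "ExampleAI/1.0",  # hypothetical generator name
        "assertions": [
            {"label": "c2pa.actions",
             "data": {"actions": [{"action": "c2pa.created"}]}},
        ],
    },
    "signature": {          # shared signature format
        "format": "COSE",   # CBOR Object Signing and Encryption
        "cert_chain": ["<leaf cert>", "<intermediate>", "<root>"],
    },
}
```

Because every provider emits the same three layers, one verifier can check a manifest from any of them, which is exactly the interoperability Article 50 asks for.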

Who must comply with Article 50?

Article 50 reaches two categories of organizations: providers of generative AI systems (the companies that build and offer the model) and deployers (companies that use the AI to generate content shown to people). Both bear obligations, but the marking duty in 50(2) sits with providers.

Geographic scope is broad. Article 2 of the Regulation places any provider in scope if it puts an AI system on the EU market, if its outputs are used in the EU, or if its users are located in the EU. A US-based provider like OpenAI, an Israeli image-generation startup, and a Japanese camera manufacturer are all in scope as soon as European users encounter their content.

There are limited exceptions. Article 50(2) does not apply when the AI performs an assistive editing function that does not substantially alter the input, or when the use is authorized by law for the purpose of detecting, preventing, investigating, or prosecuting criminal offenses. Standard generative use cases (an AI tool producing an image, a text-to-video system, a voice synthesis service) fall under the obligation.

When does Article 50 become enforceable?

Article 50 becomes enforceable on August 2, 2026, two years after the EU AI Act entered into force on August 1, 2024. The same date applies to the general transparency rules and to obligations on general-purpose AI models. The Regulation has a staggered timeline overall:

  • February 2, 2025: Prohibitions on banned practices took effect
  • August 2, 2025: General-purpose AI model provisions began applying for new models
  • August 2, 2026: General transparency obligations including Article 50 become enforceable
  • August 2, 2027: High-risk AI system rules take effect

How C2PA Content Credentials satisfy Article 50

C2PA embeds a cryptographically signed manifest into image, video, audio, and document files. The manifest declares which AI system generated or modified the content, when the content was produced, and which organization signed the claim. Any C2PA-compliant verifier (including C2PA Viewer) can read this manifest and confirm the signature against a published trust list.

Mapping this to the four Article 50 criteria:

  • Effective: a valid C2PA manifest containing a `c2pa.created` action signed by an AI provider unambiguously marks the content as AI-generated
  • Interoperable: C2PA is an open standard maintained by the Coalition for Content Provenance and Authenticity. Any compliant tool reads any compliant manifest
  • Robust: manifests survive most format conversions and re-encodings handled by C2PA-aware tools. Stripping is still possible through naive re-saves, which is why the Code of Practice encourages pairing C2PA with invisible watermarks
  • Reliable: the manifest is signed using COSE with a certificate chain verifiable against the C2PA Trust List. Tampering breaks the signature

Several major AI providers have already deployed C2PA in production. Adobe Firefly, OpenAI DALL-E 3, OpenAI Sora, and Google Imagen all embed manifests in their outputs. Midjourney is the most prominent generative AI tool that does not, and it faces direct regulatory exposure under Article 50 unless its approach changes before August 2026. See which AI tools support C2PA for the current status of each major platform.

Penalties for non-compliance with Article 50

Non-compliance with Article 50 is sanctioned under Article 99 of the EU AI Act. The graduated penalty structure works as follows:

  • Use of banned AI practices: up to 35 million EUR or 7 percent of global annual turnover
  • Other violations including transparency failures under Article 50: up to 15 million EUR or 3 percent of global annual turnover
  • Supplying incorrect information to authorities: up to 7.5 million EUR or 1 percent

In each case the higher of the two amounts applies. Enforcement runs through national supervisory authorities in each Member State, coordinated by the EU AI Office. For SMEs, the Regulation directs authorities to apply fines proportionately. For a global AI provider with multi-billion-euro revenue, 3 percent of turnover is the binding number, not the 15 million EUR floor.
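The "whichever is higher" rule for transparency violations can be expressed as a one-line calculation. The turnover figures below are hypothetical and chosen only to show when each branch binds.

```python
def article50_max_fine(turnover_eur: float) -> float:
    """Maximum Article 99 fine for a transparency violation:
    the higher of 15 million EUR or 3% of worldwide annual turnover."""
    return max(15_000_000, 0.03 * turnover_eur)

# Hypothetical small provider, 100M EUR turnover: 3% is 3M, so the
# 15M EUR floor binds.
small = article50_max_fine(100_000_000)   # 15,000,000 EUR

# Hypothetical global provider, 2B EUR turnover: 3% is 60M, so the
# percentage binds.
large = article50_max_fine(2_000_000_000)  # 60,000,000 EUR
```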

How to verify your AI tool's outputs are compliant

The fastest way to check whether an AI provider satisfies Article 50's marking requirement today is to inspect a real output from the tool. The verification flow is:

  1. Generate a sample output. Use the AI tool to produce an image, audio file, or video. Download it without re-saving through another editor, since some editors strip metadata.
  2. Drop the file into C2PA Viewer. All processing happens client-side. The tool will display whether a manifest is present, who signed it, and what the signature claims.
  3. Confirm a manifest is present and signed by the expected organization. Adobe should sign Firefly outputs, OpenAI should sign DALL-E 3 and Sora outputs, Google should sign Imagen outputs.
  4. Confirm the manifest indicates AI generation. The `claim_generator` field should name the AI tool, and the manifest should include a `c2pa.created` action attributed to the AI system rather than to a human.
  5. Confirm the certificate is on the C2PA Trust List. A valid manifest signed by an unrecognized certificate would not satisfy the interoperability criterion.

If a manifest is missing, malformed, or signed by an unverifiable certificate, the output does not meet Article 50's standard for machine-readable AI marking. See how to verify a C2PA file for a step-by-step walk-through.
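Checks 3 through 5 of the flow above can be sketched as a single function over a parsed manifest. The flat dict shape used here is a simplified assumption for illustration; real code would read these fields through a C2PA SDK or a verifier like C2PA Viewer rather than from raw JSON.

```python
def check_article50_marking(manifest: dict, trusted_signers: set[str]) -> list[str]:
    """Apply the verification checks from the flow above.

    `manifest` uses a simplified, hypothetical dict shape.
    Returns a list of problems; an empty list means the checks passed."""
    problems = []
    if not manifest:
        problems.append("no manifest present")
        return problems
    if manifest.get("signer") not in trusted_signers:
        problems.append("signer not on trust list")
    if not manifest.get("claim_generator"):
        problems.append("claim_generator missing")
    if "c2pa.created" not in manifest.get("actions", []):
        problems.append("no c2pa.created action: not marked as AI-generated")
    return problems

# A well-formed (hypothetical) manifest passes every check:
sample = {"signer": "OpenAI", "claim_generator": "DALL-E 3",
          "actions": ["c2pa.created"]}
print(check_article50_marking(sample, {"Adobe", "OpenAI", "Google"}))  # []
```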

Practical Article 50 compliance checklist

For AI providers and deployers preparing for the August 2, 2026 deadline:

  1. Inventory in-scope systems. Identify every AI system you provide or deploy that generates or substantially modifies content reaching EU users.
  2. Audit current marking behavior. Generate sample outputs and inspect them. Note which formats already carry C2PA manifests and which do not.
  3. Plan implementation. Where marking is missing, scope work to add C2PA signing to the output pipeline. Most modern image and video toolchains have C2PA SDK bindings.
  4. Register a signing certificate. Obtain a certificate from a CA recognized by the C2PA Trust List, or use a self-signed certificate registered with the appropriate trust list maintainer.
  5. Document in your model card or system card. The Article 50 obligation requires effective and reliable marking. Documentation that explains how your marking works supports compliance and audit responses.
  6. Provide a verification path. Users should be able to verify your marking. Linking to a public verifier such as C2PA Viewer or to your own embedded verifier satisfies this.
  7. Track the Code of Practice. The European Commission is finalizing the Code of Practice on Transparency through 2026. Final guidance may add concrete obligations beyond the Regulation text.

Frequently Asked Questions

Does the EU AI Act require C2PA?

No, the EU AI Act does not name C2PA explicitly. Article 50(2) requires AI-generated content to be marked in a machine-readable format that is effective, interoperable, robust, and reliable. C2PA Content Credentials are the leading technical mechanism that satisfies all four criteria, and the European Commission Code of Practice on Transparency lists C2PA as an example of compliant marking.

When does Article 50 of the EU AI Act take effect?

Article 50 becomes enforceable on August 2, 2026, two years after the EU AI Act entered into force. This date applies to general transparency obligations and to general-purpose AI model rules.

What are the penalties for not complying with Article 50?

Under Article 99 of the EU AI Act, transparency violations can be sanctioned with administrative fines up to 15 million EUR or 3 percent of total worldwide annual turnover for the preceding financial year, whichever is higher. National supervisory authorities enforce the penalties.

Who must comply with Article 50?

Article 50 applies to providers of generative AI systems and to deployers using AI to produce content shown to people in the EU. The scope is extraterritorial. A US-based AI provider serving EU users is in scope regardless of where the provider is incorporated.

What counts as machine-readable marking under Article 50?

The Article 50(2) standard is that marking must be effective, interoperable, robust, and reliable as far as technically feasible. The European Commission Code of Practice on Transparency interprets this as cryptographic provenance metadata that travels with the file and can be verified by any compliant tool. C2PA Content Credentials meet this bar.

Does the EU AI Act apply to AI tools that are not based in Europe?

Yes. The EU AI Act applies extraterritorially. Article 2 places any provider in scope if its AI system is placed on the EU market or its output is used in the EU. OpenAI, Google, Anthropic, Adobe, and any other non-EU provider serving EU users falls under Article 50 obligations.

Verify Any AI Tool's Article 50 Readiness

Drop any AI-generated file into C2PA Viewer to see whether it carries a valid manifest, who signed it, and whether the marking would satisfy Article 50's machine-readable detection requirement. All processing happens in your browser. The file never leaves your device.
