Generative AI providers: start preparing now to comply with Article 50(2) of the EU AI Act using Label4.ai and KeeeX!

The rise of generative AI has led to a surge in deepfakes and a crisis of trust in information.

Starting in August 2026 (with a possible deferral of three to six months, depending on the European Omnibus Directive), providers of generative AI will be required to systematically mark their content (images, audio, video, text, source code) in a detectable way and to provide a public detector on their website. Violations can carry fines of up to €15 million or 3% of worldwide annual turnover.

From the INCYBER Forum (FIC), we are proud to announce the launch of the AI Transparency Compliance Group (AITCG), an alliance of sovereign stakeholders preparing a turnkey solution to meet these obligations:

👉 invisible and robust watermarking integrated directly into content with Label4.ai

👉 secure metadata ensuring integrity and traceability with KeeeX

👉 a watermark detector and forensic analysis with Label4.ai
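Article 50(2) does not prescribe a specific marking technique, and the robust, imperceptible schemes above are proprietary. Purely by way of illustration, the embed-then-detect pattern behind invisible watermarking can be sketched with a toy least-significant-bit (LSB) scheme on raw pixel values (all names and the bit pattern here are hypothetical, not Label4.ai's method):

```python
# Illustrative only: a toy least-significant-bit (LSB) watermark.
# Real compliance requires robust, imperceptible schemes; this merely
# demonstrates the embed -> detect round trip on synthetic pixel data.

WATERMARK = 0b10110011  # hypothetical 8-bit provenance tag


def embed(pixels: list[int], mark: int = WATERMARK, bits: int = 8) -> list[int]:
    """Write `mark` into the least-significant bits of the pixels, cycling."""
    out = pixels[:]
    for i in range(len(out)):
        bit = (mark >> (i % bits)) & 1   # next bit of the mark, LSB first
        out[i] = (out[i] & ~1) | bit     # overwrite the pixel's LSB
    return out


def detect(pixels: list[int], mark: int = WATERMARK, bits: int = 8) -> bool:
    """Check whether the LSB stream matches the expected mark."""
    if len(pixels) < bits:
        return False
    return all((pixels[i] & 1) == ((mark >> (i % bits)) & 1)
               for i in range(len(pixels)))


# Usage: mark synthetic "pixels", then verify detection.
original = [120, 64, 200, 33, 90, 15, 250, 7, 128, 99]
marked = embed(original)
```

A real deployment replaces the LSB trick with a watermark that survives compression, cropping, and re-encoding, and pairs it with signed metadata for integrity, which is the division of labour between Label4.ai and KeeeX described above.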

We are the only two French private-sector players involved from the outset in the AI Office’s work on the Code of Practice for Article 50, the final version of which is expected in June. As contributors to its drafting, we are already incorporating the foundational requirements of the future framework.