Human-created, AI-assisted, or AI-generated? The EU AI Act sets new transparency expectations for AI-generated content.
The EU AI Act
Set to take full effect in August 2026, the EU Artificial Intelligence (AI) Act is the European Union’s first comprehensive effort to regulate AI. It includes new transparency and accountability expectations for AI-generated content, especially when the output is public-facing or presented as factual.
At a high level, the Act requires transparency from two groups:
- Providers: These are companies that build AI systems (e.g., OpenAI or Google). Under the Act, providers are responsible for designing their tools so AI-generated content can be technically detected, such as through machine-readable watermarks or metadata. These signals are meant for software platforms, not human readers.
- Deployers: Deployers are businesses, agencies, and professionals who use AI tools and share the output publicly. If you publish AI-generated content that reaches users in the EU, you may be responsible for providing clear, visible disclosure to human audiences.
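The provider-side signals mentioned above are aimed at software, not people. As a purely illustrative sketch of what a machine-readable marker could look like, here is a minimal provenance record in Python. The field names are hypothetical and not part of the Act or any standard; real implementations rely on frameworks such as C2PA content credentials.

```python
import json

def make_provenance_record(asset_name: str, generator: str) -> str:
    """Build a minimal machine-readable provenance record for an
    AI-generated asset. Field names are illustrative only, not a
    standard schema; production systems use standards such as
    C2PA content credentials."""
    record = {
        "asset": asset_name,
        "ai_generated": True,   # the signal a platform would check
        "generator": generator, # which tool or model produced the asset
    }
    return json.dumps(record, indent=2)

# Example: a record a platform could read to detect AI-generated media.
print(make_provenance_record("hero-image.png", "example-image-model"))
```

The point of a record like this is that detection happens in software: a platform parses the marker and decides whether to surface a label, while the human-facing disclosure obligations fall to deployers.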
What the Act Doesn’t Mean
The Act doesn’t ban AI tools, require disclosure for every use of AI, or prohibit AI-assisted drafting. Instead, it focuses on transparency and accountability when AI-generated content is presented to the public without clear human responsibility or editorial control. It’s primarily aimed at professional, commercial, and organizational use – not everyday experimentation (like the viral trend of turning yourself into an action figure).
What AI-Generated Content Requires a Label?
Not all AI use requires a label. The Act focuses on content that could reasonably be mistaken for reality or for an authoritative human source.
Deepfakes are the clearest example. AI-generated images, audio, or video that appear to depict a real person, place, or event must be clearly labeled as artificially generated.
Users must also be informed when they are interacting with an AI system, unless it would be obvious to a reasonable person. Chatbots fall into this category, and many AI bots already include visible notices.
Then there’s a more nuanced use case affecting B2B content creators: public-facing, informational text that may influence decisions. AI-generated text that is presented as factual or authoritative – such as news-style content, health guidance, financial information, or policy commentary – may require disclosure when it’s published without meaningful human oversight.
The key phrase here is “meaningful human oversight.”
The Role of Human Review
This is where the Act becomes especially relevant for professional content teams.
The EU AI Act recognizes that human editorial control changes the nature of the work. When AI-generated text undergoes substantial human review – fact-checking, rewriting, judgment calls, and final approval – responsibility shifts to the human author or their organization. In those cases, content is generally treated as human-led, not machine-generated.
Regulators are still clarifying the details in a Code of Practice, but early guidance suggests “substantial human review” does not mean light proofreading or simple grammar cleanup. It implies real editorial responsibility.
This isn’t a loophole or a workaround. It’s a deliberate distinction: AI can assist in content creation, but humans remain accountable for what’s published.
For agencies and corporate communications teams, this reinforces the value of established editorial practices. Clear ownership, documented review processes, and accountability for accuracy all matter more in an AI-assisted environment. In fact, the editorial process itself becomes as important as the final output. It’s also why we may start to see terms like “human-reviewed” or “editor-verified” discussed more openly – or even added to published works.
Why the EU AI Act Matters in the U.S. (and Beyond)
Starting in August 2026, failing to disclose AI-generated content can lead to significant financial penalties. Because the Act applies based on where content is consumed – not where it’s created – U.S.-based agencies and organizations may be affected if their work reaches EU audiences.
More broadly, this law may become a blueprint for other governments to follow. Even without similar laws, public expectations around transparency, accountability, and trust are moving in the same direction.
Content creators can take practical steps now to prepare: documenting editorial standards, formalizing review workflows, and clarifying ownership of final content. That may add an extra step to the process – but it also elevates the value of thoughtful, human editorial work. As AI becomes more regulated, professional editing may add even more value and even shift from a “nice to have” to a legal safeguard.
Full disclosure: This article was developed with the assistance of AI tools, and reviewed, edited and approved by a human editorial team.
AI-assisted. Human-reviewed.