Fuze: EU AI Act compliance for AI agents
An open-source SDK and platform that lets AI-agent deployers prove compliance with the EU AI Act from runtime data, not policy documents. Classify risk, instrument the agent stack, and generate evidence mapped to every Article. Enforcement starts 2 August 2026.

Fuze: Prove compliance.
EU AI Act compliance platform for AI-agent deployers. In active development.

The problem
The EU AI Act enters enforcement on 2 August 2026, with fines of up to €35M or 7% of global turnover for the worst categories of non-compliance. A growing list of obligations applies to anyone deploying an AI agent in the EU, not just the providers who train the models.
Most existing compliance tooling stops at policy documents. That works for procurement; it does not work for an auditor. Article 12 of the Act requires record-keeping that spans the full system lifespan; Article 13 asks for transparency that a non-expert can act on; Article 14 asks for human oversight including mitigation of automation bias; Article 15 asks for accuracy, robustness and cybersecurity. None of this can be satisfied by a PDF.
Evidence, not assurances. Runtime evidence vs. checkbox compliance.
What Fuze is
Fuze instruments the agent stacks teams already run, and emits the evidence the EU AI Act actually requires.
- Classify your risk. Walk through Annex III, GPAI, and the prohibited-practices list to land on the right risk tier and the obligations that come with it.
- Instrument with the open-source SDK. A few lines around your existing agent code emit structured runtime traces: tool calls, model decisions, human-in-the-loop checkpoints, data lineage.
- Generate compliance evidence from runtime data. Reports map directly onto each binding Article: Article 9 risk management, Article 12 record-keeping, Article 13 transparency, Article 14 human oversight, Article 15 robustness.
- Open source at the core. The SDK and the schemas are public. The hosted platform sits on top.
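To make the classification step concrete, here is a minimal triage sketch. Everything in it is illustrative: the function name `classify_risk`, the tier labels, and the example use-case sets are assumptions for this README, not part of the Fuze SDK or an exhaustive reading of the Act.

```python
# Illustrative only: a coarse risk-tier triage helper, NOT the Fuze SDK API.
# Tier names mirror the EU AI Act's structure: prohibited practices (Art. 5),
# high-risk use cases (Annex III), GPAI obligations, and everything else.

PROHIBITED = {"social-scoring", "subliminal-manipulation"}       # Art. 5 examples
ANNEX_III = {"recruitment", "education", "credit", "public-services"}

def classify_risk(use_case: str, is_gpai: bool = False) -> str:
    """Return a coarse risk tier for a declared agent use case."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in ANNEX_III:
        return "high-risk"
    if is_gpai:
        return "gpai"
    return "minimal"
```

Under these assumptions, `classify_risk("credit")` lands in the high-risk tier, which is what triggers the Article 9–15 obligations the rest of this page is about.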
A decorator on the boundary of an agent tool is enough to start producing Article 12 records:
TypeScript:

```typescript
import { trace, oversight } from "@fuze-ai/sdk";

class CreditAgent {
  @trace("credit.assess")
  @oversight({ threshold: 0.8, route: "human" })
  async assess(application: Application): Promise<Decision> {
    return this.model.score(application);
  }
}
```

Python:

```python
from fuze import trace, oversight

@trace("credit.assess")
@oversight(threshold=0.8, route="human")
async def assess(application: Application) -> Decision:
    return await model.score(application)
```
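For a sense of what those decorators might emit, here is a hypothetical shape for a single Article 12 record. The field names, the `TraceRecord` class, and the `to_json` helper are assumptions for illustration; the actual schemas are the public ones in the open-source repository.

```python
# Illustrative only: a hypothetical shape for one logged agent decision.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TraceRecord:
    """One Article 12-style record: a single traced agent decision."""
    span: str                     # trace name, e.g. "credit.assess"
    started_at: str               # ISO 8601 timestamp
    inputs_digest: str            # hash of inputs, not raw personal data
    decision: str
    confidence: float
    human_reviewed: bool          # Article 14 oversight checkpoint
    metadata: dict = field(default_factory=dict)

    def to_json(self) -> str:
        """Serialize deterministically for an append-only evidence store."""
        return json.dumps(asdict(self), sort_keys=True)

record = TraceRecord(
    span="credit.assess",
    started_at="2026-08-02T09:00:00+00:00",
    inputs_digest="sha256:demo",
    decision="refer",
    confidence=0.74,
    human_reviewed=True,          # below the 0.8 threshold, routed to a human
)
```

The design point the sketch illustrates: records carry digests rather than raw application data, so the evidence trail satisfies record-keeping without becoming a second personal-data store.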
Why now
There are two windows.
Until August 2026, deployers can adopt Fuze as a free early-access product, build the runtime spine of their compliance posture, and walk into enforcement with an audit trail rather than a folder.
After August 2026, the cost of not having that evidence becomes the EU AI Act's enforcement schedule.
What's next
- Production-grade SDK in Python and TypeScript, drop-in for the major agent frameworks.
- Article-by-Article mapping for high-risk Annex III deployers (recruitment, education, credit, public services).
- Tenant-scoped evidence locker with retention and export.
- Sector packs: financial services, recruitment, edtech.
Built for the EU, from the EU. More at fuze-ai.tech.