What is the EU AI Act?
The EU AI Act is the world's first comprehensive law regulating artificial intelligence. The European Parliament adopted it in March 2024 and the law entered into force on 1 August 2024. The rules take effect in phases between 2025 and 2027.
The law has three goals: ensuring the safety of AI systems, protecting fundamental rights and keeping innovation possible. The EU seeks a balance - strict enough to protect citizens, flexible enough not to paralyze European businesses.
For business owners in the Netherlands, this is relevant legislation. If you use AI in your business - whether that's ChatGPT, an AI chatbot on your website or an automated decision-making system - you need to know which rules apply to you.
When Does It Take Effect?
The EU AI Act takes effect in phases:
- February 2025: the ban on prohibited AI practices (unacceptable risk) applies
- August 2025: Rules for general purpose AI models (like GPT-4, Claude) take effect
- August 2026: Most other obligations, including high-risk AI systems
- August 2027: Rules for high-risk AI systems that fall under existing EU product legislation
So you still have time to prepare, but the bans on the most severe category are already active.
The Risk Classification System
The core of the EU AI Act is a classification into four risk levels. The higher the risk, the stricter the rules.
Unacceptable risk (prohibited)
These AI applications are banned immediately in the EU:
- Social scoring by governments and companies (comparable to China's social credit system)
- Emotion recognition in the workplace and in education
- Biometric categorization based on sensitive characteristics (race, religion, sexual orientation)
- Untargeted scraping of faces from the internet or security cameras for facial recognition databases
- AI that manipulates vulnerable groups (children, elderly, people with disabilities)
- Predictive policing based solely on profiling
High risk
AI systems that can have a major impact on people. Examples:
- AI in recruitment and selection (CV screening, evaluating job interviews)
- Credit scoring and insurance assessment
- AI in education (grading exams, determining access)
- AI in the justice system (recidivism prediction, sentencing advice)
- AI in medical devices and diagnostics
- Biometric identification (facial recognition)
- AI in critical infrastructure (energy, water, transport)
For high-risk systems, strict requirements apply: risk assessment, technical documentation, human oversight, transparency and accuracy testing.
Limited risk
AI systems that interact with people without being high risk. Think of chatbots, AI-generated content and emotion recognition systems that don't fall under "prohibited."
The main obligation here is transparency: you must tell people they're communicating with AI. If a customer chats with an AI bot on your website, it must be clear that it's not a human.
Minimal risk
Most AI applications fall here: spam filters, recommendation systems, translation tools, AI assistants for internal use. No specific obligations under the EU AI Act, although existing laws still apply (GDPR, consumer law).
What Should You Do as a Business Owner?
The actions depend on how you use AI. Here's a practical checklist:
Step 1: Inventory your AI usage
Make a list of all AI tools and systems your business uses; a minimal inventory sketch follows the list. Think broader than just ChatGPT:
- Chatbots on your website
- AI features in your CRM, ERP or HR software
- Automated email campaigns with AI personalization
- AI-driven analytics and reporting
- Recruiting tools with AI screening
- Content generation with AI
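Such an inventory doesn't need to be more than a structured list. A minimal sketch in Python (the tools, vendors and fields shown are illustrative examples, not categories prescribed by the Act):

```python
from dataclasses import dataclass

@dataclass
class AITool:
    name: str            # e.g. "website chatbot"
    vendor: str          # who supplies the model or service
    purpose: str         # what you use the tool for
    data_processed: str  # e.g. "customer names, chat history"

# Illustrative entries; replace with your own tools.
inventory = [
    AITool("Website chatbot", "aiagent.nl", "customer support", "chat history"),
    AITool("CV screening", "HR suite", "recruitment", "applicant CVs"),
    AITool("Spam filter", "email provider", "filtering inbound mail", "email metadata"),
]

for tool in inventory:
    print(f"{tool.name} ({tool.vendor}): {tool.purpose}")
```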
Step 2: Classify the risk
Determine which risk category each AI application falls into. Most SME applications fall into "minimal" or "limited" risk. If you use AI for recruitment, credit assessment or medical purposes, you're probably in "high risk."
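As a rough illustration of that triage, a classification helper might look like the sketch below. The category sets here are simplified stand-ins; the Act's annexes, not this mapping, are decisive:

```python
# Simplified, illustrative mapping; the EU AI Act's annexes are decisive.
HIGH_RISK_USES = {"recruitment", "credit scoring", "medical diagnostics",
                  "education assessment", "biometric identification"}
LIMITED_RISK_USES = {"chatbot", "content generation"}

def classify(use_case: str) -> str:
    """Return an indicative EU AI Act risk category for a use case."""
    if use_case in HIGH_RISK_USES:
        return "high risk"
    if use_case in LIMITED_RISK_USES:
        return "limited risk"
    return "minimal risk"

print(classify("recruitment"))   # high risk
print(classify("chatbot"))       # limited risk
print(classify("spam filter"))   # minimal risk
```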
Step 3: Arrange transparency
This applies to virtually everyone. If customers or employees interact with AI (a short disclosure sketch follows the list):
- State that they're communicating with AI (chatbot, AI assistant)
- Label AI-generated content as such
- Document which AI tools you use and for what purpose
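For a chatbot, the disclosure can be as simple as a fixed opening message. A minimal sketch; the wording is our own suggestion, not text prescribed by the Act:

```python
DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "A human colleague can take over at any time."
)

def start_conversation(send) -> None:
    """Open every chat with the AI disclosure before any other reply."""
    send(DISCLOSURE)

start_conversation(print)
```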
Step 4: High-risk obligations (if relevant)
If you use high-risk AI systems (a minimal logging sketch follows the list):
- Conduct a risk assessment (conformity assessment)
- Prepare technical documentation
- Set up human oversight
- Ensure logging and traceability
- Implement a quality management system
- Register the system in the EU database (mandatory for high-risk)
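Logging and traceability can start small. A sketch of an append-only audit record per AI-assisted decision (the field names and file path are illustrative assumptions, not requirements from the Act):

```python
import json
from datetime import datetime, timezone

def log_decision(system: str, input_summary: str, output: str,
                 human_reviewer: str | None = None) -> None:
    """Append one auditable record per AI-assisted decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "input_summary": input_summary,  # avoid logging raw personal data
        "output": output,
        "human_reviewer": human_reviewer,  # who oversaw or overrode the decision
    }
    with open("ai_audit.log", "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("cv-screening", "applicant 1042", "invite to interview",
             human_reviewer="hr@example.com")
```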
Transparency Obligations
Transparency is the thread running through the EU AI Act. Regardless of risk category, these rules apply:
Report AI interaction - If a person communicates with an AI system, it must be clear that it's AI. A chatbot on your website must be recognizable as a chatbot, not as a human.
Label AI-generated content - Text, images, audio or video created by AI must be identifiable as such. This applies especially to deepfakes and synthetic media, but also to AI-generated marketing texts in certain contexts.
Duty to inform - For high-risk systems, the people affected by them must know that AI plays a role in the decision. An applicant must know that AI screened their CV.
The transparency obligations aren't meant to discourage AI use. They ensure people can respond in an informed way. Someone who knows a chatbot is AI asks different questions than someone who thinks they're talking to a human.
General Purpose AI (GPAI)
The EU AI Act has specific rules for general purpose AI models - the large language models like GPT-4, Claude and Gemini. These rules target the model makers (OpenAI, Anthropic, Google), not the businesses that use them.
What model makers must do:
- Publish technical documentation
- Establish copyright policy and describe training datasets
- Comply with the EU Copyright Directive
- Cooperate with downstream providers
For models with "systemic risk" (the most powerful models), extra requirements apply: red teaming, incident monitoring, cybersecurity and energy consumption reporting.
As a user of these models (via ChatGPT, Claude or an AI Agent), you're primarily responsible for transparency toward your customers and employees, not for the technical compliance of the model itself.
Fines
The EU AI Act imposes fines comparable to the GDPR; a worked example of how the cap works follows the list:
- Prohibited AI practices: up to 35 million euros or 7% of global turnover
- Other violations: up to 15 million euros or 3% of global turnover
- Providing incorrect information to regulators: up to 7.5 million euros or 1% of global turnover
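For an undertaking, the cap is the fixed amount or the turnover percentage, whichever is higher. A quick calculation as illustration:

```python
def fine_cap(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Maximum fine: fixed amount or % of global turnover, whichever is higher."""
    return max(fixed_eur, pct * turnover_eur)

# Prohibited-practice violation for a company with EUR 600M global turnover:
# 7% of 600M = 42M, which exceeds the 35M fixed amount.
print(fine_cap(600e6, 35e6, 0.07))  # 42000000.0
```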
For SMEs and startups, proportional fines apply: the cap is the lower of the two amounts rather than the higher. The EU wants to prevent small businesses from being disproportionately affected.
Enforcement will be organized nationally. In the Netherlands, a yet-to-be-designated regulator will handle this, probably the Data Protection Authority (which also enforces the GDPR) or a newly established authority.
The Relationship with the GDPR
The EU AI Act doesn't replace the GDPR - they complement each other. If your AI system processes personal data, both laws apply simultaneously.
In practice, this means:
- GDPR obligations remain (processing register, data processing agreements, data subject rights)
- The EU AI Act adds AI-specific requirements (transparency, risk assessment, human oversight)
- For high-risk AI with personal data, you must conduct a Data Protection Impact Assessment (DPIA)
The overlap makes sense. An AI system that screens job applications processes personal data (GDPR) and is a high-risk AI system (AI Act). Both laws apply.
What Does This Mean for AI Agents?
AI Agents - continuously running AI systems that are reachable via multiple channels - fall in most cases under "limited risk." The transparency obligation is the most important: customers communicating with an AI Agent must know it's AI.
Points of attention for businesses with an AI Agent (a brief oversight sketch follows the list):
- Clear identification: the AI Agent must not impersonate a human
- Data isolation: the agent only processes data for which you have consent
- Logging: store interactions for audit purposes
- Human oversight: ensure a human can intervene if the agent acts incorrectly
- Information to customers: mention in your privacy policy that you use AI
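The identification and human-oversight points can be combined in one simple pattern: label every reply as AI and hand over to a human when the agent is unsure. A sketch under assumptions of our own (the threshold, function names and labeling are hypothetical, not part of the Act):

```python
# Illustrative human-takeover hook for an AI agent; names are hypothetical.
CONFIDENCE_THRESHOLD = 0.7

def agent_reply(message: str, model_answer: str, confidence: float) -> str:
    """Answer as AI when confident; otherwise hand over to a human."""
    if confidence < CONFIDENCE_THRESHOLD:
        return escalate_to_human(message)
    return f"[AI] {model_answer}"  # visible AI label on every reply

def escalate_to_human(message: str) -> str:
    # In production this would open a ticket or page an on-call colleague.
    return "I'm handing you over to a human colleague."

print(agent_reply("Can I get a refund?", "Yes, within 30 days.", 0.92))
print(agent_reply("Complex legal question", "...", 0.40))
```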
If the AI Agent makes decisions that directly impact people (credit advice, medical advice, legal advice), the classification shifts to "high risk" and the stricter requirements apply.
How aiagent.nl Handles This
At aiagent.nl, compliance is built into our approach:
- EU hosting: data is processed and stored in Europe (eu-central-1)
- Data isolation: each customer runs in their own dedicated server
- Transparency: AI Agents identify themselves as AI, not as human
- Logging: all interactions are logged for audit and quality control
- Anthropic models: Claude, as a GPAI model, falls under the model-level obligations, which Anthropic fulfills
- No prohibited applications: we don't build social scoring, emotion recognition or manipulative systems
The EU AI Act isn't an obstacle - it's a quality mark. Companies that are compliant show that they use AI responsibly. In a market where trust in AI is growing, that's a competitive advantage.
Have questions about how the EU AI Act applies to your AI usage? We're happy to help. Get in touch via aiagent.nl.
