The EU AI Act is real. It's coming. And for once, European regulators are moving faster than most of the businesses they regulate.
The problem: almost everything published about the EU AI Act is aimed at enterprise legal teams. This post is for you: an Amazon or Bol.com seller who uses AI tools, probably doesn't have a legal department, and needs to know what actually matters for your business.
What the EU AI Act Actually Is
The EU AI Act is the world's first comprehensive law regulating artificial intelligence systems. It was approved by the European Parliament in March 2024, entered into force on August 1, 2024, and applies in stages, with most obligations kicking in on August 2, 2026.
The Act classifies AI systems into risk categories — from "minimal risk" (almost everything, no obligations) to "unacceptable risk" (banned outright). The risk category your AI tools fall into determines what you must do, if anything.
The Risk Categories, Explained Simply
Minimal Risk — No Obligations
Most AI applications fall here. If you're using AI for internal tasks that don't make decisions about individuals, you have no specific obligations under the AI Act. This includes:
- AI tools you use internally for your own operations (inventory forecasting, pricing analysis, demand planning)
- AI-powered productivity tools (writing assistants, transcription tools)
- Most automation tools that handle logistical rather than customer-facing decisions
Limited Risk — Transparency Obligations
If your AI interacts directly with consumers, you must disclose that they're interacting with AI. This includes:
- AI chatbots that customers interact with on your website
- Automated customer service replies that currently read as if a human wrote them
- AI-generated content presented to customers as if a human created it
The obligation: You must tell users they're interacting with an AI system. That's it.
High Risk — Significant Obligations
This is where it gets serious. High-risk AI systems are those that make or meaningfully influence decisions about individuals. The EU defines this broadly. For e-commerce sellers, this could include:
- AI systems that make automated hiring or employment decisions
- AI that evaluates creditworthiness of individuals
- AI in safety components of products (less relevant for typical sellers)
The obligations are substantial: Registration in an EU database, risk assessments, data governance requirements, human oversight measures, transparency documentation, and ongoing monitoring.
Unacceptable Risk — Banned
AI systems that manipulate human behavior in deceptive ways, exploit vulnerabilities, or enable social scoring by governments. Not relevant to normal e-commerce operations.
What Risk Category Does the Average Seller Fall Into?
For most small and medium businesses using off-the-shelf AI tools, the answer is reassuring: your current AI tools almost certainly fall into the "minimal risk" or "limited risk" category. You likely have no significant legal obligations, but you should verify this and document your assessment.
The AI Tools You're Probably Using Right Now
| Tool / Use Case | Risk Level | Your Obligation |
|---|---|---|
| ChatGPT / Claude for writing product descriptions | Minimal | None — internal content creation tool |
| AI chatbot on your website answering customer questions | Limited | Disclose it's AI — "This is an AI assistant" |
| Automated repricing (rules-based) | Minimal | None — automated decision-making on pricing (not about individuals) |
| AI agent handling customer service messages | Limited | Disclose it's AI in the response |
| Inventory demand forecasting AI | Minimal | None — internal business decision tool |
| AI screening job applicants for warehouse roles | High Risk | Significant — documentation, oversight, potential registration required |
| AI system making credit decisions about sellers | High Risk | Significant — would require full compliance assessment |
What You Actually Need to Do Before August 2026
The honest answer: for most small and medium businesses, the EU AI Act requires less than you probably think. Here's the minimum checklist.
Step 1: Inventory Your AI Tools
Make a list of every AI system you use in your business. Include: what it does, whose data it processes, and whether it makes decisions about individuals (customers, employees, suppliers).
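A spreadsheet is fine for this. If you'd rather keep the list in a structured format, here's a minimal sketch in Python of what one inventory entry could record (the field names are illustrative, not something the Act prescribes):

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIToolRecord:
    """One entry in the AI tool inventory. Field names are illustrative, not mandated by the Act."""
    name: str                   # e.g. "Website chatbot"
    purpose: str                # what the tool does for your business
    data_processed: str         # whose data it touches: "customers", "employees", "none"
    decides_about_people: bool  # does it make or influence decisions about individuals?

# Example inventory; replace with your own tools
inventory = [
    AIToolRecord("Product description assistant", "drafts listing copy", "none", False),
    AIToolRecord("Website chatbot", "answers customer questions", "customers", False),
    AIToolRecord("Applicant screening tool", "ranks warehouse applicants", "job applicants", True),
]

# Dump to JSON so the list can sit alongside your other compliance documents
print(json.dumps([asdict(t) for t in inventory], indent=2))
```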
Step 2: Categorize Each Tool
Using the table above, categorize each tool as minimal, limited, or high risk. If any tool might be high risk, consult a lawyer: high-risk obligations are significant, and fines under the Act can reach €15 million or 3% of global annual turnover for breaches of high-risk obligations, and up to €35 million or 7% for the most serious violations, whichever is higher.
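For a first pass, the categorization boils down to two questions about each tool. Here's a minimal sketch of that rule of thumb, mirroring the table above (a rough filter, not legal advice; the category labels are just illustrative):

```python
def rough_risk_category(decides_about_people: bool, consumer_facing: bool) -> str:
    """First-pass categorization mirroring the table above. Not legal advice:
    anything that comes back as high risk should go to a lawyer."""
    if decides_about_people:      # hiring, credit, or similar decisions about individuals
        return "high risk (get legal advice)"
    if consumer_facing:           # chatbots or automated replies customers interact with
        return "limited risk (disclose that it's AI)"
    return "minimal risk (no specific obligations)"

# Example: a website chatbot that answers customer questions
print(rough_risk_category(decides_about_people=False, consumer_facing=True))
# -> limited risk (disclose that it's AI)
```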
Step 3: Fix Your Chatbot Disclosure
If you have an AI chatbot on your website, add a disclosure. Something as simple as "This is an AI assistant. For complex issues, you can reach us at [email]." is enough. Transparency violations can in principle be fined, but for a small seller the realistic exposure is reputational, and this is an easy fix.
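If some of your customer messages are answered by an AI agent over an API rather than a website widget, one way to make the disclosure impossible to forget is to prepend it in code instead of relying on a prompt. A minimal sketch, where generate_reply is a hypothetical stand-in for whatever AI service you actually call:

```python
AI_DISCLOSURE = (
    "This is an AI assistant. For complex issues, you can reach us at [email].\n\n"
)

def generate_reply(message: str) -> str:
    # Stand-in for the AI service you actually use
    return "Thanks for reaching out. Your order ships tomorrow."

def send_customer_reply(message: str) -> str:
    """Prepend the disclosure to every automated reply so it can't be skipped."""
    return AI_DISCLOSURE + generate_reply(message)

print(send_customer_reply("Where is my order?"))
```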
Step 4: Document Your Assessment
Write down what you checked, what you found, and what you concluded. This document is your evidence that you took the Act seriously. If a regulator ever asks, you can show your assessment. This is not a legal document — it's a practical one.
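If you kept the Step 1 inventory in code, the same list can double as the written record. A minimal sketch that writes a dated assessment to a markdown file (the file name and format are just one way to do it):

```python
from datetime import date

def write_assessment(entries: list[dict], path: str = "ai-act-assessment.md") -> None:
    """Write a dated, human-readable record of what was checked and concluded."""
    lines = [f"# AI Act self-assessment ({date.today().isoformat()})", ""]
    for e in entries:
        lines.append(f"- **{e['name']}**: {e['purpose']} -- category: {e['category']}")
    with open(path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n")

write_assessment([
    {"name": "Website chatbot", "purpose": "answers customer questions", "category": "limited risk"},
    {"name": "Demand forecasting tool", "purpose": "plans inventory", "category": "minimal risk"},
])
```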
What About GDPR?
GDPR and the EU AI Act are separate regulations, but they overlap. GDPR applies to all personal data processing. The AI Act adds specific requirements on top of GDPR for AI systems.
For most sellers: if you're GDPR-compliant (and if you sell to EU customers, you should already be), the AI Act doesn't add much unless you're using high-risk AI systems.
If you don't have a GDPR privacy policy on your website, fix that first. It's more urgent than the AI Act for most sellers.
The Real Risk for SMB Sellers
Here's the honest assessment: for the vast majority of small and medium businesses, the EU AI Act is a non-event from a legal compliance perspective. Your AI tools fall into minimal or limited risk, and your obligations are straightforward or non-existent.
The real risk is operational:
- If a regulator investigates your AI chatbot and it's not disclosed as AI, you face reputational damage and potential enforcement
- If you're using AI that influences employment decisions without proper documentation and human oversight, your exposure is much higher
- If you ignore the Act entirely and a complaint is filed, you'll have no documentation to show you took it seriously
The practical risk for most sellers isn't the headline multi-million-euro fines (you won't hit those thresholds). It's that a competitor files a complaint about your undisclosed AI chatbot, or a customer claims an AI system made a decision about them without transparency. Having a documented AI Act assessment on file shows good faith. Not having one looks like negligence.
Where the AI Act Gets Relevant for Automation Sellers
If you're building or selling automation systems for other businesses — or if you're working with an automation agency — the AI Act becomes more relevant:
- If you build AI-powered automation for clients, you may need to register certain systems in the EU database
- Your client contracts should include AI Act compliance representations
- Documentation requirements for the AI systems you build become part of your deliverable
The Bottom Line
August 2, 2026 is coming. For most small and medium businesses: the EU AI Act is manageable. Do a 30-minute inventory of your AI tools, fix your chatbot disclosures if you have them, and document your assessment. Total time: 2–4 hours.
If you're using AI in high-risk ways — automated employment decisions, creditworthiness assessments — you need a lawyer, not a blog post.
Want a practical AI Act compliance assessment for your automation setup?
Book a free 30-minute call. We'll walk through your AI tool inventory and tell you what actually needs to change before August 2026.
Book a Free Discovery Call →

Continue reading: Does the EU AI Act Apply to Your Amazon or Bol.com Business? (Quick Quiz) — a 2-minute self-assessment to determine your specific risk level.