Glossary: EU AI Act

What is the EU AI Act?
The EU AI Act (officially: Regulation (EU) 2024/1689) is the world's first comprehensive AI law. It sets binding requirements for how artificial intelligence may be developed, distributed, and used within the European Union. The goal: AI systems should be safe, transparent, and compliant with fundamental rights — without unnecessarily blocking innovation.
The regulation entered into force on August 1, 2024 (EU Commission, 2024). For high-risk AI systems, the Act will be fully applicable starting August 2, 2026. Companies using AI — from simple chatbot recommendations to automated credit decisions — must prove that their systems are compliant by then.
Branchly already meets the requirements of the EU AI Act today: Hosting in European data centers, full data sovereignty in the EU, transparent AI processes. No risk for your business, no last-minute actions required.
How does the EU AI Act work? The Risk Classification System
The Act classifies all AI systems into four risk levels. This classification determines what obligations you have as a provider or operator.
Level 1: Unacceptable Risk (Prohibited)
AI systems that violate fundamental rights are prohibited outright. This includes:
Social scoring systems by authorities
Biometric real-time surveillance in public spaces (with narrowly defined exceptions)
Manipulation through subliminal techniques that harm the user
AI that exploits vulnerabilities of specific groups of people
These prohibitions have been in place since February 2, 2025.
Level 2: High Risk (Stringent Requirements)
High-risk AI includes systems that make critical decisions about people. The full application of these rules starts on August 2, 2026. Typical examples:
AI in credit granting, insurance, or personnel selection
Biometric identification
AI in critical infrastructure (energy, water, transportation)
Education and professional qualification decisions
Systems that support law enforcement or administration of justice
Providers of high-risk AI must, among other things: establish a risk management system, document training data, ensure human oversight, guarantee technical robustness, and register with an EU database.
Level 3: Limited Risk (Transparency Obligations)
AI systems with limited risk are primarily subject to transparency requirements. This particularly involves AI chatbots and generative AI: Users must know that they are interacting with an AI system — not a human. Deepfakes must be labeled as such.
This level is the relevant category for most companies using website AI. Branchly automatically communicates transparently that visitors are interacting with an AI system. This requirement is already embedded in the product.
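The transparency duty at this level can be made concrete with a short sketch: a chat session that discloses its AI nature before the first reply. The class and wording below are hypothetical illustrations, not Branchly's actual implementation.

```python
# Sketch of the limited-risk transparency duty: the user is told they
# are talking to an AI before any interaction takes place.
# (Hypothetical code, not the actual product implementation.)

DISCLOSURE = "You are chatting with an AI assistant, not a human."

class ChatSession:
    def __init__(self):
        # The disclosure is always the first message of a session,
        # so the user is informed before any reply is generated.
        self.messages = [{"role": "system_notice", "text": DISCLOSURE}]

    def reply(self, user_text: str) -> str:
        self.messages.append({"role": "user", "text": user_text})
        answer = f"(AI) Answering: {user_text}"  # placeholder for the model call
        self.messages.append({"role": "assistant", "text": answer})
        return answer

session = ChatSession()
print(session.messages[0]["text"])  # disclosure precedes every conversation
```

The design point is simply that the disclosure is baked into session creation, so no conversation can start without it.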
Level 4: Minimal or No Risk (No Special Requirements)
The vast majority of all AI applications fall into this category: AI-powered spam filters, recommendation algorithms, simple image editing. No specific obligations from the AI Act apply to these systems.
EU AI Act vs. GDPR: What is new?
The AI Act complements the GDPR — it does not replace it. Both apply simultaneously. Here are the key differences:
| Feature | GDPR | EU AI Act |
|---|---|---|
| Subject | Processing of personal data | Development and use of AI systems |
| In force since | May 2018 | August 2024 |
| Full application | Immediately (2018) | Phased until August 2026 |
| Maximum penalty (Tier 1) | 4% of global annual revenue | 7% of global annual revenue |
| Addressees | Companies processing data | Providers, operators, and importers of AI |
| Enforcement | Data protection authorities (national) | AI market surveillance authorities (national) + EU AI Office |
| Cumulative applicability | Applies in parallel to the AI Act | May apply in addition to the GDPR |
| Burden for SMEs | Established compliance practices | New, complex, high one-time costs |
As of March 2026, a total of around 5.8 billion euros in GDPR fines had been imposed (BuiltinEU, 2025). The maximum penalty under the EU AI Act is 7% of annual revenue, almost double the GDPR maximum of 4% — a clear signal of how seriously the EU takes this regulatory area.
Fines and Penalties: The Three Tiers
The fine system of the EU AI Act is tiered — depending on the severity of the violation (Art. 99 EU AI Act, via matproof.com, 2025):
Tier 1 — Prohibited AI Practices:
Up to 35 million euros or 7% of global annual revenue (whichever is higher). Applies to violations of the prohibitions from Art. 5 — meaning genuine violations of fundamental rights.
Tier 2 — Violations of High-Risk Requirements:
Up to 15 million euros or 3% of global annual revenue. Relates to errors in risk management systems, documentation, technical robustness, or transparency regarding high-risk AI.
Tier 3 — False Statements to Authorities:
Up to 7.5 million euros or 1.5% of global annual revenue. Applies to misinformation in registrations or communications with authorities.
Adjusted upper limits apply to SMEs and startups — the Act explicitly states that smaller companies are treated proportionately. Nevertheless: Ignoring is not a strategy.
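The "whichever is higher" rule behind these tiers is easy to misread, so here is a minimal arithmetic sketch. The function name and structure are illustrative only; the euro caps and percentages are the Tier 1–3 figures stated above.

```python
# Illustrative sketch of the Art. 99 "whichever is higher" rule,
# using the three fine tiers described in this article.
# Not legal advice; the function and names are hypothetical.

TIERS = {
    1: (35_000_000, 0.07),   # prohibited AI practices (Art. 5)
    2: (15_000_000, 0.03),   # violations of high-risk requirements
    3: (7_500_000, 0.015),   # false statements to authorities
}

def max_fine(tier: int, global_annual_revenue: float) -> float:
    """Theoretical maximum fine: the fixed cap or the revenue-based
    cap, whichever is higher."""
    fixed_cap, pct = TIERS[tier]
    return max(fixed_cap, pct * global_annual_revenue)

# A company with EUR 1 billion global revenue and a Tier 1 violation:
# 7% of 1 billion = 70 million, which exceeds the 35 million fixed cap.
print(max_fine(1, 1_000_000_000))  # 70000000.0
```

For large companies the revenue-based cap dominates; for small ones the fixed cap does — which is why the Act's proportionality clause for SMEs matters.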
Timeline: When does what apply?
| Date | What takes effect |
|---|---|
| August 1, 2024 | EU AI Act enters into force |
| February 2, 2025 | Prohibitions on unacceptable-risk AI apply |
| August 2, 2025 | Governance rules and general-purpose AI models (GPAI) |
| August 2, 2026 | High-risk AI fully applicable (Art. 6 para. 2 systems) |
| August 2, 2027 | High-risk AI embedded in existing products (Art. 6 para. 1) |
The State of Readiness: Concerning
The numbers are clear: a large share of European companies are not yet ready.
According to a survey by Littler (November 2025) among European employers, only 18% indicate that they are "very well prepared" — while 20% are completely unprepared (Littler, 2025).
In Germany, the picture is similar: 69% of German companies want concrete help in implementing the AI Act, as shown by a Bitkom survey (Bitkom, July 2025). This is hardly surprising, as the compliance costs for high-risk AI are significant: According to the EU Commission and Eurochambres, companies must expect 319,000 to 600,000 euros just to make a single high-risk AI system compliant (Eurochambres, 2026).
Additionally, the national enforcement structure is still being established: By March 2026, only 8 of 27 EU member states had designated a national contact point for the AI Act (EPRS, March 2026). This does not change the applicability of the regulation — but delays legal certainty for companies.
For branchly customers, this effort is largely taken care of: The platform is hosted on Azure EU servers, all processes are documented and transparent, and AI interactions are processed in compliance with GDPR on EU infrastructure.
EU AI Act in practice: Three Industries
E-Commerce
An online shop using an AI chatbot for product advice typically falls into the "limited risk" level. The requirement: Users must know they are speaking with AI. Branchly ensures this automatically — every conversation clarifies that it is an AI system. In contrast, using AI for automated price discrimination or personalized manipulation without consent moves into much riskier terrain.
Practical Tip: Check whether your recommendation engine or chatbot makes decisions about individuals — or simply informs them. The difference determines the risk class. Hundreds of European e-commerce companies have already taken this path with branchly, which has processed over 40 million AI sessions (source: branchly, 2026).
Tourism
For tourism companies, AI chatbots are typically low-threshold systems: They answer questions, provide recommendations, support bookings. This falls under "limited" or "minimal" risk. The challenge lies less in risk classification than in documentation: What data flows into the AI? Are guest data processed outside the EU?
Branchly supports 101 languages natively — tourism websites can serve international guests in their native language without transferring data to third countries. The widget interaction rate is 5–10% (industry average: 0.5–1%) — which corresponds to a tenfold difference (source: branchly, 2026).
Financial Services
Here it gets serious: AI involved in credit decisions, risk assessments, or fraud detection falls under high risk. The requirements are strict — risk management system, documentation of training data, human oversight, registration. Using AI only for FAQ bots or lead qualification operates in the "limited risk" area and meets transparency obligations.
Branchly serves 11 million users in this environment — including financial service providers like IKB, which rely on EU-compliant AI communication. The platform is available from 499 euros/month (Starter, 1,000 sessions) and covers the transparency obligations of the AI Act without additional effort.
Related Terms
GDPR-compliant AI
AI Chatbot
Conversational AI
Natural Language Processing (NLP)
WCAG
RAG (Retrieval-Augmented Generation)
Frequently Asked Questions
What is the EU AI Act in simple terms?
The EU AI Act is the world's first comprehensive AI law. It stipulates what requirements AI systems in the EU must meet — depending on the risk they pose to humans. Prohibited practices such as social scoring have been banned since February 2025. High-risk AI must meet strict requirements starting in August 2026. All other AI systems are mainly subject to transparency obligations.
When does the EU AI Act apply to my company?
That depends on what type of AI you are using. Bans have been in effect since February 2, 2025. High-risk systems (such as AI in lending or personnel selection) must be fully compliant by August 2, 2026. For AI chatbots and generative AI — meaning most website applications — transparency obligations have been gradually introduced.
Is my AI chatbot a high-risk system under the EU AI Act?
In most cases: no. Chatbots for product advice, FAQs, support, or tourism information typically fall into the category of "limited risk" — they must identify themselves as AI but do not need to demonstrate a risk management system. High risk arises only when AI systems make decisions about people: credit, employment, law enforcement, education. If you’re unsure: have your use case reviewed by a lawyer with expertise in the AI Act.
What penalties exist for violations of the EU AI Act?
The fine system has three levels. Violations of prohibited AI practices (Tier 1): up to €35 million or 7% of global annual revenue. Violations of high-risk requirements (Tier 2): up to €15 million or 3%. False representations to authorities (Tier 3): up to €7.5 million or 1.5%. For comparison: The GDPR maximum is 4% — the AI Act exceeds that (matproof.com, 2025).
How does the EU AI Act differ from the GDPR?
The GDPR regulates how personal data is processed. The EU AI Act regulates how AI systems can be developed and deployed. Both regulations apply simultaneously and can overlap — for example, when an AI system uses personal data for decisions. The good news: Those who operate in compliance with the GDPR have already completed many of the necessary foundations.
What does transparency obligation mean for AI chatbots specifically?
Users must be able to recognize that they are interacting with an AI system — not a real person. This means in practice: a clear indication at the start of the conversation (e.g., "You are chatting with our AI assistant"). Deepfakes and synthetic media must also be labeled as AI-generated. branchly communicates this transparency as a standard in every interaction.
Does the EU AI Act also apply to companies outside the EU?
Yes. Similar to the GDPR, the AI Act follows the market-location principle: anyone offering AI systems in the EU, or whose outputs are used in the EU, is subject to the Act — regardless of where the company is based. This has practical consequences: US AI providers without an EU establishment must appoint an EU representative.
How well prepared are European companies for the EU AI Act?
Poorly, according to surveys. According to Littler (November 2025), only 18% of European employers are "very well prepared" — 20% are not prepared at all (Littler, 2025). In Germany, 69% of companies report needing specific help with implementation (Bitkom, 2025). Those using AI from a compliant-by-design provider like branchly significantly reduce this effort.
What does EU AI Act compliance cost for SMEs?
For high-risk AI systems, the costs are significant: According to the EU Commission and Eurochambres, companies estimate €319,000 to €600,000 for a single system (Eurochambres, 2026). For AI systems with limited risk — such as typical chatbots and website AI — the expenses are significantly lower: Transparency obligations, documentation, and clear labeling as AI are sufficient in most cases.
How can I check whether my AI is compliant with the EU AI Act?
Start with a risk classification: Does your AI make automated decisions about people in sensitive areas (credit, work, education, law enforcement)? If so, you are in the high-risk area. If not, check whether you fulfill the transparency obligations: Do users know they are interacting with AI? Is data processed only in the EU? An EU AI Act-compliant provider like branchly can handle much of this assessment for you — hosting on Azure EU, documented processes, GDPR compliance included.
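The triage described above can be sketched as a simple decision function. The categories and keywords below are deliberate simplifications for illustration — real classification follows Annex III of the Act and belongs with a lawyer, not a script.

```python
# Rough risk triage following the four levels described in this article.
# Hypothetical and simplified; not legal advice.

PROHIBITED = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK = {"credit", "employment", "education", "law_enforcement",
             "critical_infrastructure", "biometric_identification"}

def classify(use_case: str, interacts_with_humans: bool) -> str:
    """Map a use case to one of the four AI Act risk levels."""
    if use_case in PROHIBITED:
        return "unacceptable (prohibited)"
    if use_case in HIGH_RISK:
        return "high risk"
    if interacts_with_humans:
        # Chatbots, generative AI: transparency obligations apply.
        return "limited risk (transparency obligations)"
    return "minimal risk"

print(classify("credit", True))       # high risk
print(classify("faq_chatbot", True))  # limited risk (transparency obligations)
```

The order of the checks mirrors the Act's logic: prohibitions first, then the high-risk catalog, then transparency duties, and only then the residual minimal-risk category.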





