Why the EU AI Act Matters for Every Website and Online Store
The European Union’s Artificial Intelligence Act — officially Regulation (EU) 2024/1689 — entered into force on August 1, 2024. It is the world’s first comprehensive legal framework specifically designed to regulate artificial intelligence. And if you run a website or an online store that serves European users, it almost certainly applies to you.
This is not a distant regulation reserved for Big Tech. If you use a chatbot, a product recommendation engine, AI-powered search, automated pricing, fraud detection, or even AI-generated marketing copy, you have obligations under this law.
Let’s break down exactly what those obligations are and, more importantly, how to meet them.
Understanding the Risk-Based Framework
The entire architecture of the EU AI Act is built on a four-tier risk classification system. Your compliance obligations depend entirely on where your AI systems fall within this framework.
The Four Risk Tiers at a Glance
| Risk Level | Description | Examples in Websites / E-commerce | Key Obligations |
|---|---|---|---|
| Unacceptable | AI systems that pose a clear threat to fundamental rights. Banned outright. | Subliminal manipulation techniques, social scoring of customers, real-time biometric identification in public spaces | Total prohibition — do not deploy |
| High | AI systems used in sensitive areas listed in Annex III of the Act | AI-based credit scoring at checkout, biometric identity verification for age-gating, AI hiring tools on career pages | Conformity assessments, risk management systems, human oversight, data governance, logging, transparency, CE marking |
| Limited | AI systems that interact with humans or generate/manipulate content | Chatbots, AI customer service agents, AI-generated product descriptions, deepfake-style marketing videos | Transparency obligations — users must be told they are interacting with AI or viewing AI-generated content |
| Minimal | All other AI systems | Spam filters, basic product recommendation engines, inventory optimization, standard analytics | No specific obligations (voluntary codes of conduct encouraged) |
For the majority of website owners and online store operators, the most relevant categories are limited risk and minimal risk. But don’t breathe too easy — the limited-risk transparency requirements are binding, and getting them wrong carries severe penalties.
Key Timelines You Cannot Afford to Miss
The EU AI Act does not take effect all at once. It rolls out in phases. Here are the dates that matter most:
- February 2, 2025: Prohibitions on unacceptable-risk AI systems take effect. AI literacy obligations begin.
- August 2, 2025: Rules for general-purpose AI (GPAI) models apply. This affects anyone using foundation models like GPT-4, Claude, or Gemini via APIs.
- August 2, 2026: The majority of the Act’s provisions become enforceable, including the high-risk obligations for systems listed in Annex III and the Article 50 transparency rules.
- August 2, 2027: Extended deadline for certain high-risk AI systems embedded in products already regulated under existing EU sectoral legislation.
The first milestone — February 2025 — has already passed. If you have not yet assessed your AI systems against the prohibited practices list and started building AI literacy within your organization, you are already behind.
Practical Compliance Steps for Websites
Step 1: Inventory Every AI System You Use
This is where most businesses stumble. AI is now embedded in tools you might not even think of as “artificial intelligence.” Conduct a thorough audit.
Here is a starter checklist for a typical website or e-commerce platform:
- Chatbots and virtual assistants (Tidio, Zendesk AI, Intercom Fin, custom GPT-based bots)
- Product recommendation engines (Nosto, Clerk.io, Algolia Recommend, built-in PrestaShop/WooCommerce modules)
- Search functionality (Algolia, Elasticsearch with ML ranking, Doofinder)
- Dynamic pricing tools (Prisync, Competera, custom algorithms)
- Fraud detection (Signifyd, Riskified, Stripe Radar)
- Content generation (ChatGPT, Jasper, Copy.ai for product descriptions or blog posts)
- Email personalization (Klaviyo, Mailchimp AI features)
- Analytics and A/B testing (Google Analytics 4 predictive audiences, Optimizely)
- Ad targeting and bidding (Google Performance Max, Meta Advantage+)
- Image generation or editing (DALL-E, Midjourney, Adobe Firefly for product visuals)
Document each system with its provider, purpose, data inputs, data outputs, and the risk tier you believe it falls under.
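If you want the inventory in a structured, machine-checkable form rather than a spreadsheet, a minimal sketch in JavaScript follows. The field names are illustrative, not prescribed by the Act, and `createInventoryEntry` is a hypothetical helper:

```javascript
// Minimal AI-system inventory record. Field names are illustrative,
// not mandated by the AI Act -- adapt them to your own audit template.
function createInventoryEntry({ name, provider, purpose, dataIn, dataOut, riskTier }) {
  const validTiers = ["unacceptable", "high", "limited", "minimal", "unclassified"];
  if (!validTiers.includes(riskTier)) {
    throw new Error(`Unknown risk tier: ${riskTier}`);
  }
  return {
    name,       // e.g. "Support chatbot"
    provider,   // vendor name or "in-house"
    purpose,    // what the system is used for
    dataIn,     // data the system consumes
    dataOut,    // data or decisions it produces
    riskTier,   // your current classification (revisit after Step 2)
    reviewedAt: new Date().toISOString(), // date of this audit entry
  };
}

const inventory = [
  createInventoryEntry({
    name: "Support chatbot",
    provider: "Vendor X",
    purpose: "Answer customer FAQs",
    dataIn: ["chat messages"],
    dataOut: ["responses"],
    riskTier: "limited",
  }),
];
```

Keeping the inventory as data rather than prose makes it easy to filter (for example, list everything still marked `unclassified`) before each compliance review.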
Step 2: Classify Each System by Risk
Using the table above and the detailed criteria in Annexes I and III of the Act, assign a risk classification to each AI tool. When in doubt, consult a specialist. At Lueur Externe, we have been helping e-commerce businesses and website operators navigate complex regulatory landscapes since 2003, and the AI Act is one of the most significant new challenges we assist clients with.
A few classification tips:
- A chatbot that simply answers FAQs = limited risk (transparency needed).
- An AI system that determines whether to grant or deny consumer credit at checkout = high risk (full conformity assessment needed).
- A recommendation engine that suggests related products = minimal risk (no binding obligations, but good practice to be transparent).
- An AI system that uses subliminal techniques beyond a person’s consciousness to materially distort behavior = unacceptable risk (prohibited).
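The classification tips above can be expressed as a rough first-pass triage. This is a non-authoritative sketch: the use-case labels are ours, and real classification requires reading Annexes I and III (and often a specialist), not a switch statement.

```javascript
// Rough first-pass triage based on the examples in this article.
// NOT legal advice: real classification requires Annexes I and III.
function triageRiskTier(useCase) {
  switch (useCase) {
    case "subliminal-manipulation":
      return "unacceptable"; // prohibited outright -- do not deploy
    case "credit-scoring":
    case "biometric-id-verification":
      return "high"; // Annex III area: full conformity assessment needed
    case "chatbot":
    case "ai-generated-content":
      return "limited"; // Article 50 transparency duties apply
    default:
      return "minimal"; // no binding obligations; voluntary codes encouraged
  }
}
```

Treat any output of a helper like this as a starting hypothesis to be confirmed against the Act's text, not a conclusion.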
Step 3: Implement Transparency Obligations
For most websites and online stores, the transparency obligations under Article 50 are the single most important compliance requirement.
Here is what you must do:
For chatbots and AI assistants:
Users must be informed, clearly and before or at the start of the interaction, that they are communicating with an AI system — unless this is obvious from the circumstances.
A practical implementation might look like this in your chatbot’s greeting:
```html
<div class="chatbot-disclaimer">
  <p>👋 Hi! I'm an AI-powered assistant. You're chatting with an
  automated system, not a human. I can help with product questions,
  order tracking, and returns. Would you like to speak with a
  human agent instead?</p>
</div>
```
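To make sure the disclosure appears before or at the start of the interaction even when the greeting is customized, you can prepend it to whatever message your widget sends first. A hedged sketch, assuming a hypothetical `widget.sendMessage` API (adapt to your actual chat tool):

```javascript
// Prepend the AI disclosure to the widget greeting so the user sees it
// before or at the start of the interaction, as Article 50 requires.
// The wording is illustrative, and `widget.sendMessage` is hypothetical.
const AI_DISCLOSURE =
  "You're chatting with an automated AI assistant, not a human.";

function buildGreeting(widgetGreeting) {
  // Keep the disclosure in first position regardless of the custom greeting.
  return `${AI_DISCLOSURE} ${widgetGreeting}`;
}

// Usage with a hypothetical widget object:
// widget.sendMessage(buildGreeting("How can I help you today?"));
```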
For AI-generated content:
If you use AI to generate product descriptions, blog posts, marketing emails, or images, you must disclose that the content is AI-generated. The Act specifically requires that outputs of AI systems that generate synthetic audio, video, text, or images be marked as artificially generated or manipulated.
Practical approaches include:
- Adding a small label such as “This description was generated with AI assistance” at the bottom of product pages.
- Including metadata tags (C2PA or similar provenance standards) in AI-generated images.
- Adding a disclosure in your editorial policy for blog content.
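The product-page label from the first bullet can be rendered with a tiny helper. This is a sketch: the wording and the `ai-disclosure` class name are ours, and the Act requires marking, not this exact phrasing.

```javascript
// Build a small disclosure label for AI-assisted product descriptions.
// Wording and CSS class are illustrative, not prescribed by the Act.
function aiContentLabel(toolName) {
  const text = toolName
    ? `This description was generated with AI assistance (${toolName}).`
    : "This description was generated with AI assistance.";
  return `<p class="ai-disclosure">${text}</p>`;
}
```

Rendering the label from one helper (or one template partial) keeps the wording consistent across thousands of product pages and makes it trivial to update if guidance on disclosure language evolves.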
For deepfakes and synthetic media:
If you use AI to generate realistic video or audio (for marketing, for instance), disclosure is mandatory. The only softening is for evidently artistic, creative, or satirical works, where the disclosure may be limited to an appropriate form that does not hamper the display or enjoyment of the work.
Step 4: Establish AI Literacy (Article 4)
Article 4 of the AI Act requires that providers and deployers of AI systems ensure their staff, and anyone handling AI on their behalf, have a sufficient level of AI literacy. This obligation has applied since February 2, 2025.
What does this look like in practice?
- Train your marketing team on the basics of how the AI tools they use work.
- Ensure your developers understand the risk classification framework.
- Document training sessions and materials.
- For smaller teams, even a well-documented internal guide counts.
This does not mean everyone needs a PhD in machine learning. It means people operating AI tools should understand what those tools do, their limitations, and the legal requirements surrounding them.
Step 5: Update Your Legal Documentation
Your existing legal pages — privacy policy, terms of service, cookie policy — likely need updates to reflect your use of AI. Consider:
- Adding an AI Transparency Policy or an AI section to your privacy policy.
- Describing the AI systems you use, their purpose, and the type of decisions they influence.
- Explaining user rights, including the right to request human intervention for significant AI-assisted decisions.
- Updating your cookie consent mechanism if AI tools process personal data (this intersects with GDPR — Article 10 of the AI Act specifically addresses data governance).
Step 6: Address GPAI Model Obligations If You Use Foundation Models
If your website or store uses general-purpose AI models (like OpenAI’s GPT-4, Anthropic’s Claude, Google’s Gemini, or Meta’s LLaMA) — whether through APIs, plugins, or embedded services — you need to be aware of the GPAI provisions that took effect in August 2025.
As a deployer (not provider) of these models, your primary obligations are:
- Ensure you comply with the terms of use set by the GPAI provider.
- Implement the transparency measures required for the specific use case (chatbot, content generation, etc.).
- If you fine-tune or substantially modify a GPAI model, you may become a provider under the Act, which carries significantly heavier obligations.
Penalties: Why This Is Not Optional
The EU AI Act enforcement regime is modeled after GDPR — and it is even more aggressive in some respects.
| Violation Type | Maximum Fine |
|---|---|
| Deploying a prohibited AI system | €35 million or 7% of global annual turnover (whichever is higher) |
| Non-compliance with high-risk requirements | €15 million or 3% of global annual turnover |
| Supplying incorrect information to authorities | €7.5 million or 1.5% of global annual turnover |
| SME and startup provision | Each fine is capped at whichever is lower of the fixed amount or the percentage of turnover |
For small and medium-sized enterprises, the proportionality principle applies, and fines will be adjusted. But “adjusted” does not mean “trivial.” A mid-sized e-commerce business doing €5 million in annual revenue could face fines of up to €350,000 even under the SME provisions.
Enforcement will be handled by national competent authorities in each EU member state, coordinated by the newly established EU AI Office in Brussels.
Special Considerations for E-Commerce Platforms
PrestaShop and WooCommerce Stores
If you operate an online store on PrestaShop or WooCommerce, many of your AI touchpoints come from third-party modules and plugins. This creates a shared responsibility dynamic:
- The module/plugin developer is typically the provider of the AI system.
- You, the store operator, are the deployer.
- As a deployer, you must ensure that the AI system is used in accordance with the instructions provided, that transparency obligations are met, and that human oversight is maintained where required.
Before installing any new AI-powered module, request documentation from the provider about their EU AI Act compliance status. Ask specifically:
- What risk classification does this AI system fall under?
- Has a conformity assessment been conducted (for high-risk systems)?
- What transparency measures are built in?
- What data does the system process, and how?
Lueur Externe, as a certified PrestaShop expert agency with deep experience in both technical implementation and regulatory compliance, routinely advises clients on vetting AI modules against EU AI Act requirements — a step that many store owners overlook until it is too late.
Marketplaces and Multi-Vendor Platforms
If you operate a marketplace, your exposure is higher. You may be considered a deployer for every AI system used across your platform, including those operated by third-party sellers. Your terms and conditions for sellers should explicitly address AI Act compliance, requiring sellers to:
- Disclose any AI systems they use through your platform.
- Comply with transparency and risk management obligations.
- Provide you with necessary documentation upon request.
The Intersection with GDPR and Other Regulations
The EU AI Act does not exist in a vacuum. It intersects with — and adds to — existing regulations:
- GDPR (Regulation 2016/679): AI systems processing personal data must comply with both the AI Act and GDPR. Article 22 of GDPR (automated decision-making) and AI Act high-risk requirements can apply simultaneously.
- Digital Services Act (DSA): Online platforms’ algorithmic recommendation systems face obligations under both the DSA (transparency in recommender systems) and the AI Act.
- Product Safety Regulation: AI embedded in consumer products (smart home devices sold in your store, for example) must comply with both product safety rules and the AI Act.
- Consumer Protection Directives: The Unfair Commercial Practices Directive already prohibits misleading practices — AI-powered dark patterns could violate both this directive and the AI Act simultaneously.
This layered regulatory environment is precisely why expert guidance matters. Getting one regulation right while violating another is not compliance — it is a liability waiting to happen.
A Practical Compliance Checklist
Here is a consolidated checklist you can use today:
- Complete an AI systems inventory across your entire digital estate
- Classify each system by risk tier (unacceptable, high, limited, minimal)
- Immediately cease any AI practices that fall under the prohibited category
- Implement transparency disclosures for all chatbots and AI-generated content
- Train staff on AI literacy and document the training
- Update privacy policies and terms of service to reflect AI use
- Review third-party AI module/plugin compliance documentation
- Establish a human oversight mechanism for AI-assisted decisions that significantly affect users
- Implement logging and record-keeping for high-risk AI systems
- Schedule regular compliance reviews (at minimum, annually)
- Monitor guidance from your national AI authority and the EU AI Office
Conclusion: Compliance Is a Competitive Advantage
The EU AI Act is not going away, and enforcement will only intensify as national authorities build capacity. But compliance is not just about avoiding fines. Transparent, responsible use of AI builds trust with your customers — and trust converts.
Businesses that proactively meet their AI Act obligations will differentiate themselves from competitors who scramble to catch up after the first enforcement actions make headlines.
The regulation is complex, the timelines are tight, and the technical-legal overlap is significant. This is not a project for a generic template downloaded from the internet.
If you operate a website or online store and want to ensure full compliance with the EU AI Act — while continuing to leverage AI for growth — contact the team at Lueur Externe. With over two decades of experience at the intersection of web technology, e-commerce, and regulatory compliance, Lueur Externe provides the precise expertise needed to turn this regulatory challenge into a business advantage.