AI in Gambling: A race between opportunity, regulatory compliance and potential liability

Hambach & Hambach Rechtsanwälte
Haimhauser Str. 1
D - 80802 München
Tel.: +49 89 389975–50
E-Mail: info@timelaw.de
Article by Dr. Stefanie Fuchs-Raicher, Senior Partner & Christina Kirichenko, Senior Associate

In the current digital landscape, Artificial Intelligence (“AI”) deployment comes with bold promises: more streamlined and therefore more efficient operations, real-time risk management, smarter decision-making, deeply personalised user experiences, performance measurement of CRM activities, and much more. But with these efficiency gains come rising regulatory expectations and growing liability risks. As the gambling industry races towards technological optimisation, regulators are catching up fast, and enforcement is no longer theoretical. AI governance is therefore no longer optional: it is imperative from a licensing, legal and reputational point of view.

I. Tech failures trigger enforcement

Dr. Stefanie Fuchs, Senior Partner at Hambach & Hambach.
Nowadays, technical failures are no longer just minor issues. They may become multimillion-dollar liabilities with little to no warning. In France, Unibet’s local operator was fined € 800,000 after a software malfunction allowed self-excluded users to access the platform. This was recently echoed by the Australian regulator, which imposed a fine of 1 million AUD on Unibet’s local operator for the same underlying technical failure. In the UK, Bet365 was ordered to pay over 500,000 GBP for shortcomings in its responsible gambling software, and Gamesys was hit with a 6 million GBP penalty over failures in anti-money laundering (“AML”) and combating terrorist financing (“CTF”) procedures and player protection controls.

Globally, gambling regulators issued over 184 million USD in fines in 2024, a striking reminder that enforcement is ramping up, particularly where technology is involved. However, the real threat is multidimensional. Operators leveraging AI-driven systems face not just one, but multiple layers of regulatory exposure, including gambling supervision, the soon-to-be-enforced EU AI Act (with fines of up to € 35 million or 7% of global turnover), the well-known GDPR, AML/CTF regulatory regimes and, depending on the jurisdiction, additional regulations.

In this regulatory environment, a single system failure can trigger parallel enforcement actions across multiple legal frameworks, from gambling oversight to AML/CTF requirements to data protection and the (upcoming) AI governance regime. Systems must now be transparent, auditable, and compliant by design. Accountability no longer rests with the system itself or its developer alone, but ultimately (also) with the operator as deployer.

II. AI in Gambling Operations

Christina Kirichenko, Senior Associate at Hambach & Hambach.
In the gambling sector, AI systems may be used for a range of operational, behavioural, and compliance-related functions, including biometric identity verification of players, risk-scoring of players, player segmentation for targeted marketing, early detection of problematic gambling behaviour (a regulatory requirement according to sec. 6i par. 1 Inter-State Treaty on Gambling (“ISTG”) 2021), automated transaction monitoring for AML/CTF purposes, dynamic game adaptation based on player ability, AI-driven customer service via chatbots, and many other use cases.

While AI systems may be designed to support regulatory compliance and player protection, their outputs may achieve the opposite and inadvertently create a compliance risk. AI decisions are probabilistic, shaped by training data and algorithmic assumptions, and often lack transparency. As a result, even technically accurate predictions may lead to overreach, misclassification, or legally problematic outcomes. Questions of explainability, procedural fairness, and the permissible scope of behavioural influence are central to ensuring that AI in gambling meets regulatory expectations and fundamental rights standards.

Data integrity with regard to the training, validation and testing data as well as the input data is essential for accurate outputs and for avoiding biases and feedback loops that could lead to prohibited discrimination or other interference with the fundamental rights of natural persons. For this very reason, the fragmentation of data sources relating to the sports on which bets are placed, especially in fast-moving environments such as motorsports, poses a fundamental risk to the reliability of AI system outputs. Here, AI tools may draw on multiple real-time data streams: live race telemetry, historical performance metrics, user betting behaviour, and third-party event feeds. The same applies to the fragmented data sources used by online gambling providers, e. g. for risk assessments. These data sources may come from different systems, service providers, vendors or jurisdictions, and are rarely fully synchronised. Flawed implementation may result in inconsistent inputs, flawed odds generation, and incorrect risk signals and risk assessments. In such contexts, AI may not just underperform; it may mislead, and may therefore be considered unreliable by local regulators.
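To make the synchronisation risk tangible, the following minimal Python sketch shows one way a deployer might gate model inputs on feed freshness and mutual alignment before any odds or risk signals are derived. All feed names, data structures and thresholds here are illustrative assumptions, not a reference implementation.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical structures and thresholds, for illustration only.
    @dataclass
    class FeedSnapshot:
        source: str            # e.g. "race_telemetry", "odds_vendor", "bet_history"
        captured_at: datetime  # timestamp assigned by the feed, in UTC
        payload: dict

    MAX_AGE = timedelta(seconds=30)  # assumed staleness limit per feed
    MAX_SKEW = timedelta(seconds=5)  # assumed tolerance between feeds

    def validate_feeds(snapshots: list[FeedSnapshot]) -> list[str]:
        """Return a list of issues; only an empty list means the combined
        input may be passed on to the model. Anything else should be held
        back and logged for human review."""
        if not snapshots:
            return ["no feeds available"]
        now = datetime.now(timezone.utc)
        issues = [f"stale feed: {s.source}"
                  for s in snapshots if now - s.captured_at > MAX_AGE]
        timestamps = [s.captured_at for s in snapshots]
        if max(timestamps) - min(timestamps) > MAX_SKEW:
            issues.append("feeds out of sync beyond tolerance")
        return issues

A gate of this kind does not make the model’s predictions correct, but it prevents visibly desynchronised inputs from silently producing flawed odds or risk signals.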

III. The EU-AI Act: A New Compliance Frontier

With the adoption of the EU Artificial Intelligence Act (EU) 2024/1689 (“AI-Act”), the liability risks become more serious. The AI-Act triggers obligations across the complete AI lifecycle: from design, training, testing and validation to deployment and post-market monitoring.

The AI-Act applies in stages: its general provisions, including the AI-literacy duties and the ban on prohibited practices, have applied since February 2025; the provisions on AI models with a general purpose (general purpose AI, “GPAI”) as well as the governance and sanctions provisions become applicable on 2nd August 2025; most remaining parts become applicable from 2nd August 2026 onwards; and the provisions and obligations connected to high-risk AI apply from 2nd August 2027. As product safety regulation, the AI-Act follows a risk-based approach under which AI systems are categorised into three classes: prohibited, high-risk, and limited-risk (with several sub-classifications in the latter risk category).

1. PROHIBITED AI PRACTICES

Prohibited systems are banned outright due to their unacceptable potential for harm; they include social scoring and manipulative, psychologically exploitative systems. However, AI systems deployed by online gambling providers are unlikely to fall into one of these categories, since the risk-scorings undertaken for responsible gambling (“RG”) and AML/CTF purposes do not constitute social scoring within the meaning of Article 5 lit. c) AI-Act. Such risk-scoring is neither suited to result in detrimental or unfavourable treatment of players in social contexts that are unrelated to the contexts in which the underlying data was originally generated or collected, nor is it unjustified or disproportionate to the players’ social behaviour or its gravity. It is a function required by law (see sec. 6i par. 1 ISTG 2021) in order to secure player protection; a complete ban from the outset would therefore contradict this socially desired purpose of player protection.

In any event, the use of AI for prohibited practices in breach of Art. 5 AI-Act can result in severe administrative fines of up to € 35,000,000 or 7% of global turnover, whichever is higher.

2. HIGH-RISK AI

More relevant for the gambling sector, however, is the high-risk classification. For gambling operators, compliance-relevant AI systems for financial scoring (as part of affordability checks assessing a player’s economic capacity, e. g. for eligibility for higher limits) are likely to be assessed by the competent regulatory authorities as falling into the high-risk category according to Art. 6 par. 2 in connection with no. 5 lit. (b) of Annex III AI-Act (unless the authorities take the view that an exception pursuant to Art. 6 par. 3 AI-Act applies, e. g. because the AI is only intended to perform a preparatory task for an affordability assessment and no profiling is undertaken).

Once classified as high-risk, an AI system must comply with a comprehensive set of legal requirements. Under Articles 8 to 17 and 23 to 25 AI-Act, high-risk systems must be developed within a robust risk management framework, incorporate accurate training data, and be accompanied by human oversight mechanisms, transparency protocols, and cybersecurity safeguards. Providers must produce and maintain comprehensive technical documentation, conduct conformity assessments, and register the systems in the EU AI database. Deployers of high-risk AI systems also have comprehensive obligations, such as taking appropriate technical and organisational measures to ensure that they use high-risk AI systems in accordance with the accompanying instructions for use, ensuring human oversight, and maintaining documentation on the use of such systems.

Non-compliance may attract administrative fines of up to € 15 million or 3% of global turnover, whichever is higher.
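As a purely illustrative sketch of what meaningful human oversight over a financial-scoring system could look like at the code level, the following Python fragment routes every limit increase to a human reviewer instead of auto-approving it. The AffordabilityScore fields, the ReviewQueue and all names are hypothetical assumptions, not prescribed by the AI-Act.

    from dataclasses import dataclass

    # A minimal human-in-the-loop gate: the model output is treated as a
    # preparatory signal only, so no limit decision is based solely on
    # automated processing (cf. Art. 14 AI-Act, Art. 22 GDPR).

    @dataclass
    class AffordabilityScore:
        player_id: str
        score: float          # model output, e.g. 0.0 (low risk) to 1.0 (high risk)
        model_version: str    # recorded for traceability of the recommendation

    class ReviewQueue:
        """Stand-in for the case-management tool a compliance team would use."""
        def submit(self, score: AffordabilityScore, requested_limit: int) -> None:
            print(f"queued player {score.player_id} for human review "
                  f"(score={score.score:.2f}, requested limit={requested_limit})")

    def handle_limit_request(score: AffordabilityScore,
                             requested_limit: int,
                             queue: ReviewQueue) -> None:
        # By design there is no automated approval path: every limit
        # increase is routed to a human reviewer together with the model's
        # recommendation, which the reviewer may accept or override.
        queue.submit(score, requested_limit)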

3. LIMITED RISK AI

AI systems deployed by online gambling providers other than financial-scoring systems are likely to be classified as limited-risk systems within the meaning of the AI-Act. This includes biometric identity-verification tools (which are explicitly excluded from the list of high-risk AI in Annex III AI-Act), risk-scoring of players, early-warning systems for the early detection of problematic gambling behaviour (a regulatory requirement according to sec. 6i par. 1 ISTG 2021), automated interventions that enforce player protection measures (e. g. the sending of player protection emails or the determination of limits) without meaningful human oversight, automated transaction monitoring for AML/CTF purposes (which is mandatory under AML/CTF regulations), player segmentation for targeted marketing measures, dynamic game adaptation based on player behaviour, and AI-driven customer service via chatbots.

As such, these systems face more moderate obligations, primarily relating to ensuring a sufficient level of AI literacy among staff and other persons dealing with the deployment and use of AI systems, as well as transparency and disclosure requirements. However, non-compliance with even these more moderate obligations may entail administrative fines of up to € 15 million or 3% of global turnover, whichever is higher.
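For the transparency duties applicable to limited-risk systems such as chatbots, a minimal sketch might look as follows. The disclosure wording and the generate_reply() placeholder are assumptions for illustration, not a legally vetted notice.

    # Hypothetical transparency wrapper for an AI customer-service chatbot.
    AI_DISCLOSURE = ("You are chatting with an AI assistant. "
                     "You can ask to be transferred to a human agent at any time.")

    def generate_reply(user_message: str) -> str:
        # Placeholder for the actual chatbot backend.
        return f"(model answer to: {user_message!r})"

    def chat_turn(user_message: str, first_turn: bool) -> str:
        reply = generate_reply(user_message)
        # Disclose the AI nature of the interaction up front,
        # rather than burying it in the terms and conditions.
        return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply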

IV. No stand-alone compliance frontier

According to the new EU AI-Act, AI systems must be auditable and explainable by design. Risk logs should be maintained, and data inputs should be monitored for drift or bias (one possible approach is sketched below). Outcomes must be traceable or even human-controlled, especially where decisions affect players’ rights or trigger monetary thresholds. In short: operators need a governance framework that treats AI as a regulated function.

However, it is crucial to note that the AI-Act obligations do not stand alone: they must be fulfilled in conjunction with the data protection requirements arising from the GDPR (and the national implementing acts), existing AML/CTF regulations and national gambling regulations, creating a multi-layered compliance landscape that demands coordinated governance, technical due diligence, and ongoing legal oversight. It is especially important to note that the AI-Act, as product safety regulation, hardly contains any data protection provisions. The regulation of data protection remains with the GDPR, which is unaffected by the AI-Act. In the context of AI deployment, e. g. for risk-scoring and RG purposes or for limit determinations (financial scorings as part of the affordability checks), Art. 22 GDPR must be considered, which prohibits automated decision-making, including profiling, based on exclusively automated data processing. For this reason alone, human oversight and control are required.

Operators cannot delegate their regulatory responsibilities to third parties providing them with AI compliance-assisting systems. If such a system malfunctions, fails to identify a risk, or contributes to harm, it is the deployer, not the technology, whose liability, credibility and potentially even licence are at stake.
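To illustrate what maintained risk logs and drift monitoring could look like in practice, the following minimal Python sketch compares one input feature against a training-data baseline and appends an auditable log entry when the deviation exceeds a tolerance. The feature, the baseline value and the threshold are illustrative assumptions, not regulatory guidance.

    import json
    import statistics
    from datetime import datetime, timezone

    BASELINE_MEAN_STAKE = 12.50  # assumed mean stake in the training data
    DRIFT_THRESHOLD = 0.25       # assumed tolerated relative deviation

    def log_risk_event(event: dict, path: str = "ai_risk_log.jsonl") -> None:
        event["logged_at"] = datetime.now(timezone.utc).isoformat()
        with open(path, "a") as fh:
            fh.write(json.dumps(event) + "\n")  # append-only, auditable trail

    def check_input_drift(recent_stakes: list[float]) -> None:
        current_mean = statistics.mean(recent_stakes)
        deviation = abs(current_mean - BASELINE_MEAN_STAKE) / BASELINE_MEAN_STAKE
        if deviation > DRIFT_THRESHOLD:
            log_risk_event({
                "type": "input_drift",
                "feature": "stake_amount",
                "baseline_mean": BASELINE_MEAN_STAKE,
                "current_mean": current_mean,
                "action": "escalate to human review",
            })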

V. Gaining Competitive Advantage

Forward-thinking operators are already conducting AI audits, mapping their systems, reviewing contracts with tech vendors, and training staff. They are aligning with AI risk frameworks and preparing for AI-Act compliance. Most importantly, they are recognising that AI is not just an efficiency tool: it is a regulated system that must be designed, deployed, and monitored with the same discipline as any critical compliance function.

Operators should now focus on a series of concrete measures to ensure legal and operational readiness. This begins with mapping all AI systems and qualifying them as prohibited practices, high-risk or limited-risk within the meaning of the AI-Act, and continues with the development and implementation of compliance procedures corresponding to the categorisation result. Drafting a complete AI-system-use and data-processing inventory is imperative in this regard, as is familiar from GDPR compliance requirements (see the sketch below). Human oversight mechanisms must be reviewed to ensure that AI-supported decisions involving individuals are subject to meaningful human control, not only to comply with the requirements for the deployment of high-risk AI, but also to ensure GDPR compliance (Art. 22 GDPR). Transparency obligations, much like under the GDPR, must be operationalised through clear, accessible user-facing notices that explain for what purposes AI is used, what data feeds its inputs, what the AI does with this input data, how it works, and how individuals are affected. The requirement of staff training, in the case of the AI-Act to ensure sufficient AI literacy, is known not only from the GDPR, but also from AML/CTF regulations and the RG requirements arising from the ISTG 2021.

Cross-functional governance structures, bringing together legal, compliance, and technical functions, should be established to coordinate proper implementation. Preparations should also include building capacities for conformity assessments. Hambach & Hambach can of course assist with developing an AI-Act implementation plan and AI compliance procedures and, once these are implemented, with conformity assessments. Hambach & Hambach can also support with its wider network, which includes, e. g., auditing and certification bodies that can audit and certify AI compliance, a trust-building measure vis-à-vis players, suppliers, regulators and the broader public.
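As a starting point for the AI-system inventory mentioned above, a record per system might capture at least the following fields. This is a minimal sketch in the spirit of a GDPR record of processing activities; both the fields and the example entry are illustrative assumptions.

    from dataclasses import dataclass, field

    @dataclass
    class AISystemRecord:
        name: str
        purpose: str
        ai_act_category: str        # "prohibited" / "high-risk" / "limited-risk"
        legal_basis: str            # e.g. sec. 6i par. 1 ISTG 2021
        data_sources: list[str] = field(default_factory=list)
        human_oversight: str = ""   # who reviews the outputs, and how
        vendor: str = ""            # third-party provider, if any

    inventory = [
        AISystemRecord(
            name="early-warning system",
            purpose="early detection of problematic gambling behaviour",
            ai_act_category="limited-risk",
            legal_basis="sec. 6i par. 1 ISTG 2021",
            data_sources=["bet history", "deposit patterns", "session times"],
            human_oversight="daily review by the RG team",
            vendor="in-house",
        ),
    ]

Kept current, such an inventory doubles as the basis for the categorisation exercise, the conformity assessments and the documentation duties described above.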

VI. Conclusion

AI in gambling is more than just a compliance challenge: it is a powerful market differentiator. Operators who proactively and correctly address compliance issues regarding their AI systems will not only reduce regulatory risks but also gain a competitive advantage. Embracing regulatory requirements, governance and best practices can open doors to new partnerships and customer segments that demand responsible innovation, and also serves customer retention. In an industry where speed and agility define success, being prepared is not about avoiding liability; it is about seizing the opportunity to lead the future of the whole gambling industry.