The EU AI Act is the world's first comprehensive horizontal AI regulation. Unlike sector-specific AI guidance from financial regulators (EBA, ESMA, EIOPA), the AI Act applies across all industries, but its impact on financial services is disproportionately significant because so many core banking and fintech processes rely on AI systems that fall squarely into the "high-risk" category.

Credit scoring and insurance underwriting involve AI systems that produce legal effects on individuals or significantly affect their access to essential services, which places them in the high-risk tier with the heaviest compliance burden. Fraud detection, AML transaction monitoring, and algorithmic trading may also be caught depending on how they are used, although Annex III expressly carves AI used to detect financial fraud out of the credit-scoring category. This guide covers the specific obligations that matter for compliance teams in financial services, fintech, and crypto: not the academic overview, but the provisions that drive operational change.

The Four-Tier Risk Classification

The AI Act classifies AI systems into four risk tiers. Each tier carries different obligations. The classification drives everything that follows: get it wrong, and you either over-invest in compliance for a minimal-risk system or, far worse, under-invest in compliance for a high-risk one.

Unacceptable Risk (Prohibited)

Article 5 prohibits AI systems that pose an unacceptable risk to fundamental rights. These prohibitions took effect on 2 February 2025. In financial services, the most relevant prohibitions are:

  - Social scoring: evaluating or classifying people based on social behaviour or personal characteristics in ways that lead to detrimental treatment in unrelated contexts, which constrains creditworthiness assessments built on data unrelated to financial conduct.
  - Exploitation of vulnerabilities: AI that exploits vulnerabilities due to age, disability, or socio-economic situation to materially distort behaviour and cause significant harm, relevant to the targeting of predatory financial products.
  - Manipulative techniques: purposefully manipulative or deceptive techniques that materially distort a person's decision-making and cause significant harm.
  - Emotion recognition in the workplace: relevant to any firm considering emotion-inference tools for monitoring traders or call-centre staff.

High Risk (Heavy Obligations)

High-risk AI systems are defined in Article 6 and Annex III. For financial services, the critical Annex III categories are:

  - Point 5(b): AI systems used to evaluate creditworthiness or establish credit scores of natural persons, with an express exception for AI used to detect financial fraud.
  - Point 5(c): AI systems used for risk assessment and pricing in life and health insurance.
  - Point 4: AI systems used in employment and worker management, relevant to any firm using AI in recruitment or performance evaluation.

Additionally, the Annex I high-risk classification (systems that are safety components of products covered by EU harmonised legislation) may also be relevant to financial services where AI drives safety-critical functions in payment infrastructure.

Limited Risk (Transparency Obligations)

Article 50 imposes transparency obligations on certain AI systems regardless of their risk level:

  - AI systems that interact directly with people, such as customer-facing chatbots, must disclose that the user is dealing with AI unless that is obvious from the context.
  - Providers of systems generating synthetic audio, image, video, or text must mark outputs as artificially generated in a machine-readable way.
  - Deployers of emotion recognition or biometric categorisation systems must inform the people exposed to them.
  - Deployers of deepfake content must disclose that it has been artificially generated or manipulated.

Minimal Risk (No Specific Obligations)

AI systems not falling into any of the above categories are subject only to voluntary codes of conduct. Examples in financial services: internal analytics dashboards, document summarisation tools, non-decision-making market research AI.

High-Risk AI System Obligations

For compliance teams in financial services, the high-risk tier is where the operational burden lies. Articles 8 to 15 (Chapter III, Section 2) set out the requirements that must be met before a high-risk AI system can be placed on the market or put into service. These apply from 2 August 2026.

Risk Management System (Article 9)

Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. This is not a one-off assessment โ€” it requires continuous identification and analysis of known and reasonably foreseeable risks, estimation and evaluation of risks that may emerge when the system is used in accordance with its intended purpose, and adoption of appropriate risk management measures. For credit scoring systems, this means documenting the risk of discriminatory outcomes, assessing the impact of data drift on model accuracy, and implementing monitoring for bias in production.
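To make the continuous-monitoring obligation concrete, here is a minimal Python sketch of production bias monitoring for a credit scoring system. The decision-record format, the group labels, and the four-fifths threshold are illustrative assumptions, not values prescribed by the Act.

```python
# Minimal sketch of ongoing bias monitoring for a credit scoring model.
# The record format and the 0.8 ("four-fifths") threshold are illustrative
# assumptions, not requirements drawn from the AI Act.
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Approval rate per demographic group from production decision records."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += d["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_alerts(decisions: list[dict], threshold: float = 0.8) -> list[str]:
    """Flag groups whose approval rate falls below `threshold` times the
    highest group's rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Example: feed a day's decisions from the decision log.
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
]
print(disparate_impact_alerts(decisions))  # ['B']: B's rate is below 0.8 * A's
```

In practice an alert like this would feed the Article 9 risk log and trigger a documented review, rather than simply printing a group label.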

Data and Data Governance (Article 10)

Training, validation, and testing datasets for high-risk AI systems must meet specific quality criteria. Article 10 requires that datasets be relevant, sufficiently representative, and as far as possible free of errors and complete. For financial AI, this creates specific tensions:

  - Bias testing versus data minimisation: detecting discriminatory outcomes may require processing special categories of personal data; Article 10(5) permits this where strictly necessary and subject to safeguards, but it sits uneasily with GDPR minimisation instincts.
  - Historical bias in historical data: credit and underwriting datasets reflect past lending decisions, so a dataset can be accurate and complete while still encoding discriminatory patterns that must be identified and mitigated.
  - Completeness versus retention limits: "as far as possible free of errors and complete" pulls against GDPR storage limitation, forcing deliberate choices about what data is kept for model validation.
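As one concrete illustration of the representativeness requirement, the sketch below compares subgroup shares in a training set against a reference population. The benchmark shares and the 5% tolerance are assumptions for illustration; in practice the reference comes from the system's intended market.

```python
# Minimal sketch of a dataset representativeness check under Article 10.
# Reference shares and tolerance are illustrative assumptions, not figures
# from the Act.
def representativeness_gaps(
    train_shares: dict[str, float],
    reference_shares: dict[str, float],
    tolerance: float = 0.05,
) -> dict[str, float]:
    """Return subgroups whose share in the training data deviates from the
    reference population by more than `tolerance` (absolute difference)."""
    return {
        group: train_shares.get(group, 0.0) - ref
        for group, ref in reference_shares.items()
        if abs(train_shares.get(group, 0.0) - ref) > tolerance
    }

# Example: age bands in a credit scoring training set vs. the target market.
train = {"18-29": 0.10, "30-49": 0.55, "50+": 0.35}
reference = {"18-29": 0.22, "30-49": 0.48, "50+": 0.30}
print(representativeness_gaps(train, reference))
# {'18-29': -0.12, '30-49': 0.07} -> under/over-represented bands to document
```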

Technical Documentation (Article 11)

Before a high-risk AI system is placed on the market, its provider must draw up technical documentation demonstrating compliance with the Chapter III, Section 2 requirements. Annex IV specifies the minimum contents, including: a general description of the AI system; a detailed description of its elements and the process for its development; information about monitoring, functioning, and control; a description of the risk management system; and detailed information about training, validation, and testing data.

Record-Keeping and Logging (Article 12)

High-risk AI systems must be designed and developed with logging capabilities that enable tracing the system's operation throughout its lifecycle. Logs must be retained for a period appropriate to the system's intended purpose, and at least six months unless otherwise provided for in applicable Union or national law. For AML-related AI systems, longer retention may be required under AMLD6 record-keeping obligations.
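A minimal sketch of what Article 12-style logging can look like in practice follows. The record fields, the file-based store, and the checksum approach are illustrative choices, not a format mandated by the Act.

```python
# Minimal sketch of per-decision logging for a high-risk AI system.
# Field names and the file-based store are illustrative; the six-month
# minimum is noted above, and AML systems may need longer retention.
import json
import hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=183)  # illustrative six-month default

def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: dict, path: str = "decisions.log") -> None:
    """Append a tamper-evident, replayable record of one automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,  # ties the decision to a model build
        "inputs": inputs,                # enough to reproduce the decision
        "output": output,
        "retain_until": (datetime.now(timezone.utc) + RETENTION).isoformat(),
    }
    record["checksum"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("credit-score", "2.4.1",
             {"applicant_id": "A-123", "features_ref": "store/A-123.json"},
             {"score": 612, "decision": "refer_to_human"})
```

A real deployment would write to an append-only store with access controls; the point is that each automated decision is traceable to a model version and input snapshot for the full retention period.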

Transparency and Information to Deployers (Article 13)

High-risk AI systems must be designed to be sufficiently transparent that deployers (the entities using the AI system, i.e. the bank or fintech) can interpret the system's output and use it appropriately. Instructions for use must include the system's intended purpose, level of accuracy and robustness, any known or foreseeable risks, and specifications for input data. For financial services, this means credit scoring providers must explain to their bank clients exactly how the model works, its limitations, and the conditions under which it should not be relied upon.
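The information items Article 13 requires lend themselves to a machine-readable structure that can ship alongside the model. The schema below is hypothetical; the Act prescribes the content of instructions for use, not this format.

```python
# Hypothetical machine-readable "instructions for use" record covering the
# Article 13 information items. All values are illustrative.
INSTRUCTIONS_FOR_USE = {
    "intended_purpose": "Creditworthiness assessment of retail loan applicants",
    "accuracy": {"metric": "AUC", "value": 0.81,
                 "test_population": "EU retail applicants, 2023-2024"},
    "robustness": "Performance degrades for applicants with under 6 months of credit history",
    "known_risks": ["Potential proxy discrimination via postcode-derived features"],
    "input_spec": {"feature_count": 42, "missing_values": "rejected, not imputed"},
    "do_not_use_when": ["Applicant is under 18", "Credit file originates outside the EEA"],
}
```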

Human Oversight (Article 14)

High-risk AI systems must be designed to allow effective oversight by natural persons during the period in which the system is in use. This includes the ability to fully understand the system's capacities and limitations, correctly interpret the system's output, decide not to use the system or disregard its output, and interrupt or stop the system's operation. For automated lending, this means maintaining a genuine human review mechanism, not a rubber-stamp approval of automated decisions.
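Below is a minimal sketch of what a genuine oversight gate might look like for automated lending, assuming a hypothetical review queue. The score band and the rule that adverse outcomes always route to a human are illustrative policy choices, not requirements spelled out in Article 14.

```python
# Minimal sketch of a human oversight gate for automated lending.
# The review band and routing policy are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    score: int
    automated: str           # model's proposed outcome: "approve" / "decline"
    final: str | None = None
    reviewed_by: str | None = None

def human_review(decision: Decision) -> Decision:
    # Placeholder: in production this enqueues to a case-management system
    # and records the reviewer's identity and rationale for the audit log.
    decision.final = "pending_human_review"
    return decision

def route(decision: Decision, review_band: tuple[int, int] = (580, 660)) -> Decision:
    """Auto-decide only outside the uncertainty band; borderline scores and
    all adverse proposals go to a human who can confirm or override."""
    low, high = review_band
    if low <= decision.score <= high or decision.automated == "decline":
        return human_review(decision)   # human decides, model only advises
    decision.final = decision.automated
    return decision

print(route(Decision(score=612, automated="approve")).final)  # pending_human_review
```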

Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. Providers must declare accuracy metrics and test results. Robustness against errors, faults, and adversarial attacks must be addressed. This is particularly relevant for financial AI systems that may be targets of adversarial manipulation; credit application fraud designed to game scoring models, for instance.
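A simple stability probe illustrates the robustness idea: perturb application inputs slightly and check that the score does not swing disproportionately. The stand-in model, noise level, and jump threshold are all assumptions for illustration.

```python
# Minimal sketch of a robustness check against input manipulation: perturb
# features slightly and flag unstable score jumps. The scoring function,
# noise level, and threshold are hypothetical stand-ins.
import random

def score(features: dict[str, float]) -> float:
    # Stand-in for the real scoring model.
    return 300 + 3.0 * features["income_k"] - 2.0 * features["debt_ratio"]

def stability_check(features: dict[str, float], trials: int = 100,
                    noise: float = 0.02, max_jump: float = 15.0) -> bool:
    """Return True if small (about 2%) input perturbations never move the
    score by more than `max_jump` points; a basic adversarial-stability probe."""
    base = score(features)
    for _ in range(trials):
        perturbed = {k: v * (1 + random.uniform(-noise, noise))
                     for k, v in features.items()}
        if abs(score(perturbed) - base) > max_jump:
            return False
    return True

print(stability_check({"income_k": 55.0, "debt_ratio": 35.0}))  # True
```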

GPAI Model Obligations

General-purpose AI (GPAI) models, such as large language models, face separate obligations under Articles 51-56 of the AI Act, applicable from 2 August 2025. These obligations fall on GPAI model providers, not deployers, but compliance teams in financial services should understand them because they affect the models underlying many deployed AI tools.

All GPAI Models

Every GPAI model provider must: maintain and provide technical documentation (including training and testing processes), provide information and documentation to downstream providers who integrate the GPAI model into their AI systems, establish a policy for complying with Union copyright law, and publish a sufficiently detailed summary of the training data used.

GPAI Models with Systemic Risk

GPAI models trained with a cumulative amount of compute exceeding 10^25 FLOPs (or designated by the European Commission based on other criteria) are classified as posing systemic risk. These face additional obligations: model evaluation including adversarial testing, assessment and mitigation of systemic risks, incident monitoring and reporting to the AI Office, and ensuring adequate cybersecurity protection. As of early 2026, this threshold captures approximately a dozen foundation models from providers including OpenAI, Google DeepMind, Anthropic, Meta, and Mistral.
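For a rough sense of where the threshold bites, the widely used approximation of training compute as 6 times parameters times training tokens gives a back-of-envelope check. This heuristic is an assumption for illustration; the Act counts cumulative training compute, and the Commission's guidance governs how it is measured.

```python
# Back-of-envelope check against the 10^25 FLOP systemic-risk threshold,
# using the common "6 * parameters * training tokens" approximation for
# training compute. Illustrative only; not the Act's official methodology.
THRESHOLD = 1e25

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# A hypothetical 70B-parameter model trained on 15T tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.2e}", flops > THRESHOLD)  # 6.30e+24 False -> below threshold
```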

Enforcement Timeline: Key Dates

The AI Act's phased implementation creates a staggered compliance timeline. Compliance teams should note these critical dates:

Date | Milestone | What applies
1 August 2024 | Entry into force | Regulation published and entered into force
2 February 2025 | Prohibited practices ban | Article 5 prohibitions enforceable; AI literacy obligations (Article 4)
2 August 2025 | GPAI obligations apply | Chapter V obligations for GPAI model providers; rules on notified bodies
2 August 2026 | High-risk AI system obligations | Annex III high-risk systems: full Chapter III, Section 2 requirements apply
2 August 2027 | Annex I high-risk systems | AI systems that are safety components of products under EU harmonised legislation

Enforcement and Penalties

The AI Act creates a multi-layered enforcement structure:

  - National market surveillance authorities enforce the rules for high-risk AI systems; for financial institutions regulated under EU financial services law, Member States may designate the existing financial supervisor as the market surveillance authority, keeping AI Act supervision within familiar channels.
  - The AI Office, within the European Commission, supervises GPAI model providers, supported by the European Artificial Intelligence Board for coordination across Member States.
  - Penalties scale with the violation: up to EUR 35 million or 7% of worldwide annual turnover for prohibited practices, up to EUR 15 million or 3% for breaches of most other obligations (including the high-risk requirements), and up to EUR 7.5 million or 1% for supplying incorrect information to authorities.

Interaction with Financial Services Regulation

The AI Act does not replace sector-specific regulation. Financial services firms must comply with both the AI Act and existing supervisory requirements. Key interactions:

EBA/ESMA/EIOPA AI Guidelines

The European Supervisory Authorities issued joint guidelines on AI use in financial services (2024), covering model risk management, explainability, and consumer protection. These guidelines remain in force alongside the AI Act. Where the AI Act sets minimum requirements, the ESA guidelines may impose additional sector-specific expectations. For example, the EBA's model risk management expectations for internal models used in capital calculations go beyond the AI Act's Article 9 risk management requirements.

MiCA and AI in Crypto

Crypto asset service providers (CASPs) using AI for trading, market-making, or portfolio management must comply with both MiCA conduct of business requirements and the AI Act. Where AI systems are used for automated crypto trading that affects consumers, they may fall within Annex III point 5 (access to essential private services) if they affect individuals' access to financial services, though this classification remains untested.

DORA and AI Resilience

The Digital Operational Resilience Act (DORA) requires financial entities to manage ICT risk for all technology systems, including AI. AI systems used in critical financial functions must meet DORA's operational resilience requirements for testing, incident reporting, and third-party risk management, in addition to the AI Act's accuracy and robustness requirements under Article 15.

Practical Steps for Compliance Teams

With the 2 August 2026 deadline for high-risk AI system compliance approaching, financial services compliance teams should be acting now. Here is a practical roadmap:

  1. AI system inventory: Catalogue every AI system in use or in development. Include vendor-provided AI tools, internally developed models, and AI components embedded in larger systems (e.g., fraud detection modules within core banking platforms).
  2. Risk classification assessment: Map each AI system to the four-tier classification (a minimal inventory-and-classification sketch follows this list). Pay particular attention to credit scoring, fraud detection, AML monitoring, and insurance pricing: these are the most likely candidates for high-risk classification in financial services.
  3. Gap analysis against Chapter III, Section 2 requirements: For each high-risk system, assess current compliance against Articles 9-15. The most common gaps are: inadequate technical documentation, insufficient bias testing in training data, absence of meaningful human oversight mechanisms, and lack of logging and record-keeping infrastructure.
  4. Conformity assessment preparation: High-risk AI systems under Annex III require internal conformity assessment (most financial AI) or third-party conformity assessment (certain biometric systems). Prepare the technical documentation required under Annex IV and establish the quality management system required under Article 17.
  5. Vendor assessment: For AI systems procured from third parties, assess whether the vendor (as "provider") is meeting their AI Act obligations. The deployer (the financial institution) has separate obligations under Article 26, including verifying that input data is relevant and sufficiently representative, and monitoring the system in production.
  6. AI literacy programme: Article 4 requires providers and deployers to ensure their staff have sufficient AI literacy. For compliance teams, this means training on the AI Act's requirements, but also practical understanding of how AI systems in use function, their limitations, and their risk profiles.
  7. Ongoing monitoring framework: The AI Act requires continuous monitoring, not one-off compliance. Establish processes for monitoring AI system performance in production, detecting data drift, reviewing incident reports, and updating risk assessments as systems evolve.
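As referenced in step 2, here is a minimal sketch of an inventory record with a first-pass risk-tier mapping. The use-case labels and tier rules are simplified illustrations of Annex III point 5; real classification needs per-system legal review, which is why anything ambiguous defaults to "review" rather than "minimal".

```python
# Minimal sketch of an AI system inventory with a first-pass risk-tier
# mapping (steps 1-2 above). Tier rules are simplified illustrations of
# Annex III point 5, not a substitute for legal classification.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    vendor: str | None         # None for internally developed models
    use_case: str               # e.g. "credit_scoring", "fraud_detection"
    affects_individuals: bool

HIGH_RISK_USE_CASES = {"credit_scoring", "insurance_pricing"}  # Annex III pt 5(b)/(c)

def first_pass_tier(system: AISystem) -> str:
    if system.use_case in HIGH_RISK_USE_CASES and system.affects_individuals:
        return "high"           # full Chapter III, Section 2 obligations
    if system.use_case in {"customer_chatbot", "content_generation"}:
        return "limited"        # Article 50 transparency duties
    return "review"             # default to manual legal review, not "minimal"

inventory = [
    AISystem("RetailScore v2", "Acme Models Ltd", "credit_scoring", True),
    AISystem("TxnWatch", None, "fraud_detection", True),
]
for s in inventory:
    print(s.name, "->", first_pass_tier(s))
# RetailScore v2 -> high
# TxnWatch -> review
```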
"The AI Act is not a future problem โ€” the prohibited practices ban is already in force, GPAI obligations apply in August 2025, and high-risk system requirements hit in August 2026. Financial services firms that start their compliance programmes after the deadline will face both regulatory and competitive disadvantage."

What Comes Next

The AI Act framework continues to evolve. In 2026 and beyond, compliance teams should monitor:

  - Harmonised standards under development at CEN-CENELEC, which will provide a presumption of conformity with the high-risk requirements once cited in the Official Journal.
  - Commission delegated acts and guidelines, including any amendments to the Annex III high-risk categories and guidance on classification.
  - AI Office output on GPAI, including the code of practice and systemic-risk designations.
  - National implementation: designation of market surveillance authorities and national penalty regimes.

For financial services compliance teams, the AI Act represents a fundamental shift from voluntary AI ethics frameworks to mandatory, enforceable regulation. The time for monitoring is past โ€” implementation must be underway. For broader context on how the AI Act fits into the EU's evolving regulatory landscape, see our coverage of DORA, MiCA, and GDPR fintech compliance.

Track EU AI Act developments automatically

RegPulse monitors AI Office publications, delegated acts, harmonised standards development, and national implementation, delivering alerts relevant to financial services AI compliance as they happen.

Start free trial →