The EU AI Act is the world's first comprehensive horizontal AI regulation. Unlike sector-specific AI guidance from financial regulators (EBA, ESMA, EIOPA), the AI Act applies across all industries. Its impact on financial services, however, is disproportionately significant, because so many core banking and fintech processes rely on AI systems that fall squarely into the "high-risk" category.
Credit scoring, fraud detection, insurance underwriting, AML transaction monitoring, algorithmic trading: all of these involve AI systems that produce legal effects on individuals or significantly affect their access to essential services. That places them in the high-risk tier with the heaviest compliance burden. This guide covers the specific obligations that matter for compliance teams in financial services, fintech, and crypto. It is not an academic overview, but a survey of the provisions that drive operational change.
The Four-Tier Risk Classification
The AI Act classifies AI systems into four risk tiers. Each tier carries different obligations. The classification drives everything that follows โ get it wrong, and you either over-invest in compliance for a minimal-risk system or, far worse, under-invest in compliance for a high-risk one.
Unacceptable Risk (Prohibited)
Article 5 prohibits AI systems that pose an unacceptable risk to fundamental rights. These prohibitions took effect on 2 February 2025. In financial services, the most relevant prohibitions are:
- Social scoring: AI systems that evaluate or classify natural persons based on social behaviour or personal characteristics, leading to detrimental or unfavourable treatment in contexts unrelated to the original data collection. The final text of the Act does not limit this prohibition to public authorities. While it primarily targets government-style social credit systems, financial institutions should assess whether loyalty scoring or cross-product behaviour analysis could be construed as social scoring.
- Emotion recognition in the workplace: AI systems that infer emotions of employees in the workplace (with limited exceptions for safety or medical purposes). Financial institutions using sentiment analysis tools in trading floors or compliance monitoring should review their scope.
- Real-time remote biometric identification in public spaces: Prohibited for law enforcement with narrow exceptions. Banks and fintechs using facial recognition for customer onboarding in physical branches are generally not caught by this provision (it targets public spaces), but the boundary is being tested.
High Risk (Heavy Obligations)
High-risk AI systems are defined in Article 6 and Annex III. For financial services, the critical Annex III categories are:
- Point 5(b) - Creditworthiness assessment: AI systems used to evaluate the creditworthiness of natural persons or establish their credit score. This covers automated credit scoring, lending algorithms, buy-now-pay-later approval systems, and any AI-driven credit risk assessment for consumer finance.
- Point 5(c) - Risk assessment and pricing in life and health insurance: AI systems used to assess risk and set premiums for life and health insurance policies for natural persons.
- Fraud detection (certain contexts): Point 5(b) expressly excludes AI systems used for the purpose of detecting financial fraud from the creditworthiness category. However, AI systems used to evaluate the reliability of evidence in investigations, or for risk assessment in the prevention and detection of financial crime, may qualify as high-risk under Annex III point 6 on law enforcement AI, depending on the operator's role.
Additionally, the Annex I high-risk route (AI systems that are safety components of products covered by EU harmonised legislation) may also be relevant to financial services where AI performs safety-critical functions in payment infrastructure.
Limited Risk (Transparency Obligations)
Article 50 imposes transparency obligations on certain AI systems regardless of their risk level:
- Chatbots: AI systems designed to interact with natural persons must disclose that the person is interacting with AI. This applies to customer service chatbots, robo-advisors, and AI-driven complaint handling in financial services.
- Deepfakes: AI-generated or manipulated content (audio, image, video, text) must be labelled as artificially generated.
- Emotion recognition and biometric categorisation: Where not prohibited, these systems must inform individuals that they are being subjected to such processing.
Minimal Risk (No Specific Obligations)
AI systems not falling into any of the above categories are subject only to voluntary codes of conduct. Examples in financial services: internal analytics dashboards, document summarisation tools, non-decision-making market research AI.
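As an illustration only, the triage logic of the four tiers can be sketched in code. The statutory classification in Articles 5-6 and Annexes I/III is controlling, and every attribute and label below is an assumption for illustration, not a legal taxonomy:

```python
from dataclasses import dataclass, field

# Hypothetical attributes for triaging an AI system against the Act's tiers.
# Real classification requires legal analysis of Articles 5-6 and Annexes I/III.
@dataclass
class AISystem:
    name: str
    uses: set = field(default_factory=set)  # e.g. {"credit_scoring", "chatbot"}

PROHIBITED = {"social_scoring", "workplace_emotion_recognition"}
HIGH_RISK = {"credit_scoring", "life_health_insurance_pricing"}
TRANSPARENCY = {"chatbot", "deepfake_generation"}

def classify(system: AISystem) -> str:
    """Return the strictest applicable tier: prohibitions trump everything,
    then high-risk, then transparency-only, then minimal."""
    if system.uses & PROHIBITED:
        return "unacceptable"
    if system.uses & HIGH_RISK:
        return "high"
    if system.uses & TRANSPARENCY:
        return "limited"
    return "minimal"

print(classify(AISystem("bnpl-approval", {"credit_scoring"})))  # high
print(classify(AISystem("support-bot", {"chatbot"})))           # limited
print(classify(AISystem("doc-summariser", {"summarisation"})))  # minimal
```

Note that the sketch returns only the strictest tier; in practice the Article 50 transparency duties apply regardless of risk level, so a high-risk chatbot carries both sets of obligations cumulatively.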
High-Risk AI System Obligations
For compliance teams in financial services, the high-risk tier is where the operational burden lies. Articles 8-15 set out requirements that must be met before a high-risk AI system can be placed on the market or put into service. These apply from 2 August 2026.
Risk Management System (Article 9)
Providers of high-risk AI systems must establish, implement, document, and maintain a risk management system throughout the AI system's lifecycle. This is not a one-off assessment โ it requires continuous identification and analysis of known and reasonably foreseeable risks, estimation and evaluation of risks that may emerge when the system is used in accordance with its intended purpose, and adoption of appropriate risk management measures. For credit scoring systems, this means documenting the risk of discriminatory outcomes, assessing the impact of data drift on model accuracy, and implementing monitoring for bias in production.
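One concrete element of the Article 9 lifecycle requirement, monitoring for data drift, is often implemented with a population stability index (PSI) check on the model's score distribution. A minimal sketch; the decision thresholds are common industry heuristics, not figures from the Act:

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population stability index between two binned score distributions.
    Inputs are bin proportions that each sum to 1; a small epsilon guards
    against empty bins."""
    eps = 1e-6
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Score distribution at model validation vs. in production (synthetic).
baseline = [0.10, 0.20, 0.40, 0.20, 0.10]
current  = [0.05, 0.15, 0.35, 0.25, 0.20]

value = psi(baseline, current)
# Common heuristic: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate/retrain.
print(f"PSI = {value:.3f}")
```

Here the PSI lands in the "monitor" band, the kind of signal that should feed back into the documented risk assessment rather than sit unread in a dashboard.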
Data and Data Governance (Article 10)
Training, validation, and testing datasets for high-risk AI systems must meet specific quality criteria. Article 10 requires that datasets be relevant, sufficiently representative, and as far as possible free of errors and complete. For financial AI, this creates specific tensions:
- Historical bias: Credit scoring models trained on historical lending data may encode discriminatory patterns. The AI Act requires providers to examine training data for possible biases and implement appropriate measures to detect, prevent, and mitigate them.
- GDPR intersection: Processing personal data to identify bias (e.g., analysing protected characteristics to test for discriminatory outcomes) requires a lawful basis under GDPR. The AI Act's Article 10(5) permits processing of special category data specifically for bias monitoring, subject to appropriate safeguards, but the practical interplay with GDPR is still being clarified through delegated acts.
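A minimal bias check on lending outcomes might compare approval rates across groups. The data below is synthetic, and the four-fifths ratio used as a flag is a US-derived heuristic, not an AI Act threshold:

```python
def approval_rate(decisions: list[tuple[str, bool]], group: str) -> float:
    """Share of approved applications for one (hypothetical) group label."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Synthetic (group, approved) outcomes -- illustration only.
decisions = [("A", True)] * 80 + [("A", False)] * 20 \
          + [("B", True)] * 50 + [("B", False)] * 50

rate_a = approval_rate(decisions, "A")   # 0.8
rate_b = approval_rate(decisions, "B")   # 0.5
ratio = rate_b / rate_a
# Heuristic: a ratio below 0.8 warrants investigation of the model and data.
print(f"disparate impact ratio: {ratio:.3f}")
```

In practice this analysis is exactly where the GDPR tension above bites: computing the group labels may itself require processing special category data under the Article 10(5) gateway.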
Technical Documentation (Article 11)
Before a high-risk AI system is placed on the market, its provider must draw up technical documentation demonstrating compliance with the Chapter III, Section 2 requirements. Annex IV specifies the minimum contents, including: a general description of the AI system; a detailed description of its elements and the process for its development; information about monitoring, functioning, and control; a description of the risk management system; and detailed information about the training, validation, and testing data.
Record-Keeping and Logging (Article 12)
High-risk AI systems must be designed and developed with logging capabilities that enable tracing the system's operation throughout its lifecycle. Logs must be retained for a period appropriate to the system's intended purpose, and at least six months unless otherwise provided for in applicable Union or national law. For AML-related AI systems, longer retention may be required under AMLD6 record-keeping obligations.
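The Article 12 logging duty implies structured, timestamped records of each automated decision that can be traced back to a model version. A sketch of one such record; the field names are assumptions, not a schema from the Act:

```python
import json
import datetime

def log_decision(system_id: str, input_ref: str, output: dict,
                 model_version: str) -> str:
    """Serialise one traceable decision record. In production this would go
    to append-only storage, with retention enforced per Article 12 and any
    longer AML record-keeping period that applies."""
    record = {
        "system_id": system_id,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,   # a reference, not raw personal data
        "output": output,
    }
    return json.dumps(record, sort_keys=True)

entry = log_decision("credit-scoring-v2", "application/8841",
                     {"score": 612, "decision": "refer_to_human"}, "2.3.1")
print(entry)
```

Logging a reference to the input rather than the input itself keeps the audit trail useful while limiting the personal data duplicated into log storage.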
Transparency and Information to Deployers (Article 13)
High-risk AI systems must be designed to be sufficiently transparent that deployers (the entities using the AI system, i.e., the bank or fintech) can interpret the system's output and use it appropriately. Instructions for use must include the system's intended purpose, level of accuracy and robustness, any known or foreseeable risks, and specifications for input data. For financial services, this means credit scoring providers must explain to their bank clients exactly how the model works, its limitations, and the conditions under which it should not be relied upon.
Human Oversight (Article 14)
High-risk AI systems must be designed to allow effective oversight by natural persons during the period in which the system is in use. This includes the ability to fully understand the system's capacities and limitations, correctly interpret the system's output, decide not to use the system or disregard its output, and interrupt or stop the system's operation. For automated lending, this means maintaining a genuine human review mechanism, not a rubber-stamp approval of automated decisions.
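One way to make oversight genuine rather than rubber-stamp is to route to a human reviewer whenever the model is uncertain or the outcome is adverse. The thresholds below are illustrative assumptions, not values from the Act:

```python
def route_decision(score: float, confidence: float,
                   approve_threshold: float = 0.7,
                   min_confidence: float = 0.9) -> str:
    """Return 'approve' or 'human_review'. Low-confidence outputs and
    adverse outcomes always go to a person, who can override the model
    or halt the system entirely (the Article 14 capabilities)."""
    if confidence < min_confidence:
        return "human_review"
    if score >= approve_threshold:
        return "approve"
    return "human_review"   # automated declines are never final here

print(route_decision(0.85, 0.95))  # approve
print(route_decision(0.40, 0.95))  # human_review
print(route_decision(0.85, 0.60))  # human_review
```

The design choice to never auto-decline also helps with the GDPR Article 22 restriction on solely automated decisions with legal effects, which applies alongside the AI Act.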
Accuracy, Robustness, and Cybersecurity (Article 15)
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. Providers must declare accuracy metrics and test results. Robustness against errors, faults, and adversarial attacks must be addressed. This is particularly relevant for financial AI systems that may be targets of adversarial manipulation, such as credit application fraud designed to game scoring models.
GPAI Model Obligations
General-purpose AI (GPAI) models, such as large language models, face separate obligations under Articles 51-56 of the AI Act, applicable from 2 August 2025. These obligations fall on GPAI model providers, not deployers, but compliance teams in financial services should understand them because they affect the models underlying many deployed AI tools.
All GPAI Models
Every GPAI model provider must: maintain and provide technical documentation (including training and testing processes), provide information and documentation to downstream providers who integrate the GPAI model into their AI systems, establish a policy for complying with Union copyright law, and publish a sufficiently detailed summary of the training data used.
GPAI Models with Systemic Risk
GPAI models trained with a cumulative amount of compute exceeding 10^25 FLOPs (or designated by the European Commission based on other criteria) are classified as posing systemic risk. These face additional obligations: model evaluation including adversarial testing, assessment and mitigation of systemic risks, incident monitoring and reporting to the AI Office, and ensuring adequate cybersecurity protection. As of early 2026, this threshold captures approximately a dozen foundation models from providers including OpenAI, Google DeepMind, Anthropic, Meta, and Mistral.
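The 10^25 FLOP threshold can be sanity-checked against the common 6 x parameters x training-tokens approximation for dense transformer training compute. The approximation is an estimation heuristic from the ML literature, not part of the Act, and the model size below is hypothetical:

```python
def training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer training compute: ~6 FLOPs per parameter
    per training token (forward + backward pass)."""
    return 6 * params * tokens

SYSTEMIC_RISK_THRESHOLD = 1e25  # Article 51(2) presumption

# Hypothetical model: 200B parameters trained on 15T tokens.
compute = training_flops(200e9, 15e12)
print(f"{compute:.2e} FLOPs, systemic risk presumed: "
      f"{compute > SYSTEMIC_RISK_THRESHOLD}")
```

By this estimate, frontier-scale training runs clear the threshold comfortably, while a 7B-parameter model trained on a few trillion tokens sits orders of magnitude below it.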
Enforcement Timeline: Key Dates
The AI Act's phased implementation creates a staggered compliance timeline. Compliance teams should note these critical dates:
| Date | Milestone | What Applies |
|---|---|---|
| 1 August 2024 | Entry into force | Regulation published and entered into force |
| 2 February 2025 | Prohibited practices ban | Article 5 prohibitions enforceable; AI literacy obligations (Article 4) |
| 2 August 2025 | GPAI obligations apply | Chapter V obligations for GPAI model providers; rules on notified bodies |
| 2 August 2026 | High-risk AI system obligations | Annex III high-risk systems; full Chapter III, Section 2 requirements apply |
| 2 August 2027 | Annex I high-risk systems | AI systems that are safety components of products under EU harmonised legislation |
Enforcement and Penalties
The AI Act creates a multi-layered enforcement structure:
- European AI Office: Established within the European Commission, the AI Office is responsible for overseeing GPAI model compliance, coordinating enforcement across Member States, and developing guidelines and standards. It became operational in early 2024.
- National competent authorities: Each Member State must designate at least one national competent authority for market surveillance and enforcement. For financial services AI, the national competent authority may coordinate with financial supervisors (national central banks, financial market authorities).
- Fines: The penalty structure is severe:
- Prohibited AI practices: up to €35 million or 7% of global annual turnover (whichever is higher)
- High-risk non-compliance: up to €15 million or 3% of turnover
- Incorrect information to authorities: up to €7.5 million or 1% of turnover
- For SMEs and startups, lower caps apply (the lesser amount, not the higher)
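The "whichever is higher" mechanics, reversed for SMEs, reduce to a simple max/min; the figures follow Article 99, and the turnover numbers are illustrative:

```python
def max_fine(turnover_eur: float, cap_eur: float, pct: float,
             is_sme: bool = False) -> float:
    """Maximum administrative fine: the higher of the fixed cap and the
    turnover percentage -- but the LOWER of the two for SMEs/startups."""
    fixed, proportional = cap_eur, pct * turnover_eur
    return min(fixed, proportional) if is_sme else max(fixed, proportional)

# Prohibited-practice breach, global turnover EUR 2bn:
print(max_fine(2e9, 35e6, 0.07))               # EUR 140m
# Same breach by an SME with EUR 10m turnover:
print(max_fine(10e6, 35e6, 0.07, is_sme=True)) # EUR 700k
```

For large groups the percentage branch dominates quickly: at €2bn turnover the exposure for a prohibited-practice breach is four times the fixed cap.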
Interaction with Financial Services Regulation
The AI Act does not replace sector-specific regulation. Financial services firms must comply with both the AI Act and existing supervisory requirements. Key interactions:
EBA/ESMA/EIOPA AI Guidelines
The European Supervisory Authorities issued joint guidelines on AI use in financial services (2024), covering model risk management, explainability, and consumer protection. These guidelines remain in force alongside the AI Act. Where the AI Act sets minimum requirements, the ESA guidelines may impose additional sector-specific expectations. For example, the EBA's model risk management expectations for internal models used in capital calculations go beyond the AI Act's Article 9 risk management requirements.
MiCA and AI in Crypto
Crypto asset service providers (CASPs) using AI for trading, market-making, or portfolio management must comply with both MiCA conduct of business requirements and the AI Act. Where AI systems are used in ways that materially affect consumers' access to essential financial services, they may fall within the high-risk categories of Annex III point 5.
DORA and AI Resilience
The Digital Operational Resilience Act (DORA) requires financial entities to manage ICT risk for all technology systems, including AI. AI systems used in critical financial functions must meet DORA's operational resilience requirements for testing, incident reporting, and third-party risk management, in addition to the AI Act's accuracy and robustness requirements under Article 15.
Practical Steps for Compliance Teams
With the 2 August 2026 deadline for high-risk AI system compliance approaching, financial services compliance teams should be acting now. Here is a practical roadmap:
- AI system inventory: Catalogue every AI system in use or in development. Include vendor-provided AI tools, internally developed models, and AI components embedded in larger systems (e.g., fraud detection modules within core banking platforms).
- Risk classification assessment: Map each AI system to the four-tier classification. Pay particular attention to credit scoring, fraud detection, AML monitoring, and insurance pricing: these are the most likely candidates for high-risk classification in financial services.
- Gap analysis against Chapter III, Section 2 requirements: For each high-risk system, assess current compliance against Articles 9-15. The most common gaps are: inadequate technical documentation, insufficient bias testing in training data, absence of meaningful human oversight mechanisms, and lack of logging and record-keeping infrastructure.
- Conformity assessment preparation: High-risk AI systems under Annex III require internal conformity assessment (most financial AI) or third-party conformity assessment (certain biometric systems). Prepare the technical documentation required under Annex IV and establish the quality management system required under Article 17.
- Vendor assessment: For AI systems procured from third parties, assess whether the vendor (as "provider") is meeting their AI Act obligations. The deployer (the financial institution) has separate obligations under Article 26, including verifying that input data is relevant and sufficiently representative, and monitoring the system in production.
- AI literacy programme: Article 4 requires providers and deployers to ensure their staff have sufficient AI literacy. For compliance teams, this means training on the AI Act's requirements, but also practical understanding of how AI systems in use function, their limitations, and their risk profiles.
- Ongoing monitoring framework: The AI Act requires continuous monitoring, not one-off compliance. Establish processes for monitoring AI system performance in production, detecting data drift, reviewing incident reports, and updating risk assessments as systems evolve.
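The inventory, classification, and gap-analysis steps lend themselves to a simple structured register that can be queried for prioritisation. Every field name and value below is an assumption for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class InventoryEntry:
    """One row of a hypothetical AI system register supporting steps 1-3."""
    system: str
    role: str                # "provider" or "deployer" (Article 3 definitions)
    risk_tier: str           # "unacceptable" | "high" | "limited" | "minimal"
    gaps: list = field(default_factory=list)

register = [
    InventoryEntry("credit-scoring-v2", "deployer", "high",
                   gaps=["logging", "human oversight"]),
    InventoryEntry("support-chatbot", "deployer", "limited",
                   gaps=["AI disclosure notice"]),
    InventoryEntry("doc-summariser", "deployer", "minimal"),
]

# Prioritise remediation: high-risk systems with open gaps come first.
todo = [e.system for e in register if e.risk_tier == "high" and e.gaps]
print(todo)  # ['credit-scoring-v2']
```

Even a register this simple answers the first question a supervisor will ask: which of your systems are high-risk, and what is outstanding for each.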
"The AI Act is not a future problem: the prohibited practices ban is already in force, GPAI obligations apply in August 2025, and high-risk system requirements hit in August 2026. Financial services firms that start their compliance programmes after the deadline will face both regulatory and competitive disadvantage."
What Comes Next
The AI Act framework continues to evolve. In 2026 and beyond, compliance teams should monitor:
- Delegated acts and implementing acts: The Commission is expected to adopt delegated acts specifying the criteria for GPAI model systemic risk classification, the benchmarks and methodologies for evaluating GPAI model capabilities, and detailed requirements for technical documentation.
- Harmonised standards: CEN and CENELEC are developing harmonised standards for AI Act compliance. Once adopted, these standards will create a presumption of conformity with the AI Act's requirements, making them the practical compliance benchmark.
- AI Office guidance: The AI Office is publishing codes of practice, guidelines, and templates for GPAI model compliance. These are not legally binding but will shape enforcement expectations.
- National implementation: Member States are designating national competent authorities and establishing enforcement procedures. The approach may vary by jurisdiction, particularly regarding the interaction between AI regulators and financial supervisors.
For financial services compliance teams, the AI Act represents a fundamental shift from voluntary AI ethics frameworks to mandatory, enforceable regulation. The time for monitoring is past; implementation must be underway. For broader context on how the AI Act fits into the EU's evolving regulatory landscape, see our coverage of DORA, MiCA, and GDPR fintech compliance.