Abstract
The integration of artificial intelligence (AI) into the area of Anti-Money Laundering and Countering the Financing of Terrorism (AML/CFT) marks a significant shift in how financial institutions and regulators approach the fight against financial crime. In the EU, two major regulatory projects reached key milestones in 2024: the revision of the AML/CFT framework and the adoption of the AI Act. Although these legislative efforts progressed in parallel, there appears to have been limited coordination between the drafters, resulting in considerable inconsistencies and areas of ambiguity. Even such seemingly basic – yet crucial – issues as the consistent application of the definition of an AI system, the determination of the risks such a system may entail, or the terminology concerning human oversight give rise to concerns. The present article has two primary objectives: first, to provide an in-depth analysis of the interplay between the AI-relevant provisions of the revised AML/CFT framework and the AI Act, highlighting instances of conceptual confusion and practical challenges that those implementing AI in the AML/CFT area may face; and second, to identify and examine the areas where the problematic interplay between the two regimes may negatively affect the scope and availability of the protective human rights standards enshrined in the AI Act.
Keywords
Introduction
2024 saw the completion of two significant EU regulatory projects: the revision of the Anti-Money Laundering and Countering the Financing of Terrorism (AML/CFT) framework 1 and the adoption of the world’s first comprehensive set of rules governing artificial intelligence (AI) – the EU AI Act. 2 The obligations these frameworks establish have long been the subject of heated political debate, 3 and even though some time will still elapse before both frameworks become fully applicable, industry preparations for compliance have already commenced. 4 The prevalence of technology-driven solutions and intensifying competition among countries and corporations, together with unprecedented public and private investment in innovation, make AI relevant for nearly all areas of life, business and regulation. The financial sector is no exception to the trend towards greater use of AI. 5 Financial institutions are increasingly interested in applying AI not only to achieve commercial goals but also to meet legal obligations – including those arising from the AML/CFT framework. 6
Various approaches – mainly machine learning-based and encompassing different techniques, jointly referred to in this paper as ‘AI’ solutions – can be deployed to streamline the realisation of AML/CFT policy and thus contribute to the goal of combatting financial crime. 7 AI can be used for activities ranging from customer onboarding and transaction monitoring by obliged entities to supporting regulatory bodies in assessing the risks of money laundering, financing of terrorism and evasion of sanctions or in performing supervisory tasks. The common denominator among all these activities is the inherent need to process large volumes of structured and unstructured data with a view to identifying patterns, and to transform these data into actionable intelligence that is expected not only to help detect instances of money laundering (situations where transactions involve the proceeds of criminal activity) but also to predict when assets may be used for illicit purposes (in particular, terrorism financing).
While the harvesting of financial intelligence itself raises important constitutional and fundamental rights concerns, 8 the fate of the engagement of AI in this area seems sealed. Yet the relevant regulatory landscape is far from clear. The rules applicable to the use of AI for AML/CFT purposes derive from multiple frameworks including, but not limited to, the AML/CFT legal regime, personal data protection laws, regulations on the internal market and digital operational resilience in the financial sector, Member States’ national codes of criminal procedure and administrative laws, and – as regards the use of AI – soon the AI Act. The abundance of applicable laws, as well as the complex intertwinement of different frameworks, may not only hinder the technological potential but also have severe implications for individuals, whose fundamental rights and freedoms may be at stake when AI systems are applied. Ambiguous or open-ended provisions scattered across different legislative instruments may in practice water down the standards of protection enshrined in the EU Charter of Fundamental Rights and several secondary law frameworks. 9 Whilst a comprehensive examination of all provisions possibly applicable to the use of AI for combatting financial crime is a subject suitable for a multi-volume publication, the focus of the present paper is the intersection of two EU regulatory endeavours concluded in 2024, that is, the revised AML/CFT regime and the AI Act. As the analysis will demonstrate, although both legislative initiatives progressed in parallel and were finalised just two weeks apart, 10 there appears to have been limited coordination between the drafters, resulting in notable inconsistencies and areas of ambiguity. Even such seemingly basic issues as the consistent application of the definition of an AI system, the classification of the risk entailed by such a system, or the terminology concerning human oversight give rise to concerns.
In view of this, the paper aims to answer two questions: first, how the revised EU AML/CFT legal framework and the AI Act interact with each other where the possible application of AI for AML/CFT purposes is concerned, and where the intersection of these two regimes exhibits the most significant pitfalls for anyone attempting to reconcile the provisions of both regulatory instruments; and second, what the consequences of the identified ambiguities could be for individuals affected by AI measures applied in the area of AML/CFT.
To navigate the maze of the analysed regulatory landscape, the paper uses black-letter law and policy document analysis to map and discuss the most relevant upcoming rules applicable to the development and deployment of AI tools in the area of AML/CFT under the revised EU AML/CFT regime and the EU AI Act.
The following sections of the paper first provide an overview of the existing and upcoming sources of EU rules in the area of AML/CFT (Section ‘The current and upcoming EU AML/CFT regulatory landscape’). The paper then explores where and how the new EU AML/CFT instruments allow for or anticipate the use of AI, considering currently available AI solutions (Section ‘AI governance in the new AML/CFT regime’). Next, a brief overview of the AI Act is provided (Section ‘The governance of AI in the EU: a brief overview of the AI Act’), followed by a discussion of the complexities of specific provisions of the AI Act which relate to AML/CFT matters and of the standards of protection for individuals which the AI Act enshrines (Section ‘Bridging the AI Act and AML/CFT Legal Framework from the perspective of fundamental rights protection’). In conclusion, the paper recapitulates the key observations on the intersection of the revised EU AML/CFT regime and the AI Act, and critically reflects on the consequences of the relevant ambiguities for those who may be affected by AI-driven AML/CFT measures (Section ‘Conclusion’).
The current and upcoming EU AML/CFT regulatory landscape
The current framework and steps towards the reform
After more than thirty years of development – marked by efforts to align with global policy standards, in particular those developed by the Financial Action Task Force (FATF), to navigate the cross-pillar complexities of the EU’s institutional framework, and to manage national constitutional and political constraints – the current AML/CFT regime has evolved into a complex patchwork of EU legislation. 11 Currently, the framework is composed of a number of acts, with two directives constituting the main, but not exclusive, source of EU rules in the AML/CFT area, namely the ‘4th Anti-Money Laundering Directive’ (4AMLD) 12 and the ‘5th Anti-Money Laundering Directive’ (5AMLD), 13 which amends it. The 4AMLD (as amended by the 5AMLD), central to the AML/CFT architecture, designates the so-called obliged entities, that is, organisations and professions tasked with ‘gate-keeping obligations’; assigns them customer due diligence and reporting tasks; requires Member States to establish Financial Intelligence Units (FIUs) to carry out the analysis and assessment of suspicious transactions; and requires them to create an effective supervision and enforcement mechanism for the AML/CFT policy. The AML directives are complemented by other instruments, including the 2019 Directive laying down rules facilitating the use of financial and other information for the prevention, detection, investigation or prosecution of certain criminal offences (‘Directive on Access to Financial and Other Information’) 14 and the 2018 Regulation on controls on cash entering or leaving the Union (‘Regulation on cash controls’). 15 Finally, other integral components of the framework are instruments of a criminal law nature, 16 namely the 2018 Directive on combating money laundering by criminal law 17 and the 2014 Directive on freezing and confiscation of instrumentalities and proceeds of crime. 18
The above shows the abundance and variety of the regulatory instruments which, together with the often incoherent transposition of the directives – in particular the 4AMLD and 5AMLD – into the national laws of the EU Member States, have led to considerable divergences in the way the current framework is applied throughout the Union and have revealed weaknesses in its enforcement. 19 To address these shortcomings, in May 2020 the European Commission published a Communication on an Action Plan for a comprehensive Union policy on preventing money laundering and terrorism financing, in which it expressed its intention to table new legislative proposals on AML/CFT. 20 Pursuing the objective of creating a powerful and sustainable enforcement mechanism and eliminating regulatory gaps, the legislative package on the reform of the European AML/CFT framework was presented in July 2021. The package encompassed: (i) a Proposal for a Regulation on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, 21 (ii) a Proposal for a Regulation establishing the Authority for AML/CFT, 22 (iii) a Proposal for a Regulation of the European Parliament and of the Council on information accompanying transfers of funds and certain crypto-assets 23 and (iv) a Proposal for a Directive of the European Parliament and of the Council on the mechanisms to be put in place by the Member States for the prevention of the use of the financial system for the purposes of money laundering or terrorist financing and repealing the 4AMLD. 24
The Regulation on information accompanying transfers of funds and certain crypto-assets (known as the ‘Travel Rule Regulation’) 25 was adopted in May 2023. The other three instruments – Regulation (EU) 2024/1624 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing (‘AML Regulation’), 26 Regulation (EU) 2024/1620 establishing the Authority for AML/CFT (‘AMLA Regulation’) 27 and Directive (EU) 2024/1640 on the mechanisms to be put in place by Member States for the prevention of the use of the financial system for the purposes of money laundering or terrorist financing (the ‘6th Anti-Money Laundering Directive’ [6AMLD]) 28 – were adopted a year later. Together with these three, another instrument was adopted, namely Directive (EU) 2024/1654 amending the Directive on Access to Financial and Other Information as regards access by competent authorities to centralised bank account (CBA) registries through the interconnection system and technical measures to facilitate the use of transaction records (‘Directive on Access to CBA Registries’), 29 which, albeit not included in the original AML/CFT package, supplements the revised regime.
The rationale and logic behind the reformed AML/CFT framework
The underlying rationale of the reform was to create a coherent whole: to facilitate the prevention and detection of illicit financial flows and to enhance coordination and cooperation among the authorities of the Member States, especially through the establishment, by the AMLA Regulation, of a new EU agency for the stringent enforcement of AML/CFT measures, serving as the centrepiece of the legislative package. 30 To prevent the emergence of further incongruities, which have allegedly been hampering the efficiency of the AML/CFT policy thus far, the AML/CFT tasks of obliged entities were incorporated into a single directly applicable rulebook, that is, the AML Regulation. 31 The rules on information accompanying transfers of funds and certain crypto-assets were also introduced by a regulation. 32 In line with the primary law of the European Union, the regulations have general application and are binding in their entirety and directly applicable in all Member States, without the need for Member States to transpose them into their national legal frameworks. 33
The other two pieces of the new AML/CFT landscape are, however, directives – the 6AMLD and the Directive on Access to CBA Registries. These require incorporation into national legal systems, in particular by amending relevant national laws, regulations or administrative provisions, and are binding as to the results they aim to achieve. 34 While it is understandable that directives may be best suited to addressing pertinent organisational and institutional matters at the national level (such as those concerning national AML/CFT enforcement bodies or registries), leaving Member States sufficient room to accommodate inevitably country-specific circumstances, discrepancies between Member States affecting the effectiveness of the AML/CFT system may remain. One of the primary examples where this is likely to occur concerns FIUs. 35 The 6AMLD continues to provide only a minimum level of harmonisation and, thus, to uphold the diversity of these crucial AML/CFT actors, which has already been impairing their cross-border collaboration. 36 As will be discussed later, the reform of the AML/CFT framework tries to resolve this issue by assigning AMLA (under the AMLA Regulation) the task of coordinating the FIUs’ efforts and encouraging joint analyses, but it is doubtful whether that will facilitate direct data sharing between Member States. 37
The reformed AML/CFT framework’s silent invitation to AI integration
The work on the reform of the AML/CFT framework and the adoption of the AI Act was carried out in parallel. Surprisingly, while interest in leveraging technological innovation to facilitate AML/CFT tasks and obligations has been growing as the discussions on AI were taking an increasingly prominent position on political agendas, only a few explicit references to AI were made in the new EU AML/CFT regime. None of these references was present in the original proposals of the European Commission from July 2021. They were added later in the legislative process and integrated into the texts of the AML Regulation 38 and the AMLA Regulation 39 published upon the agreements between the Parliament and the Council. Hence, Article 76(5) of the AML Regulation envisages the possibility of involving ‘AI systems’ in the performance of due diligence measures by obliged entities, and Article 5(5)(i) of the AMLA Regulation mentions, among AMLA’s tasks, developing and making available to FIUs ‘[AI] services’. Other components of the AML/CFT reform package do not directly address the possibility of the use of AI, but, as will be discussed below, this does not mean that there is no practical and legal room for the deployment of AI in the policy areas they govern. On the contrary, the scope of tasks and responsibilities of the AML/CFT actors prescribed by the new framework – along with the spirit of the reform, which places the effectiveness of the AML/CFT system at its core – and the broader narrative around innovation dominating current public discourse may implicitly encourage greater integration of cutting-edge technologies, including AI, to better achieve this objective. 40 With this in mind, the next section explores how exactly the use of AI is addressed in the revised EU AML/CFT regime.
AI governance in the new AML/CFT regime
‘AI systems’ for AML/CFT compliance according to the AML Regulation: One provision, many doubts
The primary goal of the AML Regulation is to provide a single directly applicable rulebook for obliged entities and, thereby, to create harmonised rules across the EU and eliminate the problem of divergences resulting from the not directly applicable AML/CFT directives. 41 Besides beneficial ownership transparency requirements and measures to limit the misuse of anonymous instruments, the AML Regulation lays down the measures to be applied by obliged entities to prevent money laundering and terrorist financing. 42 This is an area which reveals great potential for the engagement of AI solutions and seems largely to open up the market for such technology to the private stakeholders within the AML/CFT ecosystem. 43 The measures to prevent money laundering and terrorist financing, which obliged entities are compelled to apply, are specified in Chapter II (‘Internal Policies, Procedures and Controls of Obliged Entities’) (Articles 9–18) and Chapter III (‘Customer Due Diligence’) (Articles 19–50) of the AML Regulation. These measures encompass: the assessment of the risks of money laundering, terrorist financing and evasion of sanctions to which obliged entities are exposed; the identification of the customer or ultimate beneficial owner during onboarding; ongoing customer due diligence obligations, including the obligations referred to by practitioners as ‘Know Your Customer’ (KYC); 44 the monitoring of transactions in real time; and the detection and red-flagging of suspicious activities.
While the aforementioned provisions imposing AML/CFT compliance tasks on obliged entities remain rather technology neutral – that is, they refrain from specifying how any of these tasks should be performed technically – Article 76(5) of the AML Regulation is an exception, as it explicitly refers to the possibility of employing ‘AI systems’. 45 The provision permits obliged entities to ‘adopt decisions resulting (. . .) from processes involving AI systems’ 46 as defined in the AI Act. Such decisions are also allowed to be based on ‘automated processes, including profiling’ 47 as defined in the General Data Protection Regulation (GDPR). 48 Regrettably, there is some degree of inconsistency between Article 76(5) and Recital 150 of the AML Regulation, which appears to correspond to it. The latter not only refers solely to ‘automated individual decision-making, including profiling’ as defined in the GDPR, failing to mention AI systems, but also speaks of ‘[adopting] processes that enable automated individual decision-making, including profiling’ rather than ‘[adopting] decisions resulting from automated processes, including profiling’. 49 The recital, albeit non-binding, should aid in interpreting the article by clarifying its regulatory objectives. 50 Yet these divergent formulations not only exhibit inconsistent drafting but, in fact, invite the question of the legislator’s intention concerning the degree to which automated processes (whether involving AI systems or other forms of automation) may be permitted in individual decision-making in the AML/CFT context. The recital appears to generally permit such decisions, whereas the binding provision – Article 76(5) of the AML Regulation – seems to limit the use of AI systems and automated processes to the processes preceding any AML/CFT-relevant decision made by the obliged entity.
According to Article 76(5) of the AML Regulation, whenever a decision results from automation (whether it concerns an AI system or automated individual decision-making), three conditions must cumulatively be met. Firstly, the data processed by the systems at issue must be ‘limited to data obtained pursuant to Chapter III of [the AML Regulation]’. 51 This means that only information received in the course of the performance of the due diligence measures can be taken into account, which excludes, for instance, information received from an FIU in the form of feedback prescribed by Article 28 of the 6AMLD or instructions to monitor specific transactions or activities issued by an FIU in line with Article 25 of the 6AMLD. Under a strict interpretation of this requirement, any information received in the framework of partnerships for information sharing under Article 75 of the AML Regulation would also have to be excluded from the data set. This may give rise to practical problems due to the necessity of maintaining separate data sets and possibly using incomplete or decontextualised data both during the training and the deployment of an AI tool, which may in turn negatively affect the accuracy of the outcomes produced. 52
Secondly, ‘any decision to enter or refuse to enter into or maintain a business relationship with a customer or to carry out or refuse to carry out an occasional transaction for a customer, or to increase or decrease the extent of the customer due diligence measures (. . .) [must be] subject to meaningful human intervention to ensure the accuracy and appropriateness of such a decision’. 53 This requirement may suggest that there is an explicit permission in EU law to use AI systems for two types of decisions essential to AML/CFT compliance, namely derisking and determining the type of applicable due diligence measures, that is, choosing between regular, enhanced or simplified measures. The question is whether the catalogue of decisions referred to in Article 76(5)(b) of the AML Regulation is exhaustive, and a literal reading of the provision does not resolve this doubt unequivocally. On the one hand, the literal interpretation implies that these two types of decisions are the only ones requiring ‘meaningful human intervention’, without ruling out the possibility of using AI (without the compulsory standard of ‘meaningful human intervention’) for other types of decisions. On the other hand, Recital 150 seems to suggest the opposite, as it appears to place the emphasis on safeguarding the rights of persons subject to automated decisions regardless of their content. This shows yet another instance of inconsistency between Article 76(5) and Recital 150 of the AML Regulation, casting a shadow over the latter’s possible explanatory use.
Moreover, as will be discussed further in Section ‘A “meaningful human intervention” versus “human oversight”’, the reference to ‘meaningful human intervention’ in the context of AI systems is already problematic in itself. Despite the explicit reference in Article 76(5) of the AML Regulation to ‘AI systems as defined in [the AI Act]’, the application of the AI Act may be questioned. The reason is that, again, a literal reading of the provision may suggest that there is no room for the AI system’s autonomy (an inherent feature of any AI system 54 ) and, consequently, that such a system falls outside the scope of the AI Act. However, should this not be the case, there is still some degree of uncertainty emerging from the phrase ‘meaningful human intervention’ used in the discussed provision. One may wonder whether it is intended to be synonymous with the notion of ‘human oversight’ used in the AI Act (most notably in Article 14 of the AI Act) or distinct from it. 55 This concern is further amplified by the fact that Article 75(4)(g) of the AML Regulation (the second instance in which the AML Regulation explicitly mentions ‘AI’), which acknowledges the possibility of ‘generating’ information ‘through the use of [AI], machine learning technologies or algorithms’, speaks of ‘processes (. . .) subject to adequate human oversight’. 56 Unlike Article 76(5), Article 75(4)(g) of the AML Regulation permits the sharing of such generated information in the framework of a partnership for information sharing on the condition that oversight has been exercised over its generation. The distinction between AI, machine learning and algorithms in the wording of the discussed provision, although striking for those seeking logic and consistency in the text of the AML Regulation, does not seem to be of any practical relevance.
Article 75(4)(g) may, however, influence the interpretation of Article 76(5)(b) of the AML Regulation and suggest that the requirement of the ‘meaningful human intervention’ applicable to derisking and determining the applicable degree of due diligence measures is without prejudice to the employment of AI in other AML/CFT compliance processes.
Thirdly, Article 76(5)(c) of the AML Regulation stipulates that ‘the customer may obtain an explanation of the decision reached by the obliged entity, and may challenge that decision, except in relation to a [suspicious transaction report (STR)]’. 57 Unlike the previously discussed requirement, arguably limited to two types of decisions, this one appears to have a much broader scope of application. The provision allows the customer to receive an explanation of, and to challenge, any decision, with the sole exception of a decision concerning an STR. This seems compatible with Recital 150 of the AML Regulation, which can be interpreted as allowing any processes that enable automated individual decision-making and serve the purpose of complying with the AML Regulation. Yet, despite fostering transparency and trust between a customer and a financial institution, the practical implementation of this mechanism may prove problematic for two reasons. Firstly, it will inevitably have to face the problem – well known for most AI applications – of explainability as a means of countering the ‘black box’ tendency of machine learning, which is a technically challenging issue. 58 In essence, this would imply the necessity to understand and reveal the logic behind the specific outcome of the applied AI model and to provide insights into its inner workings. 59 Secondly, even when no breach of the tipping-off rules triggered by an STR submission is at stake, it seems doubtful that the customer concerned will ever be able to receive a meaningful explanation, as there may always be a risk of jeopardising the policy objectives underlying the AML/CFT system. In other words, it would be rather naïve to believe that Article 76(5)(c) of the AML Regulation will unexpectedly bring a considerable change and lead to transparency of the AML/CFT policies and procedures developed by obliged entities.
‘AI services’ under AMLA Regulation
The second component of the reformed AML/CFT framework is the AMLA Regulation. The creation of AMLA – a new EU body with its own legal personality – through the AMLA Regulation is a key element of the revised regime. This decentralised agency, tasked with coordinating national AML/CFT authorities and ensuring the proper and consistent implementation of EU AML/CFT rules across the Union, is expected to become ‘the new centre of gravity in AML policymaking and enforcement’. 60 The tasks assigned to AMLA demonstrate the multidimensional character of the agency’s activities, as they encompass tasks concerning the risks of money laundering and terrorism financing faced by the internal market, 61 tasks with respect to selected obliged entities, 62 tasks with respect to financial and non-financial supervisors, 63 as well as tasks regarding FIUs and their activities in the Member States. 64 The latter, interestingly from the AI point of view, explicitly include the obligation to ‘develop and make available to FIUs tools and services to enhance their analysis capabilities, as well as IT and artificial intelligence services and tools for secure information sharing, including by hosting FIU.net’. 65 The corresponding recital – Recital 9 – brings clarity to the somewhat unfortunate wording of the provision, which, if read literally, could mistakenly imply that the AI services should serve the purpose of information sharing. It explains instead that the AI services are meant to ‘enhance [FIUs’] data analysis capabilities’. 66
This seems consistent with the declaration that ‘[t]he conceptual design of the [AMLA’s] ecosystem also embraces the business needs of the FIU-pillar where FIU.net is expected to become a key component of AMLA’s architecture, integrated with state-of-the-art technologies supporting data analysis to the benefit of AMLA and the competent authorities’. 67 As regards FIU.net – a network for the exchange of information between FIUs which, until the AMLA Regulation, was highly problematic from the regulatory architecture’s perspective 68 – it must be noted that the platform should be equipped, among other things, with a functionality for information cross-matching that enables real-time analysis of large data sets available, for instance, to different FIUs or EU bodies. 69 While AMLA is still taking shape, and its overarching IT strategy – which may provide more detail on the development of AI services and on AMLA’s ambitions to accommodate the persisting differences between Member States’ FIU models – has not yet been published, 70 the role and potential of AI within AMLA remain topics of high-level discussions. 71
With the gradual development and operationalisation of AMLA, it remains to be seen whether, and what kind of, ‘AI services’ (possibly besides those for the FIUs) this new EU body will provide. Nevertheless, considering that the AMLA Regulation does not provide any further details on the use of AI in the context of the activities of AMLA itself, and no relevant cross-reference is to be found, it should be expected that the standards set out in the AI Act will apply. It is worth noting that the application of the AI Act standards comes with an enforcement mechanism, which in this case is to be exercised by the European Data Protection Supervisor (EDPS) acting as the market surveillance authority for the EU agencies that fall within the scope of the AI Act. 72 On this point, it is remarkable that the EDPS has a record of using its corrective powers in relation to FIU.net at the time when it was administered by Europol. 73 Thus, the EDPS’s supervision of the AI services deployed by AMLA may open an interesting new chapter of the EDPS’s engagement with matters related to FIU.net.
Finally, it should be highlighted that, unlike the AML Regulation, the AMLA Regulation does not refer to the definition of an AI system under the AI Act, leaving it open to discussion whether the ‘artificial intelligence services’ should be regarded as services using AI systems (within the meaning of the AI Act) or whether, by omitting a reference analogous to that in Article 76(5) of the AML Regulation, the legislator intended to differentiate such services from AI systems under the AI Act. This is another missed opportunity to achieve consistency, not only internally within the AML/CFT framework but also with the AI Act, which proceeded in parallel.
Unacknowledged room for AI in the remainder of the new EU AML/CFT framework
The other instruments of the new AML/CFT package, that is, the 6AMLD and the Travel Rule Regulation, do not contain any explicit references to AI. However, given the scope of tasks and responsibilities these instruments establish, there also appears to be considerable room for the deployment of AI. To begin with the 6AMLD, it must be observed that it supplements the AML/CFT policy by addressing organisational and institutional aspects which could not have been subjected to harmonisation through a regulation. The 6AMLD concerns the measures applicable to sectors exposed to money laundering and terrorist financing at the national level; the identification of money laundering and terrorist financing risks at the Union and national level; the requirements in relation to the registration, identification and checks of senior management and beneficial owners of obliged entities; the set-up of and access to beneficial ownership and bank account registers and access to real estate information; the responsibilities and tasks of FIUs and national AML/CFT supervisory bodies; and cooperation between competent authorities as well as with authorities covered by other Union legal acts. 74 As was discussed in the context of AMLA in the preceding section, considering AI’s potential for streamlining processes which require the analysis of vast datasets, there has been considerable interest in the use of such technologies for the execution of anti-fraud policies by public authorities. 75 With regard to FIUs, some have already admitted to employing AI, 76 and others, including in the EU, are considering it – for instance, the FIU in Germany. 77 At the global level, forums such as the FATF or the Egmont Group also eagerly explore technological possibilities for supporting FIUs, and it may therefore be only a matter of time before more FIUs commonly start using AI in their regular activities. 78 For the EU, the key question will be whether the tools deployed by FIUs across the Member States will be the ‘AI services’ developed by AMLA or nationally developed solutions, which could risk further reinforcing existing disparities between FIUs.
The 6AMLD also regulates national-level supervision of the AML/CFT efforts. In doing so, it follows the risk-based approach leaving competent national bodies room for decisions such as the allocation and prioritisation of resources, based on their assessment of money laundering or terrorism financing risks associated with specific sectors or types of entities. This domain also presents significant potential for the application of AI. 79 However, the 6AMLD does not provide for any rules on the deployment of AI in this context. Consequently, its deployment in the area of AML/CFT supervision is likely to be shaped primarily by the AI Act.
Finally, although not officially part of the revised AML/CFT package, the Directive on Access to CBA Registries is also worth noting. The old Directive on Access to Financial and Other Information laid down two types of measures: firstly, measures to facilitate access to and the use of financial information and bank account information by competent authorities for the prevention, detection, investigation or prosecution of serious criminal offences; secondly, measures to facilitate access to law enforcement information by FIUs for the prevention and combating of money laundering, associated predicate offences and terrorist financing, as well as measures to facilitate cooperation between FIUs. 80 With the amendment, one more type of measure will be added, namely 'technical measures to facilitate the use of transaction records by competent authorities for the prevention, detection, investigation or prosecution of serious criminal offences'. 81 In connection with this change, the Directive on Access to CBA Registries added a legal definition of the term 'transaction records', which refers to 'the details of operations which have been carried out during a defined period through a specified payment account (. . .), or the details of transfers of crypto-assets (. . .)'. 82 In practice, this will mean detailed logs which document the specifics of financial transactions and, therefore, provide insights into the flow of funds and can help trace illicit financial activities. Under the Directive on Access to CBA Registries, obliged entities across the EU will have to reply to requests for transaction records in a standardised format. 83 This should enable competent authorities to process incoming information more swiftly and effectively and, thus, overcome the current challenges posed by the cumbersome analysis of inconsistently presented transaction records. Additionally, it could incentivise the development of AI tools to further streamline analytical processes.
While this last-discussed instrument intends to harmonise the formats of transaction records provided by relevant obliged entities to the competent (criminal justice system) authorities, similar considerations concern transaction records shared with the FIUs. The disparity of formats in which obliged entities submit transaction records to the FIUs does not make these records readily usable for analysis. This, in addition to the institutional concerns discussed earlier, has been hampering the exchange of information among FIUs and the development of cross-border financial analyses. 84 This means that there may still be considerable room for action by AMLA and the European Commission to, respectively, develop and adopt implementing technical standards 85 – leaving, for the time being, a certain degree of uncertainty.
It follows that even though neither the 6AMLD nor the Directive on Access to CBA Registries explicitly acknowledges the possibility of the use of AI, the aim to create greater coherence in the input received by competent authorities can be identified. This, in turn, can be seen as an indication that there is room and incentive for further automation of analytical processes, which can be amplified by a positive development towards well-structured data sets.
The governance of AI in the EU: A brief overview of the AI Act
Before proceeding with a more detailed analysis of the legal aspects of the use of AI in the AML/CFT context, it is necessary to first provide a brief overview of the general EU framework for AI, namely the AI Act.
First and foremost, it is important to recognise that the AI Act represents a political compromise between competing visions of innovation, including the desire to foster Europe's technological progress and competitiveness, and the European Union's core values, particularly its commitment to protecting fundamental rights. As affirmed in Article 1(1) of the AI Act, the resulting text of the regulation therefore embodies an attempt to reconcile the efforts to 'improve the functioning of the internal market' and 'promote the uptake of human-centric and trustworthy [AI]'. 86 Consequently, choices critical both for the market and for the effective protection of fundamental rights – and, as it seems, not always consistent ones – had to be made in the legislative process. 87
The AI Act establishes rules for the entire lifecycle of AI systems using a risk-based approach. This means that the type and content of the applicable rules must reflect the intensity and scope of the risks specific AI systems may pose to health, safety or fundamental rights – from the very beginning when the systems are designed, through the development phase, and up to their deployment. 88 Taking account of the intended purpose of an AI system, the AI Act distinguishes between four categories of risk: unacceptable, high, limited and minimal. With the goal of protecting individuals from the detrimental consequences of the use of AI, the AI Act prohibits certain AI practices deemed harmful or unethical (Article 5), lays down detailed requirements for AI practices considered high-risk (Chapter III) and outlines transparency obligations for providers and deployers of technologies presenting limited risk. 89
The risk-based approach in relation to health and safety aligns the AI Act with the Union's well-established regulatory regimes governing product safety. However, extending this approach to fundamental rights – particularly when allowing for self-assessment – clearly exhibits the aforementioned inherent tensions within the AI Act and has, therefore, attracted considerable criticism due to the 'mismatch between product safety means and human rights ends'. 90 While the AI Act renders access to the single market conditional on an ex ante conformity declaration (for high-risk AI systems), the protection of human rights always requires a context-sensitive proportionality assessment. 91 Product safety regulations – characterised by a pragmatic approach focused on the removal of market obstacles – set a minimum standard, whilst value-driven fundamental rights law demands the choice of the least intrusive measure justifiable in the given circumstances and provides substantive rights with enforcement mechanisms.
The AI Act seeks to combine these two approaches within a single legal instrument. It does so, for instance, by establishing a set of requirements aimed at averting the harm that AI systems can cause to individuals. 92 High-risk AI systems must thus comply with these requirements to be permitted on the EU market, and therefore to be used also by obliged entities or FIUs operating in the Member States. 93
Bridging the AI Act and AML/CFT Legal Framework from the perspective of fundamental rights protection
AI Act’s ‘AI systems’ for AML/CFT purposes
To continue the discussion on combatting financial crime with the use of AI, it is pertinent to illuminate the term 'AI system', which has been mentioned on several occasions – particularly in the part which addressed the reference to 'AI system' in the AML Regulation. 94 Article 3 point 1 of the AI Act provides that an AI system is 'a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments'. 95
The fact that technology may have the ability to generate yet unknown knowledge from large and complex datasets – such as fraudulent transactional patterns invisible to human analysts – makes AI systems a promising tool in the field of AML/CFT. The categories of output enumerated in the legal definition of an AI system do not seem to be exhaustive or necessarily mutually exclusive, but they may affect the reading of Article 76(5) of the AML Regulation and create further uncertainty (see discussion in Section '"AI systems" for AML/CFT compliance according to the AML Regulation: one provision, many doubts'). The interpretation of the word 'decisions' – used in the first sentence of Article 76(5) of the AML Regulation – and of 'decision' in indent (b) is ambiguous in light of the definition of the AI system under the AI Act. In the first instance, it is unclear whether the legislator allows obliged entities to rely on AI-generated decisions, or merely to use AI-generated predictions or recommendations as input for decisions made by humans (eg, an AML/CFT compliance officer). The former interpretation – closer to how Recital 150 of the AML Regulation could be read – suggests the possibility of fully automating AML/CFT processes, while the latter implies that human judgement may – or ought to – still play the dominant role in these processes. This distinction is not merely technical. Full automation could raise crucial fundamental rights concerns, such as bias resulting in discrimination, lack of transparency and accountability of AML/CFT processes, or contestability of their outcomes, and potentially infringe the obligation to enable human oversight enshrined in Article 14 of the AI Act.
As regards the word 'decision' referred to in Article 76(5)(b) AMLR, should it mean a decision in the sense of a type of output generated by the AI system, then the requirement of 'meaningful human intervention' would render this provision intrinsically contradictory. A logical reading of Article 3(1) of the AI Act requires one to assume that the 'varying levels of autonomy' presume the existence of some level of autonomy. Therefore, if the 'decision' in Article 76(5)(b) of the AML Regulation were to mean a 'decision' under Article 3(1) of the AI Act, the 'meaningful human intervention' could exclude the discussed AI application from the scope of the definition of the AI system and, hence, the AI Act.
Another important feature of AI systems under the AI Act is their adaptiveness. Although it does not seem to be a compulsory characteristic of an AI system, in practice it might be the system's ability to adjust its behaviour during use that distinguishes it from basic software or merely rule-based programs, which already automate certain processes in the area of AML/CFT. Based on the earlier discussion, the greatest potential for effectively and efficiently achieving the goals of the AML/CFT policy may lie in the self-learning capabilities of AI systems. This, however, comes along with a fundamental question – both technical and ethical – about any AI system's ability to identify illicit money flows (at least where the realisation of the AML policy goal is concerned) and, what seems even more problematic, its ability to predict the occurrence of the illegal activity of terrorism financing (where the CFT part of the policy is concerned). 96
Prohibited AI practices in the area of AML/CFT
As explained in Section 'The governance of AI in the EU: a brief overview of the AI Act', the AI Act prohibits 'AI practices' that present an unacceptable risk. This implies that AI systems used to perform 'prohibited AI practices' are barred from the EU market and must not be used, for instance, by financial institutions or public bodies such as FIUs in the EU. The prohibited AI practices enumerated in Article 5(1) of the AI Act include one – under indent (d) – that is of great relevance to the AML/CFT policy. The prohibition in question concerns 'an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics'. 97 The corresponding Recital 42 clarifies that 'AI systems using risk analytics to assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions' are not covered by this prohibition. Neither are 'AI systems used to support the human assessment of the involvement of a person in a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity'. 98
Although at first glance the AI crime prediction prohibition appears to be primarily of interest to law enforcement authorities tasked with crime prevention, the scope of the prohibition under Article 5(1)(d) of the AI Act may extend to other actors, including private ones. Such an interpretation was confirmed by the European Commission in its Guidelines on Prohibited AI Practices. 99 The document explicitly states that the prohibition may apply to obliged entities that use AI to assess or predict a person's risk of committing a crime when fulfilling obligations under the AML/CFT legal regime. 100 Further on, however, the Commission drew a somewhat convoluted distinction based not on the character of the AML/CFT tasks performed by obliged entities but on the type of data used by the AI system. Accordingly, it explained that if the assessment or prediction of risk relies on data prescribed by law, which – so it seems for the Commission – a priori constitutes objective and verifiable facts, it is not considered a prohibited practice. Conversely, when the assessment or prediction is based on individual profiling or the evaluation of personality traits and characteristics, it amounts to a prohibited practice. The Commission's reasoning appears to rest on the assumption that, to fall outside the scope of the prohibition established by Article 5(1)(d) of the AI Act, it is sufficient that the obliged entity acts on the basis of the binding AML/CFT framework and that the AI system produces its outcome merely on data the collection of which is prescribed by that framework. 101 In a similar manner, the Commission explains that if the AI system is used by a private entity to make risk assessments in the 'ordinary course of business and with the aim of protecting their own private interest' – including when the assessment relates to the risk of criminal offences – it also remains outside the scope of the prohibition. 102
This approach may certainly provide a convenient assurance of compliance with Article 5(1)(d) of the AI Act for obliged entities, but, in fact, it exemplifies a rather simplistic understanding of the much more nuanced and multifaceted nature of the AML/CFT regime, as well as of the complex role obliged entities play in its architecture. While the AML/CFT legal regime makes obliged entities 'gatekeepers' to the legitimate economy and 'frontliners' on the financial crime battlefield, 103 they are not entrusted with public authority or powers of criminal law enforcement, and they remain profit-oriented institutions whose economic interests cannot be entirely set aside. Furthermore, it is doubtful, and may be dangerous, to assume that data collected on the basis of the law can automatically warrant its objective and verifiable character. There is room for subjective assessment, for instance, where the 'information on the nature of the customers' business' 104 is concerned. In addition, the modern AML/CFT framework adopts a risk-based rather than a rule-based approach, not only demanding a considerable degree of (pro)activity of the obliged entities but also making it challenging for them to rely solely on objective and verifiable facts in fulfilling the assigned tasks. Therefore, every AI AML/CFT compliance tool should undergo a careful assessment to confirm that it does not fall under the category of prohibited AI practices under Article 5(1)(d) of the AI Act.
Having said that, it must be observed that to fall under the prohibition established by Article 5(1)(d) of the AI Act, the AI system must assess or be based on the profiling of a 'natural person'. Thus, as also reiterated in Recital 42, an AI system used for AML/CFT compliance may be permitted when assessing risks related to corporate entities, but it could be prohibited when used to assess risks related to individuals.
High-risk AI systems in the area of AML/CFT
The AI Act accepts the presence of high-risk AI systems on the EU market, imposing an array of obligations on stakeholders, including developers and deployers, applicable throughout the lifecycle of these systems. 105 In principle, AI systems presenting high risk are divided into two categories: (1) systems that are products or safety components of products subjected to ex ante conformity assessment under existing EU law 106 and (2) stand-alone systems deployed in one of the specific areas exhaustively enumerated in Annex III to the AI Act. 107 Despite a system's listing in Annex III, providers have some leeway to self-assess that their specific system 'does not pose a significant risk of harm to (. . .) fundamental rights', 108 except where the AI system performs profiling of natural persons. In practice, this may potentially allow the industry to evade the rules applicable to high-risk AI systems, once again revealing the tensions existing between the AI Act's objectives and suggesting that the AI Act may be insufficiently focused on fundamental rights protection. 109
Among the areas included in Annex III, two appear to deserve attention in the context of the discussion about AML/CFT. Firstly, the area of access to and enjoyment of essential private services and essential public services and benefits is relevant – more specifically, the systems referred to in point 5(b) of Annex III to the AI Act. This point concerns high-risk 'AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud'. 110
While AI systems used to assess creditworthiness are classified as high-risk and, therefore, subject to a range of obligations and requirements, AI systems for detecting financial fraud are not held to the same regulatory standard. This distinction is remarkable, as the difference between the purposes of credit assessment and fraud detection does not appear to convincingly justify the exclusion of the latter from the scope of the AI Act's rules for high-risk AI systems. 111 In practice, both types of systems can significantly impact individuals, as both entail a risk of financial exclusion, which is exactly what point 5 of Annex III intends to prevent.
It must be noted that unlike Recital 42, which explicitly exempts ‘AI systems using risk analytics to assess the likelihood of financial fraud’ from the scope of the prohibition when such analytics concern undertakings and are performed on the basis of suspicious transactions, Point 5(b) of Annex III mentions only ‘the purpose of detecting financial fraud’ without any further specification. This makes it very broad and potentially susceptible to abuse. One cannot yet say how the exclusion of AI systems for detection of financial fraud from the category of high-risk AI system will affect the systems used for AML/CFT purposes as a whole, as the relationship between fraud detection and AML/CFT policy is not straightforward. While detecting financial fraud is certainly an essential component of the AML/CFT policy, it seems to be a significantly narrower type of activity within the financial sector. Against this backdrop, crucial questions arise concerning the rationale and practical consequences of casuistically referring to ‘fraud detection’ in Point 5(b) of Annex III while leaving out other closely related activities such as prevention of illicit money flows, which could encompass fraud prevention but also prevention of terrorism financing or evasion of sanctions.
The second area of Annex III that should be taken into consideration in the context of the discussion on potentially high-risk AI systems in the realm of AML/CFT is the one referred to in Point 6 of Annex III to the AI Act, which concerns law enforcement. 112 The present analysis focuses on the AML/CFT policy and does not intend to expand to the point where the AML/CFT efforts prompt the commencement of national criminal procedure. Yet, such efforts inevitably encompass activities carried out by the FIUs, which are primarily focused on the analysis of information on suspicious transactions, often acting as quasi-law enforcement authorities. Whilst the problem of the diversity of FIUs across Member States, and the anticipated failure of the revised AML/CFT framework – in particular the 6AMLD – to resolve it, were already briefly addressed in this article, it is worth noting that – rather unexpectedly – the AI Act may come as a solution. Recital 59 of the AI Act explicitly mentions FIUs and excludes their analysis of information pursuant to EU AML/CFT law from the scope of high-risk AI systems used by law enforcement authorities for the purpose of prevention, detection, investigation and prosecution of criminal offences. It does so by – somewhat arbitrarily, in my view – stating that FIUs' analytical tasks are 'administrative' – as opposed to law enforcement – tasks. In practice, this means that AI systems developed or deployed by FIUs for the assessment of suspicious transactions should not be regarded as high-risk AI systems and are therefore not subjected to the relevant obligations. As a consequence, the safeguards developed by the AI Act will not apply, leaving individuals affected by the AI measures deployed by FIUs deprived of the standards of protection required for high-risk AI systems.
A ‘meaningful human intervention’ versus ‘human oversight’
Finally, the considerations on high-risk AI systems and the fundamental rights aspect of the AI Act link with the central safeguard established for such systems, namely the requirement of 'human oversight' laid down in Article 14 of the AI Act. The provision stipulates that '[h]igh-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use'. 113 Article 26(2) of the AI Act obliges deployers to assign human oversight to a qualified and sufficiently supported natural person. Although some requirements for the oversight are formulated, the AI Act does not prescribe them in great detail. 114 It requires human oversight to aim at averting or limiting the relevant risks, including to fundamental rights, and to be 'commensurate with the risks, level of autonomy and context of use of the high-risk AI system'. 115
As was explained in Section '"AI systems" for AML/CFT compliance according to the AML Regulation: one provision, many doubts', Article 76(5) of the AML Regulation requires the decisions referred to therein to be subject to 'meaningful human intervention to ensure [their] accuracy and appropriateness', showing that despite the parallel development of both frameworks little attention was given to achieving uniformity across the regulated areas. Inasmuch as 'effective' human oversight can arguably be considered synonymous with 'meaningful', it is far from clear what the relationship between 'oversight' and 'intervention' is. 116 'Intervention' may suggest a higher standard of protection as it implies a need for human involvement, that is, validation of an output produced by the AI system before such an output influences a physical or virtual environment (human 'in-the-loop'). On the other hand, 'oversight' may require only monitoring of the decision-cycle without necessarily involving a human in the production of the output (human 'on-the-loop'). For the deployment of AI in the AML/CFT context, this would mean that, at least as far as the obliged entity's decision on derisking or the choice of the suitable level of customer due diligence is concerned, Article 76(5) of the AML Regulation sets a potentially higher threshold than the AI Act. The same cannot be said about other types of decisions resulting from processes involving AI systems made by obliged entities, or about any other AI applications used for AML/CFT purposes by other stakeholders, including FIUs or supervisory authorities – assuming that at least some of them will fall under the category of high-risk AI systems. The above shows the magnitude of the regulatory complexities which stakeholders deploying AI in the area of AML/CFT will have to navigate.
Even more importantly, however, many of the above identified regulatory shortcomings may negatively impact the protective human rights standards.
Conclusion
The integration of AI in the realm of AML/CFT is not just a trend. It is an essential shift in how financial institutions and regulators approach the fight against financial crime, one which presents both significant opportunities and challenges. The recently revised EU AML/CFT framework and the landmark EU AI Act set the stage for an intricate relationship between technological innovation and regulatory compliance. All these new legal instruments demonstrate the increasing complexity of interactions within EU law, resulting from the Union's policy ambition to balance technological advancements – including those developed in pursuit of the AML/CFT policy objectives – with the prevention of the potentially harmful effects of AI systems. This growing intricacy stands in contrast to recent political and institutional calls for deregulation, which advocate for reducing regulatory burdens and enhancing legal clarity. 117
What the present analysis has revealed is that, despite the nearly parallel adoption of both legal frameworks, a considerable degree of inconsistency can be identified, leaving room not only for conceptual doubt but, more importantly, also for a number of practical pitfalls that AML/CFT actors deploying AI solutions will likely stumble upon. They will have the challenging task of reconciling often inconsistent, if not contradictory, rules. The ambiguities emerge not only at the crossroads of the revised AML/CFT regime and the AI Act, but even internally within individual AML/CFT instruments, as exemplified i.a. by Article 76(5) and Recital 150 of the AML Regulation. Even the modest references to AI in the revised AML/CFT framework are sufficient to cast doubt on the standards of legal protection to which individuals exposed to AML/CFT measures should be entitled. The AI Act, which establishes the general EU regime for AI governance, does not per se address the use of AI for the purposes of combating financial crime, but some traces of long lobbying, possibly involving the AML/CFT community, can be identified. An interesting example of this concerns fraud detection: AI systems used for this purpose are explicitly exempted from the category of prohibited AI practices by Recital 42 (when applied to undertakings), but also from the category of high-risk systems by Point 5(b) of Annex III and Recital 58 of the AI Act. Recital 59, in turn, excludes from the list of high-risk AI systems those used by FIUs to carry out their statutory activities ('administrative tasks analysing information'). These exemptions from the general rules designed to sustain the AI Act's objective to 'promote the uptake of human-centric and trustworthy [AI]' 118 are notable and worrisome. They significantly decrease the scope and availability of the protective standards enshrined in the AI Act and, thus, diminish the meaning of the fundamental rights objective of this pioneering regulation. 119
Given the character and level of intrusion of the AML/CFT policy measures into the protective realm of fundamental rights, it is regrettable that the newly adopted rules applicable to technological advancements developed in pursuit of the AML/CFT policy goals have not been more carefully balanced against the need to uphold effective safeguards for individuals. At the same time, this means that there is a great role to be played by the EU Court of Justice in interpreting the AI Act and enhancing the protection it ought to offer in the light of the standards enshrined in EU primary law, most notably the Charter of Fundamental Rights.
Footnotes
Acknowledgements
The author would like to thank Eleni Kosta and João Quintais and two anonymous reviewers for their insightful comments on an earlier version of this article. Any remaining errors or omissions are the sole responsibility of the author.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the AlgoSoc project funded by the Dutch Ministry of Education, Culture and Science (OCW) (Gravitation 024.005.017).
1.
2.
Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) PE/24/2024/REV/1 OJ L, 2024/1689, 12 July 2024.
3.
For the AI Act, see, for example, Francesca Palmiotto, ‘The AI Act Roller Coaster: The Evolution of Fundamental Rights Protection in the Legislative Process and the Future of the Regulation’ (2025) 16 European Journal of Risk Regulation 770; Nathalie A Smuha and Karen Yeung, ‘The European Union’s AI Act: Beyond Motherhood and Apple Pie?’ in Nathalie A Smuha (ed.), The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence (1st edn, Cambridge University Press 2025) 229–34. To the knowledge of the author, the complexities of the political debates that took place in the context of work on the AML/CFT regulatory package have not yet been comprehensively discussed in the literature. They can, however, be reconstructed from the many policy documents forming the reform’s travaux préparatoires available for procedures: 2021/0239/COD, 2021/0240/COD, 2021/0241/COD and 2021/0250/COD.
4.
For the AI Act see, for example, David Trier, ‘Why You Need To Prepare Now For The EU AI Act’ (Forbes, 2024) <https://www.forbes.com/councils/forbestechcouncil/2024/10/08/why-you-need-to-prepare-now-for-the-eu-ai-act/> accessed 18 October 2024; for the AML/CFT framework, for example, Leen Groen, ‘Anti-Money Laundering: Preparing for the Regulatory Tsunami – KPMG Netherlands’ (KPMG, 19 September 2024) <
> accessed 18 August 2025.
5.
Varun Jain and others, ‘Leveraging Artificial Intelligence for Enhancing Regulatory Compliance in the Financial Sector’ (2024) 72 International Journal of Computer Trends and Technology 124; Charles Kerrigan and Antonia Bain, ‘Artificial Intelligence in Financial Services’ (2024) 59 Journal of Financial Transformation 158.
6.
The potential of technological innovation in the area of AML/CFT has made some argue that fighting financial crime with AI has ceased to be merely a trend and has essentially become a necessity – Niall Twomey, ‘Fighting Financial Crime with AI Is Not a Trend – It’s a Necessity’ (Forbes, 7 February 2024) <
> accessed 28 March 2024; or ‘a strategic enhancement to the core functionalities of regulatory compliance systems’ – Jain and others, ‘Leveraging Artificial Intelligence’ (n 5); see also Chris Stears and Joshua Deeks, ‘Editorial: The Use of Artificial Intelligence in Fighting Financial Crime, for Better or Worse?’ (2023) 26 Journal of Money Laundering Control 433.
7.
The goal of combatting financial crime can be realised by a broader range of mechanisms reaching beyond AML/CFT policy, encompassing, for instance, financial (including eg insurance) market regulations and national corporate, tax or criminal law mechanisms. For the purpose of this paper, financial crime control-relevant obligations are limited, however, to the obligations arising from the AML/CFT framework.
8.
Marc Parker and Max Taylor, ‘Financial Intelligence: A Price Worth Paying?’ (2010) 33 Studies in Conflict & Terrorism 949.
9.
10.
The revised AML instruments were adopted on 31 May 2024 and the AI Act on 13 June 2024.
11.
For more, see, for example, Valsamis Mitsilegas and Niovi Vavoula, ‘The Evolving EU Anti-Money Laundering Regime: Challenges for Fundamental Rights and the Rule of Law’ (2016) 23 Maastricht Journal of European and Comparative Law 261.
12.
Directive (EU) 2015/849 of the European Parliament and of the Council of 20 May 2015 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, amending Regulation (EU) No 648/2012 of the European Parliament and of the Council, and repealing Directive 2005/60/EC of the European Parliament and of the Council and Commission Directive 2006/70/EC, OJ L 141, 5 June 2015, pp. 73–117.
13.
Directive (EU) 2018/843 of the European Parliament and of the Council of 30 May 2018 amending Directive (EU) 2015/849 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, and amending Directives 2009/138/EC and 2013/36/EU, OJ L 156, 19 June 2018, pp. 43–74.
14.
Directive (EU) 2019/1153 of the European Parliament and of the Council of 20 June 2019 laying down rules facilitating the use of financial and other information for the prevention, detection, investigation or prosecution of certain criminal offences, and repealing Council Decision 2000/642/JHA, OJ L 186, 11 July 2019, pp. 122–137.
15.
Regulation (EU) 2018/1672 of the European Parliament and of the Council of 23 October 2018 on controls on cash entering or leaving the Union and repealing Regulation (EC) No 1889/2005 PE/49/2018/REV/1 OJ L 284, 12 November 2018.
16.
For more on the administrative and criminal law nature of EU AML/CFT measures see: Maria Bergström, ‘The EU’s Fight Against Money Laundering and Terrorist Financing in a Digital and Fragmented World’ in Antonina Bakardjieva Engelbrekt and others (eds), The Borders of the European Union in a Conflictual World: Interdisciplinary European Studies (Springer Nature Switzerland 2024) 188 <
> accessed 24 July 2025.
17.
Directive (EU) 2018/1673 of the European Parliament and of the Council of 23 October 2018 on combating money laundering by criminal law PE/30/2018/REV/1 OJ L 284, 12 November 2018, pp. 22–30.
18.
Directive 2014/42/EU of the European Parliament and of the Council of 3 April 2014 on the freezing and confiscation of instrumentalities and proceeds of crime in the European Union OJ L 127, 29 April 2014, pp. 39–50.
19.
20.
ibid.
21.
Proposal for a Regulation of the European Parliament and of the Council on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing of 20 July 2021 [COM(2021) 420 final].
22.
Proposal for a Regulation of the European Parliament and of the Council establishing the Authority for Anti-Money Laundering and Countering the Financing of Terrorism and amending Regulations (EU) No 1093/2010, (EU) 1094/2010, (EU) 1095/2010 of 20 July 2021 [COM(2021) 421 final].
23.
Proposal for a Regulation of the European Parliament and of the Council on information accompanying transfers of funds and certain crypto-assets (recast) of 20 July 2021 [COM(2021) 422 final].
24.
Proposal for a Directive of the European Parliament and of the Council on the mechanisms to be put in place by the Member States for the prevention of the use of the financial system for the purposes of money laundering or terrorist financing and repealing Directive (EU) 2015/849 of 20 July 2021 [COM(2021) 423 final].
25.
Regulation (EU) 2023/1113 of the European Parliament and of the Council of 31 May 2023 on information accompanying transfers of funds and certain crypto-assets and amending Directive (EU) 2015/849 (Text with EEA relevance) PE/53/2022/REV/1 OJ L 150, 9 June 2023, pp. 1–39.
26.
Regulation (EU) 2024/1624 of the European Parliament and of the Council of 31 May 2024 on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing (Text with EEA relevance) PE/36/2024/REV/1 OJ L, 2024/1624, 19 June 2024.
27.
Regulation (EU) 2024/1620 of the European Parliament and of the Council of 31 May 2024 establishing the Authority for Anti-Money Laundering and Countering the Financing of Terrorism and amending Regulations (EU) No 1093/2010, (EU) No 1094/2010 and (EU) No 1095/2010 (Text with EEA relevance) PE/35/2024/INIT OJ L, 2024/1620, 19 June 2024.
28.
Directive (EU) 2024/1640 of the European Parliament and of the Council of 31 May 2024 on the mechanisms to be put in place by Member States for the prevention of the use of the financial system for the purposes of money laundering or terrorist financing, amending Directive (EU) 2019/1937, and amending and repealing Directive (EU) 2015/849 (Text with EEA relevance) PE/37/2024/INIT OJ L, 2024/1640, 19 June 2024.
29.
Directive (EU) 2024/1654 of the European Parliament and of the Council of 31 May 2024 amending Directive (EU) 2019/1153 as regards access by competent authorities to centralised bank account registries through the interconnection system and technical measures to facilitate the use of transaction records PE/44/2024/REV/1 OJ L, 2024/1654, 19 June 2024.
30.
Bergström, ‘The EU’s Fight Against Money’ (n 16) 194.
31.
Thomas Wahl, ‘The EU’s New AML Single Rulebook Regulation’ [2024] Eucrim 117.
32.
Anna Pingen, ‘New Rules for Crypto-Assets in the EU’ [2023] Eucrim 143.
33.
Consolidated version of the Treaty on the Functioning of the European Union (‘TFEU’), OJ C 326, 26 October 2012, Article 288.
34.
ibid.
35.
Fabio A Siena, ‘The European Anti-Money Laundering Framework – At a Turning Point? The Role of Financial Intelligence Units’ (2022) 13 New Journal of European Criminal Law 216.
36.
Under the current framework, the legal definition of the FIU can be constructed from Article 32(3) of the 4AMLD. Accordingly, an FIU is an operationally independent and autonomous central national unit responsible for receiving and analysing suspicious transaction reports and other relevant information and disseminating the results of its analyses and any additional relevant information to the competent authorities where there are grounds to suspect money laundering, associated predicate offences or terrorist financing. The 6AMLD essentially repeats the same in Article 19(2) and (3) but, just like its predecessor, fails to specify details or lay down uniform criteria for the character and institutional embedding of the FIUs. For details on the problem of FIUs’ cooperation within the EU caused by the diversity of FIU models (administrative, law enforcement and hybrid), see, for example, Foivi Mouzakiti, ‘Cooperation between Financial Intelligence Units in the European Union: Stuck in the Middle between the General Data Protection Regulation and the Police Data Protection Directive’ (2020) 11 New Journal of European Criminal Law 351; Magdalena Brewczyńska, ‘Financial Intelligence Units: Reflections on the Applicable Data Protection Legal Framework’ (2021) 43 Computer Law & Security Review 105612.
37.
Siena, ‘The European Anti-Money’ (n 35) 244.
38.
European Parliament legislative resolution of 24 April 2024 on the proposal for a regulation of the European Parliament and of the Council on the prevention of the use of the financial system for the purposes of money laundering or terrorist financing (COM(2021)0420 — C9-0339/2021 — 2021/0239(COD)) P9_TA(2024)0365.
39.
European Parliament legislative resolution of 24 April 2024 on the proposal for a regulation of the European Parliament and of the Council establishing the Authority for Anti-Money Laundering and Countering the Financing of Terrorism and amending Regulations (EU) No 1093/2010, (EU) 1094/2010, (EU) 1095/2010 (COM(2021)0421 — C9-0340/2021 — 2021/0240(COD)) P9_TA(2024)036.
40.
Derville Rowland, ‘Innovation and Technology in Financial Crime’ (The 9th Afore Annual FinTech and Regulation Conference, Brussels, 4 February 2025).
41.
Wahl, ‘The EU’s New AML’ (n 31).
42.
AML Regulation, Article 1.
43.
Kerrigan and Bain, ‘Artificial Intelligence’ (n 5) 159; On the use of AI for transaction monitoring see: Umut Turksen, Vladlena Benson and Bogdan Adamyk, ‘Legal Implications of Automated Suspicious Transaction Monitoring: Enhancing Integrity of AI’ (2024) 25 Journal of Banking Regulation 359.
44.
Martin Gill and Geoff Taylor, ‘Preventing Money Laundering or Obstructing Business?: Financial Companies’ Perspectives on “Know Your Customer” Procedures’ (2004) 44 The British Journal of Criminology 582.
45.
For discussion on the notion of ‘AI systems’ under the AI Act see Section ‘AI Act’s “AI systems” for AML/CFT purposes’.
46.
AML Regulation, Article 76(5).
47.
ibid.
48.
Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance) OJ L 119, 4 May 2016, pp. 1–88.
49.
AML Regulation, Recital 150.
50.
See, for example, Case C-418/18, Puppinck and Others v. Commission [2019] ECLI:EU:C:2019:1113, where the Court clarified that ‘[Par. 75] [t]he preamble of an EU act may explain the content of the provisions of that act (see, to that effect, judgment of 10 January 2006, IATA and ELFAA, C‑344/04, EU:C:2006:10, paragraph 76). (. . .), the recitals of an EU act constitute important elements for the purposes of interpretation, which may clarify the intentions of the author of that act. [Par. 76] However, the preamble to an EU act has no binding legal force and cannot be relied on as a ground either for derogating from the actual provisions of the act in question or for interpreting those provisions in a manner that is clearly contrary to their wording (see, to that effect, judgment of 24 November 2005, Deutsches Milch-Kontor, C-136/04, EU:C:2005:716, paragraph 32 and the case-law cited)’.
51.
AML Regulation, Article 76(5)(a).
52.
For more on the problem of data-related constraints on AI in the area of AML/CFT, see, for example, Dattatray Vishnu Kute and others, ‘Deep Learning and Explainable Artificial Intelligence Techniques Applied for Detecting Money Laundering: A Critical Review’ (2021) 9 IEEE Access 82300, 82312.
53.
AML Regulation, Article 76(5)(b).
54.
See Section ‘AI Act’s “AI systems” for AML/CFT purposes’ below.
55.
56.
AML Regulation, Article 75(4)(g).
57.
AML Regulation, Article 76(1)(c).
58.
For a comprehensive discussion on the concept of explanation and the ‘black box’ problem, see, for example, Federico Cabitza and others, ‘Quod Erat Demonstrandum? – Towards a Typology of the Concept of Explanation for the Design of Explainable AI’ (2023) 213 Expert Systems with Applications 118888; see also Ouren Kuiper and others, ‘Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities’ in Luis A Leiva and others (eds), Artificial Intelligence and Machine Learning (Springer International Publishing 2022).
59.
According to some authors, given the current state of the art, as the accuracy of the predictions offered by an AI model increases, the interpretability of the produced outcomes usually decreases – see Kute and others, ‘Deep Learning and Explainable Artificial Intelligence’ (n 52) 82309.
60.
Stanisław Tosza and Olivier Voordeckers, ‘An Anti-Money Laundering Authority for the European Union: A New Center of Gravity in AML Enforcement’ (2024) 25 ERA Forum 405, 406.
61.
The tasks enumerated in Article 5(1) of the AMLA Regulation.
62.
The tasks enumerated in Article 5(2) of the AMLA Regulation.
63.
The tasks enumerated in Article 5(3) and (4) of the AMLA Regulation.
64.
The tasks enumerated in Article 5(5) of the AMLA Regulation.
65.
AMLA Regulation, Article 5(5)(i) (emphasis added).
66.
AMLA Regulation, Recital 9.
68.
For a detailed analysis of the problems with FIU.net and solution under the AMLA Regulation, see: Eleni Kosta, ‘The Proposed Anti-Money Laundering Authority and the Future of FIU Collaboration in Europe’ in Maria Bergström and Valsamis Mitsilegas (eds), EU Law in the Digital Age: Swedish Studies in European Law (1st edn, Hart Publishing 2025) <
> accessed 15 August 2025.
69.
AMLA Regulation, Recital 45.
70.
The first version of such a strategy is expected by the end of 2025; see AMLA, ‘AMLA Work Programme 2025’ (n 67) 18.
71.
72.
AI Act, Article 70(9).
73.
Kosta, ‘The Proposed Anti-Money’ (n 68) 132.
74.
6AMLD, Article 1.
75.
76.
For instance, in India (The Economic Times India, ‘AI: Financial Intelligence Unit Arms Itself with AI, ML Tools to Check Money Laundering’ (The Economic Times, 5 May 2024) <https://economictimes.indiatimes.com/tech/technology/financial-intelligence-unit-arms-itself-with-ai-ml-tools-to-check-money-laundering/articleshow/109860949.cms?from=mdr> accessed 28 March 2025) or in Australia, where in February 2025 the FIU issued an official statement about its use of AI, in which it explained that the FIU ‘uses generative AI tools to undertake research and discovery, and for workplace productivity purposes’ but it ‘has not yet deployed any use of AI which directly interacts with the public or is involved in decision making and administrative action without human intervention. This includes automated decision making and automated communication with [their] stakeholders’ (Austrac, Artificial Intelligence Transparency Statement 2025 <
> accessed 28 March 2025).
77.
The FIU in Germany has been using the AI component ‘FIU Analytics’ directly integrated into its operational analysis processes since the end of 2020. For the most recent plans, see Paul O’Donoghue, ‘NEWS: Germany’s FIU Examining How It Can Use AI to Fight Financial Crime’ (AML Intelligence, 23 October 2024) <
> accessed 28 March 2025.
78.
79.
To give an example, already back in 2019 the Central Bank of Brazil started developing a priority matrix, which has since been improved using machine learning and unsupervised learning techniques to identify the obliged entities that should be included in the yearly supervision plan (FATF, ‘FATF/Egmont Group, Digital Transformation’ (n 78)).
80.
Directive on Access to Financial and Other Information, Article 1(1).
81.
Directive on Access to CBA Registries, Article 1(1).
82.
Directive on Access to CBA Registries, Article 1(2)(b).
83.
Directive on Access to CBA Registries, Article 1(7) and Recital 8.
84.
AML Regulation, Recital 140.
85.
AML Regulation, Article 69(3).
86.
AI Act, Article 1(1).
87.
Palmiotto, ‘The AI Act Roller Coaster’ (n 3) 792.
88.
For the general discussion on the risk-based approach in the AI Act, see, for example, Martin Ebers, ‘Truly Risk-Based Regulation of Artificial Intelligence: How to Implement the EU’s AI Act’ (2025) 16 European Journal of Risk Regulation 684; or Isabel Kusche, ‘Possible Harms of Artificial Intelligence and the EU AI Act: Fundamental Rights and Risk’ [2024] Journal of Risk Research 1. For the discussion on the risk-based approach of the AI Act in the financial sector, see Gabriele Mazzini and Filippo Bagni, ‘Considerations on the Regulation of AI Systems in the Financial Sector by the AI Act’ (2023) 6 Frontiers in Artificial Intelligence 1277544.
89.
AI Act, Article 50.
90.
Marco Almada and Nicolas Petit, ‘The EU AI Act: Between the Rock of Product Safety and the Hard Place of Fundamental Rights’ (2025) 62 Common Market Law Review 103.
91.
ibid 106.
92.
The requirements for high-risk AI systems encompass, among others, the obligation to implement a risk management system (Article 9), to draw up and keep up-to-date technical documentation (Article 11), to maintain records (Article 12), to enable human oversight (Article 14) or – in specific cases – to conduct a fundamental rights impact assessment (Article 27).
93.
The scope of the application of the AI Act is determined in Article 2 thereof.
94.
See Section ‘“AI systems” for AML/CFT compliance according to the AML Regulation: One provision, many doubts’.
95.
AI Act, Article 3(1). Some clarification on the definition of the AI system can be found in Recital 12 and in the European Commission’s Guidelines on the definition of an artificial intelligence system established by Regulation (EU) 2024/1689 (AI Act), published on 6 February 2025.
96.
Comprehensively on the topic of prediction and the limitations of AI, see, for example, Arvind Narayanan and Sayash Kapoor, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference (Princeton University Press 2024).
97.
AI Act, Article 5(1)(d).
98.
ibid.
99.
European Commission, ‘Guidelines on Prohibited Artificial Intelligence Practices Established by Regulation (EU) 2024/1689 (AI Act)’ paras 207–208.
100.
ibid 209.
101.
According to the Guidelines, ‘Compliance with that legislation will ensure that the use of individual crime prediction AI system for anti-money laundering purposes fall outside the scope of the prohibition in Article 5(1)(d) AI Act’ (ibid 209).
102.
ibid 211.
103.
For discussion on the problematic role of private entities in the modern ‘chain of security’, see Marieke de Goede, ‘The Chain of Security’ (2018) 44 Review of International Studies 24.
104.
AML Regulation, Article 20(1)(e).
105.
Articles 8–15 of the AI Act set out requirements for high-risk AI systems. Articles 16–27 specify obligations for providers, deployers and other parties concerning high-risk AI systems.
106.
AI Act, Article 6(1) and Annex I.
107.
AI Act, Article 6(2) and Annex III.
108.
AI Act, Article 6(3).
109.
Smuha and Yeung, ‘The European Union’s AI Act’ (n 3) 238.
110.
AI Act, Annex III point 5(b).
111.
On AI credit scoring and evaluation of creditworthiness as a high-risk system, see, for example, Katja Langenbucher, ‘AI credit scoring and evaluation of creditworthiness – a test case for the EU proposal for an AI Act’ in Continuity and Change – How the Challenges of Today Prepare the Ground for Tomorrow (ECB Legal Conference 2021) (ECB 2020) 362, 367.
112.
Necula and Roebling, ‘Reflections on Introducing Artificial Intelligence’ (n 75) 211.
113.
AI Act, Article 14(1).
114.
With the exception of high-risk AI systems referred to in point 1(a) of Annex III, where Article 14(5) of the AI Act provides that ‘in addition, no action or decision is taken by the deployer on the basis of the identification resulting from the system unless that identification has been separately verified and confirmed by at least two natural persons with the necessary competence, training and authority’.
115.
AI Act, Article 14(2) and (3).
116.
Fink, ‘Human Oversight Under Article 14’ (n 55).
118.
AI Act, Recitals 1 and 176; and Article 1(1).
119.
Almada and Petit, ‘The EU AI Act’ (n 90) 103.
