Artificial Intelligence: Opportunities & Challenges in the Fight Against Money Laundering and Terrorist Financing

24 Nov 2021


Financial crime is evolving at an unprecedented pace, accelerated by globalization and new technologies. During the COVID-19 health crisis, for instance, criminals exploited its loopholes through multiple schemes: fraudulent use of emergency financing measures, a surge in cyberattacks and online scams, and more.


Financial institutions are deploying significant resources to identify suspicious transactions: they have invested in tools and implemented standard rules to monitor their customers' transactions. However, even at an advanced level of maturity, these systems are not always adapted to the risks inherent in their activities, and because they are inflexible, they cannot absorb exogenous shocks that suddenly change market and customer behavior. They often generate a high number of irrelevant alerts, or "false positives", creating additional work for analysts and significant cost for compliance departments.
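To illustrate why static rules produce false positives, the sketch below (hypothetical thresholds and data, not any institution's actual monitoring logic) flags every transaction above a fixed amount, regardless of whether that amount is normal for the customer concerned:

```python
# Minimal sketch of a static, rule-based transaction monitor.
# The threshold and transactions are hypothetical, for illustration only.

FLAT_THRESHOLD = 10_000  # alert on any transaction above this amount


def flag_transactions(transactions):
    """Return the transactions a fixed-threshold rule would alert on."""
    return [t for t in transactions if t["amount"] > FLAT_THRESHOLD]


transactions = [
    {"id": 1, "customer": "A", "amount": 12_000},  # large but routine for A
    {"id": 2, "customer": "B", "amount": 15_000},  # genuinely unusual for B
    {"id": 3, "customer": "A", "amount": 500},
]

alerts = flag_transactions(transactions)
# Both large transactions are flagged: the rule cannot tell that
# customer A routinely moves such amounts, so alert 1 is a false positive
# an analyst must still review and close.
```

Because the rule has no notion of customer context, every customer whose normal activity crosses the threshold becomes a recurring source of irrelevant alerts.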

Today, banks are looking for more flexible and resilient systems that can keep pace with changing criminal patterns and cope with drastic behavioral shifts.

Artificial Intelligence (AI) is an essential lever for making these systems more robust and anti-fragile. Its applications are many: dynamic segmentation, real-time model calibration, and detection of new crime patterns. It also improves the operational efficiency of alert management by giving compliance officers a better, broader view of the data needed for their analysis and by providing a risk assessment to support their decisions.
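As a hedged sketch of the contrast with flat rules, the example below (hypothetical data and scoring choice) scores a transaction against the customer's own historical behavior, so the same amount can be routine for one customer and highly anomalous for another:

```python
import statistics

# Hypothetical sketch: score a transaction relative to the customer's own
# history instead of a one-size-fits-all threshold. The z-score here stands
# in for the richer behavioral models a production system would use.

history = {
    "A": [9_000, 11_000, 12_500, 10_800],  # A routinely moves large sums
    "B": [300, 450, 500, 420],             # B normally moves small sums
}


def risk_score(customer, amount):
    """Distance of the amount from the customer's own mean, in std devs."""
    past = history[customer]
    mean = statistics.mean(past)
    stdev = statistics.pstdev(past) or 1.0  # guard against zero variance
    return abs(amount - mean) / stdev


# The same 12,000 transfer is unremarkable for A but highly anomalous for B.
score_a = risk_score("A", 12_000)
score_b = risk_score("B", 12_000)
```

A behavior-relative score of this kind is one simple way to cut false positives on customers with legitimately high activity while still surfacing genuine outliers; it can also feed a risk assessment shown to the analyst alongside the alert.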

Although the barriers to AI implementation remain numerous, more and more financial institutions are in a proof-of-concept (POC) phase: AI for AML/CFT is bound to expand rapidly in the coming months and years, until it becomes a new standard. It is crucial that this development comply with the rules of ethics and algorithmic explainability. Financial institutions owe it to their stakeholders (management, supervisory bodies, end customers) to ensure that the models they develop and their results can be explained, that their evolution can be traced, and that they are free of bias. Meeting these challenges requires human intervention: a person must validate the model, the data on which it is trained, the results it generates, and every change that affects it. Clear governance must therefore be established to keep humans at the heart of AI frameworks.

PwC France and Maghreb, with the support of PwC's international Financial Crime network, has conducted a global benchmark of AML/CFT transaction-monitoring practices and the use of AI.

We share our findings and convictions, and support you from strategy through implementation of your Artificial Intelligence systems in response to the challenges of the fight against money laundering and terrorist financing.



Sébastien d'Aligny


Partner, Financial Crime, PwC France and Maghreb
