AI agents in financial crime prevention: an opportunity or a risk?

AI agents in the fight against financial crime: yay or nay? Here is what Joe Biddle, UK Market Director at Trapets, thinks.

Gabriela Taranu

Content Manager
Published 2025-05-23

As AI enters most fields of work, there is much discussion about the role of AI agents in financial crime prevention. But how much can we trust AI to improve compliance?

This is a question Joe Biddle, UK Market Director at Trapets, addressed at the ICA Future of FinCrime & Compliance Summit 2025 in London.

Drawing on his experience across the UK credit bureaus, with a focus on regulated financial markets, Joe discussed the real-world implications of AI agents in compliance frameworks.

The message was clear: strategic use of AI demands clear boundaries, deep understanding, and relentless oversight. 

With participants ranging from global compliance leaders to risk analysts, Joe’s session covered more than just AI's promises; it also addressed its limits and the responsibilities that financial institutions must uphold as they integrate advanced technologies into their anti-money laundering (AML) processes.

Here are some key takeaways from Joe’s roundtable discussions. 

1. The illusion of efficiency: why more isn’t always better 

AI systems are often promoted as able to sift through massive datasets, uncovering patterns and anomalies that traditional methods might overlook.

While this can be true in many cases, Joe cautioned that this "power" comes with an inherent limitation. AI tools, trained predominantly on historical data, tend to reflect outdated fraud models.  

While they might initially deliver strong results, these systems can quickly become obsolete if not continuously updated. Worse still, the illusion of efficiency could result in an uptick in false positives, overwhelming already stretched compliance teams. 

This scenario isn't just theoretical. As Joe explained, financial institutions that fail to manage the flood of false positives risk delaying critical alerts and overlooking genuine threats.

“AI should support compliance, not flood it,” he noted during the discussion.
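To make the scale of that flood concrete, here is a back-of-the-envelope sketch in Python. All of the numbers (transaction volume, false-positive rate, review time) are hypothetical illustrations, not figures from Joe's talk:

```python
# Back-of-the-envelope alert load, with purely hypothetical numbers.
daily_transactions = 1_000_000   # assumed daily transaction volume
false_positive_rate = 0.02       # assume 2% of transactions wrongly flagged
minutes_per_review = 15          # assumed analyst time per alert

alerts_per_day = daily_transactions * false_positive_rate
analyst_hours_per_day = alerts_per_day * minutes_per_review / 60

print(f"False-positive alerts per day: {alerts_per_day:,.0f}")        # 20,000
print(f"Analyst hours needed per day:  {analyst_hours_per_day:,.0f}")  # 5,000
# At that volume, genuine alerts queue behind noise: the "flood".
```

Even a modest false-positive rate, applied to realistic transaction volumes, produces an alert queue no team can clear by hand.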

2. The "black box" problem: when transparency becomes critical 

Another theme in Joe’s session was the transparency deficit often associated with AI-driven tools. These systems, while effective, do not inherently “understand” the problems they aim to solve.  

“AI doesn’t know the problem it is trying to solve; it detects patterns based on pre-determined rules – the onus is on the financial institutions to understand and explain why a decision has been made,” Joe explains.  

He also advocated a “second-step” validation process, ensuring that every AI-driven output is explainable. AI tools must be transparent to compliance officers, auditors, and regulators. 
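As an illustration of what such a "second-step" validation could look like, here is a minimal Python sketch. The `Alert` structure, the threshold, and the example reasons are all hypothetical; the point is simply that a high score without a recorded explanation is treated as a defect, not a case:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    transaction_id: str
    risk_score: float                 # output of the (possibly opaque) model
    reasons: list[str] = field(default_factory=list)  # human-readable drivers

def second_step_validation(alert: Alert, threshold: float = 0.8) -> bool:
    """Escalate an alert only if it is both high-risk AND explainable."""
    if alert.risk_score < threshold:
        return False                  # below threshold: no escalation
    if not alert.reasons:
        # High score but no explanation: send the output back for model
        # review rather than raising a case a regulator could question.
        raise ValueError(f"Alert {alert.transaction_id} lacks an explanation")
    return True

alert = Alert(
    transaction_id="tx-001",
    risk_score=0.93,
    reasons=["amount 40x above customer average",
             "counterparty in high-risk jurisdiction"],
)
print(second_step_validation(alert))  # True: high-risk and explainable
```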

3. Fighting bias: why governance is not optional 

Joe highlighted the necessity of inclusive governance during AI training and implementation phases. Bias can’t be eliminated, but it can be mitigated through collaborative input from compliance, legal, ESG, and executive stakeholders. 

This cross-functional approach ensures that the AI system reflects a balanced perspective, suitable for operations across jurisdictions and product lines. It’s a proactive move toward fairness and operational integrity. 

“Bias within AI decision-making must be minimised; there will never be a point where all bias is removed from a decision as the models are trained by humans who, by our nature, are in some way biased.” 

4. Human oversight: the best line of defence 

Despite the allure of automation, Joe reinforced a fundamental truth: AI cannot replace human expertise.  

Sophisticated as they may be, AI agents miss the nuances of emerging threats, especially those that don’t fit into historical data patterns. Human analysts, with their experience and contextual understanding, are still the most effective defence against financial crime. 

“Human oversight should not diminish; there will always be a need to interpret the AI decisions confidently and also to be able to over-ride and correct the AI models being depended upon.” 

Joe urged institutions to embed AI literacy into their compliance teams. Professionals need to understand how AI decisions are made, challenge those decisions when necessary, and know when to override the system. 
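One hypothetical way to encode that principle is a resolution step where the analyst's decision always takes precedence and every override is logged for later model correction. The function and field names below are illustrative, not part of any real system:

```python
from datetime import datetime, timezone
from typing import Optional

def resolve_alert(model_decision: str,
                  analyst_decision: Optional[str],
                  audit_log: list) -> str:
    """The human call always wins; overrides are logged so the model
    being depended upon can later be corrected."""
    if analyst_decision and analyst_decision != model_decision:
        audit_log.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_decision": model_decision,
            "analyst_decision": analyst_decision,
            "note": "human override: flag for model review/retraining",
        })
    return analyst_decision or model_decision

log: list = []
print(resolve_alert("suspicious", "cleared", log))  # -> 'cleared'
print(log[0]["note"])  # the override is preserved for governance
```

Logging every override, rather than silently discarding the model's output, keeps a governance trail and supplies the human-labelled outcomes needed to retrain the model.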

Final thoughts: lead with strategy, not hype 

Financial institutions must approach AI integration with a deliberate strategy. That means ongoing testing, constant oversight, and deep human involvement at every stage. 

As Joe concluded, “AI is not your defence, it’s your ally. But like any ally, it needs watching.”