Mike Southgate, Co-founder of UK-based RegTech firm Ermi, on why artificial intelligence alone cannot replace human judgment in the creation of rules for automated transaction monitoring

In the drive to modernise and improve financial-crime detection, artificial intelligence (AI) has emerged as a powerful tool. Machine-learning models have the ability to process vast volumes of transactional data, identify patterns invisible to the human eye and flag anomalies at scale.

But despite these clear benefits, AI on its own cannot deliver the transparency, accountability or contextual nuance needed for effective transaction monitoring. Human judgment, the “human in the loop”, remains essential.

The Autonomy Illusion

Rising financial crime, advances in laundering typologies and increased regulatory scrutiny have put financial institutions under pressure to adopt AI-driven anti-money-laundering (AML) systems, on the promise that they will be more effective.

According to the IICFIP Global Financial Crimes Impact Report 2025, global losses from financial crime exceed US $8 trillion annually, including money laundering losses of between US $800 billion and $2 trillion, fraud losses of over US $5 trillion, and corruption losses around US $3.6 trillion. Yet INTERPOL reports that only one percent of illicit financial flows are ever intercepted, frozen, or recovered.

Transaction monitoring vendors are increasingly marketing AI-driven AML solutions, claiming that their algorithms can autonomously detect suspicious behaviour. But these capabilities are often vastly overstated. Machine-learning models suffer from multiple issues. They are only as effective as the data they are trained on, and ensuring accurate (e.g. relevant to the firm deploying the tool) and up-to-date data is challenging, not least because financial crime is a moving target. Criminals continually change their tactics, often faster than AI can be retrained. Because these systems rely on patterns learned from historical data rather than anticipating new, adaptive strategies, subtle illicit activity, such as transactions that mimic legitimate behaviour, often goes undetected. Similarly, training data must be labelled to show whether past patterns were truly criminal, which is not always known.

Understanding AI’s Shortcomings

Importantly, the line between criminal and normal behaviour will depend upon the client. Consider a scenario where a high-net-worth individual initiates a series of international transfers. An AI model may flag these transactions purely on the basis of volume or geography. Without contextual understanding of the type of client, the alert is likely to be a false positive. Conversely, a sophisticated money laundering scheme could evade detection entirely by mimicking legitimate behaviour. In both cases, human insight is critical. AI lacks context on clients and in-depth knowledge of “normal” business models.

Opacity is another concern. Many machine-learning systems operate as black boxes, generating alerts without any meaningful explanation. Regulators are increasingly demanding transparency, for example under the EU AI Act and Financial Action Task Force (FATF) guidance on AI in AML (FATF, 2021). Institutions have an obligation to justify why a transaction was flagged (or not), what criteria were used and how decisions align with risk-based approaches.

Black-box models can also undermine internal governance. Compliance teams need to understand and trust the systems they rely on. And when an alert cannot be traced to a clear rule, confidence is undermined and investigations stall. Over-reliance on automation has the potential to overshadow critical human judgment.

Human Rule Design with Context

Effective transaction monitoring must therefore still be grounded in human-led contextual rule design. Unlike generic thresholds or static parameters, contextual rules take into account the full spectrum of customer behaviour, business models and risk exposure. Defined rules also provide transparency and traceability.

For example, a transaction exceeding £10,000 may trigger a review in retail banking but be routine in corporate financial operations. Contextual rules enable financial institutions to adapt detection logic based on customer type and risk profile, transaction purpose, jurisdictional risk and historical patterns.
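As an illustrative sketch, a contextual rule of this kind can be expressed as a threshold that depends on who the customer is and where the money is going. The segment names, threshold values and transaction fields below are hypothetical, not a description of any real system:

```python
from dataclasses import dataclass

# Hypothetical review thresholds per customer segment (GBP, illustrative only)
REVIEW_THRESHOLDS = {
    "retail": 10_000,      # retail payments above this trigger a review
    "corporate": 250_000,  # routine corporate flows warrant a higher bar
}

@dataclass
class Transaction:
    amount_gbp: float
    customer_segment: str   # e.g. "retail" or "corporate"
    jurisdiction_risk: str  # "low", "medium" or "high"

def needs_review(tx: Transaction) -> bool:
    """Contextual rule: the threshold depends on the customer segment,
    and a high-risk jurisdiction tightens it further."""
    threshold = REVIEW_THRESHOLDS.get(tx.customer_segment, 10_000)
    if tx.jurisdiction_risk == "high":
        threshold *= 0.5  # halve the bar for high-risk corridors
    return tx.amount_gbp > threshold
```

The same £12,000 transfer would be flagged for a retail customer but pass unremarked for a corporate one, which is exactly the distinction a flat threshold cannot make.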

Contextual rule design also supports dynamic adaptation, so that systems are able to respond intelligently to changes in a client’s behaviour. For example, if a customer suddenly increases the volume or frequency of cross-border payments, the system evaluates these changes against historical patterns, business type, transaction purpose and associated risk factors. Alerts are then generated only when deviations are statistically or contextually significant, rather than for every fluctuation.
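One simple way to sketch this kind of deviation test is to compare a new observation against the customer's own historical baseline and alert only when it falls well outside it. This is a minimal statistical illustration (the threshold of three standard deviations is an assumed value, not a recommendation):

```python
import statistics

def significant_deviation(history: list[float], new_value: float,
                          z_threshold: float = 3.0) -> bool:
    """Flag only when the new observation deviates from the customer's
    own baseline by more than z_threshold standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return new_value != mean  # any change from a flat baseline
    z = abs(new_value - mean) / stdev
    return z > z_threshold
```

A customer whose monthly cross-border volume drifts from £1,000 to £1,100 stays below the bar; a jump to £5,000 does not, so the alert reflects a genuinely unusual change rather than ordinary fluctuation.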

By incorporating this nuanced understanding, organisations are able to reduce false positives, prioritise genuinely suspicious activity and ensure compliance teams focus on actionable alerts rather than noise.

Contextual Rules

Importantly, contextual rules enhance explainability. Each rule can be traced to a specific rationale, for example, regulatory guidance, internal policy, or risk appetite. This strengthens audit readiness and helps with regulatory engagement. Transparency also supports continuous improvement as threats evolve or business priorities shift.
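In practice, traceability can be as simple as carrying the rationale alongside the rule itself, so that every alert can cite why the rule exists. The rule identifier, fields and policy reference below are hypothetical, purely to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MonitoringRule:
    rule_id: str
    description: str
    rationale: str  # why the rule exists: policy, guidance or risk appetite
    source: str     # e.g. an internal policy or regulatory guidance reference

rule = MonitoringRule(
    rule_id="TM-042",
    description="Review cross-border payments above the segment threshold",
    rationale="Elevated layering risk in cross-border corridors",
    source="Internal AML policy s.4.2 (illustrative reference)",
)

# An alert fired by this rule can cite rule.rationale and rule.source,
# giving investigators and auditors a traceable explanation.
```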

Financial crime detection is not just a technical challenge; it is fundamentally about context. But AI struggles with nuance. It cannot distinguish between a legitimate seasonal spike and a layering attempt, in which illicit funds are moved through multiple accounts or jurisdictions to obscure their origin. It also cannot surmise intent, assess reputational risk or weigh geopolitical implications, or, above all, simply be a sceptical compliance officer who trusts no one.

Humans excel at contextual reasoning. They interpret indicators in light of customer behaviour and relationships, market dynamics and regulatory expectations. They ask the right questions, challenge assumptions and escalate concerns when needed. In short, humans bring vital judgment to transaction monitoring.

An example of this in action: in 2024, a European bank’s AI system flagged 80,000 transactions as “high risk.” Only 0.3 percent proved genuinely suspicious (IICFIP, 2025). Without human review, the bank would have wasted significant time chasing false positives, while potentially missing the subtler patterns of actual illicit activity.

Augmentation, Not Automation

The future of transaction monitoring is not about replacing humans but about strengthening them. AI should be used to support decision making by surfacing patterns and anomalies, while humans provide interpretation, oversight and context.

Forward-thinking financial institutions are preparing for a regulatory landscape that will demand AI models be explainable and auditable. By carefully combining machine efficiency with human judgment, organisations will reduce operational risk and strengthen compliance.

As financial crime grows more sophisticated, our transaction monitoring needs to evolve too. AI is a powerful tool but it is not a panacea. Effective transaction monitoring requires human insight and contextual awareness. Hybrid models that balance automation with human-led rule sets and interpretation will be essential.

While AI offers unparalleled speed and pattern recognition, it cannot replace the human ability to reason, contextualise and make judgment calls. Human-led transparency, explainability and context are not optional features for effective AML. Organisations that use AI to augment, not replace, human judgment will be best positioned to detect sophisticated threats, maintain regulatory trust and act decisively. In stopping financial crime, trust is essential and trust cannot be automated.

Learn more at ermitm.com
