Glenn Fratangelo, Head of Fraud Product Marketing & Strategy at NICE Actimize, on financial services fraud prevention in 2025.

2024 marked a turning point in financial crime management with the advent of Generative AI (GenAI). McKinsey estimates GenAI could add a staggering $200-340 billion in annual value to the global banking sector. A potential revenue boost of 2.8 to 4.7%. This underscores the transformative potential of GenAI. IT IS rapidly evolving from a futuristic concept to a powerful tool in the fight against financial crime. However, 2024 was just the prelude. 2025 promises to be the year GenAI truly comes into its own. Unlocking transformative capabilities in combating increasingly sophisticated threats. 

This evolution is not merely desirable, it is essential. The Office of National Statistics (ONS) reported a concerning 19% year-over-year increase in UK consumer and retail fraud incidents in 2024, reaching approximately 3.6 million. This stark reality underscores the urgent need for financial institutions (FIs) and banks to bolster their defences against financial crime. In 2025, leveraging the power of GenAI is no longer a luxury, but a necessity for protecting customers and safeguarding the financial ecosystem. 

The evolving GenAI-powered fraud landscape

Fraudsters have embraced GenAI as a potent weapon in their arsenal. This technology’s ability to create realistic fakes, automate attacks and mimic customers creates a significant threat to the financial landscape.

Deepfake technology has become a particularly insidious tool. By generating highly realistic voice and facial fakes, fraudsters can bypass remote verification processes with ease. This opens doors to unauthorised access to sensitive information, enabling account takeovers and other fraudulent activities.  

In addition, the rise of synthetic identities further complicates the challenge. By blending real and fabricated data, fraudsters can create personas that seamlessly infiltrate legitimate customer profiles. These synthetic identities are extremely difficult to detect, as they appear indistinguishable from genuine customers. Making it challenging for institutions to differentiate between legitimate and fraudulent activities.

Phishing scams have also undergone a dramatic evolution, becoming more sophisticated and personalised. AI-driven techniques allow fraudsters to craft personalised, convincing emails that mimic legitimate communications, resulting in significant data breaches.

Harnessing GenAI

GenAI is being used by criminals – presenting a significant challenge in the realm of fraud. It requires advanced AI capabilities such as real-time behavior analytics that use machine learning to continuously analyse all entity interaction and transaction patterns. This can identify subtle deviations from a customer’s typical behaviour. It allows for initiative-taking and the flagging of suspicious activity before any damage occurs. Moreover, providing a significant advantage over traditional, rigid rule-based systems that often fail to detect nuanced threats.

Fraud simulation and stress testing using GenAI can also empower institutions to proactively assess the resilience of their systems. By simulating potential fraud scenarios, financial institutions can identify vulnerabilities and train detection models to recognise emerging tactics. Furthermore, this proactive preparation ensures that defences remain ahead of fraudsters’ evolving methods, creating a more robust and adaptable security infrastructure.

Low volume high value fraud, such as BEC or other large value account to account transfers usually lack the quantity of data needed to optimise models. GenAI can address this by creating synthetic data that mimics real-world scenarios. This approach significantly improves the accuracy and robustness of detection models, making them more effective against new and unforeseen threats.

GenAI has the potential to transform the investigation process by automating tasks such as generating alerts and case summaries, as well as SAR narratives. This automation not only minimises errors but also frees analysts from mundane tasks, allowing them to focus on higher-value activities. The result is a significantly accelerated financial crime investigation process, enabling institutions to respond to threats with greater speed and efficiency.

The battle against fraud in 2025 and beyond

The battle against financial fraud in 2025 and beyond is an undeniable arms race. Fraudsters, wielding generative AI as their weapon, will relentlessly seek to exploit vulnerabilities. To counter this evolving threat, financial institutions must embrace AI to outmanoeuvre fraudsters and proactively protect their customers.

The future of fraud and financial crime prevention hinges on our ability to innovate and adapt. Institutions that view GenAI not just as a challenge, but as an opportunity, will emerge as leaders in this fight. AI is a force multiplier for institutions striving to combat fraud and financial crime, empowering them with smarter, faster, and more adaptive defences, we can create a more secure and trustworthy financial ecosystem. The choice to innovate in the face of adversity will define the path forward and shape the future.

  • Artificial Intelligence in FinTech

Gabe Hopkins, Chief Product Officer at Ripjar, on how GenAI can transform compliance

Generative AI (GenAI) has proven to be a transformational technology for many global industries. Particularly those sectors looking to boost their operational efficiency and drive innovation. Furthermore, GenAI has a range of use cases, and many organisations are using it to create new, creative content on demand – such as imagery, music, text, and video. Others are using the new tools at their disposal to perform tasks and process data. This makes previously tedious activities much more manageable, saving considerable time, effort, and finances in the process.

However, compliance as a sector has traditionally shown hesitancy when it comes to implementing new technologies. Taking longer to implement new tools due to natural caution about perceived risks. As a result, many compliance teams will not be using any AI, let alone GenAI. This hesitancy means these teams are missing out on significant benefits. Especially at a time when other less risk-averse industries are experiencing the upside of implementing this technology across their systems.

To avoid falling behind other diverse industries and competitors, it’s time for compliance teams to seriously consider AI. They need to understand the ways the technology – specifically GenAI – can be utilised in safe and tested ways. And without introducing any unnecessary risk. Doing so will revolutionise their internal processes, save work hours and keep budgets down accordingly.

Understanding and overcoming GenAI barriers

GenAI is a new and rapidly developing technology. Therefore, it’s natural compliance teams may have reservations surrounding how it can be applied safely. Particularly, teams tend to worry about sharing data, which may then be used in its training and become embedded into future models. Moreover, it’s also unlikely most organisations would want to share data across the internet. Strict privacy and security measures would first need to be established.

When thinking about the options for running models securely or locally, teams are likely also worried about the costs of GenAI. Much of the public discussion of the topic has focussed on the immense budget required for preparing the foundation models.

Additionally, model governance teams within organisations will worry about the black box nature of AI models. This puts a focus on the possibility for models to embed biases towards specific groups, which can be difficult to identify.

However, the good news is that there are ways to use GenAI to overcome these concerns. This can be done by choosing the right models which provide the necessary security and privacy. Fine-tuning the models within a strong statistical framework can reduce biases.

In doing so, organisations must find the right resources. Data scientists, or qualified vendors, can support them in that work, which may also be challenging.

Overcoming the challenges of compliance with AI

Despite initial hesitancy, analysts and other compliance professionals are positioned to gain massively by implementing GenAI. For example, teams in regulated industries – like banks, fintechs and large organisations – are often met with massive workloads and resource limits. Depending on which industry, teams may be held responsible for identifying a range of risks. These include sanctioned individuals and entities, adapting to new regulatory obligations and managing huge amounts of data – or all three.

The process of reviewing huge quantities of potential matches can be incredibly repetitive and prone to error. If teams make mistakes and miss risks, the potential impact for firms can be significant. Both in terms of financial and reputational consequences.

In addition, false positives – where systems or teams incorrectly flag risks and false negatives – where we miss risks that should be flagged, may come from human error and inaccurate systems. They are hugely exacerbated by challenges such as name matching, risk identification, and quantification.

As a result, organisations within the industry quite often struggle to hire and retain staff. Moreover, this leads to a serious skills shortage amongst compliance professionals. Therefore, despite initial hesitancy, analysts and other compliance professionals stand to gain massively by implementing GenAI without needing to sacrifice accuracy.

Generative AI – welcome support for compliance teams

There are numerous useful ways to implemented GenAI and improve compliance processes. The most obvious is in Suspicious Activity Report (SAR) narrative commentary. Compliance analysts must write a summary of why a specific transaction or set of transactions is deemed suitable in a SAR. Long before the arrival of ChatGPT, forward thinking compliance teams were using technology based on its ancestor technology to semi-automate the writing of narratives. It is a task that newer models excel at, particularly with human oversight.

Producing summarised data can also be useful when tackling tasks such as Politically Exposed Persons (PEP) or Adverse Media screenings. This involves compliance teams performing reviews or research on a client to check for potential negative news and data sources. These screenings allow companies to spot potential risks. It can prevent them from becoming implicated in any negative relationships or reputational damage.

By correctly deploying summary technology, analysts can review match information far more effectively and efficiently. However, like with any technological operation, it is essential to consider which tool is right for which activity. AI is no different. Combining GenAI with other machine learning (ML) and AI techniques can provide a real step change. This means blending both generalised and deductive capabilities from GenAI with highly measurable and comprehensive results available in well-known ML models.

Profiling efficiency with AI

For example, traditional AI can be used to create profiles, differentiating large quantities of organisations and individuals separating out distinct identities. The new approach moves past the historical hit and miss where analysts execute manual searches limiting results by arbitrary numeric limits.

Once these profiles are available, GenAI can help analysts to be even more efficient. The results from the latest innovations already show GenAI-powered virtual analysts can achieve, or even surpass, human accuracy across a range of measures.

Concerns about accuracy will still likely impact the rate of GenAI adoption. However, it is clear that future compliance teams will significantly benefit from these breakthroughs. This will enable significant improvements in speed, effectiveness and the ability to respond to new risks or constraints.

Ripjar is a global company of talented technologists, data scientists and analysts designing products that will change the way criminal activities are detected and prevented. Our founders are experienced technologists & leaders from the heart of the UK security and intelligence community all previously working at the British Government Communications Headquarters (GCHQ). We understand how to build products that scale, work seamlessly with the user and enhance analysis through machine learning and artificial intelligence. We believe that through this augmented analysis we can protect global companies and governments from the ever-present threat of money laundering, fraud, cyber-crime and terrorism.

  • Artificial Intelligence in FinTech
  • Cybersecurity in FinTech

Cullen Zandstra, CTO at FloQast on mitigating the risks of AI to deliver benefits to financial services

There’s a lot of buzz around Generative AI (GenAI). What’s not always heard beneath the noise are the very real and serious risks of this fast-developing AI tech. Let alone ways to mitigate these emerging threats.

Currently, one quarter (26%) of accounting and bookkeeping practices in the UK have now adopted GenAI in some capacity. That figure is predicted to grow for many years to come.

With this in mind, and as we hit the crest of the GenAI hype cycle, it’s critically important that leaders focus closely on the potential risks of AI deployment. They need to proactively prepare to mitigate them, rather than picking up the pieces after an incident.

Navigating the risky transition to AI

The benefits of AI are well-proven. For finance teams, AI is a powerup that unlocks major performance and efficiency boosts. It significantly enhances their ability to generate actionable insights swiftly and accurately, facilitating faster decision-making. AI isn’t here to take over but to augment the employees’ capabilities. Ultimately improving leaders’ trust in the reliability of financial reporting.

One of the most exciting aspects of AI is its potential to enable organisations to do more with less. Which, in the context of an ongoing talent shortage in accounting, is what all finance leaders are seeking to do right now. By automating routine tasks, AI empowers accountants to focus on higher-level analysis and strategic initiative, whilst drawing on fewer resources. GenAI models can help to perform routine, but important tasks. These include producing reports for key stakeholders and ensuring critical information is effectively and quickly communicated. It enables timely and precise access to business information, helping leaders to make better decisions.

However, GenAI also represents a new source of risk that is not always well understood. We know that threat actors are using GenAI to produce exploits and malware. Simultaneously levelling up their capabilities and lowering the barrier of entry for lower-skilled hackers. The GenAI models that power chatbots are vulnerable to a growing range of threats. These include prompt injection attacks, which trick AI into handing over sensitive data or generating malicious outputs.

Unfortunately, it’s not just the bad guys who can do damage to (and with) AI models. With great productivity comes great responsibility. Even an ambitious, forward-thinking, and well-meaning finance team could innocently deploy the technology. They could inadvertently make mistakes that cause major damage to their organisation. Poorly managed AI tools can expose sensitive company and customer financial data, increasing the risk of data breaches.

De-risking AI implementation

There is no technical solution you can buy to eliminate doubt and achieve 100% trust in sources of data with one press of a button. Neither is there a prompt you can enter into a large language model (LLM).

The integrity, accuracy, and availability of financial data are of paramount importance during the close and other core accountancy processes. Hallucinations (another word for “mistakes”) cannot be tolerated. Tech can solve some of the challenges around data needed to eliminate hallucinations – but we’ll always need humans in the loop.

True human oversight is required to make sure AI systems are making the right decisions. We must balance effectiveness with an ethical approach. As a result, the judgment of skilled employees is irreplaceable and is likely to remain so for the foreseeable future. Unless there is a sudden, unpredicted quantum leap in the power of AI models. It’s crucial that AI complements our work, enhancing rather than compromising the trust in financial reporting.

A new era of collaboration

As finance teams enhance their operations with AI, they will need to reach across their organisations to forge new connections and collaborate closely with security teams. Traditionally viewed as number-crunchers, accountants are now poised to drive strategic value by integrating advanced technologies securely. The accelerating adoption of GenAI is an opportunity to forge links between departments which may not always have worked closely together in the past.

By fostering a collaborative environment between finance and security teams, businesses can develop robust AI solutions. They can boost efficiency and deliver strategic benefits while safeguarding against potential threats. This partnership is essential for creating a secure foundation for growth.

AI in accountancy: The road forward

The accounting profession stands on the threshold of an era of AI-driven growth. Professionals who embrace and understand this technology will find themselves indispensable.

However, as we incorporate AI into our workflows, it is crucial to ensure GenAI is implemented safely and does not introduce security risks. By establishing robust safeguards and adhering to best practices in AI deployment, we can protect sensitive financial information and uphold the integrity of our profession. Embracing AI responsibly ensures we harness its full potential while guarding against vulnerabilities, leading our organisations confidently into the future.

Founded in 2013, FloQast is the leading cloud-based accounting transformation platform created by accountants, for accountants. FloQast brings AI and automation innovation into everyday accounting workflows, empowering accountants to work better together and perform their tasks with greater efficiency and accuracy. Now controllers and accountants can spend more time delivering greater strategic value while enjoying a better work-life balance.

  • Artificial Intelligence in FinTech
  • Cybersecurity in FinTech