Trust is the foundation and the currency of the financial services industry. When customers hand over their hard-earned money, they trust their chosen provider to safeguard their finances and help them achieve their financial goals.
Long before computers came about, the financial services industry built trust and minimised risk through carefully organised processes led by people. A significant amount of bureaucracy, process control and mapping has reduced mistakes for decades. However, as technology has developed, the way the industry interacts with these processes is changing.
The Rise of Bureaucracy and Software
The introduction of computers enabled the financial services industry to scale processes, increase productivity and widen customer pools. This was achieved through structured software mapped to closed, deterministic, bureaucratic processes: by applying the same structured decision-making to many customers automatically, rather than having humans decide for each individual customer, the industry reduced errors and increased efficiency.
Now we face the rising popularity of AI agents, and the challenge of effectively integrating them into the sensitive systems that were built before them. Applied correctly, they offer immense value; applied incorrectly, they risk causing immense harm.
As we are at the relative start of the AI implementation journey, it is crucial to determine how to take AI tools with such significant decision-making capabilities and plug them safely into our systems, both to maintain trust and, more importantly, to ensure they help customers rather than hinder them.
The Missing Human Layer
The key to successful AI implementation in the financial services industry is to understand the market gap it can fill. For the last four decades, scaling financial services safely has only been achieved with many layers of bureaucracy – slowing delivery, adding friction, and ultimately limiting who could be served. Furthermore, the human experts who could navigate these bureaucratic complexities and translate them into clear, accessible decisions for customers were few and far between.
This gap is what modern AI systems can close. AI can act as an intelligent layer in front of the bureaucracy, to help the wider public make smart financial decisions with greater confidence. We must learn from the success of large AI systems, as their approachability and ease of use is what draws customers in at scale.
However, for AI to fulfil this promise, it must meet the same standards of institutional safety and compliance. This ease of use must be brought to customers safely, meaning we must engineer the very same systems of safety that currently underpin the financial sector, ensuring AI offers accessibility without compromising on trust.
Engineering Safe Boundaries
To achieve this, we have to go beyond integration – we have to engineer clear boundaries between AI and traditional software. We must use AI to deliver an accessible, relatable customer experience, while ensuring it follows the principles built into tested software. This approach is critical because good outcomes only come as a result of managed risk and tested judgement.
There is significant hype around feeding agents large knowledge bases of policies via Retrieval-Augmented Generation (RAG). State-of-the-art models can achieve reasonable, but not perfect, policy concordance on judgement tasks. If the aim is to offer customers the full flexibility of human interaction at scale, however, this approach is only acceptable for basic customer service, such as issue handling. It falls short when dealing with the diverse approaches and behaviours customers bring – meaning that errors can only be minimised, not entirely controlled.
When dealing with nuanced considerations such as investment decisions and judgements with long-standing consequences, it is better to implement software layers that interact with the AI, handling logic checking and result generation, rather than trying to emulate complex decision-making principles through predictive language.
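This division of labour can be sketched in a few lines of Python. Everything here – the `WithdrawalRequest` fields, the thresholds, the function names – is a hypothetical illustration, not a description of any real provider's system: the language model only translates between the customer's free text and a structured request, while the decision itself is made by explicit, testable rules.

```python
from dataclasses import dataclass

@dataclass
class WithdrawalRequest:
    account_balance: float   # pounds held in the account
    amount: float            # pounds the customer wants to withdraw
    account_type: str        # e.g. "ISA" or "GIA" (illustrative labels)

def decide_withdrawal(req: WithdrawalRequest) -> tuple[bool, str]:
    """Deterministic decision layer: every rule is explicit and testable,
    so nothing is left to model 'instinct'."""
    if req.amount <= 0:
        return False, "Amount must be positive."
    if req.amount > req.account_balance:
        return False, "Insufficient funds."
    if req.account_type == "ISA" and req.amount > 20_000:
        # Illustrative threshold only, not a real policy figure.
        return False, "Large ISA withdrawals require manual review."
    return True, "Withdrawal approved."

def handle_customer_message(extracted: WithdrawalRequest) -> str:
    # In a full system, an LLM would parse the customer's message into the
    # structured WithdrawalRequest above, and afterwards rephrase the outcome
    # in friendly natural language. The decision never touches the model.
    approved, reason = decide_withdrawal(extracted)
    return reason
```

The AI sits on either side of `decide_withdrawal` – interpreting the request and explaining the outcome – but the approval logic remains in tested, deterministic code.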
A Recipe for Success
Modern AI systems, even when producing the right answer 95% of the time, are making decisions on ‘instinct’. No financial firm would deploy a workforce of highly instinctual individuals making critical decisions without bureaucratic control. Therefore, letting AI make financial decisions without tried-and-tested software to control the logical reasoning is a path to failure.
The recipe for success in a customer-facing context is clear. Providers should use AI to mimic everyday language and bring a personal dimension to customers at scale, but keep core financial decision-making within the safe domain of tried and tested software and experts.
While this may sound simple on paper, achieving a seamless system where everything blends together is the core differentiator between companies that will win customer confidence and companies that will simply offer ‘cool’ short-term gimmicks. To close the advice gap, the future of financial services should not aim to replace bureaucratic safety systems with AI, but instead integrate AI to deliver human-level accessibility – while keeping decision-making limited to the domain of purpose-built software.
Learn more at moneyboxapp.com