Financial service institutions are currently navigating an increasingly complex digital landscape where opportunity and risk walk hand in hand. According to the Bank of England’s 2024 report, 75% of financial services firms are already using Artificial Intelligence (AI), and a further 10% plan to adopt it over the next three years.
This rapid uptake reflects the clear benefits AI brings to financial services firms, from enhancing fraud detection and automating customer service to improving risk assessment and streamlining compliance processes. Financial institutions are undeniably seeing faster, more accurate decision-making and cost savings as a result of AI integration.
However, the reality is more complicated. The same report reveals that security has emerged as the highest perceived risk of AI integration, both now and looking three years ahead. With this in mind, banks and fintechs alike are struggling to address these immediate security concerns while implementing and keeping ahead of new AI regulation, all while trying to anticipate what is next for AI technology. With AI becoming essential to the future of financial services, is there too much focus on technical integration and not enough on the human element?
The Current Limitations to AI Integration
While Generative AI’s (GenAI) ability to understand plain language makes it easier to use, it also creates an abundance of potential security risks. Financial staff using these tools might accidentally share sensitive data when asking questions, or the AI could reveal confidential trading information if it is not properly trained or restricted. This can also work in reverse: by continually telling the AI tool that an untrue thing is correct, a user can lead the tool to adopt that position and present it as fact. For example, if a GenAI tool were trained that people called ‘Rob’ are always bad credit risks, it would quickly factor that into its answers, irrespective of the fact, clear to any human, that it is nonsense. This works equally well accidentally and maliciously.
Another considerable limitation of current GenAI systems lies in how they are set to prioritise delivering information. Unlike seasoned human financial analysts, who have the experience and time to make informed decisions, GenAI systems prioritise across a number of known and unknown criteria that are not necessarily trained for the specific use at hand. For example, if users tend to disconnect before receiving an answer, a GenAI tool may prioritise responding within a specific time frame over providing correct information. This is especially prevalent in public GenAI tools, where the context and intent of other users will differ from the current question but may be applied as universal learning. Furthermore, public GenAI rarely sees the reaction to its output, so it is unable to differentiate between the good and bad answers it has given; training on poor responses makes the GenAI less smart, not more.
This can lead to potentially dangerous scenarios in critical financial operations, where the GenAI tool simply guesses or creates an answer that is not based on fact, potentially enabling or making the wrong decisions.
A Comprehensive Approach to AI Integration
To address these challenges and limitations, financial services institutions must instead focus on creating and adopting a comprehensive approach to AI integration and security.
Firstly, firms should invest in building their own AI models that follow their company’s security rules, rather than relying on unreliable public systems. Where staff do use public tools such as ChatGPT, setting clear rules and controls around their use will be essential to keeping company information safe. Staff need to know what they can and cannot share, and monitoring and controls should create clear boundaries and limitations on the use of public AI models.
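One way to make such boundaries concrete is to screen staff prompts before they ever reach a public tool. Below is a minimal sketch, assuming a small set of illustrative patterns (card numbers, IBANs, email addresses); a real deployment would use a far richer catalogue of sensitive-data checks.

```python
import re

# Illustrative patterns only - an assumption for this sketch, not a
# complete catalogue of sensitive financial data.
SENSITIVE_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def screen_prompt(prompt: str):
    """Return (allowed, reasons): block prompts containing sensitive data."""
    reasons = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    return (not reasons, reasons)

# A harmless question passes; one containing a card number is blocked.
print(screen_prompt("Summarise the outlook for retail banking"))
print(screen_prompt("Why was card 4111 1111 1111 1111 declined?"))
```

A guard like this sits between the user and the public tool, so the "what can and can’t be shared" policy is enforced in code rather than left to memory.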
Companies must also train staff on how to use AI systems safely, as even the best security measures can fail if employees don’t know how to use them properly.
Finally, organisations should use multiple AI systems that work together with human experts to double-check results, making sure no single system can make unchecked decisions outside a human-AI partnership.
So, what does a good human-AI partnership look like?
How to Leverage Human-AI Partnerships
Financial services institutions need to recognise that the solution should focus on allowing AI and human skills to complement each other. It isn’t just about better AI – it’s about enabling human expertise to scale efficiently.
The simple principle of “the right tool for the right job” needs to be at the forefront of users’ minds. A GenAI platform can search through billions of records and identify six that are anomalous in some way. A second AI platform can validate those findings against the original question. A human expert can then identify which four of the six are expected behaviours, and which two are malicious, dangerous, or need further action.
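The triage flow above can be sketched as three passes over the same data: automated detection, automated validation, then human review. The record structure and thresholds below are illustrative assumptions, standing in for whatever models and rules a real platform would use.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    id: str
    amount: float
    daily_average: float

def detect_anomalies(records, threshold=10.0):
    """First pass: flag records far outside their usual behaviour."""
    return [r for r in records if r.amount > threshold * r.daily_average]

def validate_findings(flagged, min_amount=1_000.0):
    """Second pass: discard trivial flags before a human sees them."""
    return [r for r in flagged if r.amount >= min_amount]

def human_review(validated, known_expected_ids):
    """Final pass: a human marks which flags are expected behaviour."""
    return [r for r in validated if r.id not in known_expected_ids]

records = [
    Transaction("t1", 50_000.0, 400.0),  # large unexplained spike
    Transaction("t2", 120.0, 100.0),     # normal activity
    Transaction("t3", 9_000.0, 500.0),   # spike, but an expected payroll run
]
flagged = detect_anomalies(records)
validated = validate_findings(flagged)
for_action = human_review(validated, known_expected_ids={"t3"})
print([r.id for r in for_action])  # only t1 needs further action
```

The point of the sketch is the division of labour: the machine narrows billions of records to a handful, and the human applies the context the machine doesn’t have.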
Just as asking a human to search through billions of records manually is unachievable, asking the GenAI platform to apply context it doesn’t have, or to retain causal experience, is equally unrealistic.
AI excels at processing vast amounts of data to recognise patterns, but humans bring crucial understanding, ethical judgment, and strategic thinking. Working in unison, taking a partnership focused approach can allow organisations to leverage both the processing power of AI and the nuanced decision-making abilities of experienced professionals.
Risk management within this partnership becomes absolutely essential. For instance, if AI flags potential money laundering, a compliance officer needs to review this before any action is taken. Or if AI suggests changes to investment portfolios based on market trends, investment managers must validate these recommendations against their market knowledge and client needs.
Banks too need clear procedures for escalation. If AI flags unusual trading patterns, there should be a defined process for who reviews them, whether that’s the trading desk, a separate compliance team, or even senior management. The same applies to credit decisions, fraud alerts, and risk assessments.
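A defined escalation route can be as simple as a lookup from alert type to an ordered list of reviewers. The alert types and team names below are illustrative assumptions, not a prescribed organisational structure.

```python
# Each alert type maps to an ordered escalation route (illustrative).
ESCALATION_ROUTES = {
    "unusual_trading": ["trading_desk", "compliance", "senior_management"],
    "money_laundering": ["compliance", "senior_management"],
    "credit_decision": ["credit_risk_team"],
    "fraud_alert": ["fraud_team", "compliance"],
}

def next_reviewer(alert_type, reviews_done):
    """Return who reviews next, or None once the route is exhausted."""
    # Unknown alert types fall back to compliance rather than going unreviewed.
    route = ESCALATION_ROUTES.get(alert_type, ["compliance"])
    return route[reviews_done] if reviews_done < len(route) else None

print(next_reviewer("unusual_trading", 0))  # trading_desk
print(next_reviewer("unusual_trading", 1))  # compliance
```

Encoding the route this way means no AI-generated alert can be actioned without a named human owner at each step.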
The Real Risk: Avoiding AI Altogether
Interestingly, the biggest risk to financial institutions isn’t from those using AI – it’s from those avoiding it altogether. The key is finding the right balance: embracing AI’s capabilities while maintaining strong human oversight and security measures. Financial institutions must create protected data environments and train AI platforms for specific tasks with specific information. They must establish clear guidelines for AI tool usage, and conduct regular security audits to ensure their AI systems remain both effective and secure.
An AI’s development, training, utilisation and continued learning should be planned, monitored and developed alongside its human partner’s usage and, of course, the overall outputs and results.
GenAI Platform Best Practice
When building a GenAI platform, the following principles should be considered.
- Design it carefully, with a restricted scope and a set of agreed outcomes. How will it learn? What makes this the best learning data? GenAI supervised by humans can play a big part in this.
- Validate its learning by telling it what’s right and wrong – a GenAI model will learn (like a human) through mistakes, but it won’t hold the knowledge of why or what. So keep the feedback relevant, continuous and tight.
- Try to break it – ask it random things. When it replies “I don’t know”, tell it that’s a good answer. When it makes something up, be clear and provide feedback.
- Ensure the human partners understand its limitations – people don’t get to outsource their thinking; they get to participate with a low-level, high-volume intelligence. Make sure they know that and are checking every answer.
- Measure against your original outcome goals, and don’t let scope creep in without following the above principles. Yes, it can analyse data, but it can’t tell whether what you’re asking is stupid or not.
- Enjoy the financial, time, accuracy and speed benefits of your human-AI partnership.
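The “validate its learning” principle above implies keeping a tight record of every answer and its human verdict, so that only validated answers feed back into training. A minimal sketch, with illustrative record fields:

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Log every answer with its human verdict (illustrative structure)."""
    records: list = field(default_factory=list)

    def record(self, question, answer, correct, note=""):
        self.records.append({"question": question, "answer": answer,
                             "correct": correct, "note": note})

    def training_set(self):
        """Only human-validated answers feed back into training."""
        return [r for r in self.records if r["correct"]]

log = FeedbackLog()
log.record("Is 'Rob' a credit risk factor?", "No, names are irrelevant.", True)
log.record("Q3 default rate?", "Roughly 12%", False, note="made up, no source")
print(len(log.training_set()))  # only the validated answer survives
```

Keeping the feedback loop this explicit is what stops the model “training on dumb”: fabricated answers are logged and flagged, not silently learned from.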
The future of financial services lies in effective human-AI collaboration, not just AI adoption. Success requires building secure, well-trained AI systems that complement human expertise rather than replace it. If financial institutions embrace this partnership mindset while maintaining strong security measures and human oversight, they can harness AI’s power while mitigating its risks.