Scott Zoldi, Chief Analytics Officer at FICO, considers whether the current AI bubble is set to burst, the potential repercussions of such an event, and how businesses can prepare

Since artificial intelligence emerged more than fifty years ago, it has experienced cycles of peaks and troughs: periods of hype, quickly followed by unmet expectations that lead to bleak AI winters as users and investment pull back. We are currently in the biggest period of hype yet. Does that mean we are setting ourselves up for the biggest, most catastrophic fall to date?

AI drawback

There is a significant chance of such a drawback occurring in the near future, so the growing number of businesses relying on AI must take steps to prepare for, and mitigate, the impact a drawback or complete collapse could have. Research from Lloyds recently found that adoption has doubled in the last year, with 63% of firms now investing in AI, compared to 32% in 2023. The same study found that 81% of financial institutions now view AI as a business opportunity, up from 56% in 2023.

This hype has led organisations to explore AI use for the first time, often with little understanding of the algorithms’ core limitations. According to Gartner, in 2023 fewer than 10% of organisations were capable of operationalising AI to enable meaningful execution. This could be leading to the ‘unmet expectations’ stage of the damaging hype/drawback cycle: the all-encompassing FOMO that keeps the narrative of AI’s incredible value circulating does not align with organisations’ ability to scale, manage very significant risks, or derive real, sustained business value.

Regulatory pressures for AI

There has been a lack of trust in AI among consumers and businesses alike, and it has resulted in new AI regulations specifying strong responsibility and transparency requirements for applications. The vast majority of organisations are unable to meet these requirements with traditional AI, let alone with newer GenAI applications. Large language models (LLMs) were released to the public prematurely, and the resulting succession of failures put substantial pressure on companies to pull back from using such solutions for anything other than internal applications. It has been reported that 60% of banking businesses are actively limiting AI usage, which shows that the drawback has already begun. Organisations that have gone all-in on GenAI – especially the early adopters – will be the ones to pull back the most, and the fastest.

In financial services, where AI use has matured over decades, analytic technologies exist today that can withstand regulatory scrutiny. Forward-looking companies are ensuring they are prepared: they are moving to interpretable AI and keeping traditional analytics on hand as a backup while they explore newer technologies with appropriate caution. This is in line with proper business accountability, versus the ‘build fast, break it’ mentality of the hype spinners.

Customer trust with AI

Customer trust has been violated by repeated failures in AI and by businesses failing to take customer safety seriously. A pull-back will help assuage the mistrust created by companies’ use of artificial intelligence in customer-facing applications and by repeated harmful outcomes.

Businesses that want their AI usage to survive the impending winter need to establish corporate standards for building safe, transparent, trustworthy Responsible AI models that focus on the tenets of robust, interpretable, ethical and auditable AI. Concurrently, these practices will demonstrate that regulations are being adhered to, and that their customers can trust AI. Organisations will move from the constant broadcast of a dizzying array of possible applications to a few well-structured, accountable and meaningful applications that provide value to consumers and are built responsibly. Regulation will be the catalyst.

Preparing for the worst

Too many organisations are driving AI strategy through business owners or software engineers who often have limited or no knowledge of the specifics of the algorithms’ mathematics and the very significant risks in using the technology.

Stringing together AI is easy. Building AI that is responsible and safe is a much harder, more exhausting exercise, requiring corporate standards for model development and deployment. Businesses need to start now to define standards for adopting the right types of AI for appropriate business applications, meeting regulatory requirements, and achieving optimal consumer outcomes.

Companies need to show true data science leadership by developing a Responsible AI programme, or by reinvigorating practices that atrophied during the GenAI hype cycle, which for many threw standards to the wind. They should start with a review of how regulation is developing and of whether they have the standards, data science staff and algorithm experience to appropriately address and pressure-test their applications and to establish trust in AI usage. If they are not prepared, they need to understand the business impact of potentially having artificial intelligence pulled from their repository of tools.

Next, these companies must determine where to use traditional AI and where to use GenAI, and ensure this choice is driven not by marketing narrative but by meeting both regulation and real business objectives safely. Finally, companies will want to adopt a humble approach to back up their deployments, tiering down to safer technology when a model indicates its decisioning is not trustworthy, as in the sketch below.
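To make the idea of tiering down concrete, here is a minimal, purely illustrative sketch of that kind of fallback. It assumes a hypothetical primary model that reports a trust score alongside each decision and an interpretable traditional model to fall back on; the function names, data structures and threshold are invented for illustration and do not represent any particular vendor's implementation.

```python
# Illustrative sketch only: a "tier down to safer tech" fallback pattern.
# All names and the threshold below are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str   # e.g. "approve" or "refer"
    source: str    # which tier produced the decision
    trust: float   # model-reported trust score in [0, 1]

TRUST_THRESHOLD = 0.8  # assumed policy threshold set by corporate standards

def primary_model(case: dict) -> Decision:
    # Placeholder for a newer, more powerful but less interpretable model.
    return Decision(outcome="approve", source="primary", trust=0.65)

def fallback_model(case: dict) -> Decision:
    # Placeholder for an interpretable, regulator-ready traditional model.
    return Decision(outcome="refer", source="fallback", trust=1.0)

def decide(case: dict) -> Decision:
    # Try the primary model; tier down when its trust score is too low.
    decision = primary_model(case)
    if decision.trust < TRUST_THRESHOLD:
        return fallback_model(case)
    return decision

print(decide({"customer_id": 123}))  # falls back, since 0.65 < 0.8
```

The point of the pattern is not the specific code but the governance decision it encodes: the riskier model is never the last line of defence, and the conditions for stepping down to the safer tier are set in advance by corporate standards rather than improvised case by case.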

Now is the time to go beyond aspirational and boastful claims, to have honest discussions around the risks of this technology, and to define what mature and immature AI look like. This will help prevent a major drawback.
