The transformative impact of GenAI on Financial Services

Generative AI is a powerful technology opening up myriad possibilities for businesses while also raising challenges. How will it impact financial services?


The world is in the fast lane of technological change. The design, creation, testing and adoption of new technologies have accelerated dramatically since the Industrial Revolution. It no longer takes decades, but days, for the masses to adopt a new technology. The Generative Artificial Intelligence (GenAI) and Large Language Model (LLM) revolution is propelling the world towards yet another new reality. 

What are the implications of this powerful new technology? How is it reimagining the future of financial services? And how can we balance innovation with human responsibility for an ethical usage of AI? Speakers on a recent panel discussion at the BNP Paribas Global Official Institutions Conference (GOIC) shared their insights. 


Generating content with AI 

Opening the discussion, Alexei Grinbaum, Senior Research Scientist at CEA-Saclay, explained: “The way AI systems generate content is asemantic. Generative AI systems don’t understand anything as humans do and do not work with the dimension of human meaning. Computation just serves to select the next token in a sequence. Token selection is the result of probabilistic computation, and that’s all GenAI does. What is interesting is that, although GenAI only computes missing tokens, it actually produces content that makes sense to us and appears to be full of meaning to the user.”  

He added: “AI generation doesn’t evaluate whether some content is true or false, because human meaning is not part of the computation process. We need to add a system of filters, called alignment, to filter out what is wrong, discriminatory, or toxic.” 
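Grinbaum’s point — that generation is nothing more than probabilistic next-token selection, with no notion of truth or meaning — can be illustrated with a minimal sketch. The vocabulary and probabilities below are invented for the illustration and bear no relation to any real model:

```python
import random

# Toy illustration (not a real LLM): generation is just sampling the
# next token from a probability distribution over a vocabulary.
# Nothing here evaluates whether the result is true, false, or meaningful.
next_token_probs = {
    "bank": 0.5,
    "market": 0.3,
    "regulator": 0.2,
}

def sample_next_token(probs, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
token = sample_next_token(next_token_probs, rng)
print(token)
```

In a real system, the distribution is produced by the model itself at every step, and the alignment filters Grinbaum describes are applied on top of this raw sampling process.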

GenAI remains a machine, and the quality of what it delivers depends on the prompts we give it. As Grinbaum put it: “In order to speak to this non-human agent, you need to learn specific ways of speaking.” We cannot simply address AI systems the way we speak to other humans and assume this will yield the best results. 


In order to speak to this non-human agent, you need to learn specific ways of speaking.

Alexei Grinbaum
Senior Research Scientist, CEA-Saclay

The adoption of GenAI 

After Carl Benz created the very first gas-powered vehicle in 1886, it took 27 years before adoption became widespread, when Henry Ford brought it to the masses in 1913. 

In comparison, when ChatGPT launched in November 2022, it took just five days to reach 1 million users. The previous record was held by Instagram, which took two months to reach that number, said Guillaume Bour, Head of Enterprise at Mistral AI, speaking at GOIC. 

The adoption of GenAI has happened at lightning speed, its applications seem endless, and the race to the next frontier has started. Making the technology freely available as open source will further accelerate its transformational impact. 

Financial services is one of the industries at the forefront of AI adoption, Bour noted, with a growing number of explorations over the past 18 months and a shift from employee-centric, non-core business applications (mostly off-the-shelf solutions) towards business use cases that deliver much more significant gains in competitive edge. The majority of use cases he has seen are tools that improve administrative, behind-the-scenes tasks, such as credit decisioning, know-your-customer (KYC) checks, ESG and risk management. 

Client-facing applications are the next level of competition as companies increase the AI capabilities of their contact-centre chatbots. Swedish e-commerce company Klarna, for example, claims to have a chatbot that can manage over 2 million customer conversations a month without negatively impacting its Net Promoter Score (NPS), which measures customer satisfaction. 

The provider’s view 

Being a good provider requires two key ingredients: expertise and capital. Bour estimates that there are currently a few thousand people in the world who master this technology. On the capital side, developing GenAI models is very capital-intensive because it requires running a large number of GPUs non-stop for days or even weeks. 

Big, powerful models are important for research, but not always financially viable when going into production and scaling. “Return on investment is not obvious for many use cases,” Bour noted. “In order to support the scale of large enterprises and institutions around the world with the most efficient cost-to-performance ratio, you will need big, very powerful models, which are more expensive, but also mid-sized models that you will be able to fine-tune and specialise to specific tasks.” 

Bigger models have more reasoning capacity and can perform more advanced tasks, but they are also slower. The industry needs to find a balance in the trade-off between speed and reasoning capacity. 

Taking the example of legacy code refactoring, Bour explained: “If you want to translate millions of lines of COBOL to Java, which is a challenge that many financial institutions face, it needs a lot of accuracy and performance for a model. This can take days or even weeks, which doesn’t really matter. On the other hand, if you are building a chatbot for your customers, then latency is extremely important. So you would rather choose smaller models that do have a very good level of performance as well.” 

Another differentiating factor for Mistral is the company’s flexibility to deploy its models within its clients’ data centres instead of putting the data in the cloud, a definite selling point for financial institutions. “We believe that our customers should have the choice of bringing the model to the data and not the other way around,” Bour noted. 

Multilinguality is another key feature: the ability of a model to understand and communicate natively in a specific language instead of relying on translation. This ensures the model grasps cultural sensitivities and nuances. “It’s not only about vocabulary or grammar, but also about the cultural aspects, and distilling the specificities of a language and a culture,” said Bour. 

The users’ view 

The French financial markets regulator, the Autorité des Marchés Financiers (AMF), considers itself an “AI consumer,” said Iris Lucas, AMF Head of Data Intelligence. “Indeed, at the AMF we are asking ourselves the question of use for our own purposes and several projects are underway, for example investors’ protection, promotion of sustainable finance, market abuse detection, and operational efficiency.” 

The AMF deployed its first AI tool in 2019, a scam-detection system called FISH (Financial Investment Scam Hunter). In 2021, it launched a big transversal data programme called ICData, with the ambition of extending the use of data to make the AMF more data-driven.  

On sustainable finance, the AMF is using AI to extract the sustainability objectives in fund management policies. AI is also a valuable tool for clustering in market abuse detection, while supervised learning techniques are being explored to identify specific types of market manipulation. On operational efficiency, the AMF is exploring the use of AI for the automatic generation of meeting minutes, support for experts, and increased supervision assistance, reducing low-value manual tasks. “Our point of view is that LLM will not replace our coworkers or experts, but it can extend their capabilities,” Lucas noted. 

Looking at the use of AI at financial institutions, Léa Deleris, Head of Risk Artificial Intelligence Research at BNP Paribas, explained: “My team develops AI use cases to help the efficiency and efficacy of risk management. We also work on the strategy of the function and contribute at the Group level to all the community around AI, especially around responsible AI, that is ethical, secure, robust, bias-free, mindful of the carbon footprint, and explainable.”  

She added: “We have over 50 use cases in production that address risk and compliance use cases, mainly in identification and detection.” 

Deleris sees most of the impact of GenAI on documentation and controls. Large global institutions like BNP Paribas have controls in very diverse geographies and jurisdictions. “At the level of BNP Paribas Group, we want to make sure that we cater to the local specificities while ensuring we keep a global coherence. Having a tool that can read, summarise and find challenges will help us be even more efficient,” she noted. 


At the level of BNP Paribas Group, we want to make sure that we cater to the local specificities while ensuring we keep a global coherence. Having a tool that can read, summarise and find challenges will help us be even more efficient.

Léa Deleris
Head of Risk Artificial Intelligence Research, BNP Paribas

Other areas where AI can provide significant efficiency gains are modelling, stress testing, speeding up documentation, and alignment with procedures and regulations. 

Responsible AI 

“Beyond the benefits and challenges in implementing AI, an essential topic for all of us is how to do it in a responsible way,” said Hugues Even, BNP Paribas Chief Data Officer, who moderated the discussion. 

This is the purpose of the EU AI Act which is “aimed at creating a unified legal framework for AI systems and applications across the European Union (EU),” explained Deleris, adding: “The intent of the AI Act is to ensure that AI systems are developed and deployed in a safe, ethical, and trustworthy manner.” 

The AI Act categorizes AI systems based on their level of risk and introduces specific obligations and requirements for each category. High-risk AI systems will be subject to strict compliance obligations, including conformity assessments, data governance, and transparency requirements. 

Deleris cautioned that the AI Act focuses only on protecting against a category of risks affecting the fundamental rights of citizens. Institutions face additional risks, such as cyberattacks, as well as operational or reputational losses from the improper use of AI. “Model risk management is not new in financial institutions and is exactly about ensuring that a model (AI or not) is fit for purpose and used for the correct purpose. So we already have frameworks and mostly need to nurture a culture of managing those risks,” she added. 

“While the AMF does not currently supervise AI models developed by market participants under the AI Act, this may evolve in the future as regulations progress. In the meantime, we are actively engaging with market players to understand how AI is being used and whether it impacts their compliance processes or risk management,” noted Lucas. She added that “The key challenge remains to strike a balance between leveraging the opportunities AI offers and managing its risks. This is central to the AMF’s ambition: to support innovation while ensuring the proper functioning of the markets.”  

Grinbaum at CEA-Saclay agreed: “The AI Act is not just more bureaucratic procedures and more mandatory certificates. There are also very interesting provisions – even a new ethical principle – and important calls for more research.” 

“The rapidly evolving technology of Generative AI offers opportunities for early adopters in their markets, with use cases already being implemented in areas such as modelling, detection, investigation and reporting. This also calls for controls and ethics considerations, and for potential regulation, including the EU’s landmark AI Act,” concluded Even.  
