With more experimentation, an evolving regulatory regime and positive outcomes boosting confidence in its use, AI adoption will continue at the rapid pace it has set in recent years.
A recent survey by Statista found that in 2022, 46% of financial businesses worldwide had adopted AI at scale, 25% had adopted it in a limited way and 14% were piloting its use; only 6% were not using AI at all. Notably, 8% of respondents regarded AI as critical to their business, a share expected to rise to 43% by 2025.
Another survey, undertaken by the web security company Indusface, finds that over a quarter of all UK finance and accounting businesses have adopted AI, while one by NVIDIA reveals that 35% of respondents report that AI applications create operational efficiencies, with 20% saying it has reduced their cost of ownership.
Over a third of respondents said AI had lowered their annual costs by more than 10%. These benefits accrue from retaining customers, winning new business and improving the accuracy of modelling, which in turn lowers risk exposures.
“AI has many different forms, and has existed for many years, in all walks of life,” says Andrew Flegg, chief technology officer of Alfa, a software company in the asset finance space using technology to improve its clients’ processes.
“It sits behind just about everything we digitally interact with and, given this, it is perhaps surprising that its widespread appeal and excitement have only peaked recently,” he adds.
AI is involved every time a phone is used, an e-mail is scanned or a beverage is purchased, and in financial services it has long underpinned credit referencing, market forecasting and, crucially, fraud detection. Yet it is only now, it seems, since the emergence of ChatGPT, developed by OpenAI, that everyone is talking about it.
A survey conducted by The Economist in 2022 noted how, among the many uses of AI, fraud detection was number one (57.6%), followed by optimising IT operations (53.7%), digital marketing (50.2%), risk assessment (48.3%) and personalising customer experience (43.9%).
AI is a very broad field, notes Flegg, encompassing machine learning, neural networks, computer vision, robotics and more. It has a huge number of applications and will continue to evolve rapidly.
He mentions that finance has been ahead of the curve, employing AI in these previously labour-intensive areas, as well as in broader applications such as document processing and customer service enhancements – for optical character recognition, call transcriptions, chatbots and the like.
This has been mainly handled by the big players given the scale, expertise and expense required, but the technology is becoming increasingly accessible, and cheaper to employ.
Deborah Reuben, CEO and founder of TomorrowZone, a tech strategy consultancy, holds discussions on AI capabilities in her role as chair of the Equipment Leasing and Finance Association (ELFA) Innovation Advisory Council.
The key use cases, she says, focus on a range of applications across the leasing life cycle. They include customer service, sales and marketing, credit underwriting, fraud prevention, predictive intelligence for collections, contract analysis, pricing, Know Your Customer (KYC) and ID verification, business intelligence and predictive analytics.
“Generally we see the potential for AI to drive productivity, improve decision-making, and enhance customer experiences,” she says.
Myriad solutions
Simon Harris, a consulting director with Finativ, keeps a finger on the pulse of financial services and says there is a wide variety of AI solution providers, from start-ups to in-house developments.
“Institutions are probably most brave with websites and customer service, where they might be tempted to try the ‘latest new thing’ from a provider that very obviously brings skills they do not have in-house,” he says.
“In back-office applications, they may engage an existing vendor to develop or buy an extension to existing tech solutions. This is probably the more conservative end of AI, namely machine learning (ML) and/or robotic process automation (RPA).”
Flegg concurs, stating that AI will “have a profound impact on asset finance, but is not a one-size-fits-all solution. The types of AI solutions vary.”
They can take the form of in-house machine learning, built by data scientists using AI algorithms. There are cloud companies, such as Amazon Web Services and Google Cloud Platform, providing AI services that include Amazon SageMaker, Amazon Bedrock and Vertex AI.
AI-as-a-Service companies, such as Jumio, provide small point solutions that can be easily integrated, and there are platform companies, such as Alfa or Salesforce, that provide AI and machine learning solutions integrated into their market-specific products.
Reuben notes that machine learning applications are not new, but that equipment finance service providers are beginning to offer AI-enabled capabilities as part of their offerings – currently mostly larger companies with the technical resources and budgets to do so.
At Alfa, Flegg says, they have been working on AI and machine learning solutions under the Alfa iQ brand for several years, with promising results that prove AI solutions are valuable, notably for credit scoring.
The right solution invariably depends on the underlying business requirements. “It is always important to ensure before embarking on any AI project that the data behind it is well understood, congruent in its format, and easily accessible via clean application programming interfaces,” he says.
“It’s incredibly easy to build in different biases and skews by improperly merging data across different domains,” he warns.
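The point is easy to see in a minimal sketch (in Python, with invented data): when two domains store the same customer identifier in different formats, a naive merge silently loses the repayment history a credit model would depend on.

```python
# Minimal sketch of Flegg's warning, using invented data: two domains store
# the same customer ID in different formats, so a naive merge silently loses
# the repayment history a credit model would depend on.
import pandas as pd

# Domain A zero-pads customer IDs; domain B does not.
applications = pd.DataFrame({"customer_id": ["001", "002", "003"],
                             "requested_amount": [5000, 12000, 7500]})
payment_history = pd.DataFrame({"customer_id": ["1", "2"],
                                "missed_payments": [0, 3]})

naive = applications.merge(payment_history, on="customer_id", how="left")
print(naive["missed_payments"].isna().sum())  # 3 -- every record lost its history

# A model trained on `naive` would see no repayment signal at all.
# Normalising the key first keeps the two domains congruent.
payment_history["customer_id"] = payment_history["customer_id"].str.zfill(3)
clean = applications.merge(payment_history, on="customer_id", how="left")
print(clean["missed_payments"].tolist())  # [0.0, 3.0, nan] -- only 003 truly unknown
```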
Alleviating concerns
Naturally, there are many concerns about the use of technology that has the potential to transform human activity. Chatbots, for example, do not have a favourable reputation.
Flegg says that it is up to the industry to ensure the tooling that is deployed serves customers well by improving their experience rather than acting as a barrier to successful resolution.
Large Language Models (LLMs), such as ChatGPT, may enable chatbots to better provide customer service, but human interaction is still important. “Financial institutions need to find the right balance between AI-powered automation and human-centric interactions to ensure a positive customer experience,” he says.
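What that balance might look like in code can be sketched as follows; `call_llm`, the confidence score and the escalation rules are illustrative assumptions rather than any vendor’s actual API.

```python
# Hypothetical sketch of an LLM-backed chatbot with a human hand-off.
# `call_llm`, the confidence score, the phrase list and the threshold are
# all illustrative assumptions, not a real vendor API.
from dataclasses import dataclass

@dataclass
class BotReply:
    text: str
    confidence: float  # assumed to come from the model or a side classifier

def call_llm(message: str) -> BotReply:
    # Placeholder for a real LLM call; returns canned output for illustration.
    return BotReply(text="Your next instalment is due on the 1st.", confidence=0.92)

ESCALATION_PHRASES = ("speak to a person", "human", "agent", "complaint")
CONFIDENCE_FLOOR = 0.75

def handle_message(message: str) -> str:
    # Respect an explicit request for a person before doing anything else.
    if any(phrase in message.lower() for phrase in ESCALATION_PHRASES):
        return "Connecting you to a colleague now."
    reply = call_llm(message)
    if reply.confidence < CONFIDENCE_FLOOR:
        # Low confidence: hand off rather than risk a wrong answer.
        return "I'd rather not guess -- let me pass you to a colleague."
    return reply.text

print(handle_message("When is my next payment due?"))
```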
AI algorithms are complex and opaque. It is difficult to understand their decision-making processes and this lack of transparency raises concerns about accountability, bias and fairness.
Flegg warns that AI systems can perpetuate existing biases in the data they are trained on, leading to discriminatory outcomes in areas such as credit scoring, insurance pricing and hiring decisions.
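One simple check for such outcomes, sketched below with invented data, is to compare approval rates across groups and flag large gaps; the 0.8 ‘four-fifths’ cut-off is a common rule of thumb, not a legal test.

```python
# Simplified check for discriminatory outcomes, with invented data: compare
# approval rates across groups. The 0.8 "four-fifths" cut-off is a common
# rule of thumb, not a legal test.
import pandas as pd

scored = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = scored.groupby("group")["approved"].mean()
ratio = rates.min() / rates.max()  # disparate-impact ratio
print(rates.round(2).to_dict(), round(ratio, 2))  # {'A': 0.67, 'B': 0.25} 0.38

if ratio < 0.8:
    print("Approval rates diverge across groups -- investigate the model and data.")
```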
Then of course there is the issue of fraud. AI can help to detect and prevent fraud, but it also creates opportunities for fraudsters to exploit vulnerabilities in AI systems and other infrastructure.
“Financial institutions need to stay vigilant and continuously adapt their AI-powered fraud detection and cybersecurity mechanisms to stay ahead. It is a game of cat and mouse,” says Flegg.
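As a rough illustration of what continuous adaptation can mean, the sketch below fits an anomaly detector to recent transaction features and flags outliers; the features, the choice of scikit-learn’s IsolationForest and the thresholds are assumptions for illustration, not a production design.

```python
# Sketch of an adaptable fraud screen: fit an anomaly detector on recent
# transactions and flag outliers. Features (amount, hour of day), model
# choice and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(80, 20, 500),   # typical amounts
                          rng.normal(14, 3, 500)])   # typical hours
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

suspects = np.array([[75.0, 13.0],    # ordinary purchase
                     [4900.0, 3.5]])  # large amount at an odd hour
print(model.predict(suspects))  # [ 1 -1]: -1 flags the anomaly

# "Continuously adapt" in practice: refit on a rolling window of recent,
# verified transactions so the baseline moves as behaviour shifts.
```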
Reuben adds that an appropriate and responsible approach to AI is important, as non-compliant use of these technologies is a major concern.
“Security, privacy, and fraud are major concerns we are discussing in leasing innovation forums, as fraudsters and bad actors are innovating right along with the rapid change, creating an ever-evolving threat landscape,” she says.
She adds that as leasing is typically a ‘relationship business’, the potential loss of the human touch is a particular concern for both lessor and lessee.
“If we get too narrow in our thinking, and the application of tech, we could lose the human touch. The key is thinking not in terms of ‘either-or’, but ‘both and’ when it comes to applying this technology to improve customer, partner and employee experience.”
Harris mentions the fear of mistakes and doubts about the ability to ‘undo errors’.
“The whole culture and history of financial institutions is steeped in the ability to make risk decisions, so such a culture is naturally sceptical about tech-based solutions, in some areas actively resistant,” he says.
He adds that this cultural risk-aversion can extend even to IT, for example in procurement: an established bank’s IT function may be uncomfortable engaging a fintech start-up to develop an AI solution, even with a strong business case.
Importantly, he states that the current generation of AI most likely to be deployed in financial services businesses (LLMs) can be based on, or biased towards, the data within the organisation. This means that AI could end up reflecting, even reinforcing, any pre-existing organisational biases, such as towards (or against) particular customer profiles or characteristics.
“This could become a challenge when a business is looking to grow new business or expand into new segments, where it lacks experience,” he says, adding that SME business clients in the UK already feel increasingly under-served by, and excluded from funding by, many established institutions.
On the downside, Harris warns that businesses that see AI and automation almost exclusively as a means of lowering costs by replacing people will eventually lose the skills and knowledge that built them up in the first place. Combined with a relatively narrow, internally or historically biased data set, that could lead to a spiral of decline and shrinkage.
Regulatory review
Many will be looking to the regulators for protection, though of course financiers are already incorporating their own frameworks of AI governance and risk management. With data also now well protected, Harris does not believe any new, specific AI regulation is required.
It should be noted that the current framework already covers automated decision-making affecting consumers: the EU’s General Data Protection Regulation (GDPR), which has been copied or used as a benchmark for guidelines in other countries – including the UK – addresses it in Article 22. However, regulators globally are looking into whether the present laws are adequate.
This was demonstrated recently at the UK government’s AI Safety Summit, a roundtable held at Bletchley Park at the beginning of November that looked into the risks of AI adoption and how regulation can remain proactive.
“It is a hot topic,” says Reuben, adding: “We don’t have a lot of answers yet, but there is a need for a framework, and there are discussions underway.”
According to Flegg, part of the problem is that users and stakeholders lack an understanding of how AI makes decisions. “When they do, they are more likely to trust and accept it,” he says.
Harris says the risk of AI comes from the speed of processing and learning, meaning that data triangulation might undermine the assumed anonymity of data. That risk alone might cause financial services businesses to be cautious in their application of AI.
“Certainly, in the conversations I have heard between financial services stakeholders, it never takes long for someone to ask: how did you test data security? It is probably not a bad thing that such caution exists, and it suggests that the existing GDPR serves as a deterrent without the need for something specific to AI.”
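The triangulation risk Harris describes is easy to demonstrate with a toy example: two datasets that are individually ‘anonymous’ can re-identify individuals once joined on quasi-identifiers. All records below are invented.

```python
# Toy illustration of triangulation, with invented records: two datasets
# that are individually "anonymous" re-identify people once joined on
# quasi-identifiers such as postcode area and birth year.
import pandas as pd

anonymised = pd.DataFrame({
    "postcode_area": ["SW1", "M1"],
    "birth_year":    [1980, 1975],
    "loan_balance":  [25000, 41000],
})
public_register = pd.DataFrame({
    "name":          ["A. Smith", "B. Jones"],
    "postcode_area": ["SW1", "M1"],
    "birth_year":    [1980, 1975],
})

# No names were ever stored with the balances, yet the join recovers them.
reidentified = anonymised.merge(public_register, on=["postcode_area", "birth_year"])
print(reidentified[["name", "loan_balance"]])
```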
Flegg believes that regulation will inevitably become more stringent, building on Article 22 of the GDPR, the proposed Recital 6 amendment to the EU AI Act and the proposed FTC regulation to enforce what he describes as “explain-ability” and “interpret-ability” requirements.
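What an explainability requirement might demand in practice can be sketched with an interpretable model whose per-feature contributions to a decision can be read off directly; the features, data and model choice below are illustrative assumptions only.

```python
# Sketch of one thing "explain-ability" can mean in practice: a model whose
# per-feature contribution to each decision can be read off directly.
# Features, data and model choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: [missed_payments, months_trading] -> repaid (1) or not (0)
X = np.array([[0, 24], [3, 60], [1, 36], [5, 12], [0, 48], [2, 18]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

# Contribution of each feature to one applicant's log-odds -- the kind of
# decision account a regulator could ask a lender to produce.
applicant = np.array([2.0, 30.0])
contributions = model.coef_[0] * applicant
for name, value in zip(["missed_payments", "months_trading"], contributions):
    print(f"{name}: {value:+.2f} log-odds")
```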
The advantages are clear, he believes: such regulation will inevitably have a positive impact on businesses obtaining finance. AI will demonstrate that it can make better credit decisions than humans or legacy scorecards which, he says, will lead to a greater number of creditworthy businesses being offered finance and a smaller number of defaults.
“This is a win for both the industry and the real economy that is supported by finance products.”
Harris believes that, looking ahead, the use of AI is likely to go hand-in-hand with financial product innovation, in terms of tailoring and personalising the customer proposition, everything from “rate-for-risk” pricing to “pay-per-use” models, where usage data can give a very granular view of a customer’s habits and behaviours.
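As a hedged sketch of the rate-for-risk idea, an offered rate can be decomposed into a funding cost, cover for expected loss (probability of default multiplied by loss given default) and a margin; every number below is invented.

```python
# Hedged sketch of "rate-for-risk" pricing: the offered rate is built from a
# funding cost, expected-loss cover (PD x LGD) and a margin. All numbers
# here are invented for illustration.
def risk_adjusted_rate(probability_of_default: float,
                       base_rate: float = 0.05,
                       loss_given_default: float = 0.6,
                       margin: float = 0.02) -> float:
    """Annual rate = funding cost + expected-loss cover + margin."""
    expected_loss = probability_of_default * loss_given_default
    return base_rate + expected_loss + margin

for pd_estimate in (0.01, 0.05, 0.10):
    print(f"PD {pd_estimate:.0%} -> rate {risk_adjusted_rate(pd_estimate):.2%}")
```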
AI is evolving so rapidly that it is proving a surprise even to Reuben, who has been monitoring its progress and experimenting with its applications since 2016.
She recently attended a cross-industry roundtable on AI adoption and future implications. “Certain aspects of the research that we were reviewing had already become obsolete by the time we were together discussing it. This happened in a matter of a few months,” she says. It is a topic you must stay ahead of to have any hope of not being left behind, she adds.
For Harris, there is a huge spread of opinions about AI across the industry, and thus the speed with which it is being adopted varies widely between different institutions.
“By its very nature, the financial services industry is conservative and risk-averse,” he says, going on to mention that there will be some already “digitally-led” companies, both established (e.g. HSBC-owned first direct) and challengers (Allica Bank, Starling Bank) that have always embraced technology developments and are among the early adopters.
However, many of the established banks and companies without much of a digital presence (for example, those reliant on introduced business, such as from brokers) are nervous about the risks associated with AI and may not have the in-house digital or IT resources or capability to make the most of it.
The NVIDIA survey mentioned earlier finds that recruiting data scientists with the necessary IT expertise is becoming a real problem.
Reuben thinks it will not be a question of ‘humans versus AI’, but of ‘humans plus AI’ and Harris agrees that it probably will not massively change employment in the very short term.
“Some of the first examples seen have not necessarily led to headcount reductions, but instead have allowed operations or coverage to expand – for example, customer online/out-of-hours self-service, whilst increasing the human service levels within conventional working times,” he says.
However, as the tech performance improves, he believes it is increasingly likely that it will take over human activities, rather than supplementing or complementing them.