As Fintechs mature, the reliability and transparency of their algorithms will become increasingly important. Fintechs lack the elaborate risk management frameworks of their traditional competitors, but can use technology to bridge the gap.
Shameek Kundu is Head of Financial Services at TruEra, a company dedicated to making AI trustworthy. With an MBA in finance, years of experience in financial services and IT, and seats on Responsible AI public/ private groups at the Bank of England/ FCA, the OECD Global Partnership on AI and the Monetary Authority of Singapore, Kundu has a wealth of knowledge on artificial intelligence and machine learning. The field has developed at an incredible rate over the last decade, but certain requirements must be met for AI and machine learning to be used successfully and responsibly.
Effective use of data and algorithms has been central to the Fintech success story. For example, Fintechs’ ability to offer entirely digital customer experiences – from rapid onboarding to near-instant loan approvals – is underpinned by image and text recognition algorithms and credit models that use non-traditional data sources. Their superiority over traditional incumbents in harnessing the potential of data has allowed them to anticipate customer needs more effectively, introduce product innovations in an agile manner, and reimagine traditional processes for greater efficiency.
However, even as the industry has been doubling down on data and algorithms, broader societal anxiety around the impact of Artificial Intelligence (AI) has also been growing. High-profile examples, such as the apparent ‘sexism’ of a new credit card and the use of facial emotion recognition to detect fraudulent insurance claims, have contributed to this concern.
Some of these may be instances of early over-reaction to poorly understood technology. However, there are very real risks underneath. AI algorithms can be more difficult to understand than their traditional rule-based or statistical counterparts, making it difficult to justify decisions internally, or explain their results to customers and regulators. Because such algorithms learn from patterns in historical data, the quality and representativeness of the input data can cause them to introduce or accentuate unfair biases, and to become unreliable over time. Finally, without human oversight, such systems can heighten risks related to market stability and competition.
There are three good reasons for Fintechs to start caring about the quality of their AI/ Machine Learning (ML) models. First, compared to traditional incumbents, many Fintechs are far more dependent on automated algorithms for core decision-making and operations. Their business models are often predicated on the use of more data, including from non-traditional sources, and AI/ML models play a key role in leveraging that data meaningfully. As a result, poor quality AI/ML models can impact Fintech bottom lines in a much more direct manner – for example, through credit and fraud losses, or poor pricing decisions.
Second, while Fintechs’ customers are likely to be more comfortable with technology, they are also more aware of the risks around data and algorithms than the rest of the population, and expect greater transparency in this regard. Getting it wrong can cause rapid loss of customer trust.
Third, regulators in the UK, the USA, the European Union, Canada, Singapore, Hong Kong and many other jurisdictions have introduced, or are in extensive consultations around, guidelines on AI/ML use. Importantly, where regulatory guidelines have been published, they apply not just to incumbents but also to Fintechs – for example, the requirements around creditworthiness assessment in the European Commission’s draft AI law. Transparency, fairness, robustness and stability of such models are among regulators’ key concerns.
In responding to these requirements, most Fintechs are starting with a handicap compared to their ‘legacy’ competitors. They lack the internal risk and compliance DNA of incumbent Financial Institutions, such as the well-established frameworks around Model Risk and Data Management. While this might make them faster and more innovative, it also limits their ability to respond effectively to customer and regulatory concerns around algorithms.
How should they respond? Perhaps by focusing on what they know best: using technology to change the way in which risks are managed. By implementing systems to understand, analyse and monitor their ML models, Fintechs can automate a significant subset of the model risk and data management requirements around ML models, and enable their teams to build and run high quality models. For example, they can use such systems to:
- Create transparency around the key drivers of the model’s predictions/ decisions (“Why did X get a loan, while Y was rejected?”)
- Assess any potential biases in model predictions, and the root causes (“Do women get better car insurance rates? If so, is that justified?”)
- Monitor model and data stability over time, trigger alerts when they breach pre-defined thresholds and identify the root causes of such instability (“Is our model flagging X% fewer alerts for fraudulent transactions this month? If so, what is driving that change?”)
- Identify parts of the population for which the model may be unreliable (“Are the model’s predictions for over-60 white collar workers based on too few data points?”)
- Identify potential changes in data quality which might impact the predictive accuracy of the model (“Has a recent acquisition reduced the amount of information we can access on individual payments? Does this make our anti-money laundering model less effective?”)
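To make the monitoring idea concrete, here is a minimal sketch of how drift in a model’s output distribution could be tracked and alerted on, using the Population Stability Index (PSI) – one common drift metric. The function names, the ten-bin setup and the 0.2 alert threshold are illustrative assumptions, not a description of any particular vendor’s implementation.

```python
import numpy as np

def psi(baseline, current, n_bins=10):
    """Population Stability Index between two score distributions."""
    # Bin edges are derived from the baseline (e.g. training-time) scores,
    # so the same bins are reused for every later comparison.
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # A small floor avoids log(0) when a bin is empty in one period.
    eps = 1e-6
    base_pct = np.clip(base_pct, eps, None)
    curr_pct = np.clip(curr_pct, eps, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

def check_drift(baseline, current, threshold=0.2):
    """Trigger an alert when PSI breaches a pre-defined threshold."""
    value = psi(baseline, current)
    return {"psi": value, "alert": value > threshold}

# Illustrative use: scores from a stable month vs. a shifted month.
rng = np.random.default_rng(0)
stable = check_drift(rng.normal(0, 1, 10_000), rng.normal(0, 1, 10_000))
shifted = check_drift(rng.normal(0, 1, 10_000), rng.normal(0.8, 1, 10_000))
```

In practice a system like this would run on each scoring batch, with the alert feeding a human review queue – the threshold and the choice of metric are policy decisions, not technical ones.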
Human experts will continue to have a critical role – in reviewing and acting upon the answers to these questions. However, automating much of the diagnostics and monitoring of AI/ML models can allow Fintechs to embed some of the risk management discipline of their traditional competitors, without sacrificing agility or building costly overheads.