As AI’s power and reach grow, algorithms increasingly impact decisions affecting people’s lives. For example, in the financial services space, they’re now commonly used in risk scoring and fraud detection. AI can speed up processes and generate real insights that humans would have a tough time extracting. But as AI pervades our lives more and more, the potential for bias to creep into algorithms also grows.
In this article from Rise, created by Barclays, we expand on expert insights from Ana Perales, AI Horizontal and Conception X Lead, Barclays Ventures, in the latest edition of Rise FinTech Insights, our flagship thought leadership publication.
We consider why startups in that space, especially those that handle data and use machine learning in their products, should be concerned by bias and how they might deal with it. Bias is a big area, and ways to address it range from the technical (the software and logic used) to the cultural (the diversity within teams). Perhaps the single most important thing is to consider bias just as seriously as you would consider testing your software – you wouldn’t ship software that hadn’t been thoroughly tested, would you?
The latest edition of Rise FinTech Insights focuses on diversity in FinTech. Download that and all other editions.
AI in FinTech
Consumer data, spending data and other data that describes people’s behaviour are rich sources of information used to develop financial services for consumers and businesses. All of that data is valuable – or to be more precise the insights derived from it are. In addition, advances in computer imaging and natural language processing allow us to augment the data available to develop exciting new propositions. The disruptive nature of AI means the potential gains are huge.
This makes data a core component of many FinTechs’ business models, and AI is increasingly playing a role in their innovation roadmaps. In this context, the FinTech community may want to become more ‘bias-aware’ because:
- Machine learning algorithms are becoming more powerful at categorising and personalising information, and identifying anomalies
- Data availability from multiple sources is surging, enabled by Big Data, APIs and the Internet of Things
FinTechs are already exploring better ways to use data and AI, so that they can build relevant, fair and trustworthy insights, recommendations and controls into their products and services. Understanding the sources of bias and their associated risks is the first step.
Bias can creep in at any point. The data set used to train an algorithm or the wider data itself could be biased. The data scientists or developers may be biased, or historical or societal bias might be present. For example, according to a Haas School of Business review of US mortgages, ‘both online and face-to-face lenders charge higher interest rates to African American and Latino borrowers, earning 11 to 17 per cent higher profits on such loans’.
Technology can help. Open Source toolkits are available, including:
- Aequitas measures discrimination in models, and allows users to make informed decisions about how best to develop and deploy predictive tools.
- IBM AI Fairness 360 focuses not just on the measurement of bias (like Aequitas) but also on its mitigation. It’s been applied to scenarios such as credit scoring, predicting medical expenditures and classifying facial images by gender.
- Google What-If lets you test model performance in hypothetical situations, examine how features of the data affect the model and visualise this behaviour.
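As an illustration of the kind of measurement these toolkits automate, here’s a minimal sketch in plain Python (deliberately not using any of the toolkits above; the loan-approval data and the 0.8 threshold rule of thumb are illustrative assumptions) of the ‘disparate impact’ ratio, a common fairness metric:

```python
# Minimal sketch of the "disparate impact" fairness metric:
# the rate of favourable outcomes for an unprivileged group
# divided by the rate for a privileged group. A common rule of
# thumb flags ratios below 0.8 as potential adverse impact.

def favourable_rate(outcomes):
    """Fraction of decisions that were favourable (1 = approved)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favourable-outcome rates between two groups."""
    return favourable_rate(unprivileged) / favourable_rate(privileged)

# Hypothetical loan-approval outcomes for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # privileged group: 5/8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 0]  # unprivileged group: 2/8 approved

ratio = disparate_impact(group_b, group_a)
print(f"Disparate impact: {ratio:.2f}")  # 0.40 — well below the 0.8 threshold
```

A real audit would compute several such metrics (statistical parity difference, equal opportunity difference and so on) across every protected attribute, which is exactly what Aequitas and AI Fairness 360 are designed to do.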
However, technology is only one part of the equation, and has its limitations. The composition of your team plays a vital part too.
Diverse teams with a range of backgrounds and experiences are better placed than homogeneous teams to assess the impact of biased algorithms, and to recognise patterns and risks. For example, how would an existing risk-scoring algorithm trained on data from approved customers with regular income perform on a group of gig-economy workers with very different income patterns? Probably not very well.
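That gig-economy scenario can be checked directly: before relying on a model for a new population, compare its performance segment by segment. A hedged sketch (the model outputs, data and segment labels are invented for illustration):

```python
# Sketch: compare a risk model's accuracy across customer segments
# before trusting it on a new population. The predictions, actuals
# and segment labels below are invented for illustration.

def accuracy(predictions, actuals):
    """Fraction of predictions that match the actual outcome."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(actuals)

def accuracy_by_segment(records):
    """records: list of (segment, predicted_risk, actual_risk) tuples."""
    segments = {}
    for segment, pred, actual in records:
        segments.setdefault(segment, []).append((pred, actual))
    return {seg: accuracy(*zip(*pairs)) for seg, pairs in segments.items()}

records = [
    ("salaried", "low", "low"), ("salaried", "low", "low"),
    ("salaried", "high", "high"), ("salaried", "low", "low"),
    ("gig", "low", "high"), ("gig", "high", "high"),
    ("gig", "low", "high"), ("gig", "low", "low"),
]

for segment, acc in accuracy_by_segment(records).items():
    print(f"{segment}: {acc:.0%} accurate")
# salaried scores 100%, gig only 50% — a sign the training data
# may not represent gig-economy income patterns
```

A large accuracy gap between segments is a prompt to retrain on more representative data, not to ship the model as-is.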
Startups that are playing a part
The startup community is exploring bias in AI and trying to overcome the gaps and challenges in the existing tools, to improve their outcomes and make them accessible not just to data scientists but also to decision-makers.
A number of commercial tools measure bias and the trustworthiness of algorithms, including these from the Rise community:
- Smarter Human is a diagnostic and monitoring solution that helps companies understand and monitor the risks, biases and unintended consequences arising from their use of AI.
- Alibi, from Seldon, is an open-source Python library for ML model inspection and interpretation, and focuses on auditability and explainability. In 2020, Alibi earned Seldon a CogX Best Innovation in Explainable AI award.
- Holistic.ai is a RegTech startup focusing on governance and internal auditing of algorithmic decision-making systems, in accordance with regulatory and ethical standards. This work is of particular interest to financial services because it’s such a highly regulated industry.
Holistic.ai is a Conception X startup. Conception X (of which Barclays Ventures is a foundation partner) is a not-for-profit company set up to turn deep tech PhD students into entrepreneurs and their theses into value propositions.
The role of regulation
The importance of bias in AI to society as a whole is enshrined in global regulation. The message for FinTechs is that it’s very important to tackle bias in order to protect citizens and instil their trust in the growing number of systems (and institutions) that use data and AI. In the UK, the Information Commissioner’s Office (ICO) states that “the UK anti-discrimination legislative framework, notably through the UK Equality Act 2010, offers individuals protection from discrimination, whether generated by a human or automated decision-making system.” The General Data Protection Regulation (GDPR) complements this with provisions that include the right to non-discrimination.
Regulators are increasingly looking at the ethics of AI and the need for explainability, accountability, replicability and fairness.
FinTechs can track the evolving landscape in this area of UK regulation by reviewing the following:
- Explainable AI (XAI) for an understanding of some of the concepts
- Project ExplAIn, a collaboration between the ICO and The Alan Turing Institute, for developing practical guidance
- The Alan Turing Institute’s finance and economics programme, for research and innovation in the financial services sector
Are you ready?
“To blend the absolutism of AI and the reality of the world that we live in as humans, with governments and organisations and things we believe to be right and wrong, will be an on-going journey.” – Ben Narasin, Venture Partner at New Enterprise Associates
Ben Narasin, a 25-year entrepreneur and 10-year early-stage investor focused on emerging technologies and new markets, describes the problem well. It’s a meeting of two very different worlds that requires serious thought. Organisations are increasingly using AI to make decisions about individuals. If this is something your FinTech does or supports, or something you’re considering, it’s worth learning how to identify bias in your product or service and the unintended outcomes it can generate, because this is a big subject with huge potential to affect people’s lives. It might mean changing your test procedures, exploring technology options and perhaps making your team more diverse. It certainly means developing an ethical framework to mitigate the risk of unfair and unintended consequences.