
How to De-Bias Artificial Intelligence in Banking

Artificial Intelligence is used more and more in the financial services industry, yet it can become biased. “Bias” occurs when AI systems produce results that are prejudiced due to unintended, erroneous assumptions in the machine learning process.

Michelle Palomera, Global Head of Banking and Capital Markets at Rightpoint, has experience with this. With over 25 years of experience in customer and digital consulting, Michelle combines practical industry and technology knowledge with a personalised style in working directly with clients and team members. Her extensive knowledge of financial services, which spans consumer, buy-side/wealth, commercial and institutional banking, helps clients develop strategies for new revenue channels as well as launch new businesses through digital products and services.

Here she explains how to de-bias AI in banking.

Michelle Palomera, Global Head of Banking and Capital Markets, Rightpoint

When bias becomes embedded in AI software, financial institutions may unfairly reward certain groups over others, make bad decisions, issue false positives and diminish their market opportunity. This will ultimately result in poor customer experience, decreased revenues and increased costs and risks. To recognise, correct and prevent potential bias, banks must hire the proper talent and adopt proactive product and platform innovation techniques.

Machines, like people, can be biased. In AI systems, bias occurs when the systems produce results that are prejudiced due to unintended, erroneous assumptions in the machine learning (ML) process. For financial institutions, prejudiced AI algorithms could reward certain groups over others, resulting in biased lending and credit decisioning processes that could limit the market over time and even change the landscape of the economy.

Bias is a particular problem for AI models in emerging markets, where data is too often skewed to “unbanked” households. This skew contributes to broader issues of social inclusion and equality.

The more AI is embedded into a bank’s core processes and systems, the greater the risk it poses. In addition to mitigating potential damage, removing bias can improve the customer experience, increase market opportunity and provide business insights. It also allows for a more inclusive and diverse perspective and improves bank decision-making processes.

De-biasing AI is good for business and good for the world. As such, both government institutions and various industry organisations are looking into the issue. Last year, the House Financial Services Committee (FSC) formed a Task Force on Artificial Intelligence. In February, the FSC met with industry experts and regulators and agreed to support a consumer-focused model governance framework. Under the framework, AI systems will be audited for bias, with safeguards and buffers built in to guarantee fairness in algorithmic models.

To keep up with evolving regulations and best practices, and create their own “anti-bias in AI” measures, banks must invest in people with the right expertise and apply a multi-disciplinary approach across the research, analysis, product design and platform development process when leveraging AI/ML. They need teams and partners who are familiar with cognitive biases commonly found in user experience, as well as the contributing factors that cause AI bias in the first place. These include processes that may yield missing data, pose the potential for unexpected behaviour, or involve parameters that impact probability.
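
To make those contributing factors concrete, a simple pre-modelling audit can surface missing data and skewed outcomes before a model is ever trained. The sketch below is a minimal, illustrative example in Python with pandas; the loan-application dataset, the “approved” outcome column and the “applicant_segment” grouping attribute are hypothetical placeholders, not any real bank’s schema.

    import pandas as pd

    def audit_training_data(df: pd.DataFrame, outcome: str, group: str) -> None:
        """Flag two common sources of AI bias before training:
        missing data and skewed outcome rates across groups."""
        # 1. Missing data: columns with large gaps can silently bias a model.
        missing = df.isna().mean().sort_values(ascending=False)
        print("Share of missing values per column:")
        print(missing[missing > 0])

        # 2. Outcome skew: compare outcome rates across groups.
        rates = df.groupby(group)[outcome].mean()
        print(f"\n{outcome} rate per {group}:")
        print(rates)

        # 3. Disparate impact ratio (the "four-fifths" rule of thumb):
        #    a ratio below 0.8 is a widely used red flag worth investigating.
        ratio = rates.min() / rates.max()
        flag = " - below 0.8, investigate" if ratio < 0.8 else ""
        print(f"\nDisparate impact ratio: {ratio:.2f}{flag}")

    # Hypothetical usage with a loan-application extract:
    # df = pd.read_csv("loan_applications.csv")
    # audit_training_data(df, outcome="approved", group="applicant_segment")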

Universities are placing greater emphasis on ethical coding, which should help banks recruit the talent they need. For example, Harvard’s Embedded EthiCS program brings together philosophers and computer scientists to teach college students how to make more thoughtful and ethical decisions when building technologies.

Financial institutions should also adopt a qualitative research-driven yet data-informed approach, with appropriate checks and balances to ensure there is always a “human in the loop.” This approach should include validation techniques for inputs to any AI capabilities and an ongoing mechanism to avoid bias creeping into data models over time.
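
One widely used “ongoing mechanism” of this kind is drift monitoring: periodically comparing the distribution of live model inputs against the data the model was validated on, and routing material shifts to a human for review. The sketch below uses the population stability index (PSI), a metric common in credit-risk model monitoring; the income feature, the sample data and the 0.2 threshold are illustrative assumptions, not a mandated standard.

    import numpy as np

    def population_stability_index(expected: np.ndarray,
                                   actual: np.ndarray,
                                   bins: int = 10) -> float:
        """Measure how far a live feature distribution has drifted from
        its validation-time baseline. PSI > 0.2 is a common heuristic
        signal of significant drift."""
        # Bin edges are fixed from the baseline distribution.
        edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf  # capture outliers

        expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
        actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)

        # Clip empty bins to avoid division by zero in the log term.
        eps = 1e-6
        expected_pct = np.clip(expected_pct, eps, None)
        actual_pct = np.clip(actual_pct, eps, None)

        return float(np.sum((actual_pct - expected_pct)
                            * np.log(actual_pct / expected_pct)))

    # Hypothetical check before scoring a new batch of applicants:
    baseline_income = np.random.lognormal(10.5, 0.6, 50_000)  # validation data
    live_income = np.random.lognormal(10.8, 0.7, 5_000)       # this month's inputs
    psi = population_stability_index(baseline_income, live_income)
    if psi > 0.2:
        print(f"PSI = {psi:.3f}: input drift detected, route to human review")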

When conducting the user research and testing required to build an AI tool, banks should assemble diverse user groups and stay open and neutral as they collect information. Rather than asking yes/no-type questions, they should ask descriptive ones and gather as much data as possible to feed their models and keep bias at bay.

Once a platform launches, financial institutions can get feedback from the system itself, using metrics. AI/ML can be useful tools for automating the process of adjusting to that feedback and even orchestrating customer journeys, but banks must be aware of common limitations. Consider Facebook, which assumed users did not want to hear from friends beyond those they interact with regularly on the platform. Its learning engine made users’ worlds smaller, rather than keeping them diverse with posts from their entire friend lists.
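
In practice, that post-launch feedback can be as simple as a recurring fairness snapshot computed from the decision logs the platform already produces. The sketch below is one hypothetical way to do this in Python with pandas; it assumes a log that records a group segment, the model’s decision and an eventual outcome label, which is an assumption rather than a given in real lending data.

    import pandas as pd

    def fairness_snapshot(log: pd.DataFrame, group: str,
                          decision: str, outcome: str) -> pd.DataFrame:
        """Summarise per-group decision and error rates from production
        logs, so any drift towards bias is visible from review to review."""
        def per_group(g: pd.DataFrame) -> pd.Series:
            # Applicants declined despite a good eventual outcome.
            wrongful = g[(g[decision] == 0) & (g[outcome] == 1)]
            return pd.Series({
                "approval_rate": g[decision].mean(),
                "wrongful_decline_rate": len(wrongful) / max(len(g), 1),
                "volume": len(g),
            })
        return log.groupby(group).apply(per_group)

    # Hypothetical monthly review of a decisioning log:
    # log = pd.read_parquet("decisions_2024_05.parquet")
    # print(fairness_snapshot(log, group="segment",
    #                         decision="approved", outcome="repaid"))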

To properly build, audit and safeguard AI systems, organisations need the right people and processes. They also need the right mindset. Investing in ethical AI might take more time upfront and make for slightly longer foundational sprints as you innovate, design and build new platforms. But it is worth it to avoid the far-reaching consequences of bias, from negative business implications to unjust societal ramifications.

Author

  • Polly is a journalist, content creator and general opinion holder from North Wales. She has written for a number of publications, usually hovering around the topics of fintech, tech, lifestyle and body positivity.
