Skymind’s Three Ways to Improve the Ethical Outcomes of AI-Driven Banking Solutions

Artificial intelligence is currently one of the most popular fintech sectors. With a wealth of potential that banks and financial institutions are now tapping into, AI solutions are coming to dominate the technology conversation.

Shawn Tan is CEO of Skymind, an open-source enterprise deep-learning software firm and an AI ecosystem builder that enables companies and organisations to launch AI applications for their businesses. Here he shares his three ways to improve the ethical outcomes of AI-driven banking solutions.

For better or worse, artificial intelligence (AI) is driving the largest technology revolution the world has ever witnessed. AI solutions are already in use at most major banks, with 75% of banks holding over $100 billion in assets reporting that they have implemented AI strategies. These large banks are the industry leaders, applying AI across three key areas: front office (conversational banking), middle office (anti-fraud) and back office (underwriting).

For example, when JPMorgan Chase needed to review legal documents and extract crucial data points and clauses, manual review of 12,000 annual commercial credit agreements would typically absorb around 360,000 hours. Through their Contract Intelligence (COiN) “chatbot”, that mountainous stack of agreements could be analysed in mere seconds.
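To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of clause extraction such a tool automates. It relies on simple pattern matching rather than the trained language models a system like COiN would actually use, and every pattern, field name and sample sentence below is invented purely for illustration.

```python
import re

# Hypothetical patterns for two clause types a reviewer might pull out of a
# commercial credit agreement. Real systems use trained NLP models; this is
# plain pattern matching, purely for illustration.
CLAUSE_PATTERNS = {
    "governing_law": re.compile(r"governed by the laws of ([A-Z][\w\s]+)", re.IGNORECASE),
    "interest_rate": re.compile(r"interest rate of ([\d.]+)\s*(?:per cent|%)", re.IGNORECASE),
}

def extract_clauses(agreement_text: str) -> dict:
    """Return the first match found for each clause type in the text."""
    extracted = {}
    for clause_name, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(agreement_text)
        if match:
            extracted[clause_name] = match.group(1).strip()
    return extracted

sample = (
    "This Agreement shall be governed by the laws of the State of New York. "
    "The Borrower shall pay an interest rate of 4.25 per cent per annum."
)
print(extract_clauses(sample))
# {'governing_law': 'the State of New York', 'interest_rate': '4.25'}
```

Scaled across thousands of agreements, even a crude automated pass like this shows why review time collapses from hundreds of thousands of hours to something closer to real time.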

And this is just the beginning. A bank can anticipate possible savings of between 20 and 25 percent across IT operations (namely infrastructure, maintenance and development costs) from AI applications, with aggregate cost savings for banks forecast at around $447 billion by 2023.

Still, many top banks claim AI will be an “augmenting” force for employees rather than a “replacement”, arguing that AI solutions can deliver a better level of customer service in ways both visible and invisible to the consumer. What the consumer might not see are the middle-office solutions that help banks monitor fraudulent transactions in real time and save customers tens of millions a year.
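As a purely illustrative example of what such real-time monitoring can look like underneath, the short Python sketch below flags transactions whose amounts sit far outside an account’s historical pattern, using a crude z-score test. It is not any bank’s actual system; the threshold, the account history and the amounts are all made up.

```python
from statistics import mean, stdev

def flag_suspicious(history: list[float], new_amount: float, threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount is more than `threshold` standard
    deviations from the account's historical mean (a crude z-score test)."""
    if len(history) < 2:
        return False  # not enough history to estimate spread
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > threshold

# Hypothetical usage: an account that usually makes small purchases
history = [12.50, 40.00, 22.75, 18.20, 35.10, 27.90]
print(flag_suspicious(history, 29.99))    # False - within the normal range
print(flag_suspicious(history, 4800.00))  # True  - flagged for review
```

Production systems layer far richer features and models on top of this idea, but the principle is the same: score each transaction against learned behaviour and surface the outliers for review.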

AI will undoubtedly be transformative, but we also need to ensure that AI-driven banking solutions are ethical and non-discriminatory. According to one source, “There are genuine concerns, however, that allowing machines to make important decisions about our lives using data that may only tell part of the story is a risky business. Could the use of AI in finance have damaging outcomes for customers and citizens?”

Banks have long had the research skills and research budgets to put them in a strong position to lead on setting standards in AI development. There are three clear ways to improve outcomes: ethics should be part of the design process rather than an afterthought, data needs to be assessed for accuracy, and development processes need to be accompanied by thorough documentation.

A final crucial aspect is making sure that the teams designing AI solutions not only have the right skills but also come from diverse backgrounds. If they do not, the effects can linger. When YouTube launched its video upload app for iOS, between 5 and 10% of the videos users uploaded appeared upside-down: the app’s design team were all right-handed and had not considered how left-handed people hold a phone.

Another instance was when Google Photos’ AI began labelling African Americans as gorillas. The incident is still raised at shareholder meetings in calls for Google to increase diversity. The tech giant has pledged to let others learn from its rocky journey into AI: the company intends to launch new AI ethics services later this year, initially offering advice on creating the ethical guidelines behind AI projects and on reducing racial bias in computer vision systems.

After all, data bias that leads to discrimination against individuals or groups is one of the biggest risks for banks using AI solutions. While some argue that there is an amazing opportunity to change how credit and risk are allocated – with the promise of fairer and more equitable systems – it is clear that AI can also produce the opposite effect, amplifying bias and perpetuating discriminatory cycles in credit allocation.
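One way a bank’s data science team might start checking for this kind of bias is with a simple fairness metric such as the disparate impact ratio, sketched below in Python with pandas. The data, group labels and the 0.8 rule-of-thumb threshold are invented for illustration, and a real review would consider several metrics alongside the underlying data.

```python
import pandas as pd

# Hypothetical credit decisions tagged with a protected attribute; the column
# names and figures are invented purely for illustration.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   1,   0,   0,   1,   0 ],
})

# Approval rate per group
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: the lowest approval rate divided by the highest.
# A common rough rule of thumb treats ratios below 0.8 as a warning sign
# that the model, or the data behind it, deserves closer scrutiny.
ratio = rates.min() / rates.max()
print(rates.to_dict())                         # {'A': 0.8, 'B': 0.4}
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 -> investigate
```

A check like this does not prove discrimination on its own, but it flags where the design team needs to look harder before a model reaches customers.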

Diversity in AI teams means developers from a range of ages, socioeconomic backgrounds and genders. And with the pace of industry change outstripping the cycles of traditional education institutions, talent sources like coding boot camps are becoming more popular. Makers, for example, is a coding training provider with a strong commitment to diversity in tech: 35% of its graduates are women, and its selection process filters for coding aptitude rather than previous educational background.

Whether AI helps to challenge the existing status quo or simply entrenches it remains to be seen. What we do know is that banking leaders still have the chance to design the process, before AI starts to design them.

Author

  • Polly is a journalist, content creator and general opinion holder from North Wales. She has written for a number of publications, usually hovering around the topics of fintech, tech, lifestyle and body positivity.
