AI: An Essential Tool for Financial Institutions, but Not a Magic Wand

AI has become increasingly popular in the financial world and is often regarded as an essential tool for financial institutions when utilised well. It can offer many opportunities to banks and other organisations but, as with any technology, it comes with its own set of risks.

Alix Melchy is the VP of AI at Jumio, an online mobile payments and identity verification company that provides card and ID solutions. Jumio uses AI, biometrics and machine learning to verify customer identities and help companies protect against fraud and financial crime in today’s digital economy. Here, Alix explains why AI is essential to financial institutions and how they can best utilise it.

Over the last few years, artificial intelligence (AI) has been increasingly positioned as an essential investment for banks and other financial services organisations. Particularly in the last six months, as bank branches have closed and consumers have come to rely more on digital banking services, the need to invest in AI to solve current challenges has accelerated. While AI and machine learning algorithms are often seen as a way of speeding up service delivery and offering a more personalised experience, their application has also come under scrutiny.

One highly publicised display of the risks associated with AI came in August. With UK students unable to sit their exams due to the Covid-19 pandemic, there was much discussion both in and out of government about how best to award grades fairly. With university places and further education plans hinging on these results, it was vital to get this right. But on results day, students from certain communities were disproportionately and negatively impacted, while other students saw their results inflated, all as a result of the algorithm implemented by Ofqual. This prompted an enormous outcry up and down the country, mostly aimed at the algorithm that underpinned the decision-making process.

While this was a hugely stressful time for students, we should pause to think about the problems that could result if a fiasco like this occurred in the financial services space. Imagine, for example, an algorithm deciding whether a consumer was eligible for a mortgage, how quickly a loan had to be repaid or whether someone qualified for a credit card.

AI undoubtedly offers huge opportunities for banks to enhance their services, but if they plan to utilise this technology, they need to learn from the Ofqual scenario.

Find the Objective

While it would be great if we could wave a magic AI wand over a situation and solve whatever the problem might be, it doesn’t quite work like that. AI is highly effective, but only when given a specific focus and a question it needs to answer. For this reason, before businesses start implementing AI, they need to know exactly what problem they are trying to solve. Companies need to stop overestimating the power of AI: it is powerful, but it does not dispense with the need to clearly articulate the problem statement and identify the acceptance criteria. By defining the question you want to answer from the start, you can return to this initial goal time and time again throughout the project to ensure the work still aligns with it.
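As a purely illustrative sketch (the question, metric names and thresholds below are hypothetical, not anything a specific vendor uses), the problem statement and its acceptance criteria can be pinned down in code so that every later iteration of a model is checked against the original goal:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProblemStatement:
    """Pin down the question the model must answer and the bar it must clear."""
    question: str
    acceptance_criteria: dict[str, float]  # metric name -> minimum acceptable value

    def is_met(self, measured: dict[str, float]) -> bool:
        # Every criterion must be satisfied; a single miss means back to tuning.
        return all(measured.get(metric, float("-inf")) >= bar
                   for metric, bar in self.acceptance_criteria.items())

# Hypothetical example: the goal is fixed before any model is built.
goal = ProblemStatement(
    question="Does this selfie match the photo on the submitted ID document?",
    acceptance_criteria={"accuracy": 0.98, "worst_group_accuracy": 0.96},
)
print(goal.is_met({"accuracy": 0.985, "worst_group_accuracy": 0.95}))  # False
```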

Data, Data, Data

Without strong and relevant data underpinning an AI model, it will never produce strong and relevant results. When it comes to designing a fair algorithm, you need enough data to truly represent the entire community the model is meant to serve. AI bias has been widely covered across industries as a major challenge for companies to combat, with Twitter recently coming under fire for racial bias in image previews, and that is just the tip of the iceberg. When setting up a new model, companies must work to eliminate that bias. While it is possible to buy datasets to speed up the process, it is essential that the data you buy meets the criteria you require rather than simply being a large dataset.
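A minimal sketch of one such check, assuming you know the reference proportions of the population the model is meant to serve (all group names and numbers below are invented for illustration):

```python
def representation_gaps(dataset_counts: dict[str, int],
                        population_share: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Flag groups whose share of the training data drifts from the population."""
    total = sum(dataset_counts.values())
    gaps = {}
    for group, share in population_share.items():
        observed = dataset_counts.get(group, 0) / total
        if abs(observed - share) > tolerance:
            gaps[group] = observed - share
    return gaps

# Hypothetical demographic slices of a purchased dataset vs. census-style shares.
counts = {"18-30": 6_000, "31-50": 3_000, "51+": 1_000}
population = {"18-30": 0.35, "31-50": 0.40, "51+": 0.25}
print(representation_gaps(counts, population))
# {'18-30': 0.25, '31-50': -0.1, '51+': -0.15} -> younger users over-represented
```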

You also cannot rely completely on the AI. A human eye is still needed to understand how it is working and to keep improving it through constant monitoring, training and tuning. Companies must be careful not to set up an AI model and assume the problem is solved. It will require ongoing attention so that it becomes increasingly accurate and continues to answer the question posed, even as the real-world scenario changes over time.
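One lightweight way to keep that human eye on a deployed model is to track accuracy over a rolling window of verified outcomes and flag it for review when performance slips, so retraining is triggered by evidence rather than by incident. A sketch, with invented window sizes and thresholds:

```python
from collections import deque

class AccuracyMonitor:
    """Rolling check that a live model still answers the question it was built for."""
    def __init__(self, window: int = 1000, floor: float = 0.97):
        self.outcomes = deque(maxlen=window)  # 1 = correct decision, 0 = incorrect
        self.floor = floor

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def needs_review(self) -> bool:
        # Only judge once the window is full; small samples swing too much.
        if len(self.outcomes) < self.outcomes.maxlen:
            return False
        return sum(self.outcomes) / len(self.outcomes) < self.floor

monitor = AccuracyMonitor(window=500, floor=0.97)
# In production, each verified outcome (e.g. a confirmed fraud/no-fraud label)
# would feed monitor.record(...); a True from needs_review() alerts a human.
```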

Practice Makes Perfect

As the Ofqual issue revealed, testing is a vital step in ensuring an algorithm works as expected before putting it into a real-world scenario. By running algorithms through a pilot testing phase, companies can assess feasibility, duration, costs and adverse events, and better understand why an algorithm is making a certain decision. Because this was not done sufficiently in the Ofqual case, the algorithm simply didn’t provide the right answer to the problem it was trying to solve.
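A pilot phase can be as simple as replaying the candidate algorithm over a labelled historical sample before it touches a live decision, with results broken down by group so that an Ofqual-style disparity surfaces early. A sketch, where the record fields and the toy `predict` rule are hypothetical:

```python
def pilot_report(records, predict):
    """Replay the model over labelled historical cases; split accuracy by group."""
    stats = {}  # group -> [correct, total]
    for rec in records:
        correct, total = stats.setdefault(rec["group"], [0, 0])
        stats[rec["group"]] = [correct + (predict(rec) == rec["label"]), total + 1]
    return {group: c / t for group, (c, t) in stats.items()}

# Hypothetical loan-decision pilot: 'predict' is the candidate model under test.
records = [
    {"group": "A", "label": 1, "income": 40_000},
    {"group": "A", "label": 0, "income": 12_000},
    {"group": "B", "label": 1, "income": 39_000},
]
predict = lambda rec: 1 if rec["income"] > 20_000 else 0
print(pilot_report(records, predict))  # {'A': 1.0, 'B': 1.0}
```

A large gap between groups in this report is exactly the signal to investigate before launch, not after.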

Put Ethics at the Heart

Ethics should factor into every element of a company’s decision-making process, and this shouldn’t stop when it comes to AI. There are some key areas businesses should consider when putting AI into place:

  • Usage consent: make sure that all the data you are using has been acquired with the proper consent (a minimal sketch of such a check follows this list)
  • Diversity and representativeness: the Twitter example could easily have been avoided if the AI model and the practitioners working on it had represented a broader cross-section of society. By taking this into account, companies can avoid encoding biases
  • Transparency and trust building: ensure the use case and how the model works are clear to all users
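As promised above, here is a minimal sketch of a consent filter. The record shape (`consented_purposes`, `consent_expires`) is hypothetical; real consent metadata depends on your data pipeline:

```python
from datetime import datetime, timezone

def usable_records(records, purpose: str):
    """Keep only records whose consent covers this purpose and has not expired."""
    now = datetime.now(timezone.utc)
    return [
        r for r in records
        if purpose in r.get("consented_purposes", ())
        # Records without an expiry date are treated as currently valid here.
        and r.get("consent_expires", now) >= now
    ]

records = [
    {"id": 1, "consented_purposes": {"identity_verification"},
     "consent_expires": datetime(2030, 1, 1, tzinfo=timezone.utc)},
    {"id": 2, "consented_purposes": {"marketing"}},
]
print([r["id"] for r in usable_records(records, "identity_verification")])  # [1]
```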

In the financial services industry, there are many ways that AI can be leveraged. This is increasingly the case in the document-centric identity proofing space, whereby an identification document, such as a passport, is matched with a selfie of the user to confirm that real and virtual identities align. This will be an essential area of focus for financial services companies as they look to confirm that users are who they claim to be while the physical branch diminishes. When analysing whether a person matches the picture on their documentation, for example, a biased AI model can completely undermine the decision made. Thankfully, according to the 2020 Gartner Market Guide for Identity Proofing & Affirmation, organisations are growing more keenly aware that demographic bias in the performance of identity-proofing processes could reflect negatively on their brand, in addition to raising possible legal issues.
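Under the hood, document-centric matching of this kind typically reduces to comparing an embedding of the ID photo with an embedding of the selfie. The sketch below assumes a hypothetical `embed()` function supplied by a face-recognition model and an invented decision threshold; it is an illustration of the comparison step, not any particular vendor’s method:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def same_person(doc_photo_vec, selfie_vec, threshold: float = 0.8) -> bool:
    # The threshold must be validated per demographic group: a single global
    # cutoff is exactly where demographic bias can creep in.
    return cosine_similarity(doc_photo_vec, selfie_vec) >= threshold

# In a real pipeline: doc_vec = embed(id_photo); selfie_vec = embed(selfie)
print(same_person([0.1, 0.9, 0.4], [0.12, 0.88, 0.35]))  # True
```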

The Gartner Market Guide also predicts that the identity verification market will be worth $9 billion by 2022, with 95% of RFPs in this space containing clear requirements around minimising demographic bias. As such, there is a real opportunity to leverage AI solutions to provide the best possible service in this area, but financial institutions must ensure they do so in an ethical and accurate way by focusing on the key areas discussed above.

Author

  • Polly is a journalist, content creator and general opinion holder from North Wales. She has written for a number of publications, usually hovering around the topics of fintech, tech, lifestyle and body positivity.
