
Linedata: Why We Need an AI Framework in Finance

The development of artificial intelligence has enabled many complicated, manual tasks to be fully automated. While this is a huge technological advance, AI does not understand data the way humans do. Though smarter in some cases, AI can misinterpret data, introducing biases and producing inefficient technology.

This is why creating a framework for ethical AI is essential – to ensure that these mistakes do not happen and everyone has equal opportunity. Timothée Raymond, Head of Innovation and Technology at Linedata, has over 10 years' experience working in the financial sector. He discusses the importance of implementing ethical AI across all fields:

Timothée Raymond, Head of Innovation and Technology at Linedata

Up until now, the lack of a legal framework for Artificial Intelligence in Europe has raised important questions about the trustworthiness, ethics and safety of the algorithms currently in use in various sectors, including financial services. Many algorithms operate as black boxes that can't be observed in action – making it difficult to be fully transparent about how they reach their results and conclusions. There is, however, an increasing desire to tackle this transparency issue, and to integrate ethical frameworks to ensure data sets have no innate bias that could result in discrimination during analysis. An often-cited example of how biased data sets might affect outcomes in finance is in lending. If a credit applicant with all the necessary guarantees lives in an area where residents’ credit applications are often rejected – even if for legitimate reasons – a badly trained model using a biased dataset might interpret this statistical coincidence as a systematic rule. The result is that the application might be rejected without good reason, perpetuating a cycle of fiscal discrimination.
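To make the lending example concrete, here is a minimal, purely illustrative sketch in Python. The data is synthetic and the model is a simple scikit-learn logistic regression – neither is drawn from Linedata's own systems – but it shows how a model trained on historically skewed decisions can learn a postcode as though it were a rule:

```python
# Illustrative sketch (synthetic data, not a production credit model): how a biased
# training set can turn a statistical coincidence -- living in a frequently-rejected
# postcode -- into a systematic rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: normalised income (a legitimate signal) and a postcode flag (a proxy).
income = rng.normal(0.0, 1.0, n)
in_flagged_postcode = rng.integers(0, 2, n)

# Historical decisions: driven by income, but applications from the flagged postcode
# were rejected far more often for reasons unrelated to the applicants themselves.
approved = (income - 2.5 * in_flagged_postcode + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([income, in_flagged_postcode])
model = LogisticRegression().fit(X, approved)

# A well-qualified applicant (high income) who happens to live in the flagged postcode.
applicant = np.array([[1.5, 1]])
print("Approval probability:", model.predict_proba(applicant)[0, 1])

# The postcode coefficient comes out strongly negative, so the model penalises the
# applicant for where they live rather than for their actual creditworthiness.
print("Learned coefficients [income, postcode]:", model.coef_[0])
```

The point of the sketch is not the specific numbers but the mechanism: once the proxy feature is in the training data, the model has no way of knowing that the historical rejections were unrelated to the applicants' real risk.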

It is no surprise, then, that the banking sector’s approach to AI is a cautious one, staying away from the more complex and scalable autonomous systems associated with ‘non-consensual’ AI and generally using operational and automated algorithms. These merely simulate human intelligence and perform basic, repetitive tasks. But the desire to use AI to improve services and streamline operations is only growing. Gartner’s 2021 Artificial Intelligence Heat Map for Banking and Investment Services showed that machine learning is the most widely adopted form of AI in banking and finance, driven by the need to improve customer service (43%), cost optimisation (24%) and risk management (19%).

Recognising the need to enable industries to adopt more advanced AI safely, on April 21st the European Commission proposed new rules aiming to transform Europe into a hub for trustworthy AI. If established, this would become the first ever international legal framework for AI, aiming to complement the development of advanced analytics models to ensure their applications are well defined – increasing user trust and understanding. It is an important development in the journey to equity and ensuring respect for fundamental human rights.

Ethical AI in Europe

Europe has chosen to base the degree of ethical requirement for an AI system on the risk associated with it, meaning all algorithms are potentially in scope. It might seem obvious that military or medical AI would be particularly scrutinised, but in reality, many professionals use AI tools on a daily basis – often in ways that can have negative consequences for humans if they are not sufficiently supervised. In the HR sector, for example, Europe explicitly mentions algorithms used in candidate selection. In finance, the algorithms used to establish credit scores or to grant credit will have to be re-evaluated. The introduction of a framework will help professionals across all sectors make better data-driven decisions and will go some way to ensuring the results from AI models are consistent and non-discriminatory.

It is also an important development for the future business operations of European companies. In order to innovate wisely and plan carefully where to invest capital, clarity on the regulations and practices surrounding AI is critical, especially in the context of heightened global competition and the still opaque use of data. The regulation will encourage the modernisation of AI, bringing it closer to reality and thus to accuracy, and increasing the potential for more refined and individualised results. In retail banking, and lending specifically, it is the accessibility criteria that are particularly in need of review. Instead of relying purely on whether the applicant holds a permanent employment contract, we can look for other ways to measure the sustainability and consistency of an individual’s income. This could include other types of contracts, weighted more fairly in the decision-making process. There are many other financial services that can be streamlined using other AI tools such as Natural Language Processing, including chatbots for customer service, or adding the possibility of analysing unstructured data to resolve complex financial problems. These tools, once backed by a solid framework, are ripe for innovation.
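As a rough sketch of what that fairer weighting could look like in practice – the contract categories, weights and tenure rule below are illustrative assumptions, not an established scoring standard – an income-sustainability measure might combine several income sources like this:

```python
# Hypothetical sketch: instead of a binary "permanent contract?" check, weight different
# contract types and their track record to score income sustainability. The weights and
# categories are illustrative assumptions, not a regulatory or Linedata standard.
from dataclasses import dataclass

# Illustrative weights reflecting how consistently each contract type tends to provide income.
CONTRACT_WEIGHTS = {
    "permanent": 1.00,
    "fixed_term": 0.80,
    "freelance": 0.65,
    "gig": 0.50,
}

@dataclass
class IncomeSource:
    contract_type: str
    monthly_amount: float   # average monthly income from this source
    months_active: int      # how long the source has existed

def sustainability_score(sources: list[IncomeSource]) -> float:
    """Weighted monthly income, where longer-running sources count closer to full weight."""
    score = 0.0
    for s in sources:
        weight = CONTRACT_WEIGHTS.get(s.contract_type, 0.4)
        tenure_factor = min(s.months_active / 24, 1.0)  # ramps up over two years
        score += s.monthly_amount * weight * (0.5 + 0.5 * tenure_factor)
    return score

# Example: a freelancer with a long track record plus a newer fixed-term contract.
applicant = [
    IncomeSource("freelance", 2500, months_active=36),
    IncomeSource("fixed_term", 1200, months_active=6),
]
print(f"Sustainability-weighted income: {sustainability_score(applicant):.0f}")
```

The specific weights matter less than the principle: a stable freelance history is not ignored simply because it is not a permanent contract, which is exactly the kind of criterion a review of accessibility rules could make explicit and auditable.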

Across Europe, the number of initiatives to guarantee the highest standards for AI use is rising. Companies are making daily commitments to ensure AI is trustworthy, such as setting up internal evaluation grids for the AI solutions they develop, covering various levels of measurement. This might include cross-checks, training requirements, the code used, and analysis of best practices. Everyone involved, from developers to project and innovation managers, IT directors and CISOs, can integrate an ethical approach right from algorithm development, ensuring data sets are unbiased and models are properly constructed and trained.

The European Commission’s framework is an important step, and hopefully marks the beginning of Europe becoming a global leader in the development of regulations that encourage and foster innovation, whilst crucially continuing to safeguard human rights.
