
Poorly Trained AI Algorithms Carry Bias That May Harm Financial Institutions, Expert Warns

Although financial institutions have long recognised the benefits of Artificial Intelligence (AI) powered solutions and continue to implement them throughout their systems, some experts are warning about the potential bias this type of technology may carry.

Marius Galdikas, CEO, ConnectPay

Marius Galdikas, CEO at ConnectPay, proposes training algorithms with large amounts of verified data to create successful machine learning models fit for the banking sector.

The popularity of AI solutions has grown over the years, with studies estimating that AI could add around $15.7trillion to the global economy by 2030. The benefits of these solutions have not gone unnoticed by financial institutions either, with 75 per cent of the highest-revenue banks already implementing them.

However, incorrectly trained AI algorithms can produce inaccurate and biased results. Marius Galdikas explains how vast amounts of training data can help minimise the potential bias of machine learning (ML) algorithms and make these solutions more efficient.

A recent survey found that the majority of banks plan to implement a range of machine learning solutions in the near future. Among the main benefits ML systems provide to financial institutions are more effective fraud detection, cash flow prediction and credit score adjustment. As recognition of the benefits of machine learning systems continues to grow, so do the discussions concerning biased algorithms.

AI models use available information (training data) to build their knowledge base, and this data is the foundation for all of the predictions and calculations machine learning systems produce.

However, if the dataset used to train the AI models is too narrow, the results suffer. According to Marius, insufficient, unverified, or low-quality data can lead machine learning systems to produce conclusions that are not only inaccurate but also biased.

“The data used to train AI models is a significant contributor to the overall results they will produce, and inadequate data can lead to logical inconsistency in the algorithms. In the financial services sector, if machine learning systems process data without context, this can lead to AI algorithms producing unfavourable conclusions for members of different religions, ethnicities, or those belonging to minorities. For instance, algorithms assessing credit risk might unjustly put these clients at a disadvantage, basing their results on statistical data that isn’t accurate or is simply biased,” Marius explained.
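The mechanism Marius describes can be seen even in a toy example. Below is a minimal, purely illustrative sketch (the group labels, counts, and frequency-based “model” are all hypothetical, not ConnectPay’s method): a naive scorer trained on skewed historical lending decisions simply reproduces the bias baked into that history.

```python
# Hypothetical historical lending data: group_b is both under-represented
# (30 records vs 100) and was approved less often in the past.
biased_history = (
    [("group_a", "approved")] * 90 + [("group_a", "denied")] * 10 +
    [("group_b", "approved")] * 20 + [("group_b", "denied")] * 10
)

def approval_rate(history, group):
    """A naive 'model' that just mirrors historical approval frequency."""
    decisions = [d for g, d in history if g == group]
    return sum(d == "approved" for d in decisions) / len(decisions)

print(approval_rate(biased_history, "group_a"))  # 0.9
print(approval_rate(biased_history, "group_b"))  # roughly 0.67
```

Because the scorer learns only from past outcomes, it disadvantages group_b regardless of any individual applicant’s actual creditworthiness — which is why larger, verified, context-rich datasets matter.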

He proposes that machine learning systems need to be trained using vast amounts of verified data in order to produce the most accurate results.

“The training of machine learning systems is a lengthy process, requiring large quantities of time and verified, un-skewed data. There are many factors to consider when predicting client behaviour — where they live, what’s their cultural background, etc. In order to assess all of these different variables, AI algorithms must be equipped with a great amount of contextual information, otherwise the results will fall through, and the conclusions they produce might not be applicable elsewhere if any of the variables are changed,” added Marius.

As for how financial institutions can increase transparency when using AI algorithms, Marius recommends deploying them in areas with a wealth of verified information.

“By using machine learning models in areas like system activity logs, which contain a great amount of information representing customer and system behaviour, financial institutions maximise the likelihood that the AI algorithms they’re using will produce more accurate and unbiased results,” Marius explained.

He concluded that machine learning systems can be a useful tool for the banking sector if employed correctly, but will not produce the desired results if not trained properly.

“It’s important to understand the significance that training data has on AI algorithms and to invest in it accordingly. Even though this learning process will take longer, the end results will eventually be cost-efficient. All in all, the ability to make accurate predictions about customer behaviour and provide services accordingly is worth investing more time and money to train AI using vast amounts of verified data.”


  • Tyler is a fintech journalist with specific interests in online banking and emerging AI technologies. He began his career writing for a range of national and international publications.
