
FICO: Ensuring the Ethical Use of AI in Financial Firms as Consumer Duty Expectations Increase

As recently discussed in a webinar with Provenir and Zilch at The Fintech Times, the FCA will be introducing new Consumer Duty regulations later this year. While the webinar focused on the payments and lending space, these updated regulations will also apply to fintechs in the automation and AI spheres. Ensuring that ethical AI is used to meet these regulations is an absolute must. But how can this be achieved?

Global analytics software platform provider FICO, in its annual State of Responsible AI in Financial Services report developed in collaboration with market intelligence firm Corinium, found that financial services firms lack responsible AI strategies despite surging demand for AI solutions.

The study surveyed 100 C-level AI leaders in banking and financial services on how they are ensuring AI is used ethically, transparently, securely, and in customers’ best interests.

Exploring this further, Scott Zoldi, chief analytics officer at FICO, examines the best way to develop an AI governance standard in accordance with Consumer Duty expectations:

Ensuring the ethical use of AI in financial firms as consumer duty expectations increase
Scott Zoldi, chief analytics officer at FICO

AI governance is one of the most important organisational weapons that financial services and banking firms have in their arsenal to head off unfair customer outcomes. It becomes even more important as they scale their AI initiatives into new parts of their business, setting standards for model development, deployment, and monitoring.

Changes to UK Consumer Duty regulations come into force in July, with the primary aim of improving consumer protection, so organisations must prepare to use all the tools at their disposal to make sure these fresh expectations are met.

The state of ethical AI at financial firms

As AI technology is scaled across financial services firms, it becomes crucial for business leaders to prioritise responsible and explainable AI solutions that provide tangible benefits to businesses and customers alike. A new report from Corinium Intelligence, sponsored by FICO, found that 81 per cent of financial firms surveyed in North America have an AI ethics board in place.

The insight also suggests that financial services companies are taking responsibility for detecting and correcting bias in their AI algorithms in-house. Only 10 per cent currently rely on evaluation or certification from a third party.

Additionally, 82 per cent of financial firms currently evaluate the fairness of decision outcomes to detect bias issues. Forty per cent check for segment bias in model output and 39 per cent have a codified definition for data bias. Sixty-seven per cent of firms also have a model validation team charged with ensuring the compliance of new models. Lastly, 45 per cent have introduced data bias detection and mitigation steps.
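To make the idea of checking for segment bias in model output concrete, here is a minimal sketch of such a check. It is purely illustrative, not the methodology of any surveyed firm; the segments, decision data and the 0.8 guideline threshold are assumptions. It compares approval rates across customer segments and flags any segment treated markedly worse than the best-treated one.

```python
# Illustrative sketch of a segment-bias check on model output:
# compare approval rates across customer segments. Data are hypothetical.
import pandas as pd

def segment_bias_report(df: pd.DataFrame, segment_col: str, decision_col: str) -> pd.DataFrame:
    """Approval rate per segment and its ratio to the best-treated segment.

    A ratio well below 1.0 (e.g. under the common 0.8 'four-fifths' guideline)
    flags a segment for closer review.
    """
    rates = df.groupby(segment_col)[decision_col].mean().rename("approval_rate")
    report = rates.to_frame()
    report["ratio_to_max"] = report["approval_rate"] / report["approval_rate"].max()
    return report.sort_values("ratio_to_max")

# Hypothetical decision log: 1 = approved, 0 = declined.
decisions = pd.DataFrame({
    "segment":  ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1,   1,   1],
})
print(segment_bias_report(decisions, "segment", "approved"))
```

In practice such checks would run across many more segments and attributes and be paired with statistical tests, but the underlying comparison of decision outcomes is the same.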

Understanding is maturing

These findings show that the understanding of responsibility when it comes to the use of AI is maturing. However, more needs to be done to ensure the ethical use of AI by financial firms. As AI strategies mature, we are seeing more companies expand their use of AI beyond centres of excellence. At the same time, partnerships with vendors are making advanced AI capabilities accessible to companies of all sizes.

Corinium’s research also reveals that many financial firms are playing catch-up on responsible AI initiatives. Twenty-seven per cent of organisations surveyed in North America are yet to start developing responsible AI capabilities and only eight per cent describe their responsible AI strategy as ‘mature’.

The case for further investment in and development of responsible AI initiatives in financial services is clear. Data and AI leaders expect responsible AI to drive better customer experiences, new revenue-generating opportunities and reduced risk. For this to take place, they will need to:

  • Create model development standards that can be scaled and integrated with business processes;
  • Develop the means to monitor and maintain ethical AI model standards over time;
  • Invest in interpretable machine learning architectures that can enhance explainability.

Should AI be explainable or predictive?

A key component of AI ethics is the ability to explain a decision made by an AI or a machine learning algorithm. After all, how can you know if a decision is fair if you don’t know the parameters upon which it was made? This raises a conflict over what matters most in an AI algorithm: its predictive power or the extent to which you can explain how it reached its conclusion.

In business, explainability is key to determining bias and therefore to using AI ethically and responsibly.

Responsible AI requires the explainability of ‘black box’ AI algorithms. The more visibility there is into the decision process, the more trust can be assured. However, the Corinium study indicates that many organisations still struggle to determine the exact reason for machine learning outcomes.

While local explanations are still a common means of explaining AI decisions, they are largely ineffective. The Corinium research findings show that organisations are dropping poorly explained legacy methods in favour of exploring different architectures. Newer interpretable machine learning architectures increasingly provide a more effective means of improving the explainability of AI decisions.
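As a minimal sketch of what an interpretable-by-construction model can look like (a generic illustration, not the specific architectures referenced in the report; the features, data and reason-code framing are assumptions), the example below trains an additive scoring model whose individual decisions decompose into per-feature contributions:

```python
# Illustrative sketch: an additive (linear) scoring model whose individual
# decisions can be explained as per-feature contributions ("reason codes").
# Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["utilisation", "missed_payments", "account_age_years"]

# Hypothetical training data: rows are applicants, label 1 = default.
X = np.array([[0.9, 3, 1.0],
              [0.2, 0, 8.0],
              [0.7, 2, 2.5],
              [0.1, 0, 12.0],
              [0.8, 1, 3.0],
              [0.3, 0, 6.0]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

applicant = np.array([0.85, 2, 1.5])
# In an additive model, each feature's contribution to the log-odds is simply
# coefficient * value, so every decision is directly explainable.
contributions = model.coef_[0] * applicant
for name, contribution in sorted(zip(feature_names, contributions),
                                 key=lambda item: -abs(item[1])):
    print(f"{name:>20}: {contribution:+.3f}")
```

In a production credit model the inputs would typically be scaled or binned (for instance with weight-of-evidence encoding) so that contributions are comparable across features, but the principle of a decision that decomposes into readable parts is what makes such architectures easier to govern.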

Combating AI model drift

In total, more than a third of companies surveyed by Corinium said that the governance processes they have in place to monitor and re-tune models to prevent model drift are either ‘very ineffective’ or ‘somewhat ineffective’. A lack of monitoring to measure the impact of models once deployed was a significant barrier to the adoption of responsible AI for 57 per cent of respondents.

If organisations have machine learning models making inferences, recognising patterns and then making predictions, it is inevitable that the data coursing through the model will change the model itself. This means not only that the validity of predictions may change over time, but also that the data itself may drive bias into the decisions. This must also be monitored; it’s part of the cost of doing business. If an organisation is going to have models, it must govern and monitor them to manage their use.
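One common, generic way to monitor for this kind of drift (a widely used industry metric, not a tool named in the study; the thresholds and score distributions below are hypothetical) is the population stability index, which compares the distribution a model was built on with the distribution it sees in production:

```python
# Illustrative sketch of a drift check using the population stability index
# (PSI). Score distributions and thresholds are hypothetical.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between two samples of the same feature or score (higher = more drift)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) for empty bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
training_scores = rng.normal(620, 50, 10_000)    # distribution at model build time
production_scores = rng.normal(600, 60, 10_000)  # distribution seen after deployment

# A common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 review or retrain.
print(f"PSI = {psi(training_scores, production_scores):.3f}")
```

A governance process would track metrics like this, alongside fairness checks of the decisions themselves, on a schedule, with defined thresholds that trigger investigation or retraining.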

There’s no doubt that effective use of responsible AI will help optimise customers’ experiences and outcomes at every step of their banking journeys. The list of real-time, real-world applications of AI grows longer every day; fraud detection and personalisation are just two of the many areas the technology has improved.

While it seems that firms are being creative and efficient, extracting all they can out of the tool, responsible AI practices must be established both to develop new algorithms and to monitor the algorithms already in place.
