
NVIDIA: What is Explainable AI?

With artificial intelligence (AI) becoming increasingly prominent in fintech, it’s crucial that its processes remain transparent to consumers and that they fully understand how their data is being used.

Jochen Papenbrock, Financial Technology Customer and Partner Relationship Manager EMEA at NVIDIA

In light of this, Jochen Papenbrock, Financial Technology Customer and Partner Relationship Manager EMEA at NVIDIA, has provided The Fintech Times with this comprehensive piece, in which he details AI’s use cases in financial services, why those uses call for explainable AI, and what explainable AI actually is.

Our trust in artificial intelligence (AI) will largely depend on how well we understand it — explainable AI, or XAI, helps shine a light into the complexity of AI models.

Banks can use artificial intelligence to determine whether to extend credit, and how much, to customers. Payment providers are using AI to detect and prevent payment fraud. And insurance firms are using AI to automate claims handling for simple and clean cases.

These are just a few examples of how AI is being adopted across financial services. With so much at stake, businesses and governments adopting AI and machine learning are increasingly being pressed to lift the veil on how their AI models make decisions.

Charles Elkan, formerly a managing director at Goldman Sachs, offers a sharp analogy for much of the current state of AI, in which organisations debate its trustworthiness and how to overcome objections to AI systems:

We don’t understand exactly how a bomb-sniffing dog does its job, but we place a lot of trust in the decisions they make.

To reach a better understanding of how AI models come to their decisions, organisations are turning to explainable AI.

What Is Explainable AI?

AI models are vastly complex — so complex and large that it’s not possible for humans to fully understand the exact calculations in an entire model.

Explainable AI, or XAI, is a set of processes and methods that allows people to comprehend how the algorithm gets from point A (the data inputs, such as an individual’s financial history) to point B (the conclusion, such as whether a loan is approved or not).

Understanding the model’s expected impact and potential biases, along with a model mapping summary from input to output, means users can understand and trust the results created by machine learning algorithms.

The concept is simple. But today, XAI is hard, and in some cases, it may not yet be possible.

It all comes down to the size of the model. It’s easy for humans to understand small systems and algorithms, where we can trace the links between a handful of data points. But humans are not capable of understanding hugely complex systems, and that is exactly what AI models are. A complex system could contain around 28 million lines of code, which would take even a fast reader over 3.5 years to read.

Simple systems are not always practical for their desired purpose. Simple systems are exactly that — simple. Complex systems can deliver much more thorough analysis with much higher performance.

Currently, there are several ways that XAI can be implemented:

  • The first is through documenting how an algorithm was built and fully understanding the data it was trained with. The data needs to be relevant for the intended use and examined for likely biases (a minimal documentation sketch follows this list).
  • The second is the transparency of the calculations. An algorithm that is very complex and requires thorough knowledge won’t be as easy to understand as one that is designed with explainability in mind.
  • And the third is to build a continuous cycle of explainable systems, and implement tools so developers understand how an algorithm is working. By sharing these learnings with other AI developers, explainability can rapidly become easier to implement.
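
To make the first of those points concrete, the documentation of how a model was built can be kept as a structured record stored alongside the model itself. Below is a minimal sketch in Python; the field names and values are hypothetical and purely illustrative, not a standard schema.

```python
# Minimal, illustrative "model card": a structured record of how a model was
# built and what data it was trained on. Field names and values are hypothetical.
from dataclasses import dataclass, asdict
import json

@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: str        # where the data came from and the period it covers
    excluded_features: list   # fields deliberately left out (e.g. protected attributes)
    known_biases: str         # biases found in the data and how they were mitigated
    evaluation: dict          # headline metrics on a held-out test set

card = ModelCard(
    model_name="credit_approval_v3",  # hypothetical model name
    intended_use="Support for consumer loan decisions, reviewed by a human underwriter",
    training_data="Internal loan book, 2015-2021, EU customers only",
    excluded_features=["gender", "nationality"],
    known_biases="Under-representation of applicants under 25; re-weighted during training",
    evaluation={"auc": 0.87, "approval_rate_gap_by_age_band": 0.03},
)

# Persist the card next to the model artefact so the paper trail travels with it.
print(json.dumps(asdict(card), indent=2))
```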

Building large, complex systems means XAI can be computationally intensive. Powerful computing platforms are needed, especially for continuously learning models that continue to grow in size.

How Does Explainable AI Work?

While the standardisation of XAI processes is still being defined, a few key questions resonate across the industries implementing it: who are we explaining the model to? How precise does the explanation need to be? And which part of the process needs explaining?

Explainability boils down to: what exactly are we trying to explain, to whom, and why?

Explaining the model’s origin:

When examining a model’s origin, a few questions need to be considered: how was the model trained, how was the data used, and how were biases in the data measured and mitigated?

These questions are the data science equivalent of explaining what school your financial analyst went to — along with who their teachers were, what they studied and what grades they got. Getting this right is more about the process and leaving a paper trail than it is about pure AI, but it’s critical to establishing trust in a model.

Explaining the overall model:

Most overall model explanations fall into one of two camps.

The first is a technique sometimes called “proxy modelling”: simpler, more easily understood models, like decision trees, that can approximately describe the AI model. Proxy models can also be built out of explanations of many individual decisions. They give more of a “sense” of the overall model, rather than a scientific, precise understanding.
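
As a hedged illustration of proxy modelling, the sketch below fits a shallow decision tree to the predictions of a more complex model using scikit-learn and synthetic data; the model choices and feature names are assumptions made for the example, not a production recipe.

```python
# Proxy-modelling sketch: approximate a complex model with a shallow,
# human-readable decision tree trained on the complex model's own predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for credit data: 5 numeric features, binary outcome.
X, y = make_classification(n_samples=5000, n_features=5, random_state=0)

# The "black box": a boosted ensemble that is accurate but hard to read.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The proxy: a depth-3 tree fitted to the black box's decisions, not the raw labels.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, black_box.predict(X))

# How faithfully does the proxy mimic the black box on the same data?
fidelity = proxy.score(X, black_box.predict(X))
print(f"Proxy agrees with the black box on {fidelity:.1%} of cases")

# The tree itself is the explanation: a handful of readable if/else rules.
print(export_text(proxy, feature_names=[f"feature_{i}" for i in range(5)]))
```

The fidelity score indicates how far the simple tree can be trusted as a stand-in for the full model, which is exactly the “sense rather than precision” trade-off described above.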

The second approach is to “design for interpretability”. This guides the design and training of the AI model to be built out of smaller, simpler parts, leading to models that are still powerful, but with behaviour that’s much easier to explain.
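
One toy way to picture “design for interpretability” is an additive model in which every feature gets its own tiny sub-model and the final score is simply the sum of their outputs, so each feature’s contribution can be read off directly. The sketch below, using scikit-learn and synthetic data, is a deliberately simplified illustration of that idea rather than a description of any particular production system.

```python
# Toy "designed for interpretability" model: one tiny sub-model per feature,
# combined additively, so each feature's contribution to a score is directly readable.
# This is a crude, illustrative fit (each part is trained independently).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=2000, n_features=4, noise=10.0, random_state=0)

baseline = y.mean()
parts = []
for j in range(X.shape[1]):
    part = DecisionTreeRegressor(max_depth=2, random_state=0)
    part.fit(X[:, [j]], y - baseline)   # each sub-model sees only its own feature
    parts.append(part)

def predict_with_breakdown(x_row):
    """Return the prediction plus the per-feature contributions that compose it."""
    contributions = np.array([parts[j].predict(x_row[[j]].reshape(1, -1))[0]
                              for j in range(len(parts))])
    return baseline + contributions.sum(), contributions

score, contribs = predict_with_breakdown(X[0])
print("prediction:", round(score, 2))
print("per-feature contributions:", np.round(contribs, 2))
```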

Why XAI Can Explain Individual Decisions Best

Currently, the best-understood area of XAI is individual decision-making: why a person didn’t get approved for a loan, for instance.

Techniques like LIME or SHAP, used in combination with other XAI methods, offer very literal mathematical answers to questions like these, and those answers can be presented to data scientists, managers, regulators or consumers.

One use case, Explainable Machine Learning in Credit Risk Management, uses SHAP values to identify the variables that matter most for decision-making in credit risk. By analysing the explanatory data of portfolio constituents and grouping it into clusters of very similar data, it is possible to get an in-depth understanding of the inner workings of a trained model.

The SHAP method breaks down each variable’s contribution to the forecast outcome probability. Every data point (i.e., a credit or loan customer in a portfolio) is then represented not only by its input features, but also by each feature’s contribution to the machine learning model’s prediction.

This can reveal segments of data points (customers) in which each cluster shares very similar decision-making criteria, rather than being grouped purely on input variables. These clusters summarise the mechanics of the machine learning model and represent the ways it makes decisions, meaning users get a better understanding of what the model has learned and can verify its decisions.

These clusters can also highlight trends, anomalies, hot spots, emergent effects and tipping points within the data, which can all be analysed and understood.
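
As a concrete, hedged illustration of this SHAP-plus-clustering pattern, the sketch below uses the open-source shap package and scikit-learn on synthetic data; the feature names, model choice and number of clusters are assumptions for the example, not details of the credit risk system described above.

```python
# Sketch: represent each customer by the SHAP contributions behind their score,
# then cluster those contributions to surface the model's decision "archetypes".
# Synthetic data and placeholder feature names throughout.
import numpy as np
import shap
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["income", "debt_ratio", "history_length", "late_payments", "utilisation"]
X, y = make_classification(n_samples=2000, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Exact SHAP values for tree ensembles: one contribution per feature per customer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # shape (n_customers, n_features)

# Why did the model score customer 0 the way it did?
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:>15}: {contribution:+.3f}")

# Cluster customers by how the model reasons about them, not by their raw inputs.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(shap_values)
for k, centre in enumerate(kmeans.cluster_centers_):
    top = np.argsort(np.abs(centre))[::-1][:2]  # two most influential features
    drivers = ", ".join(f"{feature_names[i]} ({centre[i]:+.2f})" for i in top)
    size = int(np.sum(kmeans.labels_ == k))
    print(f"cluster {k} ({size} customers) mainly driven by {drivers}")
```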

Predicting Performance with XAI

Munich Re, one of the world’s leading providers of reinsurance, primary insurance, and insurance-related risk solutions, developed an XAI approach to analyse the robustness of diversification resulting from various portfolio allocation methods.

Comparing two allocation methods, Hierarchical Risk Parity (HRP) and Equal Risk Contribution (ERC), they start by generating sample market data to measure how much HRP outperforms ERC. Using a block bootstrap methodology, 100,000 scenarios are generated while the correlation structure of the original data is preserved.
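
A block bootstrap draws whole blocks of consecutive observations from historical data, so that correlation and short-term structure survive into each simulated scenario. The sketch below is a minimal, generic illustration with synthetic returns and an assumed block length; it does not reproduce Munich Re’s actual setup or parameters.

```python
# Minimal block-bootstrap sketch: resample whole blocks of consecutive return
# observations so correlation and short-term structure are preserved.
# Synthetic data, scenario length and block length are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_days, n_assets = 1000, 10
# Stand-in for historical daily returns of a 10-asset portfolio.
historical_returns = rng.multivariate_normal(
    mean=np.zeros(n_assets),
    cov=0.0001 * (0.3 * np.ones((n_assets, n_assets)) + 0.7 * np.eye(n_assets)),
    size=n_days,
)

def block_bootstrap(returns, n_scenarios, scenario_len=250, block_len=20, rng=rng):
    """Generate scenarios by stitching together randomly chosen blocks of history."""
    n_obs = returns.shape[0]
    scenarios = []
    for _ in range(n_scenarios):
        blocks = []
        while sum(len(b) for b in blocks) < scenario_len:
            start = rng.integers(0, n_obs - block_len)
            blocks.append(returns[start:start + block_len])
        scenarios.append(np.vstack(blocks)[:scenario_len])
    return np.stack(scenarios)   # shape (n_scenarios, scenario_len, n_assets)

scenarios = block_bootstrap(historical_returns, n_scenarios=1000)
print(scenarios.shape)
```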

This approach helps to test hypotheses and to better understand which characteristics of a portfolio are actually relevant, identifying the factors that drive portfolio performance and enabling more robust, smarter asset allocation.

Using machine learning, the connections between each scenario’s properties and the outperformance can be revealed. The programme learns which scenarios cause outperformance and what the specific properties of those scenarios are, revealing the market regimes or states in which it would be advantageous to use the HRP allocation method.
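
One generic way to set this up is to describe each simulated scenario with a handful of summary features, label it with whether HRP beat ERC, and let a classifier learn the relationship, with feature importances (or SHAP values) indicating which market properties matter. The sketch below is schematic, with synthetic inputs and hypothetical feature names rather than Munich Re’s actual variables.

```python
# Schematic sketch: learn which scenario properties are associated with HRP
# outperforming ERC. Scenario features and labels here are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_scenarios = 5000

# Hypothetical per-scenario summary features.
features = pd.DataFrame({
    "avg_correlation": rng.uniform(0.0, 0.9, n_scenarios),
    "volatility_dispersion": rng.uniform(0.0, 1.0, n_scenarios),
    "trend_strength": rng.normal(0.0, 1.0, n_scenarios),
})

# Placeholder label standing in for "HRP outperformed ERC in this scenario".
hrp_outperformed = (features["avg_correlation"]
                    + 0.5 * features["volatility_dispersion"]
                    + 0.1 * rng.normal(size=n_scenarios)) > 0.8

X_train, X_test, y_train, y_test = train_test_split(
    features, hrp_outperformed, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("accuracy:", round(clf.score(X_test, y_test), 3))

# Which scenario properties does the model rely on when predicting outperformance?
for name, importance in zip(features.columns, clf.feature_importances_):
    print(f"{name:>22}: {importance:.2f}")
```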

Creating this system enabled Munich Re to quickly identify situations where certain allocation methods like HRP perform better than others. Using this knowledge, they are able to construct robust investment strategies for things like the challenging goal of saving for old age.

The Future of Explainable AI

Industries and governments around the world are already trying to implement updated guidance on XAI. There isn’t yet a standard, and the output requirements vary depending on the model, the level of risk, the data and the context of what needs to be understood.

While healthy debate remains about how best to implement it, XAI can already be used to understand model outputs, and it forms part of a wider practice of AI risk management. Ultimately it can lead to more trust in AI, and thus to wider adoption and better inclusion and accessibility.
