Evan Chrapko is the Chairman and CEO of Trust Science, an Edmonton-based AI firm currently dedicated to reshaping modern data analysis techniques, particularly in the area of credit scoring. Here he shares his thoughts on how XAI looks to overcome AI’s biggest challenges.
With the proliferation of artificial intelligence (AI) technologies across the financial and IT industries, AI’s potential has only increased. As AI tools become more advanced, there is a growing risk that most decision-making computations happen in a “black box” with no human-understandable explanation. Explainability is the key to customer trust, especially in credit scoring: customers will not trust a machine’s decision if they cannot understand the complex process behind it. Explainable Artificial Intelligence (XAI) equips AI models with transparency, fairness, accountability and explainability. With XAI, companies in the 21st century can solve their “black-box” problem and help customers understand how and why they received a particular credit score.
XAI refers to the tools and frameworks that make the predictions and decisions of machines understandable to humans. XAI widens the interpretability of AI models and helps humans understand the reasons for their decisions. Another problem with AI is accountability: it can be impossible to determine who is responsible for a wrong decision taken by a machine. For example, in November 2019, Apple and Goldman Sachs faced allegations that their AI-powered credit scoring algorithm was gender-biased. Apple’s co-founder Steve Wozniak accused the company of giving him a credit limit ten times higher than his wife’s, despite the fact that they share all their assets. Such adverse consequences of using AI recklessly call for integrating XAI models into decision-making systems. The following are the advantages of integrating XAI into the systems of 21st-century companies:
According to David Gunning, a programme manager at the Defense Advanced Research Projects Agency (DARPA) in the US, “XAI will create a suite of machine learning techniques that allow humans to understand, appropriately trust and effectively manage the generation of artificially intelligent partners.” The use of XAI, especially in the domain of credit lending and credit scoring, will give borrowers a reasonable explanation of any adverse action – for example, why their application was rejected or why they were charged a higher interest rate. With XAI, borrowers will also know which of their information was used and how the decision was made. This makes the entire credit-lending process transparent, which in turn enhances customers’ trust in the company. For lenders, explainability is important because they must know how the automated system chooses the ideal borrower; they must be able to trust the system before committing their money to it.
A proper XAI explanation must entail:

- Data explanation – what data was used to reach the decision.
- Responsibility explanation – who is responsible for the development, management and implementation of the AI solution.
- Safety and performance explanation – the steps taken to ensure that the data provided by clients is handled safely.
- Fairness explanation – the steps taken to ensure that the decision-making process was fair and non-discriminatory.
- Impact explanation – the steps taken to consider and monitor the impact of the AI solution on all stakeholders and wider society.

In the domain of credit scoring, the customer has the right to know the reason for an adverse action and to challenge an incorrect report. This also corresponds with the provisions of the Fair Credit Reporting Act and the Equal Credit Opportunity Act. XAI can tell customers why they have been denied credit.
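To make the idea of an adverse-action explanation concrete, here is a minimal sketch of how reason codes might be derived from a simple linear scoring model. The feature names, weights and applicant record are all hypothetical, and a real lender’s model would be far more sophisticated; the point is only that per-feature contributions can be ranked to produce human-readable reasons.

```python
# Sketch: deriving adverse-action reason codes from a linear credit model.
# All weights, feature names and applicant values below are hypothetical.

weights = {"payment_history": 0.45, "utilization": -0.30,
           "account_age_yrs": 0.10, "recent_inquiries": -0.25}
applicant = {"payment_history": 0.2, "utilization": 0.9,
             "account_age_yrs": 1.5, "recent_inquiries": 4.0}

# Per-feature contribution to the score: weight * feature value.
contributions = {f: weights[f] * applicant[f] for f in weights}

# The features pulling the score down the most become the stated reasons.
reasons = sorted((c, f) for f, c in contributions.items() if c < 0)
for contrib, feature in reasons[:2]:
    print(f"Adverse action reason: {feature} (contribution {contrib:.2f})")
# → Adverse action reason: recent_inquiries (contribution -1.00)
# → Adverse action reason: utilization (contribution -0.27)
```

With a non-linear model, the same ranking idea applies, but the contributions would come from an attribution method (e.g. Shapley values) rather than raw coefficients.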
Internal debugging and sanity tests
Sometimes, AI algorithms and tools are so complicated that even their developers cannot pinpoint the cause of an error. Explainable AI is therefore also instrumental for the internal debugging of a company’s decision-making systems. XAI can help determine why and how a machine made a wrong decision, helping designers diagnose the problem and prevent further incidents, and it can be used to fix structural weaknesses in the models. XAI also finds application in feature engineering, i.e. identifying the features that are useful to the model. For designers and creators, this explainability helps them improve the functionality of their models.
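One common way to surface which features a model actually relies on – the feature-engineering use case above – is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. A minimal sketch, using synthetic data as a stand-in for real credit records:

```python
# Sketch: permutation importance to flag which features a credit model
# actually relies on. The data and model here are synthetic/illustrative.
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for credit data: 5 features, binary approve/deny label.
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Shuffle each feature in turn and measure the resulting accuracy drop.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: mean importance = {imp:.3f}")
```

Features whose shuffling barely moves the score contribute little and are candidates for removal; features with large drops are the ones the model genuinely depends on, which is exactly the kind of diagnostic a debugging workflow needs.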
Explainable AI will expose vulnerabilities and flaws in credit-lending systems by identifying gaps in the data and its environment. Moreover, XAI will draw attention to the adversarial perturbations that have led to wrong decisions and biases. Data scientists can then immerse themselves in debugging and develop machine-learning models that are understandable.
One of the major concerns with AI tools is the reproduction of bias present in traditional data. AI feeds on large amounts of data from different sources, and if the data fed into an AI tool is biased, the AI may replicate those biases in its results. Research has shown that XAI can generate a fairness report that measures the degree of bias in the results, and XAI can be used to explain why a user or group is treated unfairly given the data. Some data scientists have proposed a fair-by-design approach to developing ML models that carry less bias and produce explanations understandable by humans. Such XAI tools can pinpoint the reasons for bias in a system and thereby help developers fix it.
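A fairness report of the kind described above typically starts from simple group-level metrics. One of the most basic is the demographic parity gap: the difference in approval rates between two groups. A minimal sketch with illustrative, made-up decisions:

```python
# Sketch: a minimal fairness check measuring the demographic parity gap
# (difference in approval rates between two groups). Data is illustrative.
import numpy as np

# Hypothetical model decisions (1 = approved) and a protected attribute.
approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()   # approval rate for group A
rate_b = approved[group == "B"].mean()   # approval rate for group B
parity_gap = abs(rate_a - rate_b)        # 0.0 means perfectly equal rates

print(f"group A: {rate_a:.2f}, group B: {rate_b:.2f}, gap: {parity_gap:.2f}")
# → group A: 0.60, group B: 0.40, gap: 0.20
```

A production fairness report would add further metrics (equalised odds, calibration by group) and statistical significance checks, but even this simple gap makes the “degree of bias” in a set of decisions measurable rather than anecdotal.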
In credit lending, XAI can make non-expert users and customers understand why and how a certain decision was made, thus maintaining transparency with customers. Using XAI, lenders can also comply with several national and international data-protection norms (for example, the Fair Credit Reporting Act in the US and the General Data Protection Regulation in Europe). This reduces the risk of penalties and improves customer trust and the company’s reputation in the international market. The use of XAI is especially beneficial in the credit-lending industry, where historically marginalised and vulnerable groups (Black, Asian and Latin communities, women and sexual minorities) have been denied credit due to a lack of credit histories. XAI can surface these biases in traditional credit data, enabling models that extend these populations the credit they deserve. The use of XAI therefore not only helps FinTech companies but is also a step towards financial inclusion and greater financial equity.