By James Loft, COO, Rainbird
The recent news that the FCA is partnering with the Alan Turing Institute to explore the explainability of AI in financial services is a welcome development. While financial institutions are increasingly using AI to improve efficiency and productivity, there is little transparency in how the neural networks behind many of these systems make decisions, putting organisations at risk of inaccurate and even fraudulent decisions. Even worse, it is far more difficult for financial institutions to audit bad decisions made by AIs than it is to audit human decisions.
In exploring how to make AI more transparent and explainable, the FCA will need to address a number of issues if it is to reduce the threat of unaccountable AI decision-making across the financial sector.
The Black Box Problem
The FCA faces a difficult task in auditing AI in finance because the neural networks used by many banks operate on the basis of complex probabilities and correlations beyond the comprehension of anyone but a data scientist.
Because of this, neural networks create a legal ‘grey area’: if a financial institution gives incorrect advice that leads to a failed merger, it may be difficult to establish how the neural network got the decision wrong. It is therefore extremely difficult for the FCA to hold a financial institution accountable for fraud perpetrated via an AI, because a neural network might contain an unknown critical flaw that causes the organisation to unwittingly facilitate financial crime.
The difficulty of auditing financial AIs increases in direct proportion to the complexity of the decisions they make; as neural networks are tasked with everything from auditing accounts to predicting fraud, their inner workings become ever more opaque. It may also be difficult for the FCA to gain access to the information it needs to regulate AI: an AI’s decision-making process is often part of its developer’s intellectual property, which makes companies reluctant to open it to external scrutiny.
Data Hygiene
The ‘deep learning’ systems used by financial institutions are difficult to audit or understand because most organisations lack the necessary in-house expertise or ‘data hygiene’ to use them effectively.
Neural networks also cannot think outside the context of their ‘learning environment’, and so a neural network is only as good as the data it was trained on. This makes them prone to inheriting biases from bad data: if an organisation’s tax accounts are incomplete or poorly kept, the AI might make incorrect tax decisions.
An insurance AI let loose on unstructured customer data might decide blonde people are at higher risk of fraud because they happened to be over-represented in the sample of fraud cases.
Industries such as insurance have poor ‘data hygiene’, which means that unsupervised ‘black box’ AIs let loose on oceans of financial data could make critical errors, such as giving customers higher insurance premiums or poorer credit ratings for spurious reasons.
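To make this concrete, below is a minimal, purely illustrative sketch of how a spurious correlation creeps into a model trained on a skewed sample. The dataset, field names and labels are invented rather than drawn from any real insurer; the point is simply that a feature with no causal link to fraud (hair colour) picks up positive weight because it happens to be over-represented among the fraud cases.

```python
# Illustrative sketch only: a toy fraud model trained on an invented, skewed sample.
# "hair_colour" has no causal link to fraud, but blonde customers are
# over-represented among the fraud cases here, so the model treats it as a risk signal.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Skewed training sample: most fraud cases happen to involve blonde customers.
claims = [
    {"hair_colour": "blonde", "claim_type": "motor"},  # fraud
    {"hair_colour": "blonde", "claim_type": "home"},   # fraud
    {"hair_colour": "blonde", "claim_type": "motor"},  # fraud
    {"hair_colour": "brown",  "claim_type": "home"},   # fraud
    {"hair_colour": "brown",  "claim_type": "motor"},  # genuine
    {"hair_colour": "brown",  "claim_type": "home"},   # genuine
    {"hair_colour": "brown",  "claim_type": "motor"},  # genuine
    {"hair_colour": "blonde", "claim_type": "home"},   # genuine
]
is_fraud = [1, 1, 1, 1, 0, 0, 0, 0]

vectoriser = DictVectorizer(sparse=False)
X = vectoriser.fit_transform(claims)
model = LogisticRegression().fit(X, is_fraud)

# Inspect the learned weights: "hair_colour=blonde" receives a positive
# coefficient, i.e. the model has quietly learned a spurious risk factor.
for name, weight in zip(vectoriser.get_feature_names_out(), model.coef_[0]):
    print(f"{name}: {weight:+.3f}")
```

A human underwriter would reject hair colour as a risk factor on sight; a black-box model will happily keep using it unless someone is able to inspect what it has learned.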
Price-setting algorithms trained on poorly curated financial data may ‘learn’ to avoid price wars, leaving an organisation inadvertently guilty of price-fixing through ‘algorithmic collusion’. If the machine cannot explain its decisions and the organisation is unaware of what it has learned, who can the FCA hold accountable?
Auditing an Audit
The fundamental problem, however, is that any FCA investigation into the way machines make decisions will be immensely costly and complex because of the expertise required to audit ‘deep learning’ systems. It could take weeks to derive accurate decision pathways that explain how and why each decision was made, and even then the output will be in code that only data scientists can understand, so additional time must be factored in to ‘translate’ those pathways into human terms.
As we’ve seen with Deutsche Bank recently hiring thousands of people to audit its anti-money-laundering (AML) operations following the ‘Global Laundromat’ scandal, auditing human decisions is no easy task. If the bank had instead been auditing deep-learning AML systems, it would have taken an army of data scientists many months to complete the same audit.
Some financial regulators are already using AI in the auditing process, but the problem becomes even more acute when AIs are tasked with auditing AI decisions: who then audits the auditor?
Black Box to a Glass House
What we need to see at the end of this FCA investigation is a return to human-centric decision-making, but not in the traditional sense. We need to put humans back in the loop by transforming financial AI from a black box into a glass house that operates according to human logic.
Since AIs will increasingly be helping human professionals, from accountants to fraud prevention teams, it makes sense for their human ‘colleagues’ to be involved in customising and auditing them. The only answer is a return to ‘rules-based’ AI systems that reflect human thinking and can therefore be configured and audited by relevant subject matter experts.
Because human-centric, rules-based AIs explain their decisions in human terms, they enable humans to audit every decision they make without the need for external data-science expertise. Financial institutions that use them will have far more visibility of the decision-making process, enabling them to ensure decisions comply with regulation and to weed out biased data or inaccuracies, so that the threat of discriminatory or inaccurate decisions is all but eliminated.
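As a sketch of what ‘explaining decisions in human terms’ can look like, consider the toy rules-based check below. It is not any vendor’s actual engine, and the rules, thresholds and field names are hypothetical; the point is that every outcome carries a plain-English audit trail that a subject matter expert, or the FCA, can read and challenge without a data scientist in the room.

```python
# Illustrative sketch of a rules-based decision with a human-readable audit trail.
# The rules, thresholds and field names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Decision:
    outcome: str
    reasons: list = field(default_factory=list)  # plain-English audit trail

def assess_claim(claim: dict) -> Decision:
    """Apply explicit, expert-authored rules and record which ones fired."""
    reasons = []
    if claim["value"] > 10_000:
        reasons.append("R1: claim value exceeds £10,000")
    if claim["days_since_policy_start"] < 30:
        reasons.append("R2: claim made within 30 days of policy start")
    if claim["claims_in_last_year"] >= 3:
        reasons.append("R3: three or more claims in the past year")

    outcome = "refer to human investigator" if reasons else "approve"
    return Decision(outcome, reasons or ["no referral rules fired"])

decision = assess_claim({
    "value": 12_500,
    "days_since_policy_start": 14,
    "claims_in_last_year": 0,
})
print(decision.outcome)          # refer to human investigator
for reason in decision.reasons:  # the rules an expert can review and amend
    print(" -", reason)
```

Each rule maps directly onto a statement an expert would recognise, so auditing a decision means reading the rules that fired rather than reverse-engineering millions of learned weights.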
Crucially, the process of mind-mapping human decisions for rules-based AIs also enables ethical and compliant decision-making processes to be visualised and taught across the financial sector. A financial-brokerage AI trained with a ‘mind map’ to reproduce typical trading decisions might expose unfair or unethical trading practices. In this way, algorithmic accountability will lead to greater human accountability.
As the leading regulatory body for the sector, the FCA should also play a major part in the ongoing regulation of AI use, taking an active role in ensuring financial institutions implement transparent, explainable solutions. Crucially, it must ensure that the people working with AI in financial institutions actually understand the tools they are using, and that the tail is not wagging the dog.