
TruEra: Capturing AI’s Opportunity While Adhering to Guidelines – How Ethical AI Can Win

Artificial intelligence (AI) has become a necessity for organisations in the financial sphere to succeed. As the attitude of waiting to see how regulations will affect the technology is phased out in favour of embracing innovation as it develops, organisations must understand the guidelines and regulations already in place, and how these will evolve. Employees therefore need the right training and tools to ensure the technology is used in an ethical and compliant manner.

Lofred Madzou, director of strategy and business development at TruEra, is an expert in responsible AI and AI governance and has spent most of his career driving responsible AI in government and corporate settings. In his role, he works with organisations to strengthen their AI governance, prepare for regulatory requirements and emerging guidelines, and establish processes that allow them to use AI in more effective and responsible ways.

Prior to TruEra, he worked at the World Economic Forum, where he led various global, multi-stakeholder AI governance projects. In practice, he advised various EU and Asia-Pacific governments on AI regulation and supported organisations in implementing responsible AI practices. His work there focused primarily on high-impact use cases such as law enforcement's use of facial recognition, hiring, and AI for pandemic response.

Madzou also serves as a research associate at the Oxford Internet Institute, focusing on the governance of AI systems through audit processes. He spoke to The Fintech Times about why success in AI requires overcoming a set of barriers and risks, and how this can be done:

Lofred Madzou, director of strategy and business development at TruEra

Data and analytics are transforming industry competition and companies’ core businesses at an accelerating pace. According to a McKinsey survey, “high-performing organisations are three times more likely than others to say their data and analytics initiatives have contributed at least 20 per cent to earnings before interest and taxes over the past three years.” The ability to capture value from data and analytics is a real competitive advantage.

While most companies understand this and have invested significant resources to become data-driven, only a few have managed to deploy analytics at scale, and the gap between leaders and laggards is growing. The laggards have struggled to overcome a set of well-identified challenges. First, cultural barriers are strong: restructuring business processes and decisions around data is a complex and ongoing effort. Second, poor data quality is a persistent problem, often because available data cannot be used in its current format and requires significant data wrangling. Third, there remains a shortage of data science skills.
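
To make the data-quality point concrete, here is a minimal, hypothetical wrangling sketch in Python with pandas (2.x or later): the records, column names, and formats below are invented for illustration, but they mimic the kind of raw export that cannot be analysed in its current format.

```python
# Hypothetical illustration of routine data wrangling: the raw frame
# below mimics common problems (mixed date formats, currency strings,
# duplicate and incomplete records) that make data unusable as-is.
import pandas as pd

raw = pd.DataFrame({
    "customer_id": ["A1", "A1", "B2", "C3", None],
    "signup_date": ["2021-03-01", "2021-03-01", "01/04/2021",
                    "2021-05-10", "2021-06-02"],
    "balance": ["£1,200.50", "£1,200.50", "£300", "£0", "£55.75"],
})

clean = (
    raw
    .dropna(subset=["customer_id"])  # rows without an ID are unusable
    .drop_duplicates()               # remove verbatim duplicate records
    .assign(
        # normalise mixed date formats (format="mixed" needs pandas >= 2.0)
        signup_date=lambda d: pd.to_datetime(
            d["signup_date"], dayfirst=True, format="mixed"
        ),
        # strip currency symbols and thousands separators, cast to float
        balance=lambda d: d["balance"]
        .str.replace(r"[£,]", "", regex=True)
        .astype(float),
    )
)
print(clean.dtypes)
```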

These challenges are amplified by the rapid adoption of AI models that are heavily reliant on large datasets and developed with complex algorithms. Indeed, those AI models may cause consumer privacy issues, accentuate unfair biases, and lead to opaque decisions. There has been intense media and regulatory interest in cases where AI has led to questionable outcomes.

However, by using appropriate AI quality metrics – the set of observable attributes of an AI system that allow one to assess the system's real-world success over time – financial institutions and fintechs can maximise the benefits of AI and data analytics while mitigating their potential risks.
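
As an illustration of what such metrics can look like in practice, the following sketch uses scikit-learn on synthetic data to track two observable attributes of a credit-style classifier: predictive accuracy and a simple fairness measure (the demographic parity difference). The dataset, the protected-group flag, and the metric choice are assumptions made for illustration, not a description of TruEra's product.

```python
# Sketch: computing two AI quality metrics on a credit-style model --
# predictive accuracy and demographic parity difference (the gap in
# positive-outcome rates between two groups). All data is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
# Synthetic protected attribute (e.g. a demographic group flag).
group = np.random.default_rng(0).integers(0, 2, size=len(y))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)

accuracy = accuracy_score(y_te, pred)
# Demographic parity difference: |P(approve | group 0) - P(approve | group 1)|
dp_diff = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())

print(f"accuracy: {accuracy:.3f}")
print(f"demographic parity difference: {dp_diff:.3f}")
```

Both numbers are observable over time, which is what makes them usable as quality metrics rather than one-off audit results.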

Current guidelines point the way to future regulation on AI in financial services

Regulations regarding consumer data in financial services have been in place for quite some time, and large financial institutions have well-established model risk and data management frameworks to comply with them. These frameworks have been further strengthened since the General Data Protection Regulation (GDPR), the comprehensive European Union (EU) regulation on data protection and privacy, took effect on 25 May 2018. Fintechs, for their part, lack the elaborate risk management frameworks of traditional incumbents, but can use technology to catch up.

With the rapid adoption of AI models across industries, additional regulatory proposals and policy guidelines have been formulated to mitigate their potential risks. In April 2021, the European Commission (EC) released its proposal for the EU Artificial Intelligence (AI) Act – a cross-sectoral regulation of artificial intelligence. Under the initial draft, companies deploying high-risk use cases (e.g. credit scoring) would be required, among other things, to ensure a high level of robustness, deploy risk mitigation strategies, address data bias, and provide transparency to customers.

In the United States (US), banking regulatory agencies have issued a request for information (RFI) on financial institutions' use of AI that includes considerations on data bias. Regulators in Canada, Singapore, Hong Kong, and the UAE have also clarified their thinking on the responsible use of AI and data management. More recently, the Bank of England's AI Public-Private Forum released guidelines on the use of AI in finance, which provide recommendations on how to mitigate the associated risks.

Despite this important policy activity, there is no specific regulation on AI in financial services to date. This has created an interesting paradox. According to a joint survey conducted by BCG and MIT Sloan Management Review, “82 per cent of managers in companies with at least $100 million in annual revenues agree or strongly agree that responsible AI should be part of their company’s top management agenda”. Yet a YouGov survey by Alteryx indicates that 42 per cent of employees working with data in UK businesses do not believe that data ethics – the moral obligation to protect and carefully use personally identifiable information – is their concern.

This suggests that despite genuine goodwill from business managers, data science and business analyst teams are not well-equipped to deliver on these expectations, both from a training and tooling perspective.

Smart steps companies can take today

In this situation, adopting a wait-and-see attitude until specific regulations are passed would be a bad strategy: financial institutions would waste invaluable time and miss out on the business opportunities offered by AI. More specifically, traditional incumbents would be at greater risk of losing market share to innovative fintech startups. So, what should they do? Improve their risk mitigation strategies while scaling AI, because not using AI remains the biggest risk. Here are concrete recommendations to help them in this effort:

  • Provide training on responsible AI and data management. Designing and deploying trustworthy AI systems should be an organisation-wide effort. As such, all employees should have a sound understanding of the regulatory requirements and ethical implications of collecting and managing personal information. Further, as companies scale AI across business functions, employees should also be informed about how AI models can create certain risks (e.g. privacy infringement, unfair bias, opaque decisions, lack of robustness), and what best practices and tools are available to identify and mitigate them.
  • Empower your company with appropriate tools. Having the right training is a good start, but it is not sufficient to deliver high-quality AI models. To ensure that AI models advance a company’s business goals and social purpose, diagnostic and monitoring capabilities are needed. By enabling data scientists, business users, and risk and compliance teams to understand exactly why a model makes specific predictions, what causes unfair bias, or why a model’s predictive accuracy has recently dropped, such capabilities allow corporate stakeholders to assess the effectiveness of their risk mitigation processes and make the necessary corrections (see the monitoring sketch after this list).
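
As a minimal sketch of the monitoring capability described above – assuming a scikit-learn model and synthetic production batches, both invented for illustration – the loop below compares each batch's accuracy against a validation baseline and raises an alert when it drops by more than five percentage points (an arbitrary threshold):

```python
# Sketch of one diagnostic capability: monitoring a deployed model's
# accuracy over successive batches and flagging drift. The data, batch
# split, and alert threshold are all assumptions for illustration.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X[:1000], y[:1000])
baseline = accuracy_score(y[1000:1500], model.predict(X[1000:1500]))

# Simulate production batches; the last one has noisy inputs
# (a stand-in for upstream data drift).
batches = [(X[i:i + 500], y[i:i + 500]) for i in range(1500, 3000, 500)]
rng = np.random.default_rng(1)
batches[-1] = (batches[-1][0] + rng.normal(0, 3, batches[-1][0].shape),
               batches[-1][1])

for i, (Xb, yb) in enumerate(batches):
    acc = accuracy_score(yb, model.predict(Xb))
    status = "ALERT: accuracy drop" if acc < baseline - 0.05 else "ok"
    print(f"batch {i}: accuracy={acc:.3f} ({status})")
```

In practice the same loop would also track fairness and stability metrics, so that an alert points teams toward the cause of the degradation rather than just its symptom.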

Providing appropriate training and tooling to employees, especially those working with data, would significantly mitigate the risks associated with AI models and data. Indeed, the key to a successful risk mitigation strategy is finding the right balance between human judgment and automation. Once employees know the right thing to do and have the resources to do it, opportunities are easier to seize.
