
Tintra PLC: Democratising Financial Regulation via Artificial Intelligence

Artificial intelligence adoption has been on the rise in the last few years. However, this has been hindered by an illusion that the ‘correct’ decision is aligned with the decisions a human being would make.

Richard Shearer is the CEO of Tintra PLC, a forward-thinking fintech organisation focused on enabling financial institutions, EMIs, multinationals, and large corporates in the emerging world to gain access to banking systems that understand their geographic needs.

Shearer spoke to The Fintech Times and explained how understanding that there is no one size fits all for KYC/AML is the best way to democratise regulation via AI:

For businesses, entrepreneurs, and individuals across Europe and the US, international transactions are synonymous with the stress and inconvenience that comes hand-in-hand with regulatory red tape.

In the past year, post-Brexit complications have caused a substantial decline in UK-EU goods trade – and January 2022 is likely to bring further disruptions through the imposition of a further wave of customs-related bureaucracy.

Perhaps this new perspective on the challenges of intra-Europe transactions will result in an increased sympathy for people and businesses in emerging markets, who regularly encounter similar and somewhat debilitating barriers – even when performing activities as comparatively simple as merchant payments and account transfers.

International payments between emerging and developed countries are plagued by two distinct but related issues: red tape, caused by the sheer number of financial entities a transaction must pass through on its compliance journey when fintechs don’t have their own custody, and the additional hurdle of KYC/AML bias.

Behavioural scientists at Dutch consultancy firm &samhoud have found, perhaps unsurprisingly, that KYC processes currently used by both legacy banks and fintechs are deeply impacted by employee bias and judgments which aren’t necessarily based solely on the data – any business operating in the emerging world will confirm this, having been given a ‘no’ with no supporting rationale while knowing that the KYC pack provided is equal to, or better than, one that would have been accepted from a UK/US entity.

Clearly, then, any efforts to democratise financial regulation need to address this pressing global issue – and, naturally, the need to speed up inefficient manual processes and eliminate human errors of judgment should direct us towards the latest technology.

Reducing or repeating AML bias?

The temptation, at this stage, is to assume that implementing the right technology will provide an easy fix to the problems of speed, compliance, and bias.

And to individuals in emerging markets, it should certainly feel that way: transactions should become largely effortless and frictionless, just as they do in developed-world domestic banking. But frankly speaking, AI is not a panacea that will solve all compliance ills.

However, the goal of providing an effortless end-product must come hand in glove with acknowledging that there are higher risk metrics in non-developed markets and extra care does need to be built into these models.

After all, though the available tech solutions to this problem are powerful, we must let that power magnify – rather than replace – existing systems if we are to fully address the pressing need to democratise financial regulation across borders and support the emerging markets that suffer at the hands of this inherent bias.

This kind of power is particularly noticeable in artificial intelligence. Such technology is, without question, going to be a vitally important tool for enhancing the effectiveness of KYC/AML in these markets, but this can only be achieved by organisations that are willing to face, head-on, the legacy issues that frame current KYC practices.

Algorithmic interventions aren’t magic; after all, they’re designed and implemented by people – and if the people involved don’t recognise the imperfection of human KYC decisions, the result will be to amplify current biases rather than replace them in some utopian vision of a borderless society.

This isn’t a hypothetical situation, but one that is being encountered across myriad AI applications and one that needs to be addressed at the outset. Banks have not been doing this very effectively and are using the same, now dated, data sets to drive machine learning and AI down routes that are only iteratively better.

A recent report from McKinsey cites hiring algorithms that demonstrate clear biases against applicants who attended women’s universities, for example, whilst – according to the Harvard Business Review, and indeed my own experience with market-available tech – facial recognition technologies have noticeably higher error rates for minorities.

In short, attempts to eliminate prejudice through tech must be careful not to repeat the same biases; they must be very mindful to improve the thinking and create a genuinely level playing field.

Overcoming bias and unlocking AI’s potential

Of course, it’s important to remember that these algorithmic extensions of our unconscious biases aren’t mysterious, and they can absolutely be addressed in meaningful ways if the team is right and the philosophy is sound.

Returning to the example of facial recognition, it’s clear that such issues are rooted in problems with the data used to ‘train’ the AI systems involved.

When minority groups are underrepresented at the training stage of the process, the resulting algorithms are naturally unable to recognise the faces of minority individuals accurately – and this is a problem that can be fixed simply by more mindful approaches to training. But there are more complex ways in which this can, and indeed must, be addressed.
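The mechanism is easy to demonstrate in miniature. The sketch below is purely illustrative (the groups, score distributions, and numbers are all invented): a single ‘verification’ threshold is fitted on training data dominated by one group, and the underrepresented group then suffers a markedly higher error rate, even though nothing in the code mentions group membership at decision time.

```python
import numpy as np

rng = np.random.default_rng(42)

def scores(group, n_genuine, n_impostor):
    # Group B's score distributions are shifted relative to group A's,
    # standing in for demographic differences the model should handle.
    shift = 0.0 if group == "A" else -1.0
    genuine = rng.normal(2.0 + shift, 1.0, n_genuine)
    impostor = rng.normal(0.0 + shift, 1.0, n_impostor)
    X = np.concatenate([genuine, impostor])
    y = np.concatenate([np.ones(n_genuine), np.zeros(n_impostor)])
    return X, y

# Training data: group A heavily outnumbers group B (underrepresentation).
Xa, ya = scores("A", 950, 950)
Xb, yb = scores("B", 50, 50)
X, y = np.concatenate([Xa, Xb]), np.concatenate([ya, yb])

# Fit one global accept/reject threshold by minimising training error;
# the fit is dominated by the majority group's distributions.
candidates = np.sort(X)
train_err = [np.mean((X >= t) != y) for t in candidates]
t_best = candidates[int(np.argmin(train_err))]

# Evaluate on fresh samples: the minority group pays the price.
err = {}
for group in ("A", "B"):
    Xt, yt = scores(group, 2000, 2000)
    err[group] = float(np.mean((Xt >= t_best) != yt))
    print(f"group {group}: error rate {err[group]:.1%}")
```

Here the ‘fix’ hinted at above is exactly what it sounds like: rebalancing the training data, or fitting group-aware thresholds, moves the decision boundary back towards where it serves both groups.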

A similar case can be made for AI trained to make KYC/AML-related decisions – it’s just a question of ensuring that bias doesn’t take insidious root in its algorithmic makeup.

This can be achieved, first and foremost, by removing any illusions that the ‘correct’ decision is necessarily aligned with the decisions a human being would make. Humans have biases, as we’ve seen, so we need to recognise this and ensure that AI doesn’t look to humans as the ‘gold standard’ of AML decision-making.

In a practical sense, this means ensuring that the AI makes decisions on an evidentiary basis, rooting its reasoning in cold, hard facts – for example, by turning to previous occasions in which transactional complications have arisen, and by understanding that what may look ‘good’ in one jurisdiction is ‘bad’ if it comes from another. It means understanding that one size does not fit all for KYC/AML.
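One way to picture this ‘no one size fits all’ principle is a risk score measured against a local baseline rather than a single global rule. The following is a minimal sketch with entirely invented profiles, document names, thresholds, and weights – not Tintra’s method – showing how the same behaviour (a 40% cash ratio) can be anomalous in one jurisdiction and perfectly normal in another:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class JurisdictionProfile:
    # What is locally 'normal': typical share of cash-heavy activity,
    # and the document set a customer there can reasonably provide.
    typical_cash_ratio: float
    expected_documents: frozenset

# Hypothetical baselines for two jurisdictions.
PROFILES = {
    "UK": JurisdictionProfile(0.10, frozenset({"passport", "utility_bill"})),
    "KE": JurisdictionProfile(0.45, frozenset({"national_id", "mobile_money_statement"})),
}

def risk_score(jurisdiction: str, cash_ratio: float, documents: set) -> float:
    """Score deviation from the *local* baseline, not a global one."""
    p = PROFILES[jurisdiction]
    # Penalise only cash usage beyond what is normal locally.
    cash_penalty = max(0.0, cash_ratio - p.typical_cash_ratio)
    # Penalise missing locally-expected documents, not 'foreign-looking' ones.
    doc_penalty = len(p.expected_documents - documents) / len(p.expected_documents)
    return round(0.6 * cash_penalty + 0.4 * doc_penalty, 3)

# Identical 40% cash usage, judged against each jurisdiction's own norms.
uk_score = risk_score("UK", 0.40, {"passport", "utility_bill"})
ke_score = risk_score("KE", 0.40, {"national_id", "mobile_money_statement"})
print(uk_score, ke_score)
```

A single global rule (say, ‘flag anything over 15% cash’) would penalise the Kenyan customer for behaviour that is entirely routine in their market – precisely the bias the evidentiary, jurisdiction-aware approach is meant to remove.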

There’s everything to gain from making this effort, as the marketplace will grow with all of the advantages of faster, frictionless, and automated banking processes but with a significantly reduced set of biases hamstringing the attempts to innovate.

This will be a highly beneficial result for emerging market businesses and individuals and – by extension – for the larger project of democratising global banking, given that each of these factors will increase accessibility and improve the ability to undertake transactions and to bank globally, levelling the playing field so that we all genuinely benefit from the march of globalisation, not only a select few.

Learning humility

The tech-first approach I describe obviously requires a broad embrace of AI and automation, with a view to improving the lives of people and businesses in emerging markets, along with the no less vital ingredient of humility: we need to recognise that human decision-makers are fundamentally flawed.

This does not mean that ‘the robots’ replace the humans; it simply means that we need to take the best bits of what has been done historically and continue to do that, but leave behind the prejudicial parts that don’t reflect well on any of us – and, in turn, understand where colour-blind, religion-blind, nationality-blind tech can do a better job than we have.

Whilst human interventions in KYC/AML processes will be reduced through the use of AI, this vision of democratised global banking requires us all to exercise the very human qualities of self-reflection, combined with a genuine desire for positive change, in order to achieve it.


  • Francis is a journalist and our lead LatAm correspondent. With a BA in Classical Civilization, he has a specialist interest in North and South America.
