By Charley Brooke Barnett
On March 19, TFT CEO Katia Lang was invited to attend Re-Work’s Deep Learning in Finance Summit, where she chaired a discussion entitled “Utilising AI to Assist Security Efforts & Enhance Safety in Finance”. The panel covered the opportunities and limitations of using artificial intelligence (AI) to combat financial crime.
Enter four industry experts: Mason Edwards (Customer Engineer, Google Cloud), Adam McMurchie (Head of DevOps, Barclays), Adri Purkayastha (Head of Technology & Cyber Risk Analytics, BNP Paribas Group) and Lorenzo Cavallaro (Professor, King’s College London).
The benefits of AI have been well documented. Its ability to learn from vast quantities of data and flag potential threats in real time is undoubtedly an asset. Darktrace, the cyber security specialist, even likens its AI algorithms to the human immune system, with its unique ability to detect and eliminate threats.
With cyber attacks becoming increasingly prevalent and sophisticated, Edwards was keen to emphasise the importance of staying ahead of the threat:
“You need to be able to add new data to your systems, retrain and deploy new models to stay ahead of your opponent.”
It’s easy to see how this works for a flexible challenger, but harder to imagine legacies being so agile. McMurchie weighed in:
“It’s not just about how strong the model is, it has to be at least better than the traditional models we’ve been using for 30 years.”
The big banks have huge customer bases and therefore plenty at stake if AI is haphazardly implemented.
Katia Lang then asked the panel to comment on the role supervised learning has to play when safeguarding data. Purkayastha was quick to interject:
“Traditionally with fraud detection, we like to imagine bad actors, but the problem is actually an imbalanced classification issue.”
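Purkayastha’s point can be illustrated with a small sketch (the figures below are hypothetical, not from the panel): when genuine transactions vastly outnumber fraudulent ones, a model that never flags fraud still scores near-perfect accuracy, which is exactly why imbalance, not villainy, is the core modelling problem.

```python
# Illustrative only: hypothetical transaction labels, not real data.
# 1 = fraud, 0 = legitimate; fraud is a tiny minority of the dataset.
labels = [1] * 20 + [0] * 9980  # 20 frauds among 10,000 transactions

# A naive "classifier" that predicts "legitimate" for every transaction.
predictions = [0] * len(labels)

# Accuracy looks excellent, yet not a single fraud is caught.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
frauds_caught = sum(p == 1 and y == 1 for p, y in zip(predictions, labels))

print(f"Accuracy: {accuracy:.1%}")        # 99.8%
print(f"Frauds caught: {frauds_caught}")  # 0
```

In practice this is why fraud teams lean on metrics such as precision and recall, and on techniques like class re-weighting or resampling, rather than raw accuracy.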
According to the 2018 Cyber Security Breaches Survey, 43 per cent of businesses in the UK reported cyber security breaches or attacks. This is an alarming and costly statistic, not only for businesses but for consumers too. The survey found that breaches were more often identified among organisations that hold personal data. With this in mind, how do organisations manage their data without it becoming compromised?
Edwards is in favour of embracing inevitable obstacles, with the end goal of finding out what’s useful and what’s not.
“The first step to every machine learning AI project is getting your data house in order, looking at the problem, scoping it down and building on your success.”
Playing devil’s advocate, Cavallaro warned of the dangers of buying into the hype of deep learning, as it’s not necessarily applicable to every problem you’re trying to solve. He pointed out:
“It doesn’t always work as well as traditional machine learning approaches; it’s important to understand data limitations.”
What happens when data falls into the wrong hands? If both businesses and fraudsters have machine learning and AI at their disposal, who comes out on top?
McMurchie was first off the mark:
“It really starts with education. Banks cannot build their entire AI strategy themselves.”
He called for industry collaboration and for addressing “archaic and out-dated mind-sets.”
It’s no good, then, to have a strategy without a well-informed partnership.
“The issue is not around collection of data, it’s around governance of data. Can this database talk to this database legally, and do we know the impact from a regulation standpoint?” McMurchie continued.
The Head of DevOps at Barclays then suggested the industry focus on niche data, which is easier to categorise and classify. “If you can generate that data with your customer base, you’ll have a great advantage.”
It appears then that the sector is moving ever-closer, increment by increment, to defeating the savvy fraudsters in this game of digital cat and mouse.