
Generative AI’s Impact on Compliance With Ubiquity, Business Financing, AgileBlue, Q2 and LiquidX

Amid rising inflation and interest rates, and a growing number of cyber threats, businesses are constantly evolving to remain resilient. This month, The Fintech Times is highlighting how businesses are showing this resilience against a myriad of factors – some within, and some beyond, their control.

Having discussed how organisations are showing their resilience with regard to different working models (i.e. working from home, hybrid working and working from the office), our focus now turns to another factor testing that resilience: generative AI.

Generative AI has become the biggest double-edged sword in the fintech industry. On the one hand, it has the capability to massively advance teams’ approach to regulations and compliance. But on the other hand, it has been viewed as a potential replacement for human employees. In light of this, we reached out to the industry to find out whether generative AI is friend or foe.

A solution and a problem
John Goodale, executive director and head of Europe, Ubiquity

Artificial intelligence works based on the information it is fed, which makes it very helpful in some instances and unhelpful in others. Talking about how businesses can best utilise the technology to remain resilient during a tough economic period, John Goodale, executive director and head of Europe at Ubiquity, the business process outsourcing service, says: “AI is a catalyst for refining internal processes, facilitating time savings and building deeper customer relationships.

“AI can help banks and fintechs to streamline their offering, improve customer retention and reduce compliance risks. We see AI’s main advantages in automating basic manual tasks and enquiries, enhancing business analytics and quality assurance, and helping customer support staff and other departments improve performance and productivity.

“However, AI could prove challenging for anything beyond straightforward, common customer requests or agent advice. While it might be able to predict and flag fraud patterns, AI can pose problems in relation to credit or dispute decisions, for example. An AI algorithm’s decision is only as accurate as the information that’s fed into it. And an algorithm typically can’t explain why it made a decision; therefore, a bank or fintech could be exposed to possible accusations of discrimination, errors or unfair judgment.

“Given increasing interest and investment in AI, it will become more advanced more quickly. But right now, the possibilities for data hallucinations, biased or incorrect data, and the difficulties in interpreting complex regulatory expectations mean that AI needs humans to ensure it is performing correctly, and with accurate data. We see tremendous potential for AI to improve efficiency and help people work better, but the technology also needs humans to continue governing it and ensure it works in the right ways.”

Not overly relying on the tech is key
Ian Wright, founder of Business Financing

Ian Wright, founder of Business Financing, the guide to all things finance in the UK, says that AI can have a huge impact on the financial sphere. However, over-reliance on the technology is where troubles start to arise.

“I would say that the impact and resulting approach is two-pronged, in terms of internal AI usage and usage on the end of the consumer.

“The first is that compliance, by its very nature, is not a process that can be even partly outsourced to generative AI in the same way that processes can be in less stringently regulated industries.

“Yes, those in the fintech space need to understand the negative implications that generative AI can have on compliance if it is overly relied upon. But ultimately, the main takeaway is that compliance should be centred on a lack of reliance on generative AI, rather than on how to fully integrate it to replace, or even partly assist with, existing processes.

“Secondly, those in the industry need to already be collecting data on consumer generative AI usage, and on whether internal adoption, on a per-firm or per-process basis, is not only expected but required to keep up with the evolving demands of a younger, more tech-savvy consumer base.”

AI is a strong tool in the fight against fraud
Tony Pietrocola, president at AgileBlue

The emergence of generative AI is a good thing, says Tony Pietrocola, president at AgileBlue, the SOC|XDR platform. It gives organisations another tool in their arsenal to combat the ever-growing fraud landscape.

“AI is massively impacting every aspect of a fintech’s business. But when it comes to compliance and security, risk mitigation may be a fintech’s number one priority. Because the AI landscape evolves daily, new advancements, and thus new challenges, are constant.

“In terms of security, generative AI can help mitigate security vulnerabilities by detecting anomalous behaviours across humans, networks and the cloud, and responding quickly to threats before a fintech is breached. Furthermore, fraud may be a fintech’s most pressing challenge. Nothing can hurt a business more with customers and regulators than fraud. Fintechs can use generative AI to understand and process all kinds of data to train machine learning and AI models for fraud monitoring and detection.

“Considering compliance, generative AI could be used to automate compliance certifications and audits to reduce or eliminate human testing, errors and time inefficiencies. Of course, this comes with huge risks: fintechs must be cautious about the data they input into AI models, both to ensure the confidentiality of that data (we have seen case studies of this going awry already) and to comply with data protection regulations.”
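The anomaly detection Pietrocola describes is, in practice, often a classical machine-learning task sitting alongside the generative components. The sketch below is purely illustrative rather than AgileBlue’s method: it uses scikit-learn’s IsolationForest on made-up transaction features (amount, hour of day, distance from home) to flag outliers for human review.

```python
# Illustrative only: flag anomalous transactions with an isolation forest.
# The feature set and threshold are assumptions, not any vendor's method.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" transactions: [amount_gbp, hour_of_day, km_from_home]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=5000),   # typical spend
    rng.integers(8, 22, size=5000),                  # daytime activity
    rng.exponential(scale=5.0, size=5000),           # close to home
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new activity: one routine purchase, one 3am high-value outlier.
new_txns = np.array([
    [35.0, 13, 2.0],
    [4800.0, 3, 640.0],
])
labels = model.predict(new_txns)            # 1 = looks normal, -1 = anomaly
scores = model.decision_function(new_txns)  # lower = more anomalous

for txn, label, score in zip(new_txns, labels, scores):
    flag = "REVIEW" if label == -1 else "ok"
    print(f"{txn} -> {flag} (score={score:.3f})")
```

Flagged transactions would then feed the human-led dispute and compliance processes discussed above, rather than being acted on automatically.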

In the wrong hands, the technology can be devastating
Jesse Barbour, chief data scientist at Q2

Though generative AI has huge potential in how it can benefit organisations, it can also test a business’ resilience if abused by bad actors. Jesse Barbour, chief data scientist at Q2, the digital banking and lending solutions provider, says: “Generative AI has already demonstrated an indirect but substantive impact on consumer-facing social engineering fraud.

“These attacks typically involve interactions where a bad actor attempts to trick a legitimate user into either supplying sensitive information (username, password, etc.) or taking some action (‘wire money to this account…’) that will ultimately result in the loss of funds.

“At the heart of these efforts is language. The language that the bad actor uses to gain an individual’s trust. The language that the bad actor uses to instill confidence in their legitimacy. Generative AI, and especially large language models (LLMs), is a powerful tool in the hands of fraudsters intent upon executing these kinds of attacks.

“With modern LLMs, bad actors can craft and orchestrate these interactions at a level of scale and sophistication beyond anything we have seen.”
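On the defensive side, the same linguistic cues Barbour describes (manufactured urgency, payment instructions, requests for credentials) are what fintechs screen for in inbound messages. The rule-based sketch below is a deliberately crude illustration, not Q2’s approach: the phrases and weights are assumptions, and production systems would rely on trained classifiers and many more signals.

```python
# Illustrative only: a crude, rule-based screen for social-engineering cues
# in inbound messages. The patterns and weights are assumptions chosen for
# demonstration; real systems use trained models and far richer signals.
import re

RISK_PATTERNS = {
    r"\b(urgent|immediately|right away)\b": 2,       # manufactured urgency
    r"\b(wire|transfer) (the )?(money|funds)\b": 3,  # payment instruction
    r"\b(password|one[- ]time code|otp)\b": 3,       # credential harvesting
    r"\bverify your (account|identity)\b": 2,        # impersonation framing
    r"\bgift cards?\b": 2,                           # common scam payout
}

def social_engineering_score(message: str) -> int:
    """Sum the weights of any risk patterns found in the message."""
    text = message.lower()
    return sum(
        weight
        for pattern, weight in RISK_PATTERNS.items()
        if re.search(pattern, text)
    )

examples = [
    "Hi, your statement for March is ready to view online.",
    "URGENT: verify your account now and wire the funds to avoid closure.",
]
for msg in examples:
    score = social_engineering_score(msg)
    print(f"score={score} -> {'escalate' if score >= 4 else 'allow'}: {msg}")
```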

Unlocking fintech’s potential
Dominic Capolongo, CRO at LiquidX

Though some measures need to be put in place to stop bad actors abusing generative AI, in the right hands it can be extremely beneficial. In fact, according to Dominic Capolongo, CRO at LiquidX, the fintech solutions provider, combining the technology with other tech, like document digitisation, is where it can really shine.

“Compliance becomes scalable with generative AI and digitisation; their advanced algorithms efficiently decode intricate regulatory texts and automate compliance reporting at a granular level. Thanks to generative AI and digitisation, fintechs can confidently navigate ever-changing regulations, ensuring adherence to stringent standards, and fostering trust with customers.

“The real strength lies in the ability to know your customer (KYC) and understand transactions on an unprecedented scale. By harnessing large language models, generative AI, combined with document digitisation, analyses vast amounts of transactional data in real time. This deep understanding of customer behaviour and transaction patterns enables fintech companies to identify potential risks and anomalies, fostering robust risk management practices and enhancing customer-centric services.

“Risk management takes a proactive turn as generative AI leverages historical data for predictive risk analysis. Fintech firms gain insights into various risk scenarios, enabling them to preemptively mitigate potential threats. This integration provides a holistic view of risks, ensuring better-informed decisions and safeguarding financial stability.

“Embracing generative AI and digitisation is the key to unlocking fintech’s potential in the rapidly evolving landscape of financial technology. As the fintech industry evolves, those embracing generative AI and digitisation will lead the charge in reshaping the financial services landscape for the better.”
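In practice, decoding intricate regulatory texts of the kind Capolongo describes usually means prompting a model to extract structured obligations that compliance staff then verify. The sketch below is a minimal illustration, not LiquidX’s product: call_llm is a hypothetical stand-in for whichever model a firm actually uses, and its output is treated as untrusted until a human has reviewed it.

```python
# Illustrative only: turning a regulatory clause into structured compliance
# obligations with an LLM. `call_llm` is a hypothetical stand-in; in a real
# system it would call the firm's chosen LLM provider, and the output would
# still need human review before it drives any compliance reporting.
import json

PROMPT_TEMPLATE = """Extract every obligation from the regulation excerpt below.
Return a JSON list of objects with keys: "obligation", "who", "deadline".

Excerpt:
{excerpt}
"""

def call_llm(prompt: str) -> str:
    # Stand-in so the sketch runs end to end; a real implementation would
    # send `prompt` to an LLM API and return its text response.
    return json.dumps([
        {
            "obligation": "Report suspicious transactions to the regulator",
            "who": "payment institutions",
            "deadline": "within 24 hours of detection",
        }
    ])

def extract_obligations(excerpt: str) -> list[dict]:
    raw = call_llm(PROMPT_TEMPLATE.format(excerpt=excerpt))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # LLM output is not guaranteed to be valid JSON; fail safe and
        # route the excerpt to a human reviewer instead.
        return []

excerpt = ("Payment institutions must report suspicious transactions to the "
           "regulator within 24 hours of detection.")
for item in extract_obligations(excerpt):
    print(f"- {item['who']}: {item['obligation']} ({item['deadline']})")
```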

Author

  • Francis is a journalist and our lead LatAm correspondent. With a BA in Classical Civilization, he has a specialist interest in North and South America.
