
How Are Regulators Reacting to the Speed of AI Development?

Particularly since the release of OpenAI's ChatGPT at the end of 2022, the world has sat up and taken notice of the potential of artificial intelligence (AI) to disrupt all industries in countless ways. To kick off 2024, The Fintech Times is exploring how the world of AI may continue to impact the fintech industry and beyond throughout the coming year.

Whether you think it is a game-changer or a curse, AI is here to stay. However, to ensure its success, proper regulations must be implemented. Exploring how ready regulators are to take on this challenge with AI, we spoke to Informatica, Caxton, AvaTrade, ADL Estate Planning, Volt, and FintechOS.

ChatGPT risks data breaches
Greg Hanson, GVP EMEA at Informatica

OpenAI's ChatGPT has been widely adopted by firms across the globe and, according to Greg Hanson, GVP EMEA at Informatica, the enterprise cloud data management company, this won't slow down in 2024. However, organisations should move with caution.

“In 2024, the desire from employees to leverage generative AI such as ChatGPT will only grow, particularly due to the productivity gains many are already experiencing. However, there is a real risk of data breach associated with this kind of usage. Large language models (LLMs) like ChatGPT sit fully outside a company’s security systems, but that reality is not well understood by all employees. Education is essential to ensure that staff understand the risks of inputting company data for summarising, modelling, or coding.

“We’ve already seen a new EU AI Act come into force that places the responsibility for use of AI onto the companies deploying it in their business processes. They are required to have full transparency on the data used to train LLMs, as well as on the decisions any AI models are making and why. Careful control of the way external systems like ChatGPT are integrated into line-of-business processes is therefore going to be essential in the coming year.”

Fraud prevention is at the top of priority lists
Rupert Lee-Browne, founder and chief executive of Caxton

For Rupert Lee-Browne, founder and chief executive of the paytech Caxton, the most important factor regulators must consider in AI’s development is fraud prevention. He says: “Undoubtedly, governments and regulators need to lay out the ground rules early on to ensure that those companies that are building AI solutions are working in an ethical and positive fashion for the advance of AI within the financial services sector and in society.

“It’s really important that we all understand the framework in which we’re operating and how this comes down to the practical level of ensuring that AI is not used for negative purposes particularly when it comes to scams. We mustn’t overlook the fact that whatever legitimate businesses do, there will always be a rogue organisation or nation that builds for criminal intent.”

Can’t overlook ethical implications
Kate Leaman, chief market analyst at AvaTrade

Financial education surrounding AI is paramount for employers and employees alike, and it is equally important for regulators. Kate Leaman, chief market analyst at AvaTrade, the trading platform, explains that regulators need a proactive approach to AI regulation.

“Caution is essential throughout the fintech industry. The rapid pace of AI development demands careful consideration and regulatory oversight. While the innovation potential of AI is immense, the ethical implications and potential risks should not be overlooked. Regulators worldwide need to adopt a proactive approach, collaborating closely with AI developers, businesses, and experts to establish comprehensive frameworks that balance innovation with ethical use.

“Global regulations should encompass standards for AI transparency, accountability, and fairness. Collaboration and information sharing between regulatory bodies and industry players will be pivotal to ensure that AI advancements align with ethical standards and societal well-being without stifling innovation.”

Blockchain can protect data
Mohammad Uz-Zaman, founder of ADL Estate Planning

For Mohammad Uz-Zaman, founder of ADL Estate Planning, the wealth management platform, Skynet becoming a reality is not a current issue. Instead, he says managing AI data securely is the bigger problem.

“The bigger issue is the level of data that will be accumulated by private institutions and governments and how that data is used and could potentially be exploited. AI cannot evolve without big data and machine learning.

“This is where blockchain technology could become highly relevant to protect data – but it’s a double-edged sword. Imagine being assigned a blockchain at birth that records absolutely everything about your life journey – every doctor’s visit, every exam result, every speeding ticket, every missed payment, every application, and you have the power to give access to certain sections to private institutions and other third-parties.

“All that data could be handed over to the government from day one. AI can be used to interpret that data and then we have a Minority Report world.

“Regulators have a very difficult job to determine how AI can be used on client data, which could be used prejudicially. That prejudice could be positive or even judicious, for instance, determining the creditworthiness of an entrepreneur or bespoke insurance premium contracts.

“Regulators must be empowered to protect how data can be used by institutions and even governments. I can foresee a significant change to our social contract with those who control our data, and unless we get a hold on this our democratic ideals could be severely impacted.”

Guiding researchers, developers and companies
Jordan Lawrence, co-founder and chief growth officer, Volt

Jordan Lawrence, co-founder and chief growth officer of Volt, the payments platform, explains that in 2024, regulators must step up and guide companies looking to explore AI's use cases.

“The speed of AI development is incredibly exciting, as the finance industry stands to benefit in several ways. But we’d be naive to think such rapid technological change cannot outstrip the speed at which regulations are created and implemented.

“Ensuring AI is adequately regulated remains a huge challenge. Regulators can start by developing comprehensive guidelines on AI safety to guide researchers, developers and companies. This will also help establish grounds for partnerships between academia, industry and government to foster collaboration in AI development, which brings us closer to the safe deployment and use of AI.

“We can’t forget that AI is a new phenomenon in the mainstream, so we must see more initiatives to educate the public about AI and its implications, promoting transparency and understanding. It’s vital that regulators make such commitments but also pledge to fund research into AI safety and best practices. To see AI’s rapid acceleration as advantageous, and not risk reversing the fantastic progress already made, proper funding for research is non-negotiable.”

  • Francis is a journalist and our lead LatAm correspondent. With a BA in Classical Civilization, he has a specialist interest in North and South America.
