Artificial Intelligence (AI) continues to dominate discussion, and for good reason. Industries across the globe are utilising the technology to great effect, and this is especially true for financial institutions (FIs). But as AI is rapidly adopted, how can organisations ensure they remain compliant?
Shaun Hurst is the principal regulatory advisor for EMEA at Smarsh, which helps organisations identify regulatory and reputational risks within their communications data. Hurst has over 20 years of experience solving complex IT challenges for financial services institutions. Here, he delves into the balancing act FIs face in utilising AI while staying compliant.

Much like the advent of the internet, AI is increasingly being recognised as a technology that will fundamentally transform an array of industries. The financial services industry is no exception and, in fact, banks have been investing vast amounts in recent years to incorporate AI into their businesses.
In particular, banks are beginning to adopt ‘conversational AI’ tools, or chatbots, to provide financial advisors with answers to customer queries. Recently, for example, the investment bank Morgan Stanley rolled out a chatbot powered by OpenAI technology to its 16,000 financial advisors.
However, with banks increasingly looking to capitalise on this ground-breaking technology, they must remember that compliance is an essential component of delivering on the AI revolution.
Data Management
At a foundational level, conversational AI has the potential to vastly increase the scale of communication data that banks must contend with. If a chatbot is holding multiple conversations with a host of financial advisors each day, the bank has effectively doubled its headcount in terms of the sheer volume of communication data generated. With this influx, financial institutions need to ensure they can properly collect, store and monitor these conversations.
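By way of illustration, below is a minimal sketch of what capturing chatbot conversations for record-keeping might look like, assuming a simple append-only JSONL archive. The ChatTurn record and archive_turn function are illustrative names, not any particular vendor’s API.

```python
# A minimal sketch of capturing advisor-chatbot exchanges for
# record-keeping. Assumes an append-only JSONL file as the archive;
# all names here are illustrative, not a specific product's API.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ChatTurn:
    """A single message in an advisor-chatbot conversation."""
    conversation_id: str
    sender: str        # e.g. "advisor" or "chatbot"
    text: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def archive_turn(turn: ChatTurn, archive_path: str = "comms_archive.jsonl") -> None:
    """Append the turn to the archive so it can later be searched and
    monitored like any other employee communication."""
    with open(archive_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(turn)) + "\n")

# Both sides of the exchange are captured, not just the human's side.
archive_turn(ChatTurn("conv-001", "advisor", "What is the client's YTD return?"))
archive_turn(ChatTurn("conv-001", "chatbot", "The portfolio is up 4.2% year to date."))
```

The key design point is that the chatbot’s responses are archived alongside the advisor’s prompts, so the full exchange remains available for supervision and regulatory review.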
It’s important to recognise that banks’ recent adoption of conversational AI comes as financial regulators are turning up the heat on communication compliance failures. In 2022 alone, Wall Street banks paid a record $1.8 billion in fines for failures in record-keeping practices for employee communications – a trend that looks likely only to intensify in 2023. Financial institutions need to be cognisant of this regulatory climate and actively factor communication data management into any conversational AI tools they adopt.
Executive Oversight
Ensuring effective oversight of AI is a key component of the legislative measures being developed in both Europe and the US to regulate its use. For financial services companies beginning to incorporate AI tools more comprehensively, these measures have important implications for compliance officers and senior managers. Central to this effort is banks’ use of ‘explainable AI’: models whose decision-making processes can be understood, and therefore defended, by humans.
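To make that concrete, below is a minimal, illustrative sketch of one route to explainability: an intrinsically interpretable model (a logistic regression) whose per-feature weights a compliance officer can review. The feature names and data are hypothetical, not any bank’s actual decision model.

```python
# A minimal sketch of 'explainable AI' via an intrinsically
# interpretable model. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

features = ["income", "debt_ratio", "account_age_years"]
X = np.array([[55_000, 0.30, 4],
              [32_000, 0.55, 1],
              [78_000, 0.20, 9],
              [41_000, 0.45, 2]], dtype=float)
y = np.array([1, 0, 1, 0])  # hypothetical past approve/decline outcomes

# Standardise inputs so the learned weights are comparable across features.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
weights = model.named_steps["logisticregression"].coef_[0]

# Each weight shows how a (standardised) feature pushes the decision:
# a rationale a human reviewer can inspect and defend.
for name, w in zip(features, weights):
    print(f"{name}: {w:+.3f}")
```

The trade-off, of course, is that simple interpretable models may underperform opaque ones; the compliance question is whether a decision can be defended, which weights like these make possible.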
However, one of the consequences of AI’s rapid introduction to the financial services industry is the likelihood of knowledge gaps at senior management levels, preventing effective oversight of its use by employees. To remedy this, banks should consider introducing training programmes for their current managers as well as including AI knowledge in their recruitment criteria moving forward.
To incorporate AI into their operations responsibly, banks must balance the introduction of this new, innovative technology with empowering managers to maintain compliance standards.
Privacy Concerns
One of the factors that best positions banks to develop AI tools is the vast reserves of consumer data they house internally. While these reserves are an advantage in AI development, using the data for such purposes also increases the prospect that privacy laws could be violated. Indeed, when surveyed by The Economist Intelligence Unit in 2022, a group of IT executives working in banking identified “security and privacy breaches” as the greatest risk associated with adopting AI.
While banks should actively inform customers of the intended use of their data as a first step in ensuring data privacy standards, financial institutions also need to consider the increasingly dynamic regulatory environment vis-à-vis the use of personal data for AI purposes.
For example, the Biden Administration has released a blueprint for an ‘AI Bill of Rights’ and the EU’s AI Act is expected to be passed this year, while in the UK, the Data Protection and Digital Information Bill has been introduced in parliament and the Financial Conduct Authority will soon publish a discussion paper on AI in financial services. Against this backdrop, banks looking to incorporate AI models into their operations need to maintain astute regulatory awareness to ensure they can comply with the rules to come.
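Alongside that regulatory awareness, a common first technical safeguard is pseudonymising customer records before they are used for AI development. Below is a minimal, illustrative sketch using one-way hashing; the field names are hypothetical, and hashing alone may not satisfy every regulator’s definition of anonymisation.

```python
# A minimal sketch of pseudonymising customer records before AI use.
# Field names are illustrative; hashing is pseudonymisation, not full
# anonymisation, so this is a first step rather than a complete control.
import hashlib

def pseudonymise(record: dict, pii_fields: tuple = ("name", "email")) -> dict:
    """Replace direct identifiers with stable one-way hashes so records
    can feed model development without exposing customer identity."""
    out = dict(record)
    for key in pii_fields:
        if key in out:
            out[key] = hashlib.sha256(str(out[key]).encode()).hexdigest()[:12]
    return out

customer = {"name": "Jane Doe", "email": "jane@example.com", "balance": 10_250}
print(pseudonymise(customer))  # name and email hashed; balance untouched
```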
Potential for Bias
Alongside privacy violations, another challenge to maximising the benefits of AI is the potential for bias to emerge, with cases of inadvertent discrimination potentially precipitating reputational and legal harm. The threat resides in the vast datasets AI algorithms are trained on: if the inputs contain historical instances of bias, an AI tool could reproduce that bias, for example by tailoring investment advice to non-financial information such as a client’s ethnicity or gender.
The possibility of bias is a key barrier to achieving ethical AI: the use of AI that is considered ethical both in its intended application and in the results it produces. To holistically mitigate the potential for bias, there must be scrutiny at both the input and output levels. Before training AI algorithms, skewed datasets must be remedied; at an output level, banks should employ communication monitoring tools to identify bias in investment advice.
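As a simple illustration of an input-level check, the sketch below compares favourable-outcome rates across a protected attribute in a historical dataset before any model is trained on it. The column names, data and tolerance threshold are all hypothetical.

```python
# A minimal sketch of an input-level bias check: compare favourable
# outcome rates across a protected attribute before training a model.
# Column names, data and the tolerance threshold are all illustrative.
import pandas as pd

history = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   0,   1,   1,   1,   0,   1],  # hypothetical labels
})

rates = history.groupby("gender")["approved"].mean()
parity_gap = rates.max() - rates.min()
print(rates)
print(f"Demographic parity gap: {parity_gap:.2f}")

TOLERANCE = 0.10  # an illustrative limit a bank might set for itself
if parity_gap > TOLERANCE:
    print("Skewed outcomes across groups: remediate before training.")
```

A check like this would sit at the input level; the equivalent output-level control is monitoring the advice the tool actually produces for the same disparities.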
Customer Experience
As banks become increasingly confident in their use of AI, it’s likely the technology will progress from an in-house tool to a client-facing offering. Already in 2022, Deutsche Bank entered into a partnership with technology company Nvidia to provide AI-powered interactive avatars to its banking clients. In the years to come, banks’ primary points of contact for financial advice may very well be their AI tools rather than their employees.
However, perhaps more than any other industry, banking relies on trust – and customers need to be confident that they can trust the advice given to them by an AI tool. Poor user experience will erode this trust, so financial services companies should prepare thoroughly before providing customers with AI tools and consider whether the benefits of such a move outweigh the challenges.
Ultimately, the use of AI provides enormous opportunities for banks. To reap these rewards, however, banks should remember the considerations outlined above and ready themselves to navigate the AI revolution that lies ahead.