Particularly since the release of OpenAI’s ChatGPT at the end of 2022, the world has sat up and taken notice of the potential of artificial intelligence (AI) to disrupt all industries in countless ways. To kick off 2024, The Fintech Times is exploring how the world of AI may continue to impact the fintech industry and beyond throughout the coming year.
Having explored the biggest AI trends, expert predictions on how AI will evolve, insight into how fintechs are utilising AI and how regulators are tackling the technology, what else do we need to know about AI that no one is talking about? To find out, we asked the experts.
AI: The social implications
As Rory Yates, SVP of corporate strategy at digital insurance platform provider EIS, explains: “The psychological and social implications are not talked about enough.
“Even in the context of ‘freeing’ human capital, as opposed to eradicating human jobs, we don’t see enough deliberate attempts by corporations to proactively redeploy and retrain people into higher-value work.
“We did learn from the last full Industrial Revolution that the net result was more work and a better economy. What we should also have learned is that people died and, in general, we experienced social and economic depression, with whole swaths of society and communities not ‘displaced’ but obliterated. It’s with this knowledge and understanding that we need to move forward.
“We must apply AI knowing that it will be better for humanity, not hoping it will be and accepting the unnecessary ‘casualties’ of progress along the way.”
Is the data up to scratch?
Robert Houghton, founder and CTO at Insightful Technology, believes that the data available to institutions is too inconsistent to achieve the best results: “Today, about 90 per cent of institutions can’t achieve the precision needed.
“AI isn’t the problem, it’s the data.
“The information institutions hold is inconsistent, in silos and not tagged properly. Not even generative AI can solve this challenge. Institutions must find a way to create a single ‘golden source’ of data that’s structured in a standard format if they want to reap the rewards of AI in compliance.”
Good vs bad bots
Eran Vanounou, CTO of Forter, the trust platform for digital commerce, explains that discerning between good bots and bad bots is a challenge not often discussed: “Traditionally, bots have been synonymous with malicious activity, infiltrating websites to steal inventory, test stolen credit cards or execute credential-stuffing attacks.
“However, in recent years, and particularly in 2023, a new trend has emerged: powered by GenAI tools, new types of bots come equipped with user-friendly features, such as shopping on behalf of consumers, gathering information, comparing flight prices and more.
“Think of an Alexa-like experience for every smart device you own – TV, fridge, closet, cat litter box. Soon, all of these devices will be able to purchase what they need automatically, using the consumer’s credentials and payment info, and they will have permission from the user to do so, hence ‘good bots’.
“The impending challenge for fraud managers lies in distinguishing between good and bad bots. Considerable effort has been invested in developing tools and models to differentiate human activity from bot activity. Yet, the advent of GenAI introduces a significant twist, making it crucial to discern not just between humans and bots, but between good and bad bots. Those who can navigate this distinction will undoubtedly lead the way in digital commerce.
“GenAI is set to play a pivotal role in how fraud experts manage and analyse vast amounts of data. It will enable them to swiftly and meaningfully consume and interact with data by posing questions in their own natural language. This presents a golden opportunity for fraud managers aiming to broaden their spheres of influence, as they gain access to information that was previously trapped under unnecessarily complicated data structures.”
‘The possibility of a self-aware, fully autonomous digital entity is not far-fetched’
Tomas Navickas, CTO and co-founder of digital banking platform myTU, explains: “One crucial aspect of AI that is often overlooked is its current state of development.
“It’s important to understand that AI, as it stands today, is not self-aware. It doesn’t ‘think’ in the human sense, doesn’t have a chain of thought, and is devoid of goals or emotions. AI is essentially a sophisticated tool that requires intelligent and creative application by humans.
“However, given the rapid pace of AI evolution, we are approaching a point where AI could start to autonomously iterate over steps to achieve predefined goals. This development would mark a significant shift from our current use of AI as a tool to a more autonomous agent.
“Looking further into the future, the possibility of a self-aware, fully autonomous digital entity emerging is not far-fetched. While this prospect raises a host of ethical, philosophical, and practical questions, it’s a conversation that needs to start now. Understanding AI’s current limitations and future potential is crucial for anyone looking to engage with this technology, whether in development, application, or regulation.”
Utilising AI – beyond creating a ‘wow’ factor
Dr Scott Zoldi, chief analytics officer at FICO, adds: “Most organisations that utilise AI today do not have a corporately defined model development standard or responsible AI framework. Many don’t have uniform strategies to detect and address bias, or to monitor for responsible and safe applications of their AI.
“The focus has been on the ‘wow’ factor rather than the ‘use’ factor, when proper model development governance, auditability and monitoring standards need to come as standard with every model and AI that is developed, deployed and able to impact customers.”