The number of real-time payments (RTP) transactions is growing across the globe. But with this growth, RTP systems introduce new opportunities for fraudsters to exploit. How can financial institutions tackle these challenges, and how big a role can artificial intelligence (AI) play?
Christina Emmanouilidou is the vice president of financial crime compliance technology at BlackSwan Technologies, an organisation aiming to reinvent enterprise software through agile intelligence, fusing data, AI and cloud technologies.
Emmanouilidou boasts over a decade’s worth of experience in the banking and financial services industry. She specialises in financial crime compliance, encompassing operations, technology architecture, programme management and global remediation programmes.
Here, Emmanouilidou discusses how artificial intelligence can be used to tackle types of financial fraud emerging as the number of real-time payments continues to grow exponentially across the globe.

It will come as no surprise that real-time payments are on the rise, but the speed of growth and the volume of transactions remain staggering: 195 billion real-time payments transactions were recorded globally in 2022, a year-on-year growth of 63.2 per cent.
By 2027, the number is forecast to be over 500 billion. But as with anything that offers undeniable benefits such as convenience and speed, there is also a downside; RTP systems introduce a number of new vulnerabilities and challenges, creating opportunities for fraud and increased risk exposure.
One type of criminal activity that organisations can encounter is authorised push payment (APP) fraud, whereby a victim is tricked into sending money directly to the fraudster’s account. Other common types of fraud have also become more prevalent because instant payments mean the money has often disappeared before the victim is able to reclaim it.
These include:
- Business Email Compromise (BEC) scams, whereby fraudsters pretend to be a colleague or partner
- Phishing messages, which are used to manipulate employees into changing payment information
- Longer-term romance scams, whereby fraudsters pretend to be in relationships with their victims to gain their trust and then ask them to send money
- Crypto investment scams, which entice people with promises of life-changing riches
Meanwhile, real-time payments have also seen a rise in money mules, defined by the National Crime Agency as someone who lets someone else use their bank account to transfer money, often keeping a little bit for themselves.
The complex technological infrastructure that RTP systems rely on introduces potential vulnerabilities that cybercriminals will seek to exploit to gain unauthorised access to accounts or to manipulate transactions.
The impact on victims doesn’t end there: their mental health can suffer, and they are likely to lose trust in payment systems and financial institutions as a result. If operators and financial institutions fail to act, they stand to lose customers for good. Meanwhile, banks and financial institutions are increasingly being held liable for fraud.
How to mitigate risks
To mitigate these risks, financial institutions should invest in robust security measures such as multi-factor authentication, regular security updates and vulnerability assessments, as well as in collaboration and information sharing among financial institutions, regulators and law enforcement agencies.
User education about the risks associated with RTP systems is another important pillar. These are all necessary measures, but in addition, organisations should look at how their existing monitoring capabilities can be improved to combat such crime and fraud.
Banks have been implementing real-time detection systems to help prevent account takeover fraud, but fraudsters are devising scams that circumvent these systems, so regulators are introducing further measures to encourage banks to keep pace. Despite this, many of the anti-fraud systems companies have in place are rule-based and configured to detect only ‘known’ scenarios.
These systems, while helpful, generate high numbers of false positives that require manual investigation. Organisations that want to combat fraud should continue to use rule-based systems but also incorporate more proactive AI-based methods, combining the two as sketched below.
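As a rough illustration of this hybrid approach, the sketch below pairs a fixed, ‘known scenario’ rule with a learned anomaly score. The rule conditions, field names and thresholds are purely hypothetical, and the anomaly score is assumed to come from a separate model.

```python
# Toy illustration of combining static rules with an AI-derived anomaly score.
# All field names, rule conditions and thresholds here are hypothetical.

def rule_based_flag(txn: dict) -> bool:
    """'Known scenario' rules: fire only on fixed, pre-configured conditions."""
    return txn["amount"] > 10_000 or txn["payee_country"] in {"XX", "YY"}

def hybrid_flag(txn: dict, anomaly_score: float, score_threshold: float = 0.8) -> bool:
    """Keep the rules, but also flag behaviour a model scores as unusual."""
    return rule_based_flag(txn) or anomaly_score > score_threshold

txn = {"amount": 950.0, "payee_country": "GB"}
print(hybrid_flag(txn, anomaly_score=0.92))  # rules alone miss it; the model flags it
```

Keeping the rule layer preserves coverage of well-understood scenarios, while the model layer picks up behaviour the rules were never written for.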
How can AI help organisations to combat fraudulent real-time payments?
One of the ways anti-fraud systems can utilise AI and machine learning is to support the detection of unusual patterns. AI and ML algorithms can sift through millions or even billions of transactions quickly to identify suspicious patterns or activities that might be signs of fraud. These algorithms learn to recognise new and changing threats, making them more effective at detecting new kinds of fraud.
How this works in practice:
Step 1 – Behavioural analysis: By using advanced analytics and machine learning algorithms, models are created that capture the typical behaviour patterns and relationships between different parties involved in transactions. This helps in understanding normal transactional behaviour and can be used as a baseline for comparison.
Step 2 – Identifying odd couples: Once the relationship patterns are established, it becomes possible to identify unusual connections or odd couples that deviate from the typical behaviour. These odd couples might include transactions between parties with no apparent connection, unusual transaction amounts or frequencies, or connections between high-risk individuals or businesses.
Step 3 – Investigating anomalies: When unusual connections or odd couples are detected, they are flagged for further investigation. This may result in transaction blocking, and in most cases will require review by compliance and risk management teams, supported by additional data enrichment from high-credibility sources about the parties involved in the transaction.
Step 4 – Refining models: As new odd couplings and unusual connections are discovered, the relationship patterns and models should be continuously refined and updated. This ensures that the detection systems adapt to evolving fraud tactics and maintain their effectiveness in identifying potential risks.
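A minimal sketch of how steps one to four might look in code is shown below. All account names, amounts and thresholds are hypothetical, and a production system would use far richer features and proper machine-learning models rather than a simple statistical baseline.

```python
from collections import defaultdict
from statistics import mean, stdev

# Hypothetical transaction history: (payer, payee, amount). Illustrative data only.
HISTORY = [
    ("acct_A", "acct_B", 120.0), ("acct_A", "acct_B", 135.0),
    ("acct_A", "acct_B", 110.0), ("acct_C", "acct_D", 2500.0),
    ("acct_C", "acct_D", 2400.0),
]

def build_baseline(history):
    """Step 1 - Behavioural analysis: learn typical amounts per payer/payee pair."""
    amounts = defaultdict(list)
    for payer, payee, amount in history:
        amounts[(payer, payee)].append(amount)
    return {pair: (mean(vals), stdev(vals) if len(vals) > 1 else 0.0)
            for pair, vals in amounts.items()}

def score_transaction(baseline, payer, payee, amount, z_threshold=3.0):
    """Steps 2-3 - Flag 'odd couples': unseen pairs or amounts far from the pair's norm."""
    pair = (payer, payee)
    if pair not in baseline:
        return "flag: no prior relationship between these parties"
    avg, sd = baseline[pair]
    if sd == 0.0:
        return "ok" if amount == avg else "flag: deviation from an invariant pattern"
    z = abs(amount - avg) / sd
    return f"flag: amount is {z:.1f} standard deviations from normal" if z > z_threshold else "ok"

baseline = build_baseline(HISTORY)
print(score_transaction(baseline, "acct_A", "acct_B", 9000.0))  # unusual amount -> flag
print(score_transaction(baseline, "acct_A", "acct_X", 50.0))    # unseen pair -> flag
# Step 4 - Refining models: in practice the baseline would be refit regularly
# as investigators confirm or dismiss the flagged transactions.
```

Flagged items would then go to compliance and risk teams for review, enriched with external data as described in step three.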
AI and ML algorithms are also utilised in the following anti-fraud capabilities:
- Risk scoring: AI and ML models can assign risk scores to transactions based on different factors, such as a user’s transaction history, account information and behaviour patterns. Transactions with high risk scores can be checked more closely or stopped, helping to prevent fraud before it happens (a combined sketch follows this list).
- Understanding text: Natural language processing techniques can be used to analyse communications, such as payment instructions and comments, to find possible anomalies, but also to help establish the source of funds and clarify the relationships between parties and the purpose of transactions.
- Login analysis: This detects suspicious logins, such as those from IP network providers outside the customer’s country, by examining login history for unusual times and durations, for example sessions after midnight lasting under three minutes. Other red flags include logins to the same account from several IP addresses or locations, and different customers logging in from a common IP address.
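To show how these capabilities might combine in practice, here is a purely illustrative sketch of a simple additive risk score drawing on transaction, login and payment-text signals. All field names, weights, phrases and thresholds are assumptions; a real deployment would learn such weights from labelled fraud data with an ML model rather than hard-coding them.

```python
from datetime import datetime

# Hypothetical weights for an additive risk score; illustrative values only.
WEIGHTS = {
    "new_payee": 0.3,
    "amount_above_norm": 0.3,
    "foreign_login_ip": 0.2,
    "short_late_night_session": 0.1,
    "suspicious_reference_text": 0.1,
}

SUSPICIOUS_PHRASES = {"urgent", "crypto investment", "gift card"}  # illustrative only

def risk_score(txn: dict) -> float:
    """Combine transaction, login and text signals into a 0-1 risk score."""
    login_hour = datetime.fromisoformat(txn["login_time"]).hour
    features = {
        "new_payee": txn["payee_is_new"],
        "amount_above_norm": txn["amount"] > 3 * txn["avg_amount_90d"],
        "foreign_login_ip": txn["login_country"] != txn["home_country"],
        "short_late_night_session": login_hour < 5 and txn["session_minutes"] < 3,
        "suspicious_reference_text": any(p in txn["reference"].lower() for p in SUSPICIOUS_PHRASES),
    }
    return sum(WEIGHTS[name] for name, fired in features.items() if fired)

txn = {
    "payee_is_new": True, "amount": 4_800.0, "avg_amount_90d": 250.0,
    "login_country": "RO", "home_country": "GB",
    "login_time": "2024-05-02T01:14:00", "session_minutes": 2,
    "reference": "URGENT crypto investment opportunity",
}
print(f"risk score: {risk_score(txn):.2f}")  # e.g. block above 0.8, review above 0.5
```

Transactions scoring above a blocking threshold could be held automatically, while mid-range scores are routed to analysts, keeping friction low for the vast majority of legitimate payments.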
Banks and financial institutions are always looking to improve customer convenience. But before enabling real-time payments, they need to ensure they have the right anti-fraud systems and processes in place; otherwise, fraudsters will have an easy route to their customers’ money. AI and machine learning are important components that help organisations to be proactive in thwarting fraud. By using these technologies, firms can prevent fraudulent activity and retain customer trust, while still providing the best customer service.