As the Covid-19 pandemic continues, online gaming is more popular than ever, continuing a trend that started well before the current situation. Seen as a way to keep in touch with friends, gaming platforms have experienced a surge in sign-ups. However, this also means a potential increase in fraudulent activity, as gaming platforms keep large volumes of payment information on file.
One way to combat this is through the use of AI, which Max Wolke, head of strategy at Fraugster, an AI fraud prevention company, knows all about. His role involves working with founders, the board and investors to define new growth opportunities that help the company scale, as well as defining, developing and executing strategic initiatives.
Here Max has identified the following fraudulent activity trends that can be curtailed with AI technology.
With increasing internet penetration, affordable smartphones and local government investment in mobile internet infrastructure, more than one billion people globally stream games over the internet every month. That amounts to a market worth $159bn this year, one forecast to surpass $200bn by 2023. This surge in numbers, along with the integration of cloud, mobile apps and social networks, a range of games and platforms, and shifts in revenue models, means that gaming operators are continuously defending against sophisticated fraudulent attacks. The Covid-19 pandemic and numerous global lockdowns further bolstered the online gaming sector, with new titles being released and significant increases in sign-up volumes, which in turn has led to the relaxation of onboarding, screening, procurement and payment controls.
Gaming platforms are becoming increasingly attractive to fraudsters because they keep multiple payment credentials on file, permit relative anonymity, and lack the relevant regulatory requirements and KYC (Know Your Customer) checks.
A player’s riskiest moment is at the start of their journey. Artificial intelligence is used to assess customers’ credentials and historical behaviour to minimise the risk of signing up a bot or a bad actor.
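Signup screening of this kind can be sketched as a simple weighted risk score. The features and weights below are purely illustrative assumptions for the sake of the example, not Fraugster's actual model:

```python
# Hypothetical signup risk score. Feature names, thresholds and weights
# are illustrative assumptions, not a real production model.
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}

def signup_risk_score(email: str, signups_from_ip_last_hour: int,
                      form_fill_seconds: float) -> float:
    """Return a risk score in [0, 1]; higher means riskier."""
    score = 0.0
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in DISPOSABLE_DOMAINS:
        score += 0.4   # throwaway email provider
    if signups_from_ip_last_hour > 3:
        score += 0.3   # velocity: many accounts from one IP
    if form_fill_seconds < 2.0:
        score += 0.3   # bot-like form completion speed
    return min(score, 1.0)
```

A platform would then hold or reject signups whose score crosses an operator-chosen threshold, rather than hard-blocking on any single signal.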
A new wave of breached credentials and personal data is driving increased attempts by unauthorised users to access a player’s account and commit fraud. AI allows platforms to assess several parameters in real time when the user logs in; our AI scoring tool flags high-risk players based not only on device risk but also on location and IP address parameters.
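A minimal sketch of combining those login-time parameters into one score might look like the following. The signals and weights are assumptions chosen for illustration:

```python
# Illustrative login risk scoring combining device, location and IP
# signals. Weights are assumptions, not a real scoring model.
def login_risk(known_devices: set, device_id: str,
               usual_country: str, login_country: str,
               ip_on_blocklist: bool) -> float:
    """Return a risk score in [0, 1] computed at login time."""
    score = 0.0
    if device_id not in known_devices:
        score += 0.4   # unrecognised device for this account
    if login_country != usual_country:
        score += 0.3   # geographic anomaly
    if ip_on_blocklist:
        score += 0.3   # IP previously linked to fraud
    return round(score, 2)
```

High-scoring logins could then trigger step-up authentication instead of an outright block, keeping friction low for legitimate players.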
A quick succession of sign-ups using email addresses and credit cards with only minimal details changed between attempts alerts us to bonus abuse. Using network analysis, suspicious player characteristics can be identified, and AI scoring determines the potential for fraud. This influences whether future transactions are denied, regardless of how many new devices are used, and informs the rules created to block these users.
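One simple way to surface "minimal details changed" sign-ups is to normalise email aliases and cluster the results. This is a sketch of that idea, assuming Gmail-style dot and plus-tag aliasing; it is not the network analysis described above, only a first filter:

```python
# Sketch: cluster sign-ups whose emails collapse to the same address
# after removing common alias tricks (dots and +tags in the local part).
from collections import defaultdict

def normalize_email(email: str) -> str:
    """Strip +tags and dots from the local part, lowercase everything."""
    local, _, domain = email.lower().partition("@")
    local = local.split("+", 1)[0].replace(".", "")
    return f"{local}@{domain}"

def find_alias_clusters(signup_emails):
    """Return groups of sign-up emails that normalise to one address."""
    clusters = defaultdict(list)
    for email in signup_emails:
        clusters[normalize_email(email)].append(email)
    return {k: v for k, v in clusters.items() if len(v) > 1}
```

Clusters found this way would feed into the scoring step, alongside shared card numbers and devices, rather than triggering blocks on their own.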
Knowing which device is used for fraudulent behaviour is key. We have identified several tactics used by fraudsters to evade detection: VPNs and Tor networks, anonymous proxy servers, device irregularities, and transaction locality mismatches. An AI engine can profile device risk and provide an AI score, even if this type of risk has not been recorded previously. A key part of this approach is fingerprint technology that tracks risky users through smart rules and lists. We have observed linkages between devices and accounts, which helps with spotting and stopping fraud rings.
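At its simplest, fingerprinting hashes a handful of device attributes into a stable ID, then looks for one fingerprint shared across many accounts. The attributes below are a common illustrative choice, not a description of any specific vendor's fingerprint:

```python
# Sketch of device fingerprinting and account linkage. The attribute
# set is an illustrative assumption; real fingerprints use many more.
import hashlib
from collections import defaultdict

def device_fingerprint(user_agent: str, screen: str,
                       timezone: str, language: str) -> str:
    """Hash a few browser attributes into a short, stable device ID."""
    raw = "|".join([user_agent, screen, timezone, language])
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def linked_accounts(events):
    """events: iterable of (account_id, fingerprint) pairs.
    Return fingerprints shared by more than one account,
    a possible sign of a fraud ring."""
    seen = defaultdict(set)
    for account, fp in events:
        seen[fp].add(account)
    return {fp: accts for fp, accts in seen.items() if len(accts) > 1}
```

In practice such linkages are one input among many, since households and shared computers legitimately produce the same fingerprint across accounts.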
Fraud rings work to access thousands of stolen credit card details, racking up fraudulent transactions that lead to costly chargebacks. Monitoring chargeback patterns across our gaming network provides insight, enabling us to intervene before charges are incurred.
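A basic version of this monitoring computes a chargeback rate per card BIN (the first six digits, identifying the issuer) and flags outliers. The threshold and minimum volume below are illustrative assumptions:

```python
# Sketch: flag card BINs with an elevated chargeback rate. The 2%
# threshold and minimum volume are illustrative, not industry values.
from collections import Counter

def flag_risky_bins(transactions, chargebacks,
                    threshold=0.02, min_volume=50):
    """transactions / chargebacks: iterables of card BIN strings.
    Return BINs whose chargeback rate exceeds the threshold,
    ignoring BINs with too little volume to judge."""
    tx = Counter(transactions)
    cb = Counter(chargebacks)
    return {b for b, n in tx.items()
            if n >= min_volume and cb[b] / n > threshold}
```

Flagged BINs would prompt extra verification on future transactions from those cards, intervening before further chargebacks accrue.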