Data is often described as ‘the new oil’ of finance, driving smarter, faster trades and quicker decision-making. That is why banks, investment firms and insurance companies are among the largest consumers of high-performance computing (HPC), using it to evaluate and act upon swathes of information to deliver innovative and useful services to their customers.
HPC – sometimes referred to as ‘supercomputing’ – is a market expected to grow by $12.51 billion between 2020 and 2025, according to a 2022 study from Technavio. Investment in HPC environments enables companies to compete more effectively by analysing data faster and making speedier decisions that reduce risk and identify market opportunities. HPC is also increasingly performed in the cloud, which makes large-scale computing accessible to organisations that cannot afford the high initial capital costs of purchasing their own systems.
HPC clusters, combined with artificial intelligence, data analytics tools and software frameworks, enable the distributed storage, processing and analysis of huge amounts of data, drawing on the processing power of accelerators.
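As a toy illustration of the distributed-processing pattern those frameworks implement – partition a large dataset, analyse the pieces in parallel, then combine the partial results – here is a minimal Python sketch. A real cluster would use a framework such as Spark or Dask across many nodes; this version parallelises across local CPU cores only, and every name in it is illustrative:

```python
# Toy stand-in for distributed data analysis on an HPC cluster:
# split the data into chunks, analyse chunks in parallel workers,
# then combine (reduce) the partial results.
from multiprocessing import Pool
import random

def analyse_chunk(chunk: list) -> float:
    """Per-worker analysis step: here, simply sum the chunk."""
    return sum(chunk)

if __name__ == "__main__":
    data = [random.random() for _ in range(1_000_000)]  # stand-in dataset
    chunks = [data[i::8] for i in range(8)]             # partition 8 ways
    with Pool(processes=8) as pool:
        partials = pool.map(analyse_chunk, chunks)      # scatter/gather
    print(f"total = {sum(partials):.2f}")               # reduce step
```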
HPC provision
According to Dell Technologies, one of the biggest global providers of HPC solutions, financial services institutions (FSIs) have long used HPC to gather and process their wealth of data. And, as HPC becomes more pervasive, many FSIs are using it to fuel artificial intelligence (AI) and machine learning (ML) for deep, fast and intelligent analytics that can give them ‘that extra edge’.
AI can deliver smarter, more secure services, while NVIDIA graphics processing units (GPUs) are powering AI solutions that centralise data to drive deeper, more holistic insights.
Anas Bricha, a director of sales engineering (HPC/AI) at Dell, says: “HPC customers have historically been dedicated to a specific workload and it’s been all about efficiency and maximising the use of that cluster. But what we have started to see now is a lot of requests related to converging machine learning, data analytics and AI with existing workloads. For example, one FSI we work with now wants to design a unified solution platform that will allow them to run an AI workload during the daytime but then switch and move to the trading platform at night or after business hours.
“It’s about customers leveraging their investment to full capacity and tapping into all the resources they’ve invested in, to make sure they’re efficient and providing the right solution not just to one set of customers, such as data science or the trading platform, but opening it up to the wider corporation they support and providing that mainstream workload to the entire organisation.
“We really want to keep advancing our HPC and AI solutions to support FSIs in every use case scenario, such as fraud detection in banking or insurance risk management. We’re also going to continue to innovate through partnerships that advance the best architecture, not just for large customers but also for the small and medium businesses that are starting to tap into this fascinating world of new technologies.”
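Dell has not published details of that customer’s platform, but the pattern Bricha describes – one shared cluster whose capacity is routed to different workloads by time of day – can be sketched in a few lines of Python. Everything here (the hours, the function names) is an illustrative assumption; a real deployment would use a cluster scheduler such as Slurm or Kubernetes:

```python
# Minimal sketch of day/night workload switching on a shared cluster.
# Function names and hours are illustrative, not part of any Dell product.
from datetime import datetime, time
from typing import Optional

BUSINESS_HOURS = (time(8, 0), time(18, 0))  # assumed business hours

def run_ai_training() -> None:
    print("Submitting AI/ML jobs to the cluster...")

def run_trading_batch() -> None:
    print("Submitting trading-platform jobs to the cluster...")

def dispatch(now: Optional[datetime] = None) -> None:
    """Route the cluster's capacity based on the time of day."""
    now = now or datetime.now()
    start, end = BUSINESS_HOURS
    if start <= now.time() <= end:
        run_ai_training()      # daytime: the AI workload, as in the quote
    else:
        run_trading_batch()    # after hours: the trading platform

if __name__ == "__main__":
    dispatch()
```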
Innovation in HPC
The Dell Technologies HPC & AI Innovation Lab encompasses a 13,000‑square‑foot data centre devoted to high-performance computing and artificial intelligence, housing thousands of servers and a wide range of storage and network systems. It is home to computer scientists, engineers and subject matter experts who actively partner and collaborate with customers and other members of the HPC community, including tech firm NVIDIA.
NVIDIA’s GPUs support a broad range of AI inference and mainstream enterprise compute workloads, such as conversational AI and computer vision. Its customers can also run mixed AI and graphics workloads on a common infrastructure, ideal for deep learning inference and computer-aided design.
Kevin Levitt, who leads the global business development strategy at NVIDIA, believes HPC is becoming increasingly essential for complex workloads and data-intensive computing.
He says: “It used to be that the compute was expensive and the data small. But more and more, as we enter the big data age, we really need to think about moving the compute to the data. If you look at traditional HPC centres, there are probably 20 or 30 workloads that represent 90 per cent of what they do. But if you look at a typical investment bank there are 300 to 400 workloads that define that first 80 or 90 per cent of what they do, and the tail gets longer.”
FSIs using HPC
Payment-processing giant Mastercard leverages machine-learning algorithms running on HPC systems to rapidly process large data sets in order to identify and stop fraudulent transactions.
The company has two billion cards in use across more than 210 countries and territories and processes 165 million transactions per hour, so it needs to stop fraud instantly without disrupting or delaying legitimate transactions.
With every transaction, the machine-learning algorithms examine factors such as a cardholder’s buying habits, geographic location and travel patterns, along with real-time data on card usage – such as what they are trying to buy, where they are trying to buy it and what else they have bought the same day.
Using the processing power and fast throughput of HPC clusters, together with data analytics tools and software frameworks, the company can analyse these huge amounts of data in near real time.
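Mastercard’s actual models and feature sets are proprietary, but the scoring pattern described above – train a classifier on historical transactions, then score each incoming transaction against the cardholder’s profile – can be roughly sketched as follows. The features, thresholds and synthetic data are illustrative assumptions only, and the sketch assumes scikit-learn is available:

```python
# Rough sketch of ML-based transaction scoring; all features and data
# here are synthetic and illustrative, not Mastercard's real system.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Illustrative per-transaction features: spend relative to the
# cardholder's average, distance from their usual location (km), and
# number of purchases already made the same day.
X = rng.random((10_000, 3)) * [10.0, 500.0, 20.0]
y = (X[:, 0] > 8.0) & (X[:, 1] > 400.0)  # synthetic fraud labels

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)  # offline training on historical transactions

txn = np.array([[9.5, 450.0, 3.0]])  # an incoming transaction to score
fraud_prob = model.predict_proba(txn)[0, 1]
print(f"fraud probability: {fraud_prob:.2f}")
# In production, per-transaction scoring must complete in milliseconds
# so that legitimate payments are not delayed.
```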
Raj Seshadri, president of data and services at Mastercard, says: “Accelerated digitisation means that we are also generating increasingly larger quantities and diverse types of data as we complete the tasks essential to our daily lives. Data-driven innovation is all about enabling the products, services and solutions of the future – while balancing the scale of current and future risks.”