The EU has announced new regulations that will affect how AI is used in every sector, including finance. The regulations categorise AI systems by risk, which in theory will make the technology more trustworthy and easier to police.
Shiran Weitzman, CEO and Co-founder of Shield, explains that the wheels of AI are already in motion and that enforcing strict regulation won’t benefit businesses or the public. Instead, regulators must work with tech companies to understand how the technology can be used safely:
As artificial intelligence continues to infiltrate all industries, the European Union has proposed a new act to regulate the use of the technology in all its forms. This proposed law, the Artificial Intelligence Act, is set to be the most restrictive AI regulation to date, and comes on the heels of increased reports of AI systems exhibiting bias, discrimination, intrusions into private data and violations of human rights. If passed, the law will apply across all industries but will mainly focus on high-impact sectors such as finance and banking, public affairs and healthcare. The European Commission states the law was formed to “ensure that Europeans can benefit from new technologies developed and functioning according to Union values, fundamental rights and principles.”
Since the act was proposed, criticism and disagreement over the European Commission’s decision have spread to other countries, especially after a report from the Center for Data Innovation, an international think-tank, revealed these regulations could cost the European economy more than $30 billion to introduce and manage. While these astronomically high numbers have drawn scepticism, policymakers must understand that regulating AI out of industries like banking is impractical because its inherent value far exceeds the cost of compliance and regulation. Moreover, the potential value of AI, especially in finance, has already been demonstrated and is too great for companies to halt development and roll back adoption now.
This is why it is crucial for regulators to work with tech companies and gain a complete understanding of how they’re using AI, so that banks and other financial institutions can continue to use AI and pass its value on to end consumers.
Reasons for hesitation by the US and other countries
Policymakers in other countries where AI has become prevalent are now under increasing pressure to follow Europe’s lead. However, unlike in Europe, other government bodies are more cautious about rolling out sweeping new proposals for AI regulation, with many taking a less severe approach, because there is an understanding that this technology can bring real value when rolled out properly.
The holdup when it comes to AI is its reliance on the “black box”: regulators can see inputs and outputs – what information goes into the AI and what comes out – but not the process by which the system actually works, owing to security and intellectual property concerns. AI companies are hesitant to open up their “black box” because it’s proprietary, and there is a real risk that a well-established competitor with significant resources could copy the technology.
AI gets a bad rap for being biased because it often relies on historical data that is itself inherently biased. For example, financial risk assessment algorithms have in the past been trained on data containing an extensive collection of loan defaults correlated with the historically low incomes of people of colour, which meant implicit bias was built into the resulting models. What counts as bias, and what an “acceptable rate” of accuracy looks like in financial risk assessment models, is assumed to vary widely from one financial institution to another.
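The mechanism described above – skewed historical data baking bias into a model even when no rule mentions group identity – can be sketched with purely hypothetical figures; the group names, records and threshold below are illustrative, not real lending data:

```python
from collections import defaultdict

# Hypothetical loan-repayment history: each entry is (group, repaid).
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate_by_group(history, approve_threshold=0.5):
    """Train a naive 'model' that approves a group whenever its historical
    repayment rate exceeds the threshold, then report the decision each
    group would receive from that model."""
    totals, repaid = defaultdict(int), defaultdict(int)
    for group, ok in history:
        totals[group] += 1
        repaid[group] += ok
    return {g: repaid[g] / totals[g] >= approve_threshold for g in totals}

# The skewed history alone drives opposite outcomes for the two groups,
# even though the scoring rule never references group identity directly.
print(approval_rate_by_group(records))  # {'group_a': True, 'group_b': False}
```

Any real credit model is far more complex, but the underlying dynamic is the same: whatever imbalance exists in the training history resurfaces in the decisions.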
In an industry as tightly regulated as financial services, there is currently no set of standards or rules governing how AI and ML (and the underlying training data) can be used to assess financial risk. The EU is looking to champion those policies; however, it risks over-regulation when it should instead work with tech companies and encourage them to implement a risk management system with mitigation and control measures, training, and regular testing, so that protections are in place whenever potential implicit bias is flagged. Technical documentation, record-keeping, transparency, human oversight and cyber-security will be essential components of adherence, and financial firms are working out how best to prepare for this level of exposure.
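The “regular testing with alerts” part of such a risk management system can be sketched as a periodic fairness check; the demographic-parity metric, the 0.2 tolerance and the group outcomes below are illustrative assumptions, not requirements taken from the act:

```python
def demographic_parity_gap(outcomes_by_group):
    """Largest difference in favourable-outcome rates between any two groups
    (1 = favourable decision, 0 = unfavourable, per applicant)."""
    rates = [sum(o) / len(o) for o in outcomes_by_group.values()]
    return max(rates) - min(rates)

def bias_alert(outcomes_by_group, tolerance=0.2):
    """Flag the model for human review when the parity gap exceeds tolerance."""
    return demographic_parity_gap(outcomes_by_group) > tolerance

# Hypothetical decisions from one testing cycle.
outcomes = {
    "group_a": [1, 1, 1, 0, 1],   # 80% favourable
    "group_b": [1, 0, 0, 0, 1],   # 40% favourable
}
print(bias_alert(outcomes))  # True: the 0.4 gap breaches the 0.2 tolerance
```

In practice such a check would run on every model release and on a schedule, with alerts routed to the human reviewers the regulation calls for, and the chosen metric and tolerance documented as part of the firm’s technical record-keeping.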
How AI usage will continue to develop despite these new regulations
As stated above, the Artificial Intelligence Act will shape the future of AI in financial services and other industries that operate globally, so those who wish to remain in business will need to prepare to uphold these new regulatory standards even if they are not based in Europe. Many believe these regulations will significantly change the adoption and development of AI in these industries; however, by following certain steps and standards, AI will be able to evolve in a safe and effective way.
Firms will need to find a balance between being transparent and protecting proprietary information. While this is no easy task and will take time to perfect, firms should focus on finding an AI system that allows for a quality management system, can host detailed documentation, and is at the same time transparent by regulatory standards. This goes hand-in-hand with prioritising upgrades to their data archiving solutions, which will improve data security, governance, recovery and restoration. By finding solutions that meet these criteria, companies will be able to keep their data private while also coordinating easily with regulators to keep regulatory compliance up to date.
Another aspect companies need to take into consideration is remote work, now the new normal for many financial institutions, at least at a hybrid level. Companies will need a solution that monitors communication flows across multiple channels in different parts of the world if they wish to comply with these new rules set by the European Commission. Of course, one of the best ways for financial institutions and public-sector organisations to ensure their AI systems meet regulatory compliance is to have the right level of human oversight, as appropriate human monitoring and frequent assessment of AI are needed to quickly recognise any misconduct.
Although it is the first, the European Commission will not be the only governing body to enact AI regulations. While the US is still in the early stages of creating restrictions on AI and is still overcoming hurdles, companies should not stall in updating their systems to comply with these new protocols. Recent research has shown that it is not negligence that stops companies from regulating their AI models, but a lack of the skills and knowledge needed to regulate the technology efficiently and appropriately without compromising the integrity of the company.
Although these regulations may instil more public trust in the adoption of AI, tech leaders and policymakers should know that these emerging laws will not decrease the use of AI, nor diminish its future across industries. The world is far too advanced, and with the events of the past year AI has proved to be a necessity in almost every business. It is now on regulators and AI companies to work together to ensure the technology is used in a safe, unbiased way.