In a rejection of a central regulatory body governing the laws of artificial intelligence (AI), such as that put forward by the EU’s ‘AI Act’, the UK Government has announced proposals that adopt a more flexible, case-by-case approach to the use and regulation of the technology.
New plans for regulating the use of AI aim to develop consistent rules that promote innovation within the sector while protecting the public.
Instead of giving responsibility for AI governance to a central regulatory body, the government’s proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings, better reflecting the technology’s growing use across different sectors.
It comes as the Data Protection and Digital Information Bill was introduced to Parliament, shifting the UK’s data laws to boost innovation in technologies such as AI. The Bill promotes a high standard of protection for people’s privacy and personal data while aiming to deliver close to £1 billion in business savings.
AI refers to machines that use data to perform tasks normally performed by humans. Its use is particularly widespread within the financial services industry. For example, its application can help identify patterns in financial transactions that could indicate fraud and money laundering.
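As a simplified illustration of the kind of pattern detection described above, the sketch below flags transactions whose amounts deviate sharply from the rest using a robust (median-based) outlier score. This is an assumption-laden toy example, not how production fraud or anti-money-laundering systems work: real systems draw on far richer features and models, and the data, function name and threshold here are all hypothetical.

```python
# Toy sketch of transaction-anomaly detection (illustrative only; real
# fraud/AML systems use much richer features and models).
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Return indices of transactions whose amount deviates sharply from
    the median, using a robust z-score based on the median absolute
    deviation (MAD). `threshold` is a hypothetical cut-off."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:
        return []
    # 1.4826 scales the MAD to be comparable to a standard deviation.
    return [i for i, a in enumerate(amounts)
            if abs(a - med) / (1.4826 * mad) > threshold]

# Nine routine payments and one conspicuous outlier (made-up figures).
transactions = [120.0, 95.5, 130.0, 110.2, 99.9,
                105.0, 125.3, 118.7, 102.4, 50_000.0]
print(flag_anomalies(transactions))  # → [9]
```

A median-based score is used rather than a plain mean/standard-deviation z-score because a single extreme transaction inflates the standard deviation enough to mask itself in a small sample.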
The new AI paper outlines the government’s approach to regulating the technology in the UK, with proposed rules addressing future risks and opportunities so businesses are clear on how they can develop and use AI systems, while keeping consumers confident about its use and presence.
The proposals focus on supporting growth and avoiding unnecessary barriers being placed on businesses. This could see businesses sharing information about how they test their AI’s reliability as well as following the guidance set by UK regulators to ensure AI is safe and avoids unfair bias.
“We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work,” digital minister Damian Collins said.
“It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”
The UK is already home to a thriving AI sector, leading Europe and ranking third in the world for levels of private investment after domestic firms attracted $4.65 billion last year.
Research this year predicted more than 1.3 million UK businesses will be using artificial intelligence and investing over £200 billion in the technology by 2040.
The extent to which existing laws apply to AI can be hard for organisations and smaller businesses to navigate. Overlaps, inconsistencies and gaps in the current approaches by regulators can also confuse the rules, making it harder for organisations and the public to have confidence in where AI is used.
If rules around AI in the UK fail to keep up with fast-moving technology, innovation could be stifled and it will become harder for regulators to protect the public.
The government’s new approach will create proportionate and adaptable regulation so that AI continues to be rapidly adopted in the UK to boost productivity and growth.
The approach is based on six core principles that regulators must work to apply, including:
- Ensure that AI is used safely
- Ensure that AI is technically secure and functions as designed
- Make sure that AI is appropriately transparent and explainable
- Consider fairness
- Identify a legal person to be responsible for AI
- Clarify routes to redress or contestability
Regulators, such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency, will be asked to interpret and implement the principles.
Regulators will be encouraged to consider ‘lighter touch’ options, which could include guidance and voluntary measures, or creating sandboxes – trial environments where businesses can check the safety and reliability of AI technology before introducing it to the market.
Industry experts, academics and civil society organisations focusing on this technology will be able to share their views on putting this approach into practice through a call for evidence which has been launched in tandem with the bill.
Responses collected from the call for evidence will be considered alongside further development of the framework in the forthcoming AI whitepaper which will explore how to put the principles into practice.
The government will consider ways to encourage coordination between regulators as well as looking at their capabilities to ensure that they are equipped to deliver a practical AI regulatory framework.
Professor Dame Wendy Hall, acting chair of the AI Council, said: “We welcome these important early steps to establish a clear and coherent approach to regulating AI. This is critical to driving responsible innovation and supporting our AI ecosystem to thrive. The AI Council looks forward to working with government on the next steps to develop the whitepaper.”
To show how it is delivering against the National AI Strategy and identifying new priorities for the year ahead, the government also published its first ‘AI Action Plan’.
The government has invested more than £2.3 billion in AI since 2014. Since publishing the National AI Strategy last year, it has announced various new investments in the long-term needs of the sector, including funding for up to 2,000 new AI and data science scholarships, and opened up new visa routes so the industry has the skills and talent to continue to thrive.
As part of the strategy, the AI Standards Hub was unveiled at the start of this year. The Hub will provide users across industry, academia and regulators with practical tools and educational materials to effectively use and shape AI technical standards. The interactive hub platform, led by the Alan Turing Institute with the support of the British Standards Institution and the National Physical Laboratory, is scheduled to launch in autumn 2022.
Speaking on the government’s most recent announcement, Harry Weber-Brown, CEO at TISA Digital, said: “It is encouraging to see the inclusion of smart data in the Data Protection and Digital Information Bill, published by the Government.
“Smart data and open finance will revolutionise the way we access financial services. We are therefore pleased that the Government is determined to support this while maintaining privacy standards, thus addressing the chief barrier to wider adoption of open finance in practice.
“A recent report into open finance published by TISA in collaboration with EY confirmed 90 per cent of consumers surveyed would use dashboards that consolidate the view of pensions, savings, and investments in one place.
“However, our research found that the single greatest barrier to use was shown to be the protection of private consumer data. We welcome the Bill as published today, because it achieves the twin aims of enabling an effective digital finance ecosystem while retaining privacy standards.
“The UK was at the forefront of open finance when open banking was launched. However, recent years have seen other jurisdictions adopt more effective data standards and move ahead with open finance implementation. Smart data receiving Government backing in such a significant way should provide the boost necessary to finally allow UK consumers to reap the benefits of open finance.
“At TISA, we convene the open savings, investments and pensions (OSIP) working group to create a set of API standards for savings, investment and pensions products, significantly widening the prospects for open finance in the UK. Smart data is key to the implementation of the OSIP standards we have developed.
“We look forward to continuing to work with the government, regulators, and the industry to secure the UK financial services industry position as a world leader and pioneer in digitisation, enhancing competition and consumer financial wellbeing.”