
‘Secure by Design’: New UK and US Guidelines Put Cybersecurity at the Forefront of AI Development

Uncertainty reigned supreme across the globe as the extent of artificial intelligence (AI) developments came to light in 2023. AI appears to offer both limitless advantages and serious risks – fuelling widespread excitement and scepticism about the field's long-term impact.

Hoping to maximise potential by safeguarding AI technology from cyber threats and bad actors, the UK National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA) have collaborated to create guidelines for secure AI system development.

The new guidelines aim to support developers of AI systems in making cybersecurity decisions at every stage and level of the development process.

The new UK-led guidelines are also the first of their kind to be agreed globally, with agencies from 17 other countries confirming they will endorse and co-seal them.

The NCSC explained that the new guidelines will help developers ensure that cybersecurity is an “essential pre-condition of AI system safety” and is treated as a priority throughout every part of development – an approach known as ‘secure by design’.

Lindy Cameron, CEO of NCSC, explained: “We know that AI is developing at a phenomenal pace and there is a need for concerted international action, across governments and industry, to keep up.

“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout.”

Primarily, the guidelines focus on enhancing the security of new AI technology and leave the ethical questions for each jurisdiction to decide for itself.

Keeping bad actors at bay
Dr John Woodward, head of Computer Science at Loughborough University

Dr John Woodward, head of computer science at Loughborough University, discussed the need for increased oversight in the world of AI: “AI will have many benefits that we are aware of, but there will also be some hidden dangers.

“One of the major challenges of regulation regarding artificial intelligence is obtaining agreement between countries. Of course, each country wants to have a competitive edge over other countries and we will all see the risks and benefits of artificial intelligence differently.

“Behind closed doors, how will we know how artificial intelligence is actually being used? In some circumstances, it will be very difficult to monitor the development of products supported by artificial intelligence.”

Although the new guidelines are non-binding, they have been launched to keep the space safer as the evolution of AI continues to accelerate across the globe.

Alejandro Mayorkas, Secretary of Homeland Security

Alejandro Mayorkas, US Secretary of Homeland Security, also commented on the significance of the new guidelines: “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.

“By integrating ‘secure by design’ principles, these guidelines represent a historic agreement that developers must invest in protecting customers at each step of a system’s design and development.

“Through global action like these guidelines, we can lead the world in harnessing the benefits while addressing the potential harms of this pioneering technology.”

‘The critical role of cybersecurity in the rapidly evolving AI landscape’

Dan Morgan, senior government affairs director for Europe and APAC at information security firm SecurityScorecard, explained the importance of the new AI guidelines: “This agreement marks a significant step towards harmonising global efforts to safeguard AI technology from potential misuse and cyber threats.

Dan Morgan, senior government affairs director for Europe and APAC at SecurityScorecard

“The emphasis on monitoring AI systems for abuse, protecting data integrity, and vetting software suppliers aligns with our mission to provide comprehensive cyber risk ratings and insights.

“While the agreement is non-binding and primarily carries general recommendations, it represents a collective acknowledgement of the critical role of cybersecurity in the rapidly evolving AI landscape. The focus on integrating security in the design phase of AI systems is particularly noteworthy, as it aligns with our approach of preemptive and comprehensive risk assessment.

“As a global leader in cybersecurity ratings, SecurityScorecard recognises the challenges posed by the rise of AI technology, including risks to democratic processes, the potential for fraud, and impacts on employment.

“We believe that collaborative efforts like this international agreement are essential to address these challenges effectively.

“We look forward to seeing how this framework will evolve and how it will influence AI development and cybersecurity practices. SecurityScorecard remains committed to partnering with global stakeholders to advance cybersecurity standards and practices, particularly in the AI domain, to foster a safer digital world for everyone.”
