Eighteen countries, including the United States and the United Kingdom, have signed a 20-page agreement stating that companies must develop artificial intelligence (AI) that is “secure by design”.
The pact, which was revealed on Sunday, has been called the first international agreement aimed at ensuring the safety of AI against misuse. Although the agreement is non-binding, it outlines general recommendations, such as monitoring AI systems for abuse and securing data against tampering. The US Cybersecurity and Infrastructure Security Agency (CISA) Director, Jen Easterly, highlighted the significance of countries collectively endorsing the idea.
🎉Exciting news! We joined forces with @NCSC and 21 international partners to develop the “Guidelines for Secure AI System Development”! This is operational collaboration in action for secure AI in the digital age: https://t.co/DimUhZGW4R #AISafety #SecureByDesign pic.twitter.com/e0sv5ACiC3
— Cybersecurity and Infrastructure Security Agency (@CISAgov) November 27, 2023
“This is the first time that we have seen an affirmation that these capabilities should not just be about cool features and how quickly we can get them to market or how we can compete to drive down costs,” Easterly said, adding the guidelines represented “an agreement that the most important thing that needs to be done at the design phase is security”.
US Secretary of Homeland Security Alejandro N. Mayorkas said he believed the world is at an “inflection point” right now:
“We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy.”
He continued: “The guidelines jointly issued today by CISA, NCSC, and our other international partners, provide a commonsense path to designing, developing, deploying, and operating AI with cybersecurity at its core. By integrating ‘secure by design’ principles, these guidelines represent a historic agreement that developers must invest in, protecting customers at each step of a system’s design and development.”
Governments scramble to stay ahead of AI developments
The deal was signed by agencies representing all of the G7 countries, as well as others including the Czech Republic, Israel, Nigeria, Singapore, Australia, and Chile.
On paper, the idea is a good one. The integration of AI technology into our lives will be disruptive and potentially dangerous if it is hijacked by those with bad intentions.
The charter aims to deal with how AI can be kept out of the hands of hackers and provides guidance for ensuring new software is rigorously tested before release.
Despite its noble goals, the reality is that non-binding agreements are devilishly hard to enforce on the world stage, even more so when major global players like China, Russia, and India are not signatories. Historically, competing countries team up in blocs for major international projects, such as space travel, and getting nations outside one bloc to cooperate with another is not easy.
AI is not a new field, but it exploded into the public consciousness in November 2022 when ChatGPT went live. The large language model (LLM) chatbot from OpenAI has sparked a surge in AI product development and generated real concern that the technology could be used to drastically cut human jobs, manipulate voting processes, and commit crimes.
Featured image: Pexels