
Global Initiatives in the Regulation of AI

Client Updates / Nov 29, 2023

Written by Haim Ravia and Dotan Hammer

Under the auspices of the G7 Hiroshima AI Process, the G7 leaders, together with the Organization for Economic Co-operation and Development (OECD) and the Global Partnership on Artificial Intelligence (GPAI), have endorsed a set of global standards for the ethical development of advanced AI systems by organizations, adaptable across sectors and regions. This alignment underscores a worldwide commitment to responsible AI innovation.

The G7 nations encourage organizations to adhere to international guiding principles addressing risk management in AI systems, transparency and public reporting of AI-related incidents, global information sharing, data protection and cyber security standards, and bias minimization.

Based on these principles, the International Code of Conduct offers voluntary guidelines for organizational actions, including the following:

  • Conducting continuous risk assessments throughout the AI lifecycle, focusing on safety, security, and societal impacts.
  • Tracking vulnerabilities and misuse in AI systems, informing third parties about issues and incidents for comprehensive risk management.
  • Publicly disclosing the capabilities and limitations of advanced AI systems, focusing on safety, security, and societal impacts, and regularly updating these disclosures for transparency and accountability.
  • Encouraging responsible information sharing among AI developers.
  • Developing and regularly updating AI governance and risk management policies, including privacy measures.
  • Implementing robust security for AI systems, covering physical and cyber security.
  • Developing content authentication mechanisms like watermarking for AI-generated content.
  • Ensuring data quality, privacy, and intellectual property protection, including bias mitigation and respect for rights.

The Bletchley Declaration serves as another example of global efforts towards responsible and ethical AI development. Signed by 28 countries and the European Union at the November 2023 AI Safety Summit held at Bletchley Park in the UK, the Declaration emphasizes a collaborative global approach to tackling AI’s challenges and risks. Recognizing AI’s broad applications in sectors like employment, health, and education, the signatories aim to align AI deployment with the UN’s Sustainable Development Goals. This commitment encompasses areas such as health, education, food security, clean energy, biodiversity, and climate protection. The Declaration highlights the need for international cooperation in addressing AI’s potential adverse impacts on sustainability.

Key areas of focus include safeguarding human rights and ensuring transparency, fairness, accountability, effective regulation, safety, ethics, bias mitigation, privacy, and content integrity. The commitments of the signatory countries include:

  • Gaining a scientific understanding of AI risks and improving detection methods.
  • Developing safety policies that are standardized yet adaptable to different national contexts.
  • Enhancing transparency among private AI developers.
  • Promoting AI safety research and supporting international initiatives for sustainability.
  • Fostering international dialogues on AI, sustainability, and research, aimed at responsibly utilizing AI for societal benefits.

On November 27, 2023, the UK’s National Cyber Security Centre (NCSC), in cooperation with the US Cybersecurity and Infrastructure Security Agency (CISA), announced new global guidelines for AI security. The guidelines have already been endorsed by agencies from 18 countries in total, including the Israeli National Cyber Directorate, the Canadian Centre for Cyber Security, and the Australian Signals Directorate’s Australian Cyber Security Centre.

The guidelines cover four key areas: secure design, secure development, secure deployment, and secure operation and maintenance. In the area of secure design, companies are expected to raise staff awareness of threats and risks, model the threats to AI-based systems, design systems for security as well as functionality and performance, and consider security benefits and trade-offs when selecting an AI model.

To enhance secure development, companies should secure their supply chain; identify, track, and protect assets; document their data, models, and prompts; and manage technical debt. For secure deployment, companies should develop incident management procedures and release AI responsibly, among other requirements. Finally, secure operation and maintenance includes monitoring system behavior and inputs, following a secure-by-design approach to updates, and collecting and sharing lessons learned.

Click here to read the International Code of Conduct for Organizations Developing Advanced AI Systems.

Click here to read The Bletchley Declaration.

Click here to read the Guidelines for secure AI system development.
