
Tech Giants Issue Voluntary Safe AI Development Guidelines

Client Updates / May 04, 2025

Written by: Haim Ravia, Dotan Hammer

Google and OpenAI have recently introduced updated frameworks to ensure the safe and ethical development of advanced artificial intelligence (AI) systems. These initiatives signal the growing industry-wide recognition of the potential risks associated with AI and a commitment to proactive mitigation strategies.

Google’s “Approach to Technical AGI Safety & Security” outlines the company’s strategy regarding Artificial General Intelligence (AGI) and focuses on four key risk areas: misuse, misalignment, mistakes, and structural risks. The paper concentrates on technical solutions for misuse and misalignment, including identifying dangerous capabilities, implementing robust security measures and access controls, and developing aligned models through amplified oversight and robust training.

Similarly, OpenAI updated its “Preparedness Framework,” which outlines its process for tracking and preparing for advanced AI capabilities. The update introduces a structured AI risk assessment process based on five criteria: plausibility, measurability, severity, novelty, and immediacy or irreversibility. The framework distinguishes between “High” and “Critical” capability levels, each requiring specific safeguards before deployment. Additionally, OpenAI has implemented scalable evaluations to support more frequent testing and has committed to publishing detailed Safeguards Reports alongside each frontier model release, while noting that safeguard requirements may be relaxed under certain circumstances.

Click here to read Google’s paper “Taking a Responsible Path to AGI”.
Click here to read OpenAI’s updated Preparedness Framework.
