Written by: Haim Ravia, Dotan Hammer
Israel’s Privacy Protection Authority has published a comprehensive guide for implementing Privacy-Enhancing Technologies (PETs) in artificial intelligence systems. Released in late December 2025, the document aims to help organizations mitigate privacy risks throughout the AI lifecycle while maintaining system functionality.
The guidance addresses the fundamental tension between AI’s data-intensive nature and privacy protection requirements. AI systems require large volumes of data—often including personal information—during both the development/training phase and operational use, creating significant privacy challenges that PETs can help address.
The document categorizes privacy-enhancing technologies into three main groups based on their operational principles. The first group involves data modification techniques, including anonymization, synthetic data generation, and differential privacy. These approaches alter or obscure personal information while preserving its analytical utility.
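To make the differential-privacy idea concrete, the minimal Python sketch below (our own illustration, not taken from the Authority's guidance) answers a simple count query and adds calibrated Laplace noise before the result is released, so the published figure reveals little about any single individual's record. The dataset, predicate, and epsilon value are hypothetical.

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.5):
    """Answer a count query with epsilon-differential privacy.

    Adding or removing one record changes the true count by at most 1
    (sensitivity = 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for record in records if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: how many patients are over 60?
patients = [{"age": 71}, {"age": 45}, {"age": 63}, {"age": 38}, {"age": 66}]
print(dp_count(patients, lambda r: r["age"] > 60, epsilon=0.5))
```

A smaller epsilon means more noise and stronger privacy; the analytical utility of the answer degrades accordingly, which is the trade-off such data-modification techniques manage.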
The second category encompasses distributed computing methods such as federated learning, multi-party computation, and private set intersection. These technologies enable data processing across multiple parties without exposing raw personal data to any single entity, making them particularly valuable for cross-organizational collaboration in sensitive sectors such as healthcare and finance.
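The federated-learning sketch below illustrates the core principle of this group under assumed conditions (the two "hospital" datasets, model, and parameters are hypothetical and ours, not the Authority's): each party trains on its own data and only model parameters are shared and averaged, so raw records never leave the organization.

```python
import numpy as np

def local_update(weights, local_data, lr=0.5):
    """One gradient step of linear regression on a single client's own data."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(global_weights, clients, rounds=30):
    """Each round, clients train locally; only their model updates are averaged centrally."""
    w = global_weights
    for _ in range(rounds):
        local_models = [local_update(w, data) for data in clients]
        w = np.mean(local_models, axis=0)  # raw data never leaves the clients
    return w

# Hypothetical example: two "hospitals" fit a shared model without pooling records.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

print(federated_average(np.zeros(2), clients))  # approximately recovers true_w
```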
The third group focuses on separation and encryption technologies, including homomorphic encryption and trusted execution environments. These allow data to remain encrypted even during processing, providing protection throughout the computational pipeline.
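As a small illustration of computing on encrypted data, the sketch below uses the open-source python-paillier (`phe`) package, which we assume is installed and which is not referenced in the guidance. Paillier encryption is additively (partially) homomorphic rather than fully homomorphic, but it shows the principle: an untrusted aggregator sums ciphertexts without ever seeing the underlying values.

```python
from phe import paillier  # assumes the python-paillier package is installed

# Generate a Paillier keypair held only by the trusted party.
public_key, private_key = paillier.generate_paillier_keypair()

# Hypothetical salary figures encrypted by different data holders.
encrypted_salaries = [public_key.encrypt(s) for s in [41_000, 52_500, 38_750]]

# The aggregator adds the ciphertexts without decrypting anything.
encrypted_total = encrypted_salaries[0]
for ciphertext in encrypted_salaries[1:]:
    encrypted_total = encrypted_total + ciphertext

# Only the key holder can decrypt the aggregate result.
print(private_key.decrypt(encrypted_total))  # 132250
```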
The guidance emphasizes that these technologies can be combined to create multi-layered privacy protection. For example, differential privacy can be implemented within trusted execution environments, or synthetic data can be generated following private set intersection to identify relevant records across organizations.
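The layering principle can also be seen in miniature by adding differential-privacy noise to a federated aggregate, a simpler pairing than the combinations named in the guidance, chosen here only for illustration; all names and values below are hypothetical.

```python
import numpy as np

def dp_federated_mean(client_means, epsilon=1.0, value_range=1.0):
    """Combine two PETs: federated aggregation plus differential privacy.

    Each client reports only a local mean (assumed clipped to [0, value_range]);
    the coordinator averages the reports and adds Laplace noise before publishing,
    bounding what the release reveals about any one client's contribution.
    """
    n = len(client_means)
    sensitivity = value_range / n  # replacing one client's mean shifts the average by at most this
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(np.mean(client_means)) + noise

# Hypothetical example: three organizations each report an average risk score in [0, 1].
print(dp_federated_mean([0.42, 0.57, 0.49], epsilon=1.0))
```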
Targeted at Data Protection Officers, legal advisors, product managers, and AI developers, the document includes practical case studies from various countries demonstrating real-world implementations in healthcare, finance, and consumer data contexts.
The Authority notes that technology selection should consider data sensitivity, processing requirements, regulatory context, and organizational capabilities.
Click here to read the Israeli Privacy Protection Authority’s Guidance on Implementation of Privacy Enhancing Technologies in AI Systems (in Hebrew).