Written by: Haim Ravia, Dotan Hammer
On April 20, 2026, Spain’s data protection authority (Agencia Española de Protección de Datos, AEPD) published its second guidance document on GDPR compliance when using AI-powered voice transcription tools, building on its initial January 2026 guidance. The AEPD confirms a risk-based approach and warns that organizations should not treat AI transcription as a purely technical feature but as a processing activity requiring continuous governance, clear transparency, and proactive safeguards.
The guidance addresses the allocation of processing responsibilities under the GDPR, emphasizing that an organization that decides to incorporate an AI voice transcription service into its processes acts as the data controller, since it determines the purposes and means of the processing, regardless of whether the tools used are proprietary or third-party. Both controller and processor must exercise due diligence when selecting AI transcription products, limiting their choice to products that enable informed decision-making and offer sufficient guarantees of GDPR compliance. This diligence must be maintained throughout the entire lifecycle of the processing, not merely during the procurement phase.
The AEPD stresses that a transcription is not a neutral text — it is a representation attributed to a specific individual. Since errors in AI transcription are known and foreseeable, the principle of accountability requires the controller to take a proactive approach, adopting measures to prevent, detect, and correct inaccuracies. These measures may include informing data subjects of the system’s possible limitations, human oversight of transcriptions, and implementing mechanisms for the exercise of the right to rectification. Organizations using AI transcription must also ensure continuous transparency — informing participants throughout the session that transcription is occurring, not merely at the beginning.
In California, Senate Bill 574 would impose specific duties on attorneys who use generative AI and would restrict how arbitrators may use such tools in decision-making. Attorneys would be required to ensure that confidential, personally identifying, or other nonpublic information is not entered into a public generative AI system; take reasonable steps to verify the accuracy of AI-generated material; correct any erroneous or hallucinated output; and remove biased, offensive, or harmful content. The bill would prohibit any brief, pleading, motion, or other paper filed in court from containing citations that the responsible attorney has not personally read and verified, including citations provided by generative AI.
For arbitrators, SB 574 would prohibit the delegation of any part of the decision-making process to a generative AI tool. Arbitrators could not rely on AI-generated information outside the record without making appropriate disclosures to the parties beforehand and, so far as practicable, allowing the parties to comment. Arbitrators would be required to assume responsibility for all aspects of an award, regardless of any use of AI tools.
Click here to read the AEPD’s guidance on AI-powered voice transcription.
Click here to read California SB 574 on AI use by attorneys and arbitrators.