Written by Guy Milhalter
In September 2025, the Joint Commission, a nonprofit accreditation organization, in collaboration with the Coalition for Health AI (CHAI), released new guidance on the Responsible Use of AI in Healthcare (RUAIH). The framework is designed to help healthcare organizations implement AI tools in ways that promote safety, transparency, equity, and accountability. While not a regulatory mandate, the guidance is expected to influence future accreditation standards and healthcare organizations' procurement practices, making it highly relevant for health IT and digital health companies that develop or deploy AI-enabled solutions.
The RUAIH framework outlines seven core elements that healthcare organizations should consider when adopting AI tools:
AI Policies and Governance Structures
Healthcare organizations are encouraged to establish formal AI governance structures to oversee the responsible use of AI tools. This includes designating individuals with appropriate technical expertise to lead implementation efforts. Governance teams should manage the selection, deployment, risk assessment, and lifecycle oversight of AI tools, whether developed internally or by third parties. The guidance recommends that governance teams include representatives from clinical, operational, IT, compliance, and safety functions, as well as stakeholder groups reflecting the needs of impacted populations (e.g., patients, providers, and staff). Policies should align with internal standards and external regulations and be regularly reviewed and updated.
Patient Privacy and Transparency
Organizations should implement policies that protect patient data and ensure transparency around AI use. Patients should be informed when AI tools directly impact their care, and consent should be obtained where appropriate. Transparency should extend to both patients and staff, including disclosures about how data is used and the role AI plays in decision-making. These practices are essential for maintaining trust and supporting informed adoption.
Data Security and Data Use Protections
The guidance emphasizes strong data protection measures, including encryption of data in transit and at rest, strict access controls, and regular security assessments. Even when HIPAA requirements do not apply, such as when data has been de-identified and no longer constitutes protected health information (PHI), organizations should still apply robust technical protections and contractual safeguards. The latter can include data use agreements that clearly define permitted uses, prohibit re-identification of de-identified data, and grant audit rights.
Unsurprisingly, the Joint Commission also recommends that healthcare organizations adopt its Responsible Use of Health Data (RUHD™) framework, which provides guardrails for secondary data use, including oversight structures and algorithm validation. Vendors seeking to license their AI products to healthcare organizations would benefit from studying and adhering to this framework.
Ongoing Quality Monitoring
Healthcare organizations should monitor AI tools both before and after deployment. This includes requesting validation data from vendors, assessing performance in the local context, and evaluating for bias. Post-deployment monitoring should be risk-based and scaled according to the tool’s proximity to patient care. Organizations should establish feedback loops with vendors and use dashboards or other tools to track performance, updates, and adverse events.
Voluntary, Blinded Safety Reporting
The framework encourages confidential reporting of AI-related safety events to independent entities such as Patient Safety Organizations (PSOs). This approach supports learning and improvement without imposing new regulatory burdens. Organizations should treat AI-related incidents, such as unsafe recommendations or performance degradation, as patient safety events and report them through existing channels where possible.
Risk and Bias Assessment
Healthcare organizations should evaluate AI tools for risks and biases across populations and use cases. This includes requesting vendor disclosures, using representative datasets, and conducting internal assessments. Bias can occur at any stage of the AI lifecycle, so monitoring should be ongoing. As healthcare organizations ramp up their risk and bias assessments, vendors should be prepared to demonstrate that their AI tools are designed to minimize bias to the greatest extent possible.
Education and Training
Organizations should provide role-specific training and general AI literacy to staff and providers. This includes documentation on how AI tools work, their intended use, and any limitations. Education initiatives should promote a shared understanding of AI principles and terminology, helping staff use these tools safely and effectively.
For health technology companies, this guidance offers a clear roadmap for aligning with the expectations of healthcare organizations. Vendors that build tools with governance, transparency, and monitoring in mind will be better positioned to support their clients, differentiate in the market, and prepare for future compliance requirements.
Read the full RUAIH guidance from the Joint Commission here: https://www.jointcommission.org/en-us/about-us/key-initiatives/ai-data-analytics-research