The Joint Commission (TJC) and the Coalition for Health AI (CHAI) recently published the Guidance on the Responsible Use of Artificial Intelligence in Healthcare (Guidance), which outlines strategies for health care organizations to optimize their integration of health AI tools. The Guidance defines health AI tools broadly as clinical, administrative, or operational solutions that apply algorithmic methods to tasks involved in direct or indirect patient care, care support services, and care-relevant operations and administrative services. Given this inclusive definition, the Guidance identifies a wide range of potential AI-related risks, including errors, lack of transparency, threats to data privacy and security, and overreliance on AI tools. To address these concerns, the Guidance outlines suggested practices that health care organizations can undertake in implementing AI tools. These practices are organized into seven elements, which are summarized below. While the Guidance is not limited to health care delivery organizations, it focuses primarily on them. It is also important to note that the Guidance is not binding on health care organizations, although TJC indicates that a voluntary “Responsible Use of AI” certification program is forthcoming.
The Seven Elements of Responsible Use of AI Tools for Health Care Organizations:
- AI Policies and Governance Structures. The Guidance recommends that organizations establish formal AI-usage policies and a governance structure. According to TJC and CHAI, the policies should set expectations, including rules or procedures on the use of AI, and the governance committee should be composed of qualified individuals, including representatives from compliance, IT, clinical programs, operations, and data privacy. The Guidance also suggests regular reporting on AI usage to the organization’s board of directors or other fiduciary governing body.
- Patient Privacy and Transparency. Organizations are encouraged to adopt specific policies on data access, use, and transparency consistent with applicable laws and regulations. To promote transparency, organizations should inform patients about AI’s role in their care, including how their data may be used and how AI may benefit their care. Organizations may also need to secure informed consent to use AI tools, if applicable. The Guidance reminds organizations that transparency with staff members on the use of AI tools cannot be overlooked.
- Data Security and Data Use Protections. The Guidance stresses that all uses of patient data with AI tools must comply with HIPAA. Providers can support compliance by leveraging current data protection strategies, including encrypting patient data, limiting data access, regularly assessing security risks, and developing an incident response plan. TJC and CHAI recommend that organizations enter into data use agreements that outline permitted uses, minimize data exports, prohibit re-identification, require third parties to comply with the organization’s security and privacy policies, and provide the organization with audit rights.
- Ongoing Quality Monitoring. Beyond privacy and security risks, the Guidance advises organizations to monitor AI quality regularly by looking for changes in outcomes and testing AI tools against known standards. Externally developed AI tools may not receive consistent review, and the dynamic nature of AI renders it prone to “drift” from its intended purpose; therefore, the Guidance calls for an internal reporting system to identify risks and maintain quality of care. TJC and CHAI suggest a risk-based approach to monitoring, such that AI tools that inform or drive clinical decisions should be prioritized. Additionally, the Guidance advises that organizations create a process to report adverse events to leadership and relevant vendors.
- Voluntary Reporting. The Guidance urges organizations to establish a process for confidential, anonymous reporting of AI safety incidents to an independent organization. Reporting through confidential channels to third parties, such as federally listed Patient Safety Organizations, may improve the quality of AI usage without compromising patient privacy.
- Risk and Bias Assessment. The Guidance also recommends that organizations implement processes for categorizing and documenting AI bias or risk. In clinical use, AI may fail to generalize to certain patient populations, leading to misdiagnosis and inefficient care. TJC and CHAI recommend that organizations verify that AI tools are appropriately tuned to the populations to which they are applied and that the tools were developed using representative, unbiased data sets.
- Education and Training. Finally, to ensure that AI benefits the organization, the Guidance advocates for education and training of health care providers and staff on the use of AI tools, including any limitations on, and risks of, their use. The Guidance directs organizations to limit AI tool access to specific roles on a need-to-use basis, and to advise all staff where to find relevant information about AI tools and organizational policies and procedures.
Implications
In the absence of comprehensive federal laws governing AI, the Guidance (along with existing resources such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the Bipartisan House Task Force Report on Artificial Intelligence) may help health care organizations evaluate and implement AI tools in a safe and compliant manner.
Similar to the NIST AI RMF Playbook, TJC and CHAI plan to release a series of practical “playbooks” to operationalize the recommended practices in the Guidance. Health care institutions seeking actionable guidance may want to take note of these playbooks because they will inform TJC’s future AI certification program.
Overall, the Guidance’s strategies can help health care organizations minimize AI risks and foster an adaptive health care environment.
Lauren Ludwig contributed to this article.