New Joint Commission Guidance On The Use Of Artificial Intelligence In Healthcare
On September 17, 2025, the Joint Commission, in collaboration with the Coalition for Health AI (“CHAI”), issued its first high-level framework on the responsible use of artificial intelligence (“AI”) in healthcare. The Guidance on the Responsible Use of AI in Healthcare (“Guidance”) is intended to help hospitals and health systems responsibly deploy, govern, and monitor AI tools across organizations. The goal of the Guidance is to help “…the industry align elements that enhance patient safety by reducing risks associated with AI error and improving administrative, operational, and patient outcomes by leveraging AI’s potential.”
The Guidance applies broadly to “health AI tools” (“AI tools”), which are defined as:
…clinical, administrative, and operational solutions that apply algorithmic methods (predictive, generative, combined methods) to a suite of tasks that are part of direct or indirect patient care (e.g., decision support, diagnosis, treatment planning, imaging, laboratory, patient monitoring), care support services (e.g., clinical documentation, scheduling, care coordination/management, patient communication), and care-relevant healthcare operations and administrative services (e.g., revenue cycle management, coding, prior authorization, care quality management, etc.)
The Guidance is not intended to direct the development of AI tools, or to validate the effectiveness of AI tools themselves; rather, it provides broad direction to healthcare organizations on structures and processes for safe implementation and use. The Guidance is positioned as an initial, high-level standard that will be operationalized through forthcoming, non-binding governance playbooks and a voluntary certification program. The Guidance sets forth seven core elements that healthcare organizations should address to manage the risks and realize the benefits of AI systems in the clinical, operational, and administrative spaces. Each element focuses on practical controls, accountability, and continuous learning. The seven core elements articulated by the Guidance are:
AI Policies and Governance Structures: The Guidance calls for the establishment of formal, risk-based governance to oversee the implementation and usage of AI across third-party, internally developed, and embedded tools. Governance should be staffed with individuals who have appropriate technical expertise, ideally in AI, and should include clinical, operational, IT, compliance, privacy/security, and safety/incident reporting representatives, as well as representatives reflecting impacted populations. Policies should align with internal standards and external regulatory and ethical frameworks and be reviewed regularly. The governing body and/or fiduciary board should receive periodic updates on AI use, outcomes, and potential adverse events. The Guidance states that “…governance creates accountability which will help to drive the safe use of AI tools.”
Patient Privacy and Transparency: The Guidance calls for organizations to implement policies addressing data access, use, and protection, coupled with mechanisms to disclose AI use and to educate patients and families. When AI directly impacts care, patients should be notified, and when appropriate, consent should be obtained. Transparency and education should extend to staff as well, clarifying how AI tools function, their role in decision-making, and data handling practices. The Guidance aims to protect patient data and to preserve trust while enabling AI’s benefits, recognizing that AI often relies on sensitive, large-scale datasets.
Data Security and Data Use Protections: The Guidance emphasizes robust security controls and contractual guardrails. At a minimum, organizations should encrypt data in transit and at rest, enforce strict access controls, perform regular security assessments, and maintain an incident response plan. Data use agreements should define permitted uses, require data minimization, prohibit re-identification of de-identified datasets, impose third-party security obligations, and preserve the rights of the organization to audit third-party vendors for compliance. HIPAA obligations apply whenever PHI is involved, so policies should be tailored to comply with HIPAA, particularly the HIPAA Privacy Rule. Even for properly de-identified data, organizations should maintain strong technical and contractual protections given re-identification risks and downstream use in model development and tuning.
Ongoing Quality Monitoring: The Guidance highlights that AI performance can drift as data inputs or algorithms change, vendor updates roll out, or workflows evolve. The Guidance urges pre-deployment validation and post-deployment monitoring that is risk-based and context-appropriate. During procurement, organizations should request validation evidence, understand bias evaluations, and, where possible, secure vendor support for tuning/validation on a sample that is representative of the deployment context. Comprehensive policies should identify the parties responsible for monitoring and evaluating AI tools. This monitoring and evaluation should include regular validation, evaluation of the quality and reliability of data and outputs, assessment of use-case-relevant outcomes, confirmation that the AI tools rely on current data, development of an AI dashboard, and a process for reporting adverse events and/or errors to the relevant parties (a minimal monitoring sketch follows these seven elements). Monitoring responsibility should also be discussed as part of third-party procurement and contracting.
Voluntary, Blinded Reporting of AI Safety-Related Events: The Guidance promotes the dissemination of knowledge across the industry to help healthcare providers stay informed about potential risks and best practices. To avoid imposing new regulatory burdens, the Guidance encourages confidential, blinded reporting of AI-related safety events to independent entities (such as Patient Safety Organizations). Organizations should capture AI-related near misses and harms (e.g., unsafe recommendations, major performance degradation after an update, biased outputs, etc.) within internal incident systems and share de-identified details through existing channels, both internally and externally, as appropriate. This approach enables pattern recognition and rapid, field-wide learning while protecting patient privacy.
Risk and Bias Assessment: The Guidance stipulates that organizations should proactively identify and address risks and biases in AI tools, both prospectively and through ongoing monitoring. They should seek vendor disclosures on known risks, limitations, and bias (including specifically how bias was evaluated). The Guidance states that healthcare organizations should determine whether the AI tools are fit for purpose, whether they underwent appropriate bias detection assessments during development, and whether the algorithms have been tested on the specific populations they serve (and ensure that they are tuned/tested on local data), and that the AI tools should be audited and monitored to identify, mitigate, and/or manage biases when appropriate. The aim is to prevent safety errors, misdiagnoses, administrative burdens, and inequities that can arise when AI tools are applied outside their validated context.
Education and Training: The Guidance states that clinicians and staff members must receive training on AI tools to ensure their safe implementation and integration into clinical workflows. The Guidance recommends role-specific training for clinicians and staff on each AI tool’s intended use, limitations, and monitoring obligations, supported by accessible documentation. Broader AI literacy and change management initiatives should be considered to help create a shared vocabulary and understanding of AI principles, risks, and benefits. Organizations should determine when pre-implementation and periodic training are necessary for clinicians and staff.
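To make the Ongoing Quality Monitoring element above concrete, the sketch below shows one way an organization might operationalize a post-deployment performance check. It is a minimal illustration only; the metric, tolerance, and data structures are hypothetical choices and are not drawn from the Guidance.

```python
# Illustrative only: a minimal post-deployment drift check for a binary
# classification AI tool. Metric choice, thresholds, and inputs are
# hypothetical placeholders, not Joint Commission requirements.
from dataclasses import dataclass

@dataclass
class DriftAlert:
    metric: str
    baseline: float
    current: float
    degradation: float

def accuracy(predictions, labels):
    """Fraction of predictions that match the observed outcomes."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(predictions, labels, baseline_accuracy, tolerance=0.05):
    """Compare recent performance against the pre-deployment baseline.

    Returns a DriftAlert when accuracy degrades by more than `tolerance`,
    which could feed an AI dashboard and the adverse-event reporting process.
    """
    current = accuracy(predictions, labels)
    degradation = baseline_accuracy - current
    if degradation > tolerance:
        return DriftAlert("accuracy", baseline_accuracy, current, degradation)
    return None

# Example: validation accuracy was 0.91 at go-live; this month's reviewed
# sample scores 0.70, exceeding the 5-point tolerance and triggering an alert.
alert = check_for_drift([1, 0, 1, 1, 0, 0, 1, 0, 0, 1],
                        [1, 0, 0, 1, 0, 1, 1, 0, 1, 1],
                        baseline_accuracy=0.91)
if alert:
    print(f"Drift detected: {alert.metric} fell from {alert.baseline:.2f} to {alert.current:.2f}")
```

In practice, a check like this would typically run on clinician-adjudicated samples at a cadence set by the organization’s risk-based monitoring policy, with alerts routed to the AI dashboard and incident-reporting channels described above.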
The Guidance does not elaborate on the actual implementation of the broad elements discussed above; rather, it solicits additional feedback on the high-level guidance provided and requests further input from stakeholders in the development of “Responsible Use of AI Playbooks” (“Playbooks”). These Playbooks will serve as the practical resources to guide health systems toward aligning with the Guidance. Once the Playbooks have been developed, a voluntary Joint Commission Responsible Use of AI certification program will be developed based on the Playbooks. While not binding, the Guidance is likely to influence how AI tools are used across the healthcare industry and could serve as a model for future accreditation-related expectations for AI governance in the healthcare sector.
It is of paramount importance that healthcare organizations employing AI not only consider this Guidance but also adapt it to their circumstances and understand the specific AI tools to be deployed from both an operational and a technical perspective, including the potential unanticipated consequences of those tools’ use. The organizations that most successfully employ AI tools will be those best able to recognize unanticipated consequences and recalibrate the tools accordingly.
Privacy Tip #464 – Pitfalls of Dating a Bot
Dating sure has changed since I was in the market decades ago. Some of us can’t imagine online dating, let alone dating a bot. Get over it—it’s now reality.
Vantage Point, a counseling company located in Texas, surveyed 1,012 adults, and a whopping 28% of them admitted to having “at least one intimate or romantic relationship with an AI system.” Vantage Point recently released “Artificial Romance: A Study of AI and Human Relationships,” which found:
28.16% of adults claim to have at least one intimate or romantic relationship with an AI.
Adults 60 years and older are more likely to consider intimate relationships with AI as not cheating.
More than half of Americans claim to have some kind of relationship with an AI system.
ChatGPT is the #1 AI platform adults feel they have a relationship with, Amazon’s Alexa is #3, Apple’s Siri is #4, and Google’s Gemini is #5.
Adults currently in successful relationships are more likely to pursue an intimate or romantic relationship with an Artificial Intelligence.
The article explores whether having an intimate or romantic relationship with a bot is cheating on your partner or not, which we will not delve into here. The point is that it appears that a lot of adults are involved in relationships with bots.
According to Gizmo, younger generations, including 23% of Millennials and 33% of Gen Z report having romantic interactions with AI.
For adults, the pitfalls and “dangers” associated with dating a bot are thoroughly outlined in an informative article in Psychology Today. Some of the experts believe that:
Dating a bot threatens our ability to connect and collaborate in all areas of life.
In most cases, users actually create the characteristics, both physical and “emotional,” that they want in their bot. Some users lose interest in real-world dating because of intimidation, inadequacy, or disappointment.
AI relationships will potentially displace some human relationships and lead young men to have unrealistic expectations about real-world partners.
Sometimes the bots are manipulative and can be destructive. This can lead to feelings of depression, which can lead to suicidal behavior.
What is more alarming is the “astonishing proportion of high schoolers [who] have had a ‘romantic’ relationship with an AI” bot. According to the article by the same name, “this should worry you.”
Presently, one in five high school students say that “they or a friend have used AI to have a romantic relationship,” according to a recent report from the Center for Democracy and Technology. This is consistent with other studies noting the high percentage of teens that are forming relationships with AI bots. The concerns for youngsters forming relationships with bots include that bots can “give dangerous advice to teens,” such as encouraging suicide, explaining how to self-harm, or hiding eating disorders. “Numerous teens have died by suicide after developing a close and sometimes romantic relationship with a chatbot.”
The Report found that 42% of high schoolers use AI “as a friend, or to get mental health support, or to escape from real life.” Additionally, 16% say they converse with an AI bot every day. AI is also being used to fabricate revenge porn, produce deepfakes, and facilitate sexual harassment and bullying.
As with social media, parents need to be aware of how often kids interact with AI bots for romantic relationships or mental health advice, and should discuss the risks with them.
Deepfakes Problematic for Detection + Response
OpenAI recently published research summarizing how criminal and nation-state adversaries are using large language models (LLMs) to attack companies and create malware and phishing campaigns. In addition, the use of deepfakes has increased, including audio and video spoofs used for fraud campaigns.
Although “most organizations are aware of the danger,” they “lag behind in [implementing] technical solutions for defending against deepfakes.” Security firm Ironscales reports that deepfakes are increasing and working well for threat actors. It found that the “vast majority of midsized firms (85%) have seen attempts at deepfake and AI-voice fraud, and more than half (55%) suffered financial losses from such attacks.”
CrowdStrike’s 2025 Threat Hunting Report projects that audio deepfakes will double in 2025, and Ironscales reports that 40% of companies surveyed had experienced deepfake audio and video impersonations. Although companies are training their employees on deepfake schemes, they have “been unsuccessful in fending off such attacks and have suffered financial losses.”
Ways to mitigate the effect of deepfakes include:
Training employees on how to detect, respond to, and report deepfakes;
Creating policies that limit the damage any single compromised individual can cause;
Embedding multiple levels of authorization for wire transfers, invoice payments, payroll, and other financial transactions, as sketched in the example below; and
Employing tools to detect threats that may be missed by employees.
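As an illustration of the multi-level authorization point above, the following sketch shows one way to require independent approval before a high-value transfer is released. The threshold, roles, and call-back flag are hypothetical placeholders, not recommendations for any particular system.

```python
# Illustrative only: a minimal dual-authorization gate for high-value wire
# transfers. Thresholds, roles, and the out-of-band verification step are
# placeholders for an organization's own controls.
from dataclasses import dataclass, field

@dataclass
class WireTransfer:
    amount: float
    beneficiary: str
    requested_by: str
    approvals: set = field(default_factory=set)
    verified_out_of_band: bool = False  # e.g., a call-back to a known phone number

APPROVAL_THRESHOLD = 10_000  # transfers above this need two independent approvers
REQUIRED_APPROVERS = 2

def can_release(transfer: WireTransfer) -> bool:
    """A deepfaked voice or video of one executive should never be enough."""
    if transfer.amount <= APPROVAL_THRESHOLD:
        return len(transfer.approvals) >= 1
    independent = transfer.approvals - {transfer.requested_by}  # requester cannot self-approve
    return len(independent) >= REQUIRED_APPROVERS and transfer.verified_out_of_band

wire = WireTransfer(amount=250_000, beneficiary="Acme Supplies", requested_by="cfo")
wire.approvals.update({"cfo", "controller"})
print(can_release(wire))  # False: only one independent approver and no call-back yet
```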
Threat actors will continue to use AI to develop and hone new strategies to evade detection and compromise systems and data. Understanding the risk, responding to it, educating employees, and monitoring can help mitigate the risks and consequences.
From Pumpkin to Pie – Transforming Raw AI Potential into Results
Adopting AI successfully requires disciplined project management and realistic expectations.
Too many businesses approach AI implementation haphazardly, leading to failed pilots, wasted resources, and organizational skepticism. Treat AI adoption as a carefully managed project with clear goals, defined scope, realistic timelines, and measurable success criteria.
Assign dedicated team members to build AI expertise and lead projects, avoiding the common mistake of adding AI responsibilities to already overloaded staff. Integrate AI learning into professional development programs, creating career paths for AI specialists.
Allocate responsibilities and resources appropriately, ensuring teams have both the technical expertise and business acumen needed for success. Plan for ongoing evaluation, implementation, metrics tracking, and improvement cycles. Celebrate successes publicly while treating failures as learning opportunities. Building an AI-positive culture requires consistent leadership support and tangible recognition of achievements.
Reality Check: AI Isn’t Magic (And That’s Actually Good News)
AI represents a powerful tool, not a replacement for human judgment and expertise. Generative and analytical models resemble highly capable but inexperienced assistants. They process information rapidly and identify patterns humans might miss, but lack real-world experience, contextual understanding, and common sense. They can hallucinate convincing but incorrect answers, produce irrelevant or biased outputs, and fail to recognize when they’re operating outside their competence boundaries.
Understanding these limitations is liberating rather than disappointing. It clarifies that AI augments rather than replaces human capabilities, allowing businesses to deploy AI strategically where it adds most value.
Always verify AI outputs through human review, especially for critical decisions or external communications. Remember that AI doesn’t truly think or feel emotions. It processes patterns in data to generate statistically likely responses. This understanding helps set appropriate expectations and implement necessary safeguards.
High-Impact Use Cases Worth Pursuing
Ask how AI could automate, augment, or enhance specific tasks across your organization. Start with use cases offering clear value propositions and manageable implementation complexity. Consider both efficiency gains and strategic advantages when evaluating potential applications.
Document review and management: AI excels at processing large volumes of information, summarizing dense reports, flagging problematic areas, extracting key information, and classifying documents by type, date, or relevance. Business teams use AI for contract analysis and due diligence, while compliance teams employ it for regulatory monitoring and reporting.
Policy development and communications: GenAI can draft customized communications for different audiences, generate training materials, and ensure consistency across organizational documents. While human review remains essential, AI dramatically accelerates the drafting process.
Presentations and content creation: AI tools help produce compelling presentations, generate relevant graphics and visualizations, create marketing content, and adapt materials for different audiences and contexts. Creative professionals find AI particularly valuable for ideation and rapid prototyping.
Research and analysis: For unfamiliar concepts or emerging trends, AI provides rapid synthesis of available information, identifies relevant sources, and generates plain-language explanations of complex topics. However, outputs must be verified for accuracy and completeness, particularly for technical or regulated subjects.
Regulatory compliance tools: Specialized AI assistants help with data-protection impact assessments, marketing claim analysis, sustainability reporting, export control compliance, and tender response preparation. These tools reduce compliance burdens while improving accuracy and consistency.
Internal knowledge sharing and workflow automation: AI can power chatbots that handle routine employee questions, streamline procurement and contract approvals, direct requests to the right teams, and automatically fill forms using existing data. These implementations offer quick productivity improvements with minimal deployment risk.
Smart Risk-Benefit Analysis
Before implementing any AI use case, conduct a thorough risk-benefit analysis considering multiple dimensions. Evaluate ethical implications including potential bias, fairness concerns, and impact on stakeholders. Assess accuracy requirements and error tolerance for the specific application. Consider copyright and intellectual property risks from both input and output perspectives. Analyze data privacy implications and regulatory compliance requirements. Plan for potential tool failures or performance degradation over time.
Use quantitative metrics wherever possible to measure expected improvements and justify investments. Conduct limited pilot projects with carefully selected user groups before broad deployment. Gather feedback systematically and iterate based on real-world experience. Decide whether free or low-cost tools suffice for experimentation or whether enterprise licenses are needed for production use. Enterprise versions typically offer superior data protection, administrative controls, and vendor support, justifying higher costs for critical applications.
Building on What You Already Have
Many businesses already have AI capabilities embedded in existing productivity platforms and enterprise software. Major productivity suites include AI-powered features for document creation, data analysis, and workflow automation. These embedded capabilities offer a low-risk starting point for AI adoption, leveraging familiar interfaces and existing data security measures.
Evaluate specialized tools for specific business functions by considering output quality, ease of use, integration capabilities, data handling practices, vendor stability, and total cost of ownership. Request demonstrations focused on your actual use cases rather than generic capabilities. Pilot tools with real business problems before committing to enterprise deployments. Ensure vendor contracts address AI-specific concerns including data usage rights, model updating procedures, and liability allocation.
Critical Limitations and Security Considerations
GenAI tools may hallucinate or produce erroneous content with high confidence, yet you remain responsible for verifying accuracy and appropriateness. Assess disclosure risks carefully, understanding that information entered into AI tools might be stored, analyzed, or used for model training. Check whether data remains confined to your organization’s instance or becomes accessible to vendors or other users. Evaluate cybersecurity measures, including encryption, access controls, and incident response procedures.
Scrutinize vendor trustworthiness through financial stability assessment, security certification review, and reference checking. Review terms and conditions carefully, paying particular attention to data rights, liability limitations, and termination provisions. Consider whether customer consent is required before using AI with their data, particularly in regulated industries or when handling sensitive information. Implement appropriate technical and organizational measures to protect data throughout the AI lifecycle.
Mastering the Art of Prompt Engineering
Effective prompt engineering dramatically improves AI output quality and relevance. Use direct, focused prompts that clearly specify desired outcomes. Provide comprehensive context, including role, audience, tone, format, and constraints. Give examples of your organization’s voice and style to help models match expectations. Iterate on prompts based on results, refining instructions for clarity and completeness.
Different AI models respond better to different prompting strategies, and so users should adapt their approach based on the specific tool. Save successful prompts as templates for reuse and training. Document prompt patterns that work well for specific use cases. Share prompt libraries across teams to accelerate learning and ensure consistency. Remember that prompt engineering is an evolving skill that improves with practice and experimentation.
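As a minimal illustration of these prompting practices, the sketch below assembles a reusable template covering role, audience, tone, format, and constraints. The field values are invented examples, not guidance for any specific AI tool.

```python
# Illustrative only: a reusable prompt template capturing the elements the text
# recommends (role, audience, tone, format, constraints). Field values are examples.
from string import Template

PROMPT_TEMPLATE = Template(
    "You are $role.\n"
    "Audience: $audience\n"
    "Tone: $tone\n"
    "Format: $format\n"
    "Constraints: $constraints\n\n"
    "Task: $task"
)

def build_prompt(**fields: str) -> str:
    """Fill the template; saved templates like this can be shared across teams."""
    return PROMPT_TEMPLATE.substitute(**fields)

print(build_prompt(
    role="a compliance analyst at a mid-sized manufacturer",
    audience="non-lawyer operations managers",
    tone="plain, practical, neutral",
    format="five bullet points, each under 25 words",
    constraints="cite no statistics you are not given; flag anything uncertain",
    task="Summarize the key obligations in the attached data-processing policy.",
))
```

Templates like this make it easier to document prompt patterns that work and to keep outputs consistent across teams, as recommended above.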
AI Regulatory Update – California’s SB 243 Mandates Companion AI Safety and Accountability
On October 13, 2025, Governor Gavin Newsom signed Senate Bill 243 into law, making California the first state to mandate specific safety safeguards for AI companion chatbots used by minors. The legislation is a direct response to mounting public health concerns and several high-profile incidents involving teen self-harm and suicide allegedly linked to interactions with conversational AI. With an effective date of January 1, 2026, SB 243 establishes a new regulatory baseline for the companion AI industry.
Key Regulatory Requirements
The law imposes affirmative duties across three critical areas: Disclosure, Safety Protocols, and Accountability.
Disclosure and Break Reminders
AI Disclosure (General Users): If a “reasonable person” would be misled to believe they are interacting with a human, operators must issue a “clear and conspicuous notification” that the companion chatbot is artificially generated and not human.
AI Disclosure (Minors): For users the operator knows are minors, operators must disclose that the user is interacting with artificial intelligence and must provide clear and conspicuous notifications at least every three hours during continuing interactions, reminding the user to take a break and that the chatbot is AI-generated (a minimal scheduling sketch follows this list).
Suitability Warning: Operators must disclose on the application, browser, or other access format that companion chatbots may not be suitable for some minors.
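For illustration only, the sketch below shows one way an operator might track the three-hour break-reminder cadence for known minors. The session model and function names are hypothetical; the statute specifies the disclosure content and the “at least every three hours” interval, not any particular implementation.

```python
# Illustrative only: a minimal check for when a break reminder is due under a
# three-hour cadence. The session model is a hypothetical simplification.
from datetime import datetime, timedelta
from typing import Optional

REMINDER_INTERVAL = timedelta(hours=3)

def needs_break_reminder(is_known_minor: bool,
                         session_start: datetime,
                         last_reminder: Optional[datetime],
                         now: datetime) -> bool:
    """Return True when a 'take a break / this is AI' notice is due for a known minor."""
    if not is_known_minor:
        return False
    anchor = last_reminder or session_start
    return now - anchor >= REMINDER_INTERVAL

session_start = datetime(2026, 1, 2, 14, 0)
print(needs_break_reminder(True, session_start, None, datetime(2026, 1, 2, 17, 5)))   # True
print(needs_break_reminder(True, session_start,
                           datetime(2026, 1, 2, 17, 5), datetime(2026, 1, 2, 18, 0)))  # False
```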
Content and Safety Protocols
Crisis Prevention Protocol: Operators must maintain a protocol for preventing the chatbot from producing suicidal ideation, suicide, or self-harm content to the user.
Crisis Referrals: The protocol must include providing notifications that refer at-risk users to crisis service providers (including suicide hotlines or crisis text lines) when users express suicidal ideation, suicide, or self-harm.
Protocol Publication: Operators must publish details of their crisis prevention protocol on their website.
Content Guardrails for Minors: Operators must institute reasonable measures to prevent chatbots from producing visual material of sexually explicit conduct or directly stating that minors should engage in sexually explicit conduct.
Evidence-Based: Operators must use evidence-based methods for measuring suicidal ideation.
Reporting and Accountability
Annual Reporting (Beginning July 1, 2027): Operators must submit an annual report to the California Department of Public Health’s Office of Suicide Prevention detailing:
The number of times the operator issued crisis service provider referral notifications in the preceding calendar year;
Protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and
Protocols put in place to prohibit companion chatbot responses about suicidal ideation or actions with the user.
Private Right of Action: The law creates a private right of action allowing any person who suffers injury in fact as a result of a violation to pursue:
Injunctive relief;
Damages equal to the greater of actual damages or $1,000 per violation; and
Reasonable attorney’s fees and costs.
Business Implications
Passed with overwhelming bipartisan support (Senate 33-3, Assembly 59-1), SB 243 establishes California as a significant regulatory trendsetter in AI governance. For companies operating companion chatbot platforms, immediate action is required:
Initial Compliance Assessment
Scope Analysis: Carefully review whether your AI systems fall within the narrow statutory definition of “companion chatbot” or qualify for the statutory exclusions (customer service bots, video game characters with limited dialogue, voice-activated virtual assistants without sustained relationships).
Operational and Technical Directives
Compliance Review: Review and update all platform protocols to comply with new disclosure and break reminder requirements, paying particular attention to the different standards for general users versus minors.
Crisis System Development: Develop and rigorously document protocols for preventing chatbots from producing harmful content related to suicide and self-harm. Ensure protocols include mandatory crisis referral mechanisms and publish protocol details publicly on company websites.
Age Detection and Content Filtering: Implement or refine age detection mechanisms to identify minor users and content filtering systems to prevent minors from being exposed to sexually explicit content or prompts.
Data Systems: Establish data collection and tracking systems to accurately capture and report the required metrics to the Department of Public Health starting in 2027, while ensuring no personal identifiers are included in reports.
Broader Context
California’s action follows similar legislative efforts in states like Utah and Texas focused on regulating AI interactions with minors. This law carries particular weight given California’s status as a global hub for AI companies and its history of setting de facto national standards for technology regulation.
The legislation has received industry support, with companies like OpenAI praising the measure as a “meaningful move forward” for AI safety standards. Governor Newsom also signed a comprehensive package of related bills on the same day, including AB 1043 (age verification for app stores) and AB 56 (social media warning labels), signaling California’s broad commitment to youth digital safety.
As AI governance frameworks continue to evolve, SB 243 represents a significant shift toward mandating affirmative safety measures rather than relying solely on post-harm liability. Companies should closely monitor similar legislative proposals across the country and prepare for potential federal action in this rapidly emerging regulatory space.
UK Government Urges Leading Businesses to Strengthen Cybersecurity Measures
On October 14, 2025, the UK government announced a coordinated effort by senior ministers and security officials to urge top UK businesses to improve their cybersecurity defenses. In a letter sent to all FTSE 100 and FTSE 250 companies, as well as other prominent UK businesses, officials emphasized the need for immediate and robust action to confront evolving cyber threats.
In the letter, the UK government stressed that cyber attacks are becoming increasingly sophisticated and frequent and have the potential to inflict substantial damage on UK businesses and the wider public. The government also suggests that cyber resilience is a critical enabler of economic growth and strongly encourages UK businesses to make it a strategic priority to better protect themselves, their stakeholders, and the UK economy from escalating digital threats.
The UK government’s letter encourages organizations to take action in the following three areas to bolster their resilience against cyber attacks:
Elevate Cyber Risk to Board-level Priority: Organizations are urged to integrate cyber risk management into strategic decision-making. In particular, the UK government encouraged organizations to use the Cyber Governance Code of Practice as a framework, and regularly plan and run exercises to ensure they can maintain operations and recover swiftly following a severe cyber incident.
Enroll in the NCSC’s Early Warning Service: Organizations should sign up for the free National Cyber Security Centre (“NCSC”) Early Warning service, which provides registered organizations with alerts of potential cyber attacks on their network.
Adopt Cyber Essentials Certification Across Supply Chains: Organizations are encouraged to require Cyber Essentials certification within their supply chains to ensure suppliers have sufficient cyber protections in place to guard against common attacks.
California Governor Vetoes Bill That Would Have Required Employers to Provide Notice of AI Use
On October 13, 2025, California Governor Gavin Newsom vetoed legislation that would have regulated employers’ use of artificial intelligence (AI) and other automated decisionmaking technologies in employment-related decisions, citing concerns about “overly broad restrictions” on employers’ use of the technology. The legislation, passed on the heels of the finalization of two separate sets of state AI regulations, would have added to California’s increasingly complicated web of overlapping and sometimes inconsistent laws and regulations concerning such technology.
Quick Hits
Governor Newsom vetoed the latest bill passed by lawmakers to regulate employers’ use of AI.
The legislation would have required employers to provide written notice to employees and job applicants that AI was being used to make employment-related decisions and would have prohibited employers from relying solely on such tools to discipline or discharge employees.
The legislation would have added to California’s growing set of AI laws and regulations; California already finalized two separate sets of AI regulations in 2025.
Governor Newsom vetoed Senate Bill (SB) No. 7, known as the “No Robo Bosses Act,” saying the legislation would impose “overly broad restrictions” on how employers use AI and automated decisionmaking technologies and that it “fail[ed] to directly address incidents of misuse.”
The legislation would have prohibited employers from relying “solely” on what the legislation referred to as “automated decision systems” (ADS) to make disciplinary decisions or decisions to terminate employment, and it would have required employers to provide written notice to employees and job applicants regarding AI’s use in making employment-related decisions.
“I share the author’s concern that in certain cases unregulated use of ADS by employers can be harmful to workers,” Governor Newsom stated in his veto message. “However, … the bill imposes unfocused notification requirements on any business using even the most innocuous tools.”
The veto comes even after the legislature sought to tone down certain aspects of the bill that would have restricted employers’ use of AI and other technologies without human oversight. Still, the legislation faced opposition from business groups concerned about increasingly burdensome compliance obligations related to the emerging technology in the state, particularly after recent regulations.
Governor Newsom added that “[b]efore enacting new legislation in this space, we should assess the efficacy of these regulations to address these concerns.”
Recent California AI Regulations
California recently finalized two sets of regulations concerning AI. In June 2025, the California Civil Rights Department (CRD) finalized new regulations prohibiting employers from using an “automated decision system” to discriminate against applicants or employees on a basis protected by the California Fair Employment and Housing Act (FEHA). Those regulations took effect on October 1, 2025.
In July 2025, the California Privacy Protection Agency finalized regulations on automated decisionmaking technologies (ADMT), risk assessments, and cybersecurity audits pursuant to the California Consumer Privacy Act (CCPA), with the requirement for businesses that use ADMT to take effect on January 1, 2027.
SB 7 – ‘No Robo Bosses Act’
While SB 7, as amended, toned down certain aspects of the bill that would have required employers to maintain human oversight over the technology, the legislation would have imposed several requirements for employers’ use of ADS.
Specifically, SB 7 would have:
required employers to provide prior written notice to employees and job applicants when an ADS would be used to make an employment-related decision;
imposed potentially burdensome employee data access requirements;
prohibited employers from relying “solely” on an ADS to discipline employees or terminate their employment;
prohibited employers from using ADS to prevent compliance with state, federal, or local labor, workplace safety, or civil rights laws;
prohibited employers from using an ADS to “[i]nfer a worker’s protected status” or to identify, profile, predict, or take adverse action against a worker for exercising their legal rights; and
prohibited employers from using an ADS to collect worker data for undisclosed purposes.
Contrasting Definitions
Further, SB 7 used several definitions with subtle and potentially conflicting distinctions from the recent regulations, which likely would have contributed to further confusion amid the overlapping regulatory schemes.
Specifically, SB 7 defined ADS broadly to include computational processes, machine learning, data analysis, or AI “that is used to assist or replace human discretionary decisionmaking and materially impacts natural persons.” This differed from the CRD AI regulations, which more narrowly focus on technology used to make decisions “regarding an employment benefit,” and the CCPA AI regulations, which define “automated decisionmaking technology,” or “ADMT,” as technology that “processes personal information and uses computation to execute a decision, replace human decisionmaking, or substantially replace human decisionmaking.”
Additionally, SB 7 would have applied to “workers,” defined as both employees and independent contractors. The CCPA AI regulations have a similarly broad definition. By contrast, the CRD AI regulations apply only to “applicants” and “employees” and expressly exclude independent contractors from coverage.
Next Steps
Governor Newsom’s veto of SB 7 highlights the concern regarding overregulation of employers’ use of AI and automated decisionmaking technologies, which have the potential to improve efficiency and decisionmaking. Still, the governor’s veto message indicated a shared concern over the potential discriminatory impact of such technology and its unregulated use, suggesting that future legislation could be possible. The governor urged lawmakers to first consider the efficacy of the state’s new AI regulations and focus on specific potential harms.
Even with the veto of SB 7, employers may want to review their use of ADS tools and similar technology and how they impact their employment-related decisions, including performance monitoring and hiring. They may also wish to implement internal safeguards and adopt policies that promote responsible AI use across their organizations.
From Abstract to Applied: How Desjardins Can Reframe Patent Protection for AI in Health Care
For a decade, innovators at the intersection of artificial intelligence (AI) and precision medicine have faced a stubborn paradox: the very breakthroughs in software and machine learning that enable early cancer detection and personalized therapy recommendations are often denied U.S. patent protection. Under the unpredictable Alice/Mayo framework, patent examiners and courts frequently categorize adaptive AI models as “abstract ideas,” equating them to mathematical exercises rather than technological advances deserving protection.
The result has been a chilling effect on investment and disclosure in one of health care’s most promising frontiers and, potentially, a threat to the United States’ leadership in biomedical AI.
The USPTO’s September 25, 2025 rehearing decision in Ex parte Desjardins[i] marks the clearest acknowledgment yet that AI innovations, including those with health care applications, can be patent-eligible. The Appeals Review Panel (“ARP”) vacated a § 101 rejection against DeepMind’s continual learning framework, holding that it integrated a mathematical concept into a practical application by improving the model’s own functionality. Notably, not only did the ARP reverse the Board and find the claims patent-eligible,[ii] but the decision was also authored by John A. Squires, the new Director of the US Patent and Trademark Office.
The Rejected Claims
The claims under consideration relate to a computer-implemented method of training a machine learning model. Representative independent claim 1[iii] recites:
1. A computer-implemented method of training a machine learning model,
wherein the machine learning model has at least a plurality of parameters and has been trained on a first machine learning task using first training data to determine first values of the plurality of parameters of the machine learning model, and wherein the method comprises:
determining, for each of the plurality of parameters, a respective measure of an importance of the parameter to the first machine learning task, comprising:
computing, based on the first values of the plurality of parameters determined by training the machine learning model on the first machine learning task, an approximation of a posterior distribution over possible values of the plurality of parameters, assigning, using the approximation, a value to each of the plurality of parameters, the value being the respective measure of the importance of the parameter to the first machine learning task and approximating a probability that the first value of the parameter after the training on the first machine learning task is a correct value of the parameter given the first training data used to train the machine learning model on the first machine learning task;
obtaining second training data for training the machine learning model on a second different machine learning task; and training the machine learning model on the second machine learning task by training the machine learning model on the second training data to adjust the first values of the plurality of parameters to optimize performance of the machine learning model on the second machine learning task while protecting performance of the machine learning model on the first machine learning task;
wherein adjusting the first values of the plurality of parameters comprises adjusting the first values of the plurality of parameters to optimize an objective function that depends in part on a penalty term that is based on the determined measures of importance of the plurality of parameters to the first machine learning task.
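For readers less familiar with this style of continual learning, the structure the claim recites can be summarized with the following illustrative objective, a reconstruction based on the claim language rather than text from the decision or the application:

```latex
% Illustrative reconstruction only, not language from the claims or the decision.
% L_B: loss on the second machine learning task
% \theta^{*}_{A,i}: first values of the parameters, learned on the first task
% F_i: per-parameter importance measure derived from the approximated posterior
% \lambda: weight controlling how strongly first-task performance is protected
\mathcal{L}(\theta) \;=\; \mathcal{L}_{B}(\theta)
  \;+\; \frac{\lambda}{2} \sum_{i} F_{i}\,\bigl(\theta_{i} - \theta^{*}_{A,i}\bigr)^{2}
```

Training on the second task minimizes this objective: the first term optimizes performance on the new task, while the penalty term discourages large changes to parameters the importance measure marks as critical to the first task, which is how the claim “protect[s] performance of the machine learning model on the first machine learning task.”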
Legal Analysis
The ARP followed the Alice/Mayo two-step test and the MPEP § 2106 analytical framework.[iv] The panel confined its analysis to Step 2A (Alice Step 1) because the issue was dispositive. Under Step 2A, Prong 1, the inquiry focuses on whether the claim recites an abstract idea. Here, the ARP did not dispute the Board’s position that computing an approximation over parameters constitutes a mathematical calculation, and thus an abstract idea.[v]
The panel then proceeded to Step 2A, Prong 2, where the inquiry focuses on whether the abstract idea is integrated into a practical application. This is where the ARP disagreed with the Board on a number of grounds. First, the ARP found that the claims provided technical improvements in how the learning model itself operates by preserving earlier knowledge while reducing storage needs and system complexity.[vi] Second, these improvements are technical, not merely field of use limitations.[vii]
The ARP grounded its decision in Federal Circuit case law, notably Enfish, LLC v. Microsoft Corp., 822 F.3d 1327, 1339 (Fed. Cir. 2016), and McRO, Inc. v. Bandai Namco Games Am. Inc., 837 F.3d 1299, 1315 (Fed. Cir. 2016), which held that software-based structural or logical improvements can be patent-eligible.[viii] The ARP specifically emphasized Enfish, calling it “among the Federal Circuit’s leading cases on the eligibility of technological improvements” and quoting its statement that “[s]oftware can make non-abstract improvements to computer technology, just as hardware improvements can.” The ARP then pointed to language in the specification describing how the claimed invention uses less storage capacity and enables reduced system complexity as reflecting a patent-eligible technical improvement. Taken together, the points the ARP raised under Step 2A, Prong 2 of the Alice/Mayo test appear to establish that improvements to machine learning models or algorithms themselves are improvements to technology and are therefore patent-eligible.
In summary, Desjardins sets a new tone for the application of current Federal Circuit § 101 jurisprudence and highlights and recognizes the importance of AI to the United States’ technological innovation:
Categorically excluding AI innovations from patent protection in the United States jeopardizes America’s leadership in this critical emerging technology. Yet, under the panel’s reasoning, many AI innovations are potentially unpatentable – even if they are adequately described and nonobvious – because the panel essentially equated any machine learning with an unpatentable “algorithm” and the remaining additional elements as “generic computer components,” without adequate explanation. Examiners and panels should not evaluate claims at such a high level of generality.[ix]
In addition to vacating the rejection under 35 U.S.C. § 101, and making a bold statement as to the importance of AI-related technologies, Director Squires places 35 U.S.C. § 101 in its proper role in patentability analysis, noting that “[t]his case demonstrates that §§ 102, 103 and 112 are the traditional and appropriate tools to limit patent protection to its proper scope. These statutory provisions should be the focus of examination.”[x]
Closing Thoughts and Recommendations[xi]
AI in personalized medicine often integrates multi-omics, imaging and clinical data and learns from prior patients while adapting to new ones. Under Desjardins, if one ties those methods to technical improvements in the model’s architecture or training, one can argue that continual learning improves how the model functions. In addition, AI tools that improve model generalization, interpretation or training efficiency, or use hybrid architectures, or reduce drift or overfitting across patient populations, are not mere abstract ideas. In sum, personalized-medicine AI that changes how the computer learns, not just what it learns, may be a new safe harbor for patent-eligibility for some AI health care inventions.
[i] Ex parte Desjardins, Appeal 2024-000567, September 26, 2025.
[ii] The ARP did not review or reverse the Board’s rejection of the claims as obvious under 35 U.S.C. § 103.
[iii] Ex parte Desjardins, at 2-3.
[iv] Id. at 4-6.
[v] Id. at 6-7.
[vi] Id. at 8-9.
[vii] Id.
[viii] Id.
[ix] Id. at 9.
[x] Id. (internal citations omitted).
[xi] While Desjardins is not precedential in the same way as a decision issued by the courts, ARP decisions are binding on the USPTO under Director authority. Thus, examiners and PTAB panels must follow this reasoning unless overruled by the Federal Circuit or a future ARP decision.
Using AI in NEPA Analysis and Permitting
Key Takeaways
Federal and state policymakers are beginning to integrate AI technology for use in National Environmental Policy Act (NEPA) reviews and permitting processes. A recent Department of the Interior (DOI) order envisions expanding AI use in agency decision-making, subject to safeguards like maintaining a “human in the loop.”
This initiative promises to expedite NEPA and permitting processes. But reliance on AI comes with some risks, both that materials underlying decision-making may contain errors and that project opponents may seek to exploit concerns about agencies’ AI use as part of a challenge.
Businesses involved in AI-facilitated agency decisions should seek to ensure that agency AI use is well documented, properly supervised, and follows recognized best practices.
The Secretary of the Interior, Doug Burgum, recently issued a Secretarial Order on Artificial Intelligence that addresses the use of AI across a number of domains, which include “energy and resource development” and “permitting efficiency.” The Order asserts that DOI is “already seeing results,” which include “streamlined environmental reviews.” It directs DOI staff to ensure that DOI retains “oversight and accountability” and requires a “human-in-the-loop,” a safeguard often applied in AI systems.
The Administration’s efforts to expedite agency reviews and expand resource development and infrastructure projects have created increasing strain on agency resources, especially as agencies cut staff. DOI’s AI initiative aligns with other Administration efforts to bridge this gap by streamlining agency processes and seeking to make those processes more efficient. It is also part of a broader Administration effort to support and enhance American AI dominance, as set forth in Executive Order 14179 and accompanying OMB guidance.
In April, a Presidential Memorandum, “Updating Permitting Technology for the 21st Century,” directed agencies to “make maximum use of technology in environmental review and permitting processes.” The Council on Environmental Quality (CEQ) then released a “Permitting Technology Action Plan.” That document built upon CEQ’s earlier “E-NEPA Report to Congress,” which recommended technological options for streamlining NEPA processes. Several agencies have invested in related technologies, including the Department of Energy, the Federal Permitting Council, and the Air Force. States are also experimenting with AI tools, including a Minnesota project to streamline environmental permitting and a California project focused on permits for reconstruction after the Los Angeles fires.
AI models promise to simplify document drafting, data analysis, and review of public comments, potentially shortening federal review timelines. But their adoption raises concerns about error rates, bias, and explainability. For example, commenters suggested[1] that the Trump administration’s high-profile “Make America Healthy Again” report contained errors likely attributable to use of AI tools. Even in the absence of such errors, project opponents may seek to exploit concerns about the use of AI tools in litigation. It remains to be seen if and how courts will give deference to agency decision-making reliant on AI. Early adopters should ensure that contractors and agencies have well-documented safeguards in place so that the administrative record for any litigation will provide sufficient data and explanation of (human) decision-making to survive judicial review.
Next Steps
As public and private actors alike integrate AI into the NEPA process, businesses and project proponents should engage with agencies and support the use of recognized best practices[2] to mitigate legal and technical risks. These best practices include:
Establishing and adhering to guidelines that ensure transparency and internal accountability.
Ensuring that any use of AI tools is well documented and is explainable to third parties.
Properly supervising AI use to prevent errors such as hallucinations, bias, and data gaps.
Avoiding disclosure of sensitive data by safeguarding confidential information when using AI tools.
[1] White House Acknowledges Problems in RFK Jr.’s “Make America Healthy Again” Report. (2025, May 29). NPR. Retrieved from https://www.npr.org/2025/05/29/nx-s1-5417346/white-house-acknowledges-problems-in-rfk-jr-s-make-america-healthy-again-report
[2] For example, the ABA has released a formal ethics opinion on generative AI and professional responsibility. See https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
U.S. Department of Energy Requests Public Input About How to Power the AI Boom and Onshoring of U.S. Manufacturing
Key Takeaways
The U.S. Department of Energy (DOE) is starting a new program called “Accelerating Speed to Power/Winning the Artificial Intelligence Race” and has issued a Request for Information (RFI). DOE seeks public input on actions it can take to strengthen the federal government’s role in accelerating critical generation and transmission projects.
The priorities established through Accelerating Speed to Power will impact energy developers and large electricity customers. The RFI is an important opportunity for stakeholders to provide input to help frame the scope of this new DOE focus and shape the programs DOE develops. The deadline for public comments is November 21, 2025.
The “Accelerating Speed to Power” RFI
Through Accelerating Speed to Power, DOE aims to leverage several of its programs, including the Transmission Facilitation Program (TFP), the Grid Resilience and Innovation Partnerships (GRIP) program, the DOE Loan Programs Office loans and loan guarantees, and the DOE National Laboratories’ technical assistance capabilities.
The focus of the RFI is for DOE to gather public input on leveraging its programs for Accelerating Speed to Power. The RFI emphasizes energy projects of at least 3 gigawatts in generation capacity or transmission projects with at least 500 MVA in incremental load serving capability. This can include bringing retired generation back into service or upgrading existing transmission lines.
The RFI requests public input in five specific areas:
Large-scale generation and transmission projects to enable load growth;
High-priority geographic areas for targeted DOE investment;
Best use of or opportunities for DOE funding, financing, and technical assistance;
Load growth trends; and
Grid infrastructure constraints.
Data Centers and Manufacturers
Data center developers and owners are the target audience for DOE’s RFI and are well-positioned to respond and voice their opinions. As the race to build data centers continues, these large load entities should analyze and consider commenting on:
National and local permitting and regulatory regimes to determine where significant barriers to entry exist;
Environmental considerations and clean power commitments, as well as compliance with water and air quality standards that may pose long-term operational constraints and generate friction in light of this administration’s support of traditional thermal resources;
Regional grid constraints to ensure their always-on operations have consistent, reliable power and can reduce their reliance on on-site, emergency backup generation; and
DOE’s existing financial, technical, and programmatic support, and whether any additional types of programs would be beneficial to facilitate data center development and reliable operations.
Energy Developers and Suppliers
Accelerating Speed to Power cites four executive orders (EO) supporting the initiative:
EO 14154 – Unleashing American Energy: Establishes policy to support development of firm resources and rescinds environmental policies, including the electric vehicle “mandate” and clean energy and climate goals.
EO 14179 – Removing Barriers to American Leadership in Artificial Intelligence: Establishes policy to develop AI systems “free from ideological bias,” solidifies the United States’ role as a global AI leader, and rescinds Biden administration EO 14110 – “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”
EO 14262 – Strengthening the Reliability and Security of the United States Electric Grid: Establishes policy to strengthen the reliability and security of the grid and directs the Secretary of Energy to establish anticipated reserve margins using a methodology that accredits firm resources.
EO 14302 – Reinvigorating America’s Nuclear Industrial Base: Establishes policy to expedite and promote deployment of nuclear resources.
Although this suite of EOs generally establishes a preference for traditional generation sources, all power producers should take interest in responding to this RFI. Grid technologies and storage options are evolving rapidly in a way that substantially improves the capacity value of renewable resources like wind and solar, creating an opportunity to make the case to DOE that these resources can provide grid stability.
The RFI also presents an opportunity to influence where and how major transmission projects are sited. In addition, it is an opportunity to engage directly with the Trump administration and shape policy priorities.
Joint Commission Releases Guidance for AI in Health Care
The Joint Commission (TJC) and Coalition for Health AI (CHAI) recently published the Guidance on the Responsible Use of Artificial Intelligence in Healthcare (Guidance), which outlines strategies for health care organizations to optimize their integration of health AI tools. The Guidance defines health AI tools broadly as clinical, administrative, or operational solutions that apply algorithmic methods to tasks involved in direct or indirect patient care, care support services, and care-relevant operations and administrative services. Given this inclusive definition, the Guidance identifies a wide range of potential AI-related risks, including errors, lack of transparency, threats to data privacy and security, and the overreliance on AI tools. To address these concerns, the Guidance outlines suggested practices that health care organizations can undertake in implementing AI tools. These practices are organized into seven elements, which are summarized below. While the Guidance is not limited to health care delivery organizations, the Guidance focuses primarily on these organizations. It is also important to note that the Guidance is not binding on health care organizations, although TJC indicates that a voluntary “Responsible Use of AI” certification program is forthcoming.
The Seven Elements of Responsible Use of AI Tools for Health Care Organizations:
AI Policies and Governance Structures. The Guidance recommends that organizations establish formal AI-usage policies and a governance structure. According to TJC and CHAI, the policies should set expectations, including rules or procedures on the use of AI, and the governance committee should be composed of qualified individuals, including representatives from compliance, IT, clinical programs, operations, and data privacy. The Guidance also suggests regular reporting on AI usage to the organization’s board of directors or other fiduciary governing body.
Patient Privacy and Transparency. Organizations are encouraged to adopt specific policies on data access, use, and transparency consistent with applicable laws and regulations. To promote transparency, organizations should inform patients about AI’s role in their care, including how their data may be used and how AI may benefit their care. Where applicable, organizations may also need to secure informed consent before using AI tools. The Guidance reminds organizations that transparency with staff members on the use of AI tools cannot be overlooked.
Data Security and Data Use Protections. The Guidance stresses that all uses of patient data with AI tools must comply with HIPAA. Providers can support compliance by leveraging current data protection strategies, including encrypting patient data, limiting data access, regularly assessing security risks, and developing an incident response plan. TJC and CHAI recommend that organizations enter into data use agreements that outline permitted uses, minimize data exports, prohibit re-identification, require third parties to comply with the organization’s security and privacy policies, and provide the organization with audit rights.
Ongoing Quality Monitoring. Beyond privacy risks, the Guidance advises organizations to monitor AI quality regularly by looking for changes in outcomes and testing AI tools against known standards. Externally developed AI tools may not receive consistent review, and the dynamic nature of AI renders it prone to “drift” from its intended purpose; the Guidance therefore calls for an internal reporting system to identify risks and maintain quality of care. TJC and CHAI suggest a risk-based approach to monitoring, prioritizing AI tools that inform or drive clinical decisions (a simple illustration of such monitoring appears after these seven elements). Additionally, the Guidance advises that organizations create a process to report adverse events to leadership and relevant vendors.
Voluntary Reporting. The Guidance urges organizations to establish a process for confidential, anonymous reporting of AI safety incidents to an independent organization. By reporting through confidential channels to third parties, such as federally listed Patient Safety Organizations, voluntary reporting may improve the quality of AI usage without compromising patient privacy.
Risk and Bias Assessment. The Guidance also recommends that organizations implement processes for categorizing and documenting AI bias or risk. In clinical use, AI tools may fail to generalize across patient populations, leading to misdiagnosis and inefficient care. TJC and CHAI recommend that organizations verify that AI tools are appropriately tuned to the population to which they are applied and were developed using representative, unbiased data sets.
Education and Training. Finally, to ensure that AI benefits the organization, the Guidance advocates for education and training of health care providers and staff on the use of AI tools, including any limitations on, and risks of, their use. The Guidance directs organizations to limit AI tool access to specific roles on a need-to-use basis, and to advise all staff where to find relevant information about AI tools and organizational policies and procedures.
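To make the monitoring concept above more concrete, the short Python sketch below shows one way an organization might compare a clinical AI tool’s observed performance against the baseline established at validation and flag potential drift for escalation. The metric name, tolerance, and numbers are illustrative assumptions, not requirements drawn from the Guidance.

```python
# Hypothetical illustration only: metric names, thresholds, and numbers are
# assumptions, not part of the Guidance. Sketches one way an organization
# might check a clinical AI tool's recent performance against its baseline
# to detect "drift."

from dataclasses import dataclass


@dataclass
class MonitoringResult:
    metric: str
    baseline: float
    observed: float
    flagged: bool


def check_drift(baseline: float, observed: float,
                tolerance: float = 0.05,
                metric: str = "sensitivity") -> MonitoringResult:
    """Flag the tool for review if observed performance falls more than
    `tolerance` below the baseline established at validation."""
    flagged = (baseline - observed) > tolerance
    return MonitoringResult(metric, baseline, observed, flagged)


if __name__ == "__main__":
    # Assumed numbers: baseline sensitivity of 0.92 at deployment, 0.84 this quarter.
    result = check_drift(baseline=0.92, observed=0.84)
    if result.flagged:
        # The Guidance calls for reporting risks to leadership and vendors;
        # here that step is just a placeholder print statement.
        print(f"{result.metric} drifted from {result.baseline:.2f} to "
              f"{result.observed:.2f}; escalate per internal reporting process.")
```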
Implications
In the absence of comprehensive federal laws governing AI, the Guidance (along with existing resources such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework and the Bipartisan House Task Force Report on Artificial Intelligence) may help health care organizations evaluate and implement AI tools in a safe and compliant manner.
Similar to the NIST AI RMF Playbook, TJC and CHAI plan to release a series of practical “playbooks” to operationalize the recommended practices in the Guidance. Health care institutions seeking actionable guidance may want to take note of these playbooks because they will inform TJC’s future AI certification program.
Overall, the Guidance’s strategies can help health care organizations minimize AI risks and foster an adaptive health care environment.
Lauren Ludwig contributed to this article.
California Leads the Way with New Regulations Addressing Use of AI in Employment Decisions
California’s updated regulations regarding the use of artificial intelligence (AI)-based Automated-Decision Systems in employment took effect on October 1, 2025. The regulations address the use of “Automated-Decision Systems,” defined as computational processes that make decisions or facilitate human decision-making regarding an employment benefit. According to the regulations, “Automated-Decision Systems may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
The regulations make it unlawful to use an Automated-Decision System that discriminates against an applicant or employee based on a protected category under California’s Fair Employment and Housing Act (FEHA). The regulations are aimed at scenarios such as an employer using an Automated-Decision System to take an initial pass at applicants’ resumes, where the system screens out all applicants with disabilities (or members of another protected class, such as classes based on age or gender). One example provided by the regulations involves an Automated-Decision System that “analyzes an applicant’s tone of voice, facial expressions or other physical characteristics or behavior [which] may discriminate against individuals based on race, national origin, gender, disability, or other characteristics.”
Notably, the regulations provide that “evidence, or the lack of evidence, of anti-bias testing or similar proactive efforts to avoid unlawful discrimination, including the quality, efficacy, recency, and scope of such effort, the results of such testing or other effort, and the response to the results” is relevant to any claim brought under the regulations.
This language provides a roadmap for employers seeking to guard against claims based on the use of Automated-Decision Systems. Employers using Automated-Decision Systems should take protective measures, such as auditing these systems for bias (a simple illustration of one such audit follows below) and requiring vendors to certify that the systems have been thoroughly tested for bias and that any issues identified have been sufficiently addressed. As is often the case, California is one of the first states to address this type of technology in the context of its employment laws. We expect other jurisdictions will soon follow suit.
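As an illustration of the kind of anti-bias auditing described above, the short Python sketch below computes selection rates by group for applicants screened by a hypothetical Automated-Decision System and compares each group’s rate to the highest rate, a common benchmark in employment analyses sometimes called the “four-fifths rule.” The group labels, counts, and threshold are invented for illustration, and the California regulations do not prescribe any particular test.

```python
# Hypothetical illustration only: group labels, counts, and the 0.80 threshold
# are assumptions. Computes selection rates by group for applicants screened
# by an Automated-Decision System and compares each rate to the highest rate
# (the "four-fifths rule" is a common, but not dispositive, benchmark).

from __future__ import annotations


def selection_rates(counts: dict[str, tuple[int, int]]) -> dict[str, float]:
    """`counts` maps group -> (selected, total_applicants)."""
    return {group: selected / total for group, (selected, total) in counts.items()}


def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}


if __name__ == "__main__":
    # Invented numbers for illustration.
    counts = {"Group A": (40, 100), "Group B": (22, 100)}
    ratios = adverse_impact_ratios(selection_rates(counts))
    for group, ratio in ratios.items():
        flag = "review" if ratio < 0.80 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```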