Website Tracker Litigation Continues to Pose Compliance Headache: Updates on CIPA and Related Litigation

Plaintiffs’ attorneys have been relentlessly targeting companies that deploy common website tracking tools with demand letters, arbitration, and litigation arguing that their use of chatbots, analytics, and targeted advertising tools violates applicable privacy laws, including but not limited to the California Invasion of Privacy Act (CIPA). This trend has shown no sign of slowing down, and it is likely only to accelerate with the increasing adoption of new “black box” artificial intelligence (AI) tools across a range of business applications.

Quick Hits

Litigation remains steady and costly. Hundreds of lawsuits and arbitration demands continue to allege that website-tracking technologies—such as pixels, analytics tools, and chat features—violate the California Invasion of Privacy Act and related privacy laws. Despite mixed judicial outcomes, the volume of filings has not slowed.
Courts are deeply divided. Some judges have dismissed cases on standing or “contents” grounds, while others allow claims to proceed where third-party vendors can access or use data for their own purposes. The lack of uniform interpretation continues to fuel new filings.
AI is reshaping the conversation. Plaintiffs are beginning to extend these same theories to generative-AI and chatbot tools, arguing that AI systems “listen” to or repurpose user inputs without appropriate consent.
Legislative clarity is still out of reach. Senate Bill 690, which would have excluded routine commercial tracking from the California Invasion of Privacy Act’s scope, failed to advance in 2025—leaving businesses with the same patchwork of inconsistent rulings and few near-term answers.

While the legal theories are evolving, the trend is clear: courts are testing how older privacy laws apply to modern digital marketing and an increasingly connected world. Organizations that collect or analyze user interaction data may want to closely examine how their tracking technologies function, understand how existing privacy laws may apply, and implement measures to manage associated risks.
Evolving Litigation Landscape
Recent complaints often combine claims under CIPA §§ 631(a) and 632.7 (California’s wiretap and eavesdropping provisions) with “trap-and-trace” allegations under § 638.51. In some cases, plaintiffs also append claims under the federal Wiretap Act or the Video Privacy Protection Act (VPPA). These hybrid filings seek to capture nearly any instance in which a website, through an embedded pixel, session-replay tool, or chatbot, transmits user interaction data to a third-party vendor.
Courts have taken inconsistent approaches to these claims. Some have dismissed them at the pleading stage, reasoning that a website operator cannot “intercept” its own communications with a visitor, or that metadata such as IP addresses and click paths do not reveal the “contents” of a communication. Others have allowed cases to proceed where the technology captured free-text inputs, chat messages, or search queries that arguably constituted the substance of a user’s communication. The result is an increasingly fragmented body of CIPA decisions and uncertainty for companies trying to comply.
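To make the split concrete, consider what a tracking script actually transmits. The sketch below is purely illustrative—the event shapes, session identifier, and vendor endpoint are all invented—but it captures the technical difference between the click-path metadata some courts have deemed non-actionable and the free-text inputs that others have treated as the “contents” of a communication.

```typescript
// Illustrative only: invented event shapes and a hypothetical vendor endpoint,
// sketching the two kinds of payloads at issue in CIPA "contents" disputes.

type PageViewEvent = {
  kind: "page_view";
  url: string;       // click path / URL: often treated as metadata
  referrer: string;
  sessionId: string;
};

type ChatMessageEvent = {
  kind: "chat_message";
  text: string;      // free-text input: more plausibly the "contents" of a communication
  sessionId: string;
};

function sendToVendor(event: PageViewEvent | ChatMessageEvent): void {
  // Fire-and-forget POST to a third-party analytics endpoint (hypothetical URL).
  navigator.sendBeacon("https://vendor.example/collect", JSON.stringify(event));
}

sendToVendor({
  kind: "page_view",
  url: location.href,
  referrer: document.referrer,
  sessionId: "abc123",
});
```

Whether a given tool sends only the first kind of payload or also the second is often the fact that determines how far a complaint gets.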
As companies integrate AI-powered chat and personalization tools, plaintiffs have begun to test whether these tools “record” or “repurpose” user inputs in a way that triggers CIPA’s consent requirements. Because many AI models process prompts in opaque ways and may rely on vendor-hosted infrastructure, they raise additional questions about whether a “third party” is effectively accessing user communications.
Recognizing the way in which CIPA was being weaponized against businesses, California lawmakers introduced Senate Bill 690, which would have clarified that “routine commercial tracking” does not violate the statute. However, the bill ultimately stalled and did not pass during the 2025 legislative session, leaving the uncertain status quo in place for businesses trying to navigate this area.
What’s Next for Tracker Litigation?
The steep upward trajectory of website-tracking litigation shows no signs of slowing. Plaintiffs’ firms continue to file new cases aggressively, experimenting with overlapping theories of liability and often amending once an initial theory is rejected. Even as some courts have rejected CIPA claims at the pleading stage, the persistence and volume of filings have made this one of the most active areas of privacy litigation nationwide. And plaintiffs are sometimes permitted to amend several times before their complaints are finally dismissed for good.
What began as a California phenomenon has quickly spread beyond the state’s borders. Plaintiffs are invoking CIPA against companies that may have little or no connection to California other than maintaining a website accessible to its residents. A number of California and federal courts have dismissed cases on personal-jurisdiction grounds, but others have allowed them to proceed, creating an additional layer of uncertainty for companies that operate nationally or globally. The result is a litigation landscape that remains inconsistent, but not unpredictable: even when the legal theories are thin, the cost of defense and the potential exposure both continue to drive risk.
At the same time, the next wave of claims is beginning to take shape around artificial intelligence. Generative-AI tools—particularly chatbots and recommendation engines that rely on user inputs—are becoming the focus of new demand letters and early-stage complaints. Plaintiffs contend that these “AI listeners” intercept or repurpose communications without sufficient consent, borrowing the same reasoning that fueled earlier suits against pixels and session-replay tools. These theories remain largely untested, but they reflect how quickly the focus of privacy litigation adapts to new technology.
Legislative efforts to resolve this uncertainty have so far stalled. Senate Bill 690, which would have clarified that “routine commercial tracking” does not violate CIPA, did not advance during the 2025 legislative session. Its failure leaves businesses to navigate the same patchwork of inconsistent rulings that has characterized the past two years. Until greater clarity emerges, companies can expect continued filings and evolving pleadings designed to exploit the lack of uniformity in how courts interpret key CIPA provisions.
Ultimately, tracker litigation appears poised to remain a fixture in the privacy landscape for the foreseeable future. For businesses, the near-term focus is less about certainty in how courts will treat these claims and more about preparation—understanding data flows, documenting (and adjusting, if appropriate) consent mechanisms, and maintaining defensible governance practices while the law continues to evolve.
Practical Considerations
What began as “nuisance litigation” has, at the very least, spurred broader—and indeed, important—conversations around website tracker governance and the precise nature of tools present on a company’s website. Any meaningful review typically requires cross-functional input from marketing, legal, and IT teams to ensure that website and AI tracking tools are implemented in a way that both complies with clear privacy law requirements and, in the case of vague CIPA boundaries, adequately mitigates litigation risk. At the same time, businesses often (and understandably) want to avoid “overcompliance,” or knee-jerk reactions to CIPA or other pixel litigation that overcorrect and harm the business’s ability to reach its customers and potential customers with valuable marketing efforts. Visibility into what data is collected, how it is used, and with whom it is shared is an important first step in determining the appropriate middle ground for accomplishing that goal.
As part of this comprehensive digital marketing review, businesses often take some combination of the following steps:

Taking stock of tracking and analytics tools. Many businesses are conducting internal assessments to understand the range of cookies, pixels, analytics scripts, chat functions, and AI features operating across their digital properties. These reviews often focus on identifying where data originates, where it flows, and whether third parties have access.
Comparing policy to practice. Companies continue to review their public-facing privacy disclosures and cookie notices to confirm they accurately reflect operational practices. A clear alignment between what is disclosed and what occurs in practice can reduce the risk of misrepresentation claims and support defensibility if challenged.
Examining consent and user interface design. Organizations are exploring ways to enhance transparency around data collection and user choice. This may include clarifying banner language, implementing timing or scope limitations on tracking, or maintaining records of consent where applicable.
Revisiting vendor relationships. Vendor contracts are receiving closer scrutiny to confirm roles and restrictions are clearly defined—particularly when vendors deploy tracking or analytics tools on a company’s behalf. Key focus areas include data-use limitations, obligations to delete or return data, and cooperation in the event of an inquiry or claim.
Integrating AI tools into existing governance programs. As AI-driven features become more common, many companies are expanding their privacy review processes to include questions about how those tools capture, process, and transmit user inputs. In some cases, this involves cross-functional review among privacy, IT, and marketing teams.
Monitoring developments and adjusting accordingly. Given the pace of legislative and judicial activity—including the potential enactment of Senate Bill 690 or similar legislation—organizations are maintaining visibility into new rulings and guidance to inform future risk assessments.

CIPA and pixel litigation remain a moving target, but the underlying message is clear: transparency, documentation, and disciplined vendor management are no longer optional. Businesses that treat digital tracking as a regulated data-processing activity rather than a purely marketing function will be best positioned to demonstrate good-faith compliance and minimize litigation risk.

The American Legal Technology Awards Name 2025 Winners

The sixth annual American Legal Technology Awards were presented on Wednesday, October 15, at Suffolk University Law School in Boston, recognizing winners across ten categories. A panel of 27 judges evaluated 211 nominees.
The night’s honorees included:
2025 Awards Winners by Category

Access to Justice: Maryland Justice Passport. “We really set out to make the delivery of legal services more humane.” The Passport coordinates intake across 10 Maryland organizations and is achieving an 81% case placement rate.
Court: Ohio Legal Help. Ohio Legal Help showed how mobile‑first courts can meet people where they are. As their team put it, “We talk about access to justice, but if we can’t get them to the finish line, it’s incomplete.”
Education: Sarah Mauet. Sarah Mauet’s UX4Justice course uses a research‑driven framework that trains students and professionals to design trauma‑informed tools that truly serve court users.
Enterprise: Onit (Unity). Onit’s acceptance captured the legal‑ops zeitgeist: “AI [at Onit] is conceived by lawyers for lawyers… we’re allergic to inefficiency.” Unity embeds intelligence across the legal workflow—proactive insights, automated decisions, and predictive analytics—transforming how departments operate.
Individual: Nick Rishwain. Nick invests where impact multiplies: mentorship, introductions, funding, and access for underrepresented founders and for markets incumbents overlook.
Journalism: Marlene Gebauer (The Geek in Review). “Podcasting’s about telling stories,” Marlene said, thanking a community that’s grown with the show for seven years. Journalism that illuminates without hype is a public good; we’re grateful.
Startup: ClaimScore. ClaimScore reminded us that integrity and accessibility can coexist. “We’ve seen up to 99% of claims [in a single matter] be fraudulent… These [bad] actors are stealing millions… from class members who deserve this,” co‑founder Brian explained. Their real‑time fraud detection protects settlements while smoothing the path for legitimate claimants.
Law Firm: Gunderson Dettmer (ChatGD+). Gunderson’s ChatGD+ isn’t a pilot; it’s a culture. Built on a modern research/workflow stack, it puts AI into research, drafting, and routine tasking while attorneys feed continuous feedback to make the tools better.
Artificial Intelligence: Free Law Project. Jennifer Whiston spoke to Free Law Project’s mission: “We believe that the law should be accessible to everyone, not just those with resources or representation… Everything we do is open source… We keep a human in the loop in everything that we do.”
Lifetime Achievement: Jim Calloway. For three decades, Jim has been the legal profession’s most trusted technology mentor. Jim reminded us that technology training is a form of social justice. His closing challenge reverberated: “If the people in this room would reach out, even if it’s just teaching one enrichment course, you could change a lawyer’s life.”

Runners‑up noted in the official summaries were: Nora Cregan (Access to Justice); Maryland Center for Legal Assistance (Court); Rebecca Fordon (Education); BigHand (Enterprise); Colin Lachance (Individual); Stephen Embry (Journalism); New Era ADR (Startup); Janice Dantes / Pinay Law (Law Firm); and Descrybe.ai (Artificial Intelligence). 
Program Notes
The organization’s co-founders came together to underscore how the legal technology community continues to serve both the profession and a larger public good.

Building Together: Co‑founder Tom Martin framed this year’s theme as “Time to Build.” “One thing that’s more important than technology is us: human beings. When we work together and collaborate… we all want better lives for ourselves and our families.” That’s why we build. 
Progress and Principles: Co‑founder Patrick Palace underscored the profession’s oath to the Constitution and the importance of judicial independence.  
Shining a Light: Co‑founder Cat Moon introduced the Lifetime Achievement Award and emphasized that highlighting and sharing work across the community is the best way to accelerate progress. She spoke about shining a bright light on the work so we can “be inspired by each other, learn from each other, and make each other better.” 

Host & Partners: Suffolk University Law School hosted the ceremony; event sponsors included 8am, Clio, and ARAG Legal. 
For category descriptions and short profiles of each honoree, see the Summaries of Winners, Runners Up, and Honorable Mentions booklet distributed at the event. The full-length video of the awards ceremony is now available on YouTube.

EDPB Adopts Opinions on EU-UK Adequacy Decisions

On October 20, 2025, the European Data Protection Board (“EDPB”) adopted two opinions on the European Commission’s draft decisions to extend the validity of the UK’s adequacy status under the EU General Data Protection Regulation (“EU GDPR”) and the Law Enforcement Directive (“LED”) until December 2031. The existing decisions are set to expire on December 27, 2025. When considering the UK’s adequacy status, the European Commission and EDPB must take into consideration the recently passed UK Data (Use and Access) Act 2025 (see our previous blog for further details).
The EDPB expressed general approval of the UK’s continued efforts to mirror the EU’s data protection standards, noting that the majority of proposed amendments to the UK data protection regime are designed to enhance clarity and support compliance for both organizations and individuals. That said, the EDPB did highlight several areas which, in its view, require further examination and ongoing monitoring by the European Commission:

International Data Transfers: The EDPB stated that the UK’s new adequacy test omits key elements previously considered essential, such as safeguards for government access, redress for individuals, and oversight by an independent supervisory authority. The EDPB recommends the European Commission further elaborate and closely observe the implementation and practical effects of these changes.
Secretary of State’s New Powers: The UK Secretary of State now holds expanded authority to revise UK data protection rules via secondary legislation, potentially affecting areas such as international transfers, automated decision-making, and governance of the UK Information Commissioner’s Office (“ICO”). The EDPB calls for the European Commission to specify in the final decision which areas will be closely monitored to guard against divergence from EU standards.
ICO Structure and Complaints Handling: The EDPB urged the European Commission to conduct a thorough assessment and ongoing monitoring of recent changes to the structure and governance of the ICO. This includes evaluating the rules for appointment and dismissal of board members, as well as the introduction of a new triage system for complaints handling. The EDPB praised the ICO for its transparency policy and availability of statistical and analytical data on enforcement activities.
Changes to “Inherited” EU Law: The Retained EU Law (Revocation and Reform) Act 2023 removes the principle of EU law primacy and its direct application, including the right to privacy and data protection as derived from the Charter of Fundamental Rights. The EDPB urges the European Commission to provide a detailed assessment of these changes and their broader impact on the UK’s legal and data protection frameworks.
Automated Decision-Making and Human Review: The EDPB stressed that the UK’s adoption of a more permissive stance on automated decision-making should be analyzed and monitored by the European Commission and urged the European Commission to consider the practical impacts of this change in its final assessment of the UK’s adequacy decisions.
National Security and Law Enforcement Exemptions: The EDPB stated that it is essential that the European Commission carefully assess the extended national security exemptions under the law enforcement framework to ensure that such exemptions are necessary and genuinely intended to meet objectives of general interest recognized by the EU or to protect the rights and freedoms of others.
Distinction Between Law Enforcement and National Security Processing: The EDPB highlighted the need for clear boundaries between processing for law enforcement and national security, to prevent legal frameworks from being stretched beyond their intended scope.

AI Due Diligence in Healthcare Transactions

Healthcare organizations of every shape and size are rapidly expanding their use of artificial intelligence solutions, from high-risk applications like clinical decision-support interventions, ambient listening, and charting to lower-risk administrative activities like automated patient communications and scheduling. While adoption is widespread and increasing in depth and breadth across the industry, not every healthcare organization has established governance around AI or a monitoring process for the exploration and adoption of new tools – including those contemplating a sale of assets or equity. For buyers in healthcare mergers and acquisitions today, AI diligence needs to be a focus, given the potential risk of compliance and class action concerns related to high-risk AI solutions, particularly those that interact with protected health information (“PHI”) regulated under the Health Insurance Portability and Accountability Act, as amended, and its implementing regulations (collectively, “HIPAA”).
Understanding AI Risks in Healthcare Transactions
As mentioned, not every seller in a healthcare transaction is fully aware of the scope of its use and deployment of AI, and it may not have a comprehensive AI governance and monitoring strategy. For a buyer, understanding how the seller uses AI and assessing the potential risk level posed by its existing applications is the best way to identify and mitigate potential problems and plan for success in the post-closing integration process. Once buyers identify what AI applications are in use at their target, they and their advisors might look at potential HIPAA and intellectual property risks, in addition to assessing the seller’s related vendor arrangements, particularly with respect to data ownership and use, security/data privacy, indemnification and reporting obligations, etc. Having a good sense of where AI arrangements may involve a high degree of compliance or contractual risk will allow the buyer to negotiate effectively to avoid assuming potential liabilities above its risk tolerance and to be clear about areas for mitigation and improvement after closing.
State Laws and Evolving AI Regulations in Healthcare
In addition to purely operational and contractual risks, buyers need to understand whether applicable state laws affect the seller’s use of AI applications (since AI is not currently subject to comprehensive federal regulation). The highest-risk areas involve issues such as required disclosures of AI use in decision-making for activities like prior authorization; patient consent and authorization (in compliance with HIPAA), including for ambient listening; and consumer privacy protections. Buyers may consider relevant resources to monitor regulatory developments related to AI in states where they operate or are targeting potential acquisitions.
For example, in California, the governor signed Assembly Bill 489 on October 11, 2025. This bill prohibits AI systems and chatbots that communicate directly with patients from suggesting that the advice they give is coming from a licensed health professional.[1] The prohibition applies not only to direct statements by the AI, but also to any implication that the medical advice has come from a licensed person. Similarly, in Illinois, the Wellness and Oversight for Psychological Resources Act prohibits anyone—even licensed providers—from using AI in the decision-making process for mental health and therapy, including recommendations that AI might make to diagnose, treat, or improve someone’s mental or behavioral health (with carve-outs for administrative support).[2] We expect that states will continue to expand regulation in this space and that enforcement activity is likely to increase in industries where the use of AI may pose outsized risk to the public, particularly healthcare.
What Buyers Can Examine During AI Due Diligence
Staying ahead of the many challenges that accompany AI use in healthcare means conducting due diligence of a target’s AI use with an eye towards identifying areas of highest risk and planning for potential mitigation strategies. Buyers may structure their AI diligence to cover the following areas for AI risk management:

Understanding AI oversight in the target (e.g., AI governance committee, Chief Information Officer, or Chief AI Officer);
Assessing the degree to which the target has developed and implemented AI oversight activities (e.g., through a formal AI governance survey and strategy or other informal assessments);
If the target has adopted an AI governance program, assessing its implementation and any AI-specific policies and procedures (e.g., pilot programs, use of approved technologies, bias controls, data validation, audits, etc.);
Confirming the target’s approved uses of AI technologies and the level of potential risk (e.g., clinical decision support interventions, patient monitoring, diagnostic assistance, ambient listening technologies, etc.) and vendor relationships;
Examining a list and descriptions of all AI tools and AI models used by, developed by, or trained by the target company, including detailed information related to the use cases for each AI tool/model, scope of use, and methods of access;
If the target company relies on third-party AI developers or vendors to support its AI implementation, reviewing all third-party AI vendor/developer model cards and contracts (including contracts involving AI use for clinical research purposes); and
Reviewing the target company’s standard terms for its AI vendors (e.g., with respect to data ownership, auditing, reporting, service level agreements, and indemnity terms) and any material open claims.

Collaborating Across Legal, IT, and Clinical Teams
Buyers may want legal counsel with specific expertise not only in healthcare but also in healthcare data privacy, security, and AI to assess risk related to the target’s operations, structure, and potential high-risk areas and to make practical recommendations for go-forward operations and integration. Buyers may also want counsel to coordinate the diligence review process.
By virtue of their responsibilities, the buyer’s IT, operations, and clinical employees will bring valuable insights into how the seller’s AI use may affect the buyer’s go-forward operations, including integration with the buyer’s AI strategy. Advisors might focus on potential quality-of-care and privacy concerns and work together to provide a comprehensive evaluation of potential issues and high-quality recommendations for the buyer’s executive team.
Building a Post-Closing AI Governance and Compliance Strategy
In conjunction with the due diligence review, the buyer may consider developing a strategy for how it and its target will manage AI risks post-closing (e.g., determining whether and to what extent the target’s existing vendor agreements may be assigned or amended in connection with closing, ensuring appropriate integration planning for AI tools and IT capabilities, and planning for go-forward AI governance, oversight and monitoring, patient care, and safety). To the extent a buyer does not have its own existing governance plan, it may consider undertaking an AI use survey and adopting a formal AI governance strategy, which allows for data protection and access controls, long-term compliance protections, streamlined assessment and adoption of potential AI tools and vendor negotiations, and oversight of ongoing AI activities.[3]
Key Takeaways for Healthcare Buyers and Investors
AI is an evolving legal and operational risk area in healthcare transactions. Conducting effective due diligence review of AI in a proposed transaction calls for a buyer’s and its counsel’s detailed understanding of the technology itself, as well as potential risks and liabilities surrounding its use. This rapidly developing area of law will continue shaping the regulatory landscape of the healthcare field, but with the right preparation, the diligence process will minimize a buyer’s exposure and best position it for post-closing success.
FOOTNOTES
[1] A.B. 489, State Leg. 2025–26 Sess. (Cal. 2025) https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260AB489
[2] H.B. 1806, 104th Gen. Assemb. (Ill. 2025) https://www.ilga.gov/legislation/PublicActs/View/104-0054
[3] Sheppard Mullin Healthcare Law Blog, Key Considerations Before Negotiating Healthcare AI Vendor Contracts (March 2024)

Behind the Pixel: Not Always Personal Information Under VPPA

Many courts have held that information gathered by video-related pixels is not “personal” for purposes of the Video Privacy Protection Act (VPPA). Nevertheless, plaintiffs’ class action attorneys continue to file these VPPA actions in federal court.
This issue came up in a recent case against the National Basketball Association (Salazar v. NBA). The plaintiff argued that video-related pixels used by the NBA gathered personally identifiable information and sent it to third parties. The New York federal court, looking at the case on remand, disagreed. It held that the information gathered – lines of computer code – was not personal. In reaching its decision, the court relied on Second Circuit precedent (Solomon v. Flipps Media), under which personal information is limited to what an ordinary person – as opposed to a sophisticated technology company – can use to identify someone.
Putting It Into Practice: This decision is welcome news for those who have video pixels on their sites. However, the ongoing litigation in this area is a reminder to be prepared. Have a full picture of your site’s tracking tools. This means more than just asking IT, as the tools may be placed by different internal teams or outside vendors. You will likely need a working relationship across many groups, not only IT, legal, and compliance.
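As a starting point for building that picture, a reviewer with access to the site can enumerate the third-party origins a page pulls resources from. The following browser-console sketch is a rough first pass of our own devising—tag managers inject trackers dynamically, so it complements rather than replaces a network-level and vendor review:

```typescript
// Rough first-pass inventory of third-party resources on the current page.
// Run in a browser console; dynamically injected trackers may only show up
// in the network tab or a tag-manager audit.
const thirdPartyOrigins = new Set<string>();

const elements = [
  ...document.querySelectorAll("script[src]"),
  ...document.querySelectorAll("iframe[src]"),
  ...document.querySelectorAll("img[src]"), // 1x1 images are a common pixel pattern
];

for (const el of elements) {
  const src = el.getAttribute("src");
  if (!src) continue;
  try {
    const origin = new URL(src, location.href).origin;
    if (origin !== location.origin && origin !== "null") {
      thirdPartyOrigins.add(origin);
    }
  } catch {
    // Skip unparsable URLs.
  }
}

console.log("Third-party origins on this page:", [...thirdPartyOrigins].sort());
```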

Cut the AI Bait-and-Switch – Tips for Employers to Spot Fake Job Applicants

As deepfake tools and voice cloning become cheaper and more convincing, employers are increasingly encountering a troubling trend: fraudsters using artificial intelligence (AI) to create fake appearances, voices, or profiles to land remote jobs. In addition to the risk of hiring unqualified applicants, the practice raises significant concerns about individuals trying to steal company trade secrets, install malware on company-owned devices, or engage in other subversive activities.

Quick Hits

Some scammers are using AI to fake their voice and image during video interviews.
This trend raises the risk that employers could experience poor performance by unqualified workers, cyberattacks, theft of sensitive data, or embezzlement.
Careful hiring strategies can help employers prevent these schemes.

The last thing employers want to do is to hire a person with a fake identity—whether the person’s goal is to obtain a job for which they are not qualified, steal data or money, or install spyware or ransomware on company devices. If the hiring process is rushed or inconsistent, it is easy for companies to fall victim to this kind of scheme. In January 2025, the Federal Bureau of Investigation (FBI) warned employers about the growing threat from North Korean IT workers infiltrating U.S. companies to steal sensitive data and extort money.
Online job postings have made it easier for employers to reach a wide pool of candidates across the United States, but they have also led to an environment where one job posting might draw thousands of applications, making it more difficult for hiring managers to sort through them and find the best talent. The rise of remote work since 2020 has further complicated matters, as it can make it more difficult to detect when a new hire faked his or her voice or image during the interview process.
Risk Reduction Strategies
To reduce the risk of hiring someone with a fake identity, employers may wish to consider these strategies:

relying on in-person interviews whenever possible; otherwise, using live video with cameras on and applying simple, neutral authenticity checks, such as asking the applicant to turn the head, wave a hand, or read a randomly selected sentence to detect overlay artifacts;
conducting multiple interview rounds with role-specific questions designed to elicit concrete details;
asking interview questions designed to elicit specific details about the applicant’s location and personal background (while, of course, avoiding questions prohibited by employment discrimination laws);
scrutinizing resumes and applications for typos, unusual terminology, and inconsistencies with public profiles;
verifying identity, work authorizations, education, and employment history through legally compliant methods, and making job offers contingent upon successful verification;
contacting and verifying the applicant’s professional references; and
training hiring managers to spot red flags in video interviews (e.g., lip-sync issues, abnormal lighting, or lagging inconsistent with audio).

Ironically, there are AI tools that can help employers spot fake job applicants, but employers may want to use those tools cautiously with vendor diligence and human review.
Employers may want to ensure that any screening, background checks, and AI-assisted tools are used in compliance with applicable federal, state, and local laws. This includes “ban-the-box” rules on criminal history inquiries and timing; background check disclosures, authorizations, and pre-adverse/adverse action procedures; automated decision-making regulations; and biometric identifier rules. In addition, employers may wish to coordinate recruitment policies and practices with IT security and privacy professionals.

California Finalizes Groundbreaking Regulations on AI, Risk Assessments, and Cybersecurity, Part III: Risk Assessments

On September 23, 2025, the California Office of Administrative Law (OAL) approved regulations under the California Consumer Privacy Act (CCPA) that specifically address the use of artificial intelligence (AI) and automated decisionmaking technologies (ADMTs), requirements for completing risk assessments, and annual cybersecurity audits. This article is the third and final installment of a three-part series exploring the new requirements. Our first article focused on the ADMT provisions, and our second article addressed the requirements surrounding cybersecurity audits.

Quick Hits

The recently finalized California Consumer Privacy Act regulations include requirements to perform risk assessments for any processing that presents a significant risk to California residents’ privacy.
Every business that must conduct a risk assessment will also need to submit to the California Privacy Protection Agency information regarding its assessments—including a designated contact, the time period covered by the submission, the number of risk assessments conducted or updated by the business during the time period, and more.
Risk assessments should not be a mere formality or an isolated exercise by one person or department, but rather an integrated, cross-functional pillar of a company’s data governance structure.
For ongoing high-risk activities, risk assessment submissions for 2026 and 2027 will be due by April 1, 2028.

What Is a Risk Assessment?
Risk assessments analyze and document potential privacy harms arising from a business’s processing of personal information. As a reminder, “processing” is an incredibly broad term that means performing any action or “set of actions” on personal information. Risk assessments must be completed before a business initiates covered processing and must be updated at least once every three years, or sooner if there is a material change to the processing that affects risks or safeguards. Every risk assessment acts as a thorough risk-benefit analysis, and must include:

a clear and specific explanation of the purpose of the processing that is not described in generic terms;
a description of the categories of personal information involved, the sources from which the data is collected, and the methods of collection, use, disclosure, and storage to be employed;
context about consumer interactions (e.g., web, app, offline), the approximate number of consumers affected, and the disclosures that will be provided (e.g., just-in-time notices);
use of ADMTs, if applicable, including the logic of and outputs where ADMT is used to make a significant decision and how those outputs are used;
the specific benefits of the processing to the business, consumers, other stakeholders, or the public, which again must not be described in generic terms;
all reasonably foreseeable privacy risks and potential negative impacts to consumers that could result from the processing, including unauthorized access, discrimination, threats to physical safety, and more;
safeguards the business will implement to mitigate identified risks, including technical, procedural, and organizational measures; and
whether the business will initiate the processing after weighing benefits, risks, and safeguards, including the date of review and approval and the names and positions of the individuals who reviewed or approved the assessment, as well as the individuals who provided information for the assessment (excluding legal counsel).

The regulations also require documenting the names or categories of service providers, contractors, or third parties involved in the processing and the purposes for which information is made available to them. Given the breadth of required inputs—ranging from technical architecture and data flows to workforce impact and vendor dependencies—effective assessments necessarily draw on expertise from security, IT, data and analytics, product, HR, procurement/third-party risk, and the relevant business owners, in addition to representatives from legal and privacy. The regulations emphasize the involvement of all relevant internal stakeholders in the risk assessment process, including oversight by a member of senior management. Additionally, businesses are permitted to involve third-party experts in the risk assessment process.
As businesses with experience dealing with the CCPA likely know, risk assessments should not be siloed exercises carried out solely by a lawyer or privacy officer, nor should they be treated as a mere formality. These extensive requirements are designed to ensure a thorough evaluation of a project’s privacy impact.
As a practical matter, businesses may consider developing an internal template covering these points to guide their teams through the assessment. Many companies may want to integrate this process into existing governance structures—by, for example, folding the CCPA risk-assessment questions into a broader privacy impact assessment (PIA) process or an AI ethics review process, especially if the business is also subject to similar requirements under other privacy or AI laws. The regulations explicitly allow companies to rely on an existing risk assessment (such as a PIA conducted pursuant to another law), so long as the assessment covers all the elements required by California’s regulations (or is supplemented to fill any gaps). If the existing assessment lacks certain components, a business can supplement it with additional analysis rather than starting from scratch, which makes conducting an analysis to identify any gaps in coverage essential.
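For illustration only, an internal template keyed to the enumerated elements might be modeled along the lines of the sketch below. The field names are our own shorthand, not regulatory language, and any real template should be validated against the regulation text itself.

```typescript
// Sketch of an internal risk-assessment record covering the elements the
// regulations enumerate. Field names are illustrative shorthand only.
interface CcpaRiskAssessment {
  purpose: string;                   // specific, non-generic purpose of the processing
  piCategories: string[];            // categories of personal information involved
  sourcesAndMethods: string;         // how data is collected, used, disclosed, and stored
  consumerContext: {
    channels: string[];              // e.g., web, app, offline
    approxConsumers: number;
    disclosures: string[];           // e.g., just-in-time notices
  };
  admt?: {                           // only where ADMT is used to make a significant decision
    logic: string;
    outputs: string;
    howOutputsAreUsed: string;
  };
  benefits: string[];                // specific benefits to business, consumers, public
  foreseeableRisks: string[];        // e.g., unauthorized access, discrimination
  safeguards: string[];              // technical, procedural, organizational measures
  recipients: string[];              // service providers, contractors, third parties and purposes
  decision: {
    willInitiateProcessing: boolean;
    reviewDate: string;
    reviewersAndApprovers: string[]; // names and positions
    contributors: string[];          // individuals who provided information (excluding legal counsel)
  };
  nextReviewDue: string;             // at least every three years, sooner on material change
}
```

Whatever form the template takes, the substance matters more than the container: each element should be completed with the specificity the regulations demand.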
A business must retain risk assessments for as long as the data processing continues or for five years after an assessment is completed, whichever is longer. Companies may want to consider ensuring a system exists to track all required risk assessments and their review dates.
Who Must Complete a Risk Assessment?
The regulations require businesses to complete a risk assessment before initiating any processing of consumers’ personal information that presents “significant risk” to the consumer’s privacy. Significant risk arises for risk assessment purposes if a business:

“sells” or “shares” personal information (as those terms are defined, which apply to a surprisingly broad range of activities, such as many commonplace website tracking technologies);
uses sensitive data—such as precise geolocation, health, or financial information—for a nonexempt purpose;
deploys ADMTs to make a “significant decision” concerning a consumer—such as decisions affecting financial services, housing, employment, education, or access to healthcare;
uses automated processing to infer or extrapolate certain sensitive traits about a consumer based on systematic observation while in certain sensitive contexts, such as those involving job applicants, students, and employees;
uses automated processing to infer or extrapolate certain sensitive traits about a person based on the person’s presence in a sensitive location; or
intends to develop or train ADMTs or AI using individuals’ personal information (including training facial-recognition, emotion-recognition, or other technologies that verify identity or conduct physical or biological identification or profiling).

These categories likely mean that a wider range of organizations will need to perform risk assessments, potentially including advertising technology companies, data brokers, companies using algorithms or AI models to determine eligibility, and businesses that engage in consumer profiling. Inventorying data processing activities against these criteria will help determine whether a formal risk assessment is required under the regulations.
If an organization is indeed required to conduct risk assessments, these assessments will need to be reviewed at least once every three years. In addition, the organization will be required to submit annual reports (as explained in further detail below), and, if there is a material change in the relevant processing activity, update the risk assessment within forty-five calendar days. For certain HR use cases, there is a narrow carveout: processing sensitive personal information of employees or independent contractors solely for specified employment-related purposes (e.g., administering compensation, verifying work authorization, administering benefits, providing reasonable accommodation, or wage reporting) does not require a risk assessment; any other processing of sensitive personal information remains in scope. However, for processing activities that began before the regulations’ effective date, organizations will enjoy a grace period. For these activities, the regulations require that a risk assessment be conducted no later than December 31, 2027. This ramp-up period is intended to give companies that may be new to risk assessments time to comply before the reporting period begins.
Annual Reporting to the California Privacy Protection Agency
Finally, the regulations require that businesses formally report their risk assessment activity to the Agency annually. Businesses are not required to submit the full text of each risk assessment. Instead, at a higher level, each report and certification must include:

the business’s name and the point of contact’s information;
the time period covered by the report;
the number of risk assessments the business conducted or updated in that period;
whether the risk assessments accounted for the processing of certain categories of personal information and sensitive personal information covered by the CCPA;
an attestation of compliance; and
the name and title of the executive submitting the attestation, who must be a member of executive management directly responsible for risk-assessment compliance and knowledgeable about the assessments.

While the lighter reporting requirement may seem like a boon at first glance, proactive reporting means the Agency could use submissions to target audits or for enforcement. Organizations may thus want to treat reports not just as a bureaucratic exercise but as public-facing documents that must be accurate and supported by legitimate business justifications. Submissions must be made via the Agency’s website, and the Agency or the attorney general may require submission of the underlying risk assessment reports within thirty calendar days of a request.
For ongoing high-risk activities, risk assessment submissions for 2026 and 2027 will be due by April 1, 2028. Each April 1 thereafter, companies must submit reports for the prior calendar year. If a business did not engage in any high-risk processing activities for the calendar year, an annual report is not required.
Practical Steps
With the regulations poised to take effect, businesses may want to start laying the groundwork for compliance now. Depending upon the exact nature of the processing, this may include:

conducting internal reviews or data-mapping exercises to identify any current or planned processing that falls into the high-risk categories;
engaging IT, data, and business unit leaders to identify projects that might implicate high-risk categories;
updating product development and data initiative approval processes to include a privacy risk assessment checkpoint;
establishing a privacy and ethics review board for new data uses to centralize and document assessments;
creating standard risk assessment templates and tools aligned with the Agency’s requirements to guide assessors through a thorough analysis;
clarifying roles and responsibilities within the organization for participating in risk assessments;
training relevant staff on how to identify when a risk assessment is needed and how to contribute to one;
consolidating existing security audits and AI ethics reviews to leverage existing compliance efforts; and
preparing for regulatory scrutiny by ensuring each risk assessment is timely, complete, and accurate.

Conclusion
For general counsel and privacy officers, the message is clear: more businesses will soon be required to conduct formal privacy risk assessments for high-risk data processing activities and to report on those activities annually to the Agency. From a compliance planning perspective, this means maintaining well-organized records and internal controls around risk assessments will be vital.
California’s new risk assessment regulations may signal a significant evolution in U.S. privacy law—moving companies from purely reactive privacy compliance (such as responding to consumer requests and breaches) and toward a proactive, accountability-based model of privacy management. By understanding these requirements and planning ahead, businesses can meet their legal obligations and enhance data governance and consumer trust.

AI Tools in Use – Time for an AI Policy

Every RIA that uses artificial intelligence (“AI”) tools as part of its day-to-day operations should have an AI policy that outlines appropriate use of these tools in the firm’s practice.
New AI tools are rapidly being integrated into firms’ operations. From productivity and search tools like ChatGPT and Gemini to meeting transcription tools like Microsoft Copilot, AI tools are changing how we work.
While these tools can help improve efficiency by assisting with tasks such as conducting online research, summarizing meetings, and automating routine workflows, their use heightens the need for firms to adhere to all applicable regulatory requirements. This includes safeguarding client data, addressing privacy and confidentiality considerations, maintaining accurate books and records, conducting due diligence of vendors, and potentially being prepared to provide search or transcript history to regulators during an examination.
Without a formal policy, there is a risk that AI tools may be adopted and implemented without appropriate due diligence and oversight. This could lead to errors, data breaches, or failures to comply with regulatory requirements.
Furthermore, the AI policy should indicate to employees which AI tools are authorized for use and specify appropriate uses of those tools. By establishing these guidelines, the firm can help ensure that AI is integrated in a way that aligns with its objectives, while maintaining strong compliance standards and safeguarding client privacy.
An AI policy is essential to help ensure the firm complies with regulations, protects clients, and responsibly integrates AI tools into its operations.

Trump Administration Executive Order Tracker

Below is a tracker of healthcare-related executive orders (EOs) issued by the Trump administration, including overviews of each EO and the date each EO was signed. We will regularly update this tracker as additional EOs are published.
It is important to note that EOs, on their own, do not effectuate policies. Rather, in most cases, they put forth policy goals and call on federal agencies to examine old or institute new policies that align with those goals.

Date Signed
Executive Order Title
Summary

October 15, 2025
Ensuring Continued Accountability in Federal Hiring
The Executive Order implements controls on federal hiring, with the goal of having an efficient federal workforce aligned with the administration’s priorities. Agencies must follow the Merit Hiring Plan issued by OPM in May 2025, establish Strategic Hiring Committees, and submit Annual Staffing Plans with quarterly updates to OPM and OMB. Exceptions apply for national security, public safety, and other designated roles, and the EO states it shall not adversely impact Medicare benefits.

September 30, 2025
Unlocking Cures for Pediatric Cancer With Artificial Intelligence
This Executive Order directs the MAHA Commission, HHS, the Assistant to the President for Science and Technology (APST), and the Special Advisor for AI and Crypto to use artificial intelligence to improve pediatric cancer diagnosis, treatment, and prevention. Agencies shall prioritize enhancing clinical trial design, data infrastructure, and biological analysis through AI and seek ways to increase investments in pediatric cancer research—including doubling funding for the Childhood Cancer Data Initiative. The EO encourages private sector innovation and mandates integration of AI into health data systems to improve research and trial outcomes. HHS is also tasked with finalizing interoperability standards to ensure secure, privacy-compliant use of patient data in AI applications.

September 29, 2025
Continuance of Certain Federal Advisory Committees 
This Executive Order extends the following Department of Health and Human Services advisory committees through September 30, 2027: the Presidential Advisory Council on HIV/AIDS, the President’s Committee for People with Intellectual Disabilities, the Advisory Board on Radiation and Worker Health, and the President’s Council on Sports, Fitness, and Nutrition.

August 13, 2025
Ensuring American Pharmaceutical Supply Chain Resilience by Filling the Strategic Active Pharmaceutical Ingredients Reserve
This EO directs HHS to strengthen domestic pharmaceutical supply chains by stockpiling Active Pharmaceutical Ingredients (APIs) for critical medicines. Within 30 days, the Office of the Assistant Secretary for Preparedness and Response (ASPR) must identify approximately 26 especially critical drugs and assess available funding to acquire a 6-month supply of APIs, prioritizing domestic sources. Within 120 days, ASPR must prepare the Strategic API Reserve (SAPIR) repository to begin receiving APIs and fill it within 30 days of readiness. Within 90 days, ASPR must update the 2022 list of 86 essential medicines and develop a plan to source, store, and maintain APIs for those not covered in the critical drugs list. The plan must also include a proposal and cost estimate for opening a second SAPIR repository within one year.

August 7, 2025
Improving Oversight of Federal Grantmaking
This EO aims to ensure federal funding does not go to what the administration considers “wasteful grants.” It requires each agency to designate an appointee of President Trump to be responsible for reviewing new funding opportunity announcements and discretionary grants to ensure they are consistent with agency priorities. It specifies items the review must include, such as review by subject matter experts and interagency coordination to determine whether the funding opportunity has been addressed by another announcement. The EO lists principles for this review, including that preference be given to institutions with lower indirect cost rates, awards be given to a broad range of recipients, recipients commit to complying with Gold Standard Science, and that awards are not used to fund or promote racial preferences, denial of the sex binary, or illegal immigration. The EO directs agencies to designate appointees of President Trump to review discretionary awards on an annual basis for consistency with agency priorities, including an accountability mechanism for officials responsible for selection and granting of awards. OMB must revise the Uniform Guidance to require all discretionary grants to permit termination for convenience, including when the award no longer advances agency priorities. Within 30 days, agencies must submit a report to OMB detailing whether their standard terms and conditions for discretionary awards permit termination for convenience and must revise terms and conditions of existing discretionary grants to permit such termination.

July 31, 2025
President’s Council on Sports, Fitness, and Nutrition, and the Reestablishment of the Presidential Fitness Test
This EO modifies a Bush Administration EO to reestablish the Presidential Fitness Test, to be administered by HHS. It establishes the President’s Council on Sports, Fitness, and Nutrition, which will consist of 30 individuals appointed by the President and will be funded by HHS. The Council shall advise the President, including on strategies to address the growing national security threat posed by the increasing rates of childhood obesity, chronic diseases, and sedentary lifestyles.

July 24, 2025
Ending Crime and Disorder on America’s Streets
This EO directs the Attorney General to collaborate with HHS to seek the reversal, if appropriate, of judicial precedents and the termination of consent decrees that impede the encouragement of civil commitment of individuals with mental illness who pose a risk to themselves or the public and who are homeless. It directs HHS to prioritize discretionary federal assistance for states and cities that enforce prohibitions against open drug use, urban camping, loitering, and squatting and that provide individuals with serious mental illness, a substance use disorder, or who are homeless with assisted outpatient treatment or move them into a treatment center. The EO also directs HHS to ensure SAMHSA discretionary grants do not fund harm reduction or safe consumption efforts; to provide technical assistance to assisted outpatient treatment programs on the civil commitment process; and ensure federal funding for Federally Qualified Health Centers and Certified Community Behavioral Health Clinics reduces homelessness. The EO also calls for the end of federal grants that support “housing first” policies.

July 23, 2025
Accelerating Federal Permitting of Data Center Infrastructure
This EO directs the Secretary of Commerce and the Director of OSTP to launch an initiative that will provide financial support to “qualifying projects,” as defined in the EO, to support data center infrastructure. All agencies shall also provide OSTP with existing financial mechanisms to assist with these efforts. The EO revokes a January 2025 Biden-era EO, Advancing United States Leadership in Artificial Intelligence Infrastructure. The EO requires the Council on Environmental Quality to establish new categorical exclusions to cover projects funded by this initiative that do not have a significant effect on the human environment, with the goal of expediting construction. The EO directs the Environmental Protection Agency (EPA) to expedite permitting by modifying regulations. EPA shall also identify Brownfield Sites and Superfund Sites for use by these projects and issue guidance on such sites within 180 days. The EO directs federal agencies, such as the Department of the Interior, to identify federal land that can be authorized for data center construction.

July 23, 2025
Promoting the Export of the American AI Technology Stack
This EO directs the Secretary of Commerce to establish and implement the American AI Exports Program within 90 days. The Secretary of Commerce will issue a 90-day public call for proposals to be included in the program, and, in consultation with the Secretary of State, Secretary of Defense, Secretary of Energy, and Director of OSTP, evaluate and select proposals for inclusion in the program. The EO directs the Economic Diplomacy Action Group (EDAG) to coordinate mobilization of federal financing tools for AI export packages. These tools include direct loans, loan and credit guarantees, equity investments, co-financing, political risk insurance, technical assistance, and feasibility studies. The EO also delegates authority to appoint EDAG members to the Administrator of the Small Business Administration (SBA) and the Director of OSTP. The EO specifies that the Secretary of State will be responsible for developing and executing federal strategy to promote the export of American AI technologies and standards; coordinating US participation in multilateral initiatives and country-specific partnerships for AI deployment and export promotion; supporting partner countries in fostering pro-innovation regulatory, data, and infrastructure environments; analyzing market access, including barriers to trade; and coordinating with SBA’s Office of Investment and Innovation to facilitate investment in US small businesses that develop AI technologies and manufacture AI infrastructure, hardware, and systems.

July 23, 2025
Preventing Woke AI in the Federal Government
This EO requires agency heads to only procure large-language models (LLMs) that abide by “unbiased AI principles,” which are truth-seeking and ideological neutrality. The EO directs the Director of the Office of Management and Budget (OMB), in consultation with the Administrator for Federal Procurement Policy, the Administrator of General Services, and the Director of the Office of Science and Technology Policy (OSTP), to issue guidance implementing the requirements related to unbiased AI principles within 120 days. Each agency head must include terms in each new LLM federal contract requiring that the LLM comply with the unbiased AI principles and outlining procedures for noncompliance, revise existing contracts to include the terms outlined above, and, within 90 days of OMB issuing the specified guidance, adopt procedures to ensure compliance with unbiased AI principles.

July 17, 2025
Creating Schedule G in the Excepted Service
This EO creates a new classification of non-career federal employees (Schedule G) who will engage in policymaking or policy-advocating work. The aim of the new schedule is to “enhance government efficiency and accountability,” and the EO specifically cites improving operations at the Department of Veterans Affairs.

May 23, 2025
Restoring Gold Standard Science in America
This EO directs the Office of Science and Technology Policy (OSTP) to issue guidance for agencies within 30 days for adopting new “gold standard” science principles, including that science is reproducible, transparent, collaborative, and without conflicts of interest. Federal agencies shall update their processes and report to OSTP within 60 days on implementation progress. Within 30 days, federal agencies shall also ensure that employees are not engaging in scientific misconduct, publicly report certain data, and communicate uncertainty. The EO states that the Biden Administration politicized science by encouraging agencies to incorporate diversity, equity, and inclusion, and it reinstates previous scientific integrity policies. The Trump Administration encourages American research organizations to adopt these standards.

May 12, 2025
Delivering Most-Favored Nation Prescription Drug Pricing to American Patients
This EO instructs the U.S. Trade Representative and the Secretary of Commerce to ensure foreign countries are not engaged in practices that lead to high drug prices in the United States. Second, the EO instructs the Secretary of Health and Human Services (HHS) to facilitate direct-to-consumer purchasing programs for drug manufacturers that sell their products to Americans at the most-favored-nation price. Third, the EO provides the Secretary of HHS, in coordination with other relevant agencies, 30 days to bring prices for pharmaceutical drugs in line with comparable developed nations. If significant progress toward most-favored-nation pricing is not delivered at that time, HHS, in conjunction with the Centers for Medicare and Medicaid Services, must develop rulemaking to impose most-favored-nation pricing. Fourth, HHS and the Food and Drug Administration (FDA) must consider certifying that the importation of certain prescription drugs from other developed countries is safe, and if such certification is made, FDA must create a waiver process to allow for the importation of prescription drugs. Fifth, a number of federal agencies, including the Federal Trade Commission, the Attorney General, and the Department of Commerce, are instructed to investigate any anti-competitive practices leading to higher prices. Finally, the FDA is instructed to review and potentially modify or revoke approvals granted for drugs that may be unsafe, ineffective, or improperly marketed.

May 5, 2025
Improving The Safety and Security of Biological Research
This EO directs the Office of Science and Technology Policy (OSTP), in consultation with HHS, to establish guidance to halt federal funding of gain-of-function research conducted in certain countries of concern where agencies believe there to be inadequate oversight. The EO directs HHS to include new enforcement terms in every life-science research contract or grant, including a requirement that recipients not operate or fund gain-of-function research in foreign countries, violation of which would lead to a revocation of funding and up to a 5-year ban on HHS funding. Within 180 days, OSTP shall develop and implement a strategy to limit and track gain-of-function research that occurs in the US without federal funding. OSTP shall also ensure that federal funding recipients have a mechanism to report gain-of-function research, including research funded without federal dollars. Such information shall be made public, along with research funding that has been halted pursuant to this EO.

May 5, 2025
Regulatory Relief to Promote Domestic Production of Critical Medicines 
This EO aims to streamline regulations and eliminate barriers to domestic pharmaceutical manufacturing to enhance the timeliness and predictability of agency reviews, making the US more competitive in producing safe and effective medicines. The FDA and EPA are tasked, within 180 days, with reviewing and updating domestic pharmaceutical manufacturing regulations to ensure efficient inspections and approvals of new and expanded manufacturing capacities and to enable US-based manufacturing. Within 90 days, the FDA must develop and advance improvements to the risk-based inspection regime of overseas manufacturing facilities involved in the supply of US medicines, funded by increased fees on foreign manufacturing facilities. The order also designates the EPA as the lead agency for coordinating environmental permits, in coordination with OMB.

April 24, 2025
Strengthening Probationary Periods in the Federal Service
This EO issues a new Civil Service Rule, through the Office of Personnel Management, requiring agencies to affirmatively approve a probationary employee’s conversion to tenured federal employment, rather than allowing conversion to occur automatically at the end of the probationary period. Agencies shall consider the employee’s performance and conduct, the needs and interest of the agency, and whether the employee’s continued employment would advance the agency’s goals and efficiency. The EO notes it is the responsibility of the employee to demonstrate why their continued employment would serve the public’s interest. Within 15 days of this EO, each agency shall identify employees whose probationary or trial period ends in 90 days or more. Agencies shall meet with probationary employees at least 60 days before their probationary period ends to assess whether their employment shall continue, and the agency shall then make a decision in writing at least 30 days before the probationary period ends.

April 23, 2025
Reforming Accreditation to Strengthen Higher Education
This EO directs the Secretary of Education to deny, monitor, suspend, or terminate accreditation recognition for accrediting agencies if they impose diversity, equity, and inclusion (DEI) requirements as part of their accreditation standards. Additionally, it instructs the Attorney General and the Secretary of Education, in consultation with the Secretary of Health and Human Services, to discontinue DEI-related practices in medical and graduate schools.

April 16, 2025
Ensuring Commercial, Cost-Effective Solutions in Federal Contracts
This EO states that agencies should prioritize procurement of commercially available products and services, as opposed to non-commercial goods (such as highly specialized, Government-unique systems, custom-developed products or services, or research and development requirements where the agency has not identified a satisfactory commercial option). It directs agencies within 60 days to conduct a review of all open solicitations and notices for non-commercial products and services, consolidate them into one application, and send it, including the market research and price analysis used to justify the procurement of a non-commercial product or service, to the agency’s approval authority. The approval authority will then make recommendations to advance commercial procurement within 30 days. If an agency is proposing to solicit a non-commercial product or service, the contracting officer shall detail the specific reasons a non-commercial good is needed, and a final decision may be made with consultation from the Office of Management and Budget.

April 15, 2025
Restoring Common Sense to Federal Procurement
This EO notes that the Federal Acquisition Regulation (FAR), which established uniform procedures for acquisitions across agencies, is an overcomplicated regulatory framework at 2,000 pages. The EO directs agencies and the Office of Federal Procurement Policy, within 180 days, to amend the FAR to include only provisions required by statute or essential to sound procurement. The EO notes this will advance the goals of the January EO, Unleashing Prosperity Through Deregulation. Within 15 days, agencies with procurement authority shall designate an official to work on FAR reform and provide recommendations. Within 20 days, the Office of Management and Budget shall issue implementation guidance and propose new agency supplemental regulations and internal guidance that promote expedited and streamlined acquisitions.

April 15, 2025
Lowering Drug Prices by Once Again Putting Americans First
This EO directs HHS within 60 days to propose and seek comment on revisions to the Medicare Drug Price Negotiation Program for initial price applicability in 2028. HHS should also work with Congress to address the timing disparity between small-molecule and biologic drugs, conduct a survey to determine hospital acquisition costs for outpatient drugs and propose adjustments to align Medicare reimbursement with those costs, condition health center grant funding on providing insulin and injectable epinephrine at or below the 340B acquisition cost plus a minimal administrative fee, and address payment incentives that encourage shifting of drug administration volume from physician office settings to hospital outpatient departments. The EO directs the CMS Innovation Center to develop a prescription drug payment model for high-cost Medicare-covered prescription drugs and biologicals, including those outside of the Medicare Drug Price Negotiation Program. The EO directs the FDA to accelerate the review of generics, biosimilars, and over-the-counter conversions and streamline the drug importation program under section 804 of the Federal Food, Drug, and Cosmetic Act to help states reduce costs. The EO directs the Department of Labor to propose new transparency requirements for pharmacy benefit manager compensation under the Employee Retirement Income Security Act of 1974. The EO directs HHS, the Department of Justice, the Department of Commerce, and the Federal Trade Commission to develop recommendations to “reduce anti-competitive behavior from pharmaceutical manufacturers.”

April 9, 2025
Reducing Anti-Competitive Regulatory Barriers
This EO directs agencies to identify regulations that create monopolies, impose unnecessary barriers to market entry, or limit competition, and to recommend rescission or modification. Agencies should send their findings to the Attorney General and the Federal Trade Commission (FTC) within 70 days. The EO also directs the FTC, within 10 days, to issue an RFI seeking public input on anti-competitive regulations and, within 90 days, create a list of anti-competitive regulations to be rescinded or modified. The proposed rescissions or modifications could be incorporated into the Unified Regulatory Agenda.

March 20, 2025
Stopping Waste, Fraud, and Abuse by Eliminating Information Silos
This EO requires agencies to ensure federal officials have access to all unclassified agency records, data, software systems, and information technology systems (including data from state programs that receive federal funding) for purposes of eliminating waste, fraud, and abuse. The goal of the EO is to facilitate intra- and inter-agency sharing of records. The EO also requires agencies, within 30 days, to rescind or modify all agency guidance that serves as a barrier to the inter- or intra-agency sharing of unclassified information and, within 45 days, to catalogue classified information policies and assess whether information is unnecessarily classified.

March 20, 2025
Eliminating Waste and Saving Taxpayer Dollars by Consolidating Procurement
This EO requires agencies to allow the General Services Administration (GSA) to conduct domestic procurement with respect to common goods and services for the agency. Within 90 days, GSA shall submit a comprehensive plan to OMB for the GSA to procure common goods and services. Further, within 30 days, OMB shall designate the GSA as the executive agent for all Government-wide acquisition contracts for information technology, which GSA can then defer or decline when necessary to ensure continuity of service or as otherwise appropriate. The EO directs GSA, on an ongoing basis, to rationalize Government-wide indefinite delivery contract vehicles for information technology for agencies, including as part of identifying and eliminating contract duplication, redundancy, and other inefficiencies.

March 18, 2025
Achieving Efficiency Through State and Local Preparedness
This EO seeks to expand the role of states and localities in national resilience and preparedness, which will likely have impacts on drug supply chain issues and future pandemic response. The EO directs the Assistant to the President for National Security Affairs (APNSA), in coordination with other relevant agencies, to publish a National Resilience Strategy within 90 days. APNSA shall also review critical infrastructure policies (including previous EOs on supply chains) and recommend a risk-informed approach within 180 days, and shall review all national continuity policies and recommend options to modernize and streamline the current approach within 180 days. Within 240 days, APNSA shall review the findings of the Federal Emergency Management Agency Council and all national preparedness and response policies, and recommend revisions that reformulate the process and metrics for determining federal responsibility. APNSA shall also create a National Risk Register within 240 days.

March 14, 2025
Continuing the Reduction of the Federal Bureaucracy
This EO continues efforts to reduce the federal government. The EO directs certain governmental entities to be eliminated to the maximum extent consistent with applicable law, including the United States Interagency Council on Homelessness, the Community Development Financial Institutions Fund, and the Minority Business Development Agency. Within 7 days of this EO, the head of each governmental entity listed shall submit a report to the Office of Management and Budget (OMB) confirming compliance and explaining which functions of the entity are statutorily required. The EO also directs OMB to reject funding requests for governmental agencies that do not comply with this order.

March 7, 2025
Restoring Public Service Loan Forgiveness
This EO directs the Secretary of Education to edit the Public Service Loan Forgiveness Program to exclude organizations that engage in “child abuse, including the chemical and surgical castration or mutilation of children or the trafficking of children to so-called transgender sanctuary States for purposes of emancipation from their lawful parents.”

February 26, 2025
Implementing The President’s “Department of Government Efficiency” Cost Efficiency Initiative
This EO requires agencies to build a centralized technological system to record every payment issued by the agency pursuant to each contract and grant, along with a written justification for each payment. The system must include a mechanism to pause and rapidly review any payment for which the approving employee has not submitted a written justification. Agencies, working with their DOGE team lead, shall issue guidance on the written justification requirement. The EO requires all agencies, in consultation with DOGE, to review existing contracts and grants and terminate or modify them to “promote efficiency and advance the policies of the current Administration” within 30 days. Agencies must also review contracting policies, procedures, and personnel and issue guidance on signing new contracts or modifying existing contracts to promote efficiency and the policies of the current administration. The EO also makes changes to non-essential travel justifications and credit cards, along with requiring agencies to submit information on property subject to the agency’s administration.

February 25, 2025
Making America Healthy Again by Empowering Patients with Clear, Accurate, and Actionable Healthcare Pricing Information
The EO references work from the first Trump administration on healthcare price transparency, and it states that the federal government will continue to promote universal access to clear and accurate healthcare prices and will take all necessary steps to improve existing price transparency requirements, increase enforcement of price transparency requirements, and identify opportunities to further empower patients with meaningful price information, potentially including through the expansion of existing price transparency requirements. The EO directs the Departments of Treasury, Labor, and Health and Human Services to “rapidly implement and enforce” price transparency regulations within 90 days. Actions specified include: requiring the disclosure of the prices of items and services, not estimates, issuing updated guidance or proposed regulatory action ensuring pricing information is standardized and comparable across hospitals and plans, and issuing guidance or proposed regulatory action updating enforcement policies designed to ensure compliance.

February 19, 2025
Ending Taxpayer Subsidization of Open Borders
This EO directs agencies to identify all federally funded programs that currently allow undocumented immigrants to obtain cash or non-cash benefits, and take action, consistent with applicable law, to align those programs with existing federal laws, including the Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA). It directs agencies to ensure that eligibility verification systems ensure that taxpayer-funded benefits exclude ineligible immigrants. It directs DOGE to identify other sources of federal funding for undocumented individuals and recommend additional agency actions to align with this EO.

February 19, 2025
Commencing the Reduction of the Federal Bureaucracy
In accordance with this EO, HHS shall terminate the Secretary’s Advisory Committee on Long COVID, and CMS shall terminate the Health Equity Advisory Committee. The EO calls for the termination of the Presidential Management Fellows program. This EO also directs non-statutory components and functions of certain foreign affairs governmental entities to be eliminated, as allowed under applicable law, and directs such entities to submit a report stating whether components of the entity are statutorily required.

February 19, 2025
Ensuring Lawful Governance and Implementing the President’s “Department of Government Efficiency” Deregulatory Initiative
This EO directs agency heads to work in coordination with DOGE team leads and OMB to review all regulations subject to their jurisdiction for consistency with law and Administration policy. Within 60 days, agencies shall submit to OMB a list of certain regulations, including those that the agency believes are unconstitutional, are not authorized by statutory authority, or impose undue burdens on small businesses, among other types of regulations. It directs agencies to de-prioritize enforcement actions premised on regulations that go beyond the powers vested by the Constitution, and to ensure enforcement actions are compliant with law and Administration policy. The EO directs OMB to issue implementation guidance.

February 18, 2025
Ensuring Accountability for All Agencies
This EO requires independent agencies, including the Federal Trade Commission, to submit proposed regulations to the Office of Information and Regulatory Affairs (OIRA) before publication in the Federal Register. The EO directs the Office of Management and Budget (OMB) to establish performance standards and management objectives for independent agencies and to review independent agency actions for consistency with the President’s priorities. The EO states that only the President and Attorney General can provide interpretations of law for the executive branch. It directs independent agencies to regularly consult with and coordinate policies and priorities with OMB, the Domestic Policy Council (DPC), and the White House National Economic Council.

February 18, 2025
Expanding Access to In Vitro Fertilization
This EO directs the Domestic Policy Council (DPC) to submit a list of policy recommendations on protecting IVF access and reducing out-of-pocket and health plan costs for IVF treatment.

February 14, 2025
Keeping Education Accessible and Ending COVID-19 Vaccine Mandates in Schools
This EO directs the Secretary of Education to issue guidelines to elementary schools, local educational agencies, State educational agencies, secondary schools, and institutions of higher education regarding those entities’ legal obligations with respect to parental authority, religious freedom, disability accommodations, and equal protection under law for COVID-19 vaccine school mandates. It directs the Secretary of Education, in consultation with the Secretary of Health and Human Services, to develop a plan to end COVID-19 vaccine school mandates. The plan should also include a list of discretionary Federal grants and contracts provided to schools that are non-compliant with the guidelines issued and each agency’s process for preventing Federal funds from being provided to, and rescinding Federal funds from, entities that are non-compliant with the guidelines.

February 13, 2025
Establishing the President’s Make America Healthy Again Commission
The EO states that agencies that address health must focus on reversing chronic diseases, including mental health disorders, obesity, diabetes, and other chronic conditions. The EO specifically states that all federally funded health research should be transparent and have open-source data, directs the National Institutes of Health (NIH) to prioritize research on why Americans are getting sick, directs all agencies to work with farmers to ensure food is healthy and affordable, and directs agencies to ensure expanded treatment options are available, including through flexible health insurance coverage. The EO establishes the President’s Make America Healthy Again Commission, chaired by Secretary of Health and Human Services Robert F. Kennedy, Jr. The first mission of the Commission will be to address childhood chronic diseases, with actions including studying contributing causes, assisting the President with public education, and providing government-wide recommendations on how to address childhood chronic diseases. Within 100 days, the Commission must submit a Make Our Children Healthy Again Assessment, which should include specific items such as comparing childhood chronic diseases in the US to other countries and reporting on best practices for prevention. Within 180 days, the Commission must submit a strategy on how to restructure the government’s response to childhood chronic diseases.

February 11, 2025
Implementing The President’s “Department of Government Efficiency” Workforce Optimization Initiative
This EO requires agencies to implement a workforce optimization initiative under which each agency can hire no more than one employee for every four employees that depart. Agency heads, in consultation with their DOGE team lead, must develop a hiring plan requiring that new career appointment hiring decisions be made in consultation with the agency’s DOGE team lead; that no vacancies for career appointments be filled that the DOGE team lead deems should not be filled, unless the agency head decides otherwise; and that DOGE team leads provide the US DOGE Service (USDS) Administrator with a monthly hiring report. Agency heads should prepare for large-scale reductions in force (RIFs), particularly in offices that perform functions not mandated by statute, including employees working on DEI initiatives. Agency heads must submit a report identifying statutes that establish the agency, or subcomponents of the agency, as statutorily required entities.

January 31, 2025
Unleashing Prosperity through Deregulation
This EO requires that whenever an agency promulgates a new rule, regulation, or guidance, it must identify at least 10 existing rules, regulations, or guidance documents to be repealed.  The Director of the Office of Management and Budget will ensure standardized measurement and estimation of regulatory costs. It requires that for fiscal year 2025, the total incremental cost of all new regulations, including repealed regulations, be significantly less than zero. It is unclear what this 10-to-1 ratio means in practice. A rule, regulation, or a guidance document could be one thousand pages, or it could be one paragraph. It could represent a significant policy, or it could be a minor, technical requirement.

January 28, 2025
Protecting Children from Chemical and Surgical Mutilation
This EO states that the policy of the US is to “not fund, sponsor, promote, assist, or support the so-called ‘transition’ of a child from one sex to another.” The EO directs agencies to rescind or amend any guidance that relies on guidance from the World Professional Association for Transgender Health (WPATH). It directs HHS to publish a literature review on best practices for promoting the health of children who assert gender dysphoria. It directs agencies that provide research or education grants to medical institutions to ensure grantees are not performing any care that is prohibited under this EO. It directs HHS, TRICARE, and the federal employee health benefits program not to cover this care, and it directs HHS to take such action through vehicles such as Medicare or Medicaid conditions of participation, section 1557, or mandatory drug use reviews. It directs the Attorney General to investigate companies that provide transition-related medications that may have long-term side effects, and urges the Attorney General to work with Congress to enact legislation creating a private right of action for children and their parents.

January 27, 2025
Reinstating Service Members Discharged Under the Military’s COVID-19 Mandate
This EO reinstates service members who were discharged for refusing to comply with the COVID-19 vaccine mandate that was imposed in August 2021 and rescinded in January 2023. The EO directs that they be reinstated at their former rank and receive full back pay.

January 24, 2025
Enforcing the Hyde Amendment
This EO revokes two Biden-era reproductive health EOs. The EO also directs the Office of Management and Budget to issue guidance ensuring agencies comply with the Hyde Amendment, which is passed by Congress annually and prohibits federal funding for abortion.

January 23, 2025
Removing Barriers to American Leadership in Artificial Intelligence
The EO directs the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with the heads of relevant agencies, to develop and submit to the President an action plan to achieve the policy set forth in this EO. It also directs those officials to review all policies, directives, regulations, orders, and other actions taken pursuant to the revoked Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence) and propose suspending, revising, or rescinding any actions inconsistent with this new EO.

January 23, 2025
President’s Council of Advisors on Science and Technology
This EO establishes the President’s Council of Advisors on Science and Technology (PCAST), which will be composed of no more than 24 members and co-chaired by the Assistant to the President for Science and Technology (APST) and the Special Advisor for AI & Crypto. The remaining members will be appointed by the President and include individuals and representatives from sectors outside of the federal government. The Co-Chairs may designate up to two Vice Chairs of the PCAST from among the non-federal members. The PCAST will advise the President on matters involving science, technology, education, and innovation policy, and will provide the President with scientific information needed to inform public policy relating to the economy, workers, national and homeland security, and other topics. The PCAST will meet regularly, solicit information from stakeholders, and serve in specified advisory committee and panel roles. The EO also revokes the Biden-era EO of the same name that created the previous PCAST.

January 20, 2025
Withdrawing the US from the World Health Organization
This EO provides notice of intent to withdraw from the World Health Organization (WHO), citing mishandling of the COVID-19 pandemic and an inability to demonstrate independence from the political influence of WHO member states. It directs the State Department and Office of Management and Budget to pause transfer of funds to the WHO and recall any personnel working in any capacity at the WHO.


Artificial Intelligence Bias – Harper v. Sirius XM Challenges Algorithmic Discrimination in Hiring

On August 4, 2025, Plaintiff Arshon Harper (“Harper”) filed a class action complaint in the Eastern District of Michigan against Sirius XM Radio, LLC (“Sirius”) asserting claims of both unintentional and intentional racial discrimination under Title VII of the Civil Rights Act.
Harper alleges that Sirius’ use of a commercial AI hiring tool that screens and analyzes resumes resulted in racial discrimination against him and other similarly situated African American applicants.
In his complaint, Harper claims that he applied to approximately 150 job positions and that Sirius rejected him for all but one, despite his allegedly meeting or exceeding the job qualifications. Sirius’ use of AI to screen applications, according to Harper, disproportionately rejected him and other African American applicants by relying on certain inputs—such as educational institutions, employment history, and zip codes—as proxies for race. Harper further alleges that the AI screening tool improperly removed his resume from further consideration due to factors unrelated to his qualifications and Sirius’ business needs.
Harper’s lawsuit is not the first to allege that an employer’s use of workplace AI constitutes unlawful discrimination. One of the more noteworthy cases is Mobley v. Workday, pending in the Northern District of California. In that case, Mobley alleges that he applied to over 100 jobs with employers using Workday’s applicant-screening AI platform and was rejected each time, despite meeting the job requirements. Like Harper, Mobley claims that the AI tool improperly generated its decisions to reject his applications based on protected categories. In May 2025, the Court certified a preliminary collective action for Mobley’s age-discrimination claims.
Both Harper and Mobley raise critical legal questions about the role of workplace AI tools, potentially biased and discriminatory AI, and employer liability. These cases serve as reminders to employers that they are subject to liability under existing federal and state anti-discrimination statutes when they rely on AI, in whole or in part, to make decisions impacting recruiting and hiring, or employees’ terms and conditions of employment. Therefore, when employers use AI, they should audit, monitor, and validate the fairness of their selection processes to ensure compliance with the law.
Plaintiffs will likely continue filing cases similar to Harper and Mobley as employers adopt and implement workplace AI tools. Also, employers’ obligations will likely expand as an increasing number of states are passing laws regulating the use of AI workplace tools. Accordingly, employers should take the following proactive steps to minimize exposure to legal and reputational risk:
1. Assess Best Use Cases for the Company.
Identify areas where AI can increase efficiency with minimal risk, while leaving core human judgments in the recruiting and hiring process to people.
2. Conduct Due Diligence on Third-Party AI Vendors.
Vet third-party vendors by reviewing how the AI model was trained and obtaining information about the datasets that were used. Ask how the vendor addresses potentially discriminatory decision-making by the model.
3. Develop a Comprehensive Plan for AI Implementation.
Ensure that a system for checking AI work product is in place to catch any problematic outputs by the AI model.
4. Conduct Internal Workplace AI Audits and Assessments.
Conduct regular audits of all workplace AI to ensure the tools are functioning as intended and are not causing disparities among protected categories.
5. Ensure Compliance with Federal, State, and Local Laws, Regulations, and Guidance.
Regularly check for updates on the ever-changing legal landscape in the AI space and consult with counsel on best practices to remain compliant with the law.
6. Monitor Outcomes and Review for Bias.
Keep detailed records of AI outputs and conduct statistical analyses to identify potential bias; one such analysis is sketched after this list.
7. Establish Safeguards and Opt-Out Alternatives.
Provide prospective employees with a clear notice of the use of AI in the hiring process and allow them the ability to opt-out of the use of AI with respect to their application.
8. Use AI as a Support Tool, Not the Final Decision Maker.
Ensure that a human employee always has the final say in employment decisions like recruiting, hiring, firing, or promotions, and does not rely solely on the decision of an AI tool.
9. Create Company Policies.
Provide policies that clarify for employees when the use of AI tools is appropriate and the best practices when using them.
10. Verify Proper Human Oversight.
Train employees who oversee AI tools on how to check the AI’s outputs and identify potential legal risk from these outputs.
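To make item 6 concrete, below is a minimal sketch of one common statistical bias analysis: computing selection rates by demographic group and applying the EEOC’s “four-fifths rule,” under which a group’s selection rate below 80% of the highest group’s rate is a conventional red flag for adverse impact. The data, group labels, and function names here are hypothetical illustrations, not a prescribed methodology; a real audit should be designed with counsel and qualified statistical expertise.

```python
# Minimal sketch: adverse-impact screening with the EEOC "four-fifths rule".
# All data and names are hypothetical.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, was_selected) tuples."""
    applied = Counter(group for group, _ in outcomes)
    selected = Counter(group for group, chosen in outcomes if chosen)
    return {group: selected[group] / applied[group] for group in applied}

def four_fifths_check(rates):
    """Compare each group's selection rate to the highest group's rate."""
    top = max(rates.values())
    return {group: (rate / top, rate / top >= 0.8) for group, rate in rates.items()}

# Hypothetical outcomes from an AI resume-screening tool:
# group A: 40 of 100 applicants advanced; group B: 20 of 100 advanced.
outcomes = ([("A", True)] * 40 + [("A", False)] * 60
            + [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(outcomes)
for group, (ratio, passes) in four_fifths_check(rates).items():
    print(f"Group {group}: selection rate {rates[group]:.2f}, "
          f"impact ratio {ratio:.2f}, four-fifths rule {'met' if passes else 'NOT met'}")
```

The four-fifths rule is only a screening heuristic; regulators and courts also weigh statistical and practical significance, so a failed ratio should trigger deeper review rather than be treated as a legal conclusion.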

AI Police Surveillance Bias – The “Minority Report” Impacting Constitutional Rights

One of my favorite Steven Spielberg movies is the 2002 dystopian thriller Minority Report, in which “precogs” working for the “Precrime” unit predict murders before they happen, allowing arrests for crimes not yet committed. Recently, Pennsylvania attorneys at the Philadelphia Bench-Bar Conference raised an alarm that AI surveillance could soon create such a world – mass rights violations due to biased facial recognition, unregulated predictive policing, and “automation bias,” the dangerous tendency to trust computer conclusions over human judgment.
The comparison may no longer be hyperbole. Although we haven’t created mutant psychics, we’ve built AI surveillance systems that claim to predict who will commit crimes, where they will occur, and when violence will happen. Unlike Spielberg’s film, where the technology worked relatively well, real-world predictive policing has well-documented bias, opacity, and constitutional problems that should alarm organizations considering these tools.
Facial Recognition Bias: The Documented 40-to-1 Accuracy Gap
One Philadelphia criminal defense attorney at the conference emphasized that the core issue with AI in law enforcement isn’t the technology itself but “the physical person developing the algorithm” and “the physical person putting his or her biases in the program.” The data confirms this concern with devastating precision.
The landmark 2018 “Gender Shades” study by Joy Buolamwini of MIT Media Lab and Timnit Gebru (then at Microsoft Research) found that commercial facial recognition systems show error rates of just 0.8% for light-skinned men but 34.7% for darker-skinned women — a 40-fold disparity. A 2019 National Institute of Standards and Technology (NIST) report, which tested 189 facial recognition algorithms from 99 developers, found that African American and Asian faces were between 10 and 100 times more likely to be misidentified than white male faces.
Another panelist highlighted that gait recognition and other biometric identification tools display “reduced accuracy in identifying Black, female and elderly people.” The technical limitations extend beyond demographics: gait recognition systems struggle with variations in clothing, occlusion, viewing angles, and lighting conditions — exactly the real-world circumstances law enforcement officers encounter.
Automation Bias in Criminal Justice: Why Police Trust Algorithms Over Evidence
Panelists also warned about “automation bias,” describing how “people are just deferring to computers,” assuming AI-generated analysis is inherently superior to human reasoning. Research confirms this tendency, with one 2012 study finding that fingerprint examiners were influenced by the order in which computer systems presented potential matches.
The consequences are devastating. At least eight Americans have been wrongfully arrested after facial recognition misidentifications, with police in some cases treating software suggestions as definitive facts — one report described an uncorroborated AI result as a “100% match,” while another said officers used the software to “immediately and unquestionably” identify a suspect. In most cases, basic police work, such as checking alibis or comparing tattoos, would have eliminated these individuals before arrest.
Mass Surveillance Infrastructure: Body Cameras, Ring Doorbells, and Real-Time AI Analysis
Minority Report depicted a surveillance state where retinal scanners tracked citizens through shopping malls and personalized advertisements called out to them by name. But author Philip K. Dick — who wrote the 1956 short story that inspired the film — couldn’t have imagined the actual police surveillance infrastructure: body cameras on every officer, Ring doorbells on every porch, and AI systems analyzing it all in real time. (In fact, consumer-facing companies like Ring have partnered with law enforcement to provide access to doorbell cameras, effectively turning residents’ home security devices into mass surveillance infrastructure – some say without homeowners’ meaningful consent.) Unlike Spielberg’s film, where the Precrime system operated under federal oversight with clearly defined rules, real-world deployment is happening in a regulatory vacuum, with vendors selling capabilities to police departments before policymakers understand the civil liberties implications.
These and other examples reveal a fundamental flaw in predictive policing that Minority Report never addressed: the AI systems cannot distinguish between future perpetrators and future victims. Current algorithms struggle to differentiate the two, and research suggests they would require “a 1,000-fold increase in predictive power” before they could reliably pinpoint crime. Like the film’s Precrime system that operated on the precogs’ visions without questioning their accuracy, real-world police departments are deploying predictive policing tools without proof that they perform better than traditional police work.
What Organizations Must Do Now: AI Surveillance Compliance Requirements
For any entity developing, deploying, or enabling AI surveillance systems in law enforcement contexts, three immediate actions are critical:
Mandate rigorous bias testing before deployment. Document facial recognition error rates across demographic groups; a minimal sketch of this kind of disaggregated error reporting follows these three actions. If your system shows disparate accuracy rates similar to those documented in the Gender Shades study — where darker-skinned individuals face error rates forty times higher — you’re exposing yourself to civil rights liability and constitutional challenges under Fourth and Fourteenth Amendment protections.
Require human verification of all consequential decisions. AI-generated results should never serve as the sole basis for arrests, searches, or other rights-affecting actions. Traditional investigative methods — alibi checks, physical evidence comparison, witness interviews — must occur before acting on algorithmic suggestions to comply with probable cause requirements.
Implement transparency and disclosure requirements. Police departments should maintain public inventories of AI tools used in criminal investigations and disclose AI use in police reports to ensure prosecutors can meet their constitutional obligations under Brady v. Maryland to share this information with criminal defendants.
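As a concrete illustration of the bias-testing step above, the sketch below computes per-group false match and false non-match rates from a labeled evaluation set — the kind of disaggregated error reporting used in the Gender Shades study and NIST’s algorithm testing. All data, group labels, and counts here are hypothetical.

```python
# Minimal sketch: disaggregated error-rate reporting for a face-matching system.
# All trials, groups, and counts are hypothetical.
from dataclasses import dataclass

@dataclass
class Trial:
    group: str         # demographic group label from the test set
    same_person: bool  # ground truth: do the two images show the same person?
    predicted: bool    # system output: did it declare a match?

def error_rates(trials):
    """Per-group false match rate (FMR) and false non-match rate (FNMR)."""
    stats = {}
    for t in trials:
        s = stats.setdefault(t.group, {"fm": 0, "imp": 0, "fnm": 0, "gen": 0})
        if t.same_person:
            s["gen"] += 1                  # genuine pair
            s["fnm"] += (not t.predicted)  # missed match
        else:
            s["imp"] += 1                  # impostor pair
            s["fm"] += t.predicted         # false match
    return {g: (v["fm"] / max(v["imp"], 1), v["fnm"] / max(v["gen"], 1))
            for g, v in stats.items()}

# Hypothetical evaluation: group_2 is misidentified far more often than group_1.
trials = ([Trial("group_1", True, True)] * 99 + [Trial("group_1", True, False)] * 1
          + [Trial("group_1", False, False)] * 99 + [Trial("group_1", False, True)] * 1
          + [Trial("group_2", True, True)] * 65 + [Trial("group_2", True, False)] * 35
          + [Trial("group_2", False, False)] * 90 + [Trial("group_2", False, True)] * 10)

for group, (fmr, fnmr) in error_rates(trials).items():
    print(f"{group}: false match rate {fmr:.2f}, false non-match rate {fnmr:.2f}")
```

Reporting error rates side by side for each group, rather than a single aggregate accuracy figure, is what exposes disparities like the 40-to-1 gap described above.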
Bottom Line: AI Surveillance Legal Risk
Minority Report ends with the dismantling of Precrime after the conspiracy is exposed — a moral conclusion showing that no amount of security justifies sacrificing individual freedom and due process. Twenty-three years later, law enforcement agencies are making the opposite choice. Police departments worldwide are treating Spielberg’s cautionary tale as an implementation manual, deploying the very systems Dick and Spielberg warned against. Argentina recently announced an “Applied Artificial Intelligence for Security Unit” specifically to “use machine learning algorithms to analyze historical crime data to predict future crimes.” Researchers at the University of Chicago claimed 90% accuracy in predicting crimes a week before they happen. The UK’s West Midlands Police researched systems using “a combination of AI and statistics” to predict violent crimes.
As the Philadelphia lawyers emphasized, the problem isn’t the technology — it’s the people programming it and the legal framework (or lack thereof) governing deployment. Without rigorous bias testing, mandatory human oversight, and transparency requirements, AI surveillance will do precisely what Precrime did in the film: create the appearance of safety while systematically violating the constitutional rights of those the algorithm flags as “dangerous” based on opaque criteria and historical prejudices embedded in training data.

MA Office of Bar Counsel Pens Guidance for Lawyers Using AI

Continuing the weekly blog posts about lawyers using AI and getting into trouble, the Massachusetts Office of Bar Counsel recently issued an article entitled “Two Years of Fake Cases and the Courts are Ratcheting Up the Sanctions,” summarizing the problems courts have encountered when confronted with lawyers citing fake cases, and the subsequent referrals to disciplinary counsel.
The article outlines multiple cases of lawyers being sanctioned for filing pleadings containing fake cases after using generative AI tools to draft the pleading. The cases range from lawyers not checking the cites themselves, to supervising lawyers not checking the cites of lawyers they are supervising before filing the pleading.
The article reiterates our professional ethical obligations as officers of the court to always file pleadings that “to the best of the attorney’s knowledge, information and belief, there is a good ground to support it,” that “any lawyer who signs, files, submits, or later advocates for any pleading, motion or other papers is responsible for its content,” and that lawyers are to provide proper supervision to subordinate lawyers and nonlawyers.
The article outlines two recent sanctions imposed upon lawyers in Massachusetts in 2025. The author states, “Massachusetts practitioners would be well-served to read the sanction orders in these matters.” I would suggest that non-Massachusetts practitioners should read the article and the sanctions imposed as they are similar to what other courts are imposing on lawyers who are not checking the content and cites of the pleadings before filing them.
Courts are no longer giving lawyers free passes for being unaware of the risks of using generative AI tools to draft pleadings. According to the article, sanctions will continue to be issued, and practitioners and firms need to address the issue head on.
The article points out several mitigations that lawyers and firms can take to avoid sanctions. My suggestion is that lawyers use caution when using AI to draft pleadings, communicate with any other lawyers involved in drafting the pleadings to determine whether AI is being used (including if you are serving as local counsel), and check and re-check every cite before you file a pleading with a court.