Employees Hiding Use of AI Tools at Work

A new study by Ivanti finds that one in three workers secretly use artificial intelligence (AI) tools in the workplace. Their reasons vary, including “I like a secret advantage,” “My job might be reduced/cut,” “My employer has no AI usage policy,” “My boss might give me more work,” “I don’t want people to question my ability,” and “I don’t want to deal with IT approval processes.”
In 2025, a staggering 42% of employees admit to using generative AI (GenAI) tools at work. Another 48% admit to feeling resenteeism (disliking one’s job but staying anyway), and 39% admit to presenteeism (coming into the office to be seen while not being productive).
The secret use of GenAI tools in the workplace poses several risks for organizations, including unauthorized disclosure of company data and/or personal information, cybersecurity risks, bias and discrimination, and misappropriation of intellectual property.
The Ivanti study emphasizes the need for organizations to adopt an AI governance program so that employees feel comfortable using approved AI tools rather than keeping their use secret. Such a program also allows the organization to monitor employees’ use of AI tools and to implement guidelines and guardrails around their safe use, reducing risk.

Privacy Tip #443 – Fake AI Tools Used to Install Noodlophile

Threat actors are leveraging the publicity around AI tools to trick users into downloading the malware known as Noodlophile through social media sites. 
Researchers from Morphisec have observed threat actors, believed to originate from Vietnam, posting on Facebook groups and other social media sites touting free AI tools. Users lured by the promise of free AI tools unwittingly download Noodlophile Stealer, “a new malware that steals browser credentials, crypto wallets, and may install remote access trojans like XWorm.” Morphisec observed “fake AI tool posts with over 62,000 views per post.”
According to Morphisec, Noodlophile is a previously undocumented malware that criminals sell as malware-as-a-service, often bundled with other tools designed to steal credentials.
Beware of deals that are too good to be true, and exercise caution when downloading any content from social media.

Utah Enacts AI Amendments Targeted at Mental Health Chatbots and Generative AI

Utah is one of a handful of states that have led the way in regulating AI. Utah’s Artificial Intelligence Policy Act[i] (“UAIPA”), enacted in 2024, requires disclosures relating to consumer interactions with generative AI, with heightened requirements for regulated professions, including licensed healthcare professionals.
Utah recently passed three AI laws (HB 452, SB 226, and SB 332), all of which became effective on May 7, 2025, and either amend or expand the scope of the UAIPA. The laws govern the use of mental health chatbots, revise disclosure requirements for the deployment of generative AI in connection with a consumer transaction or the provision of regulated services, and extend the repeal date of the UAIPA.
HB 452
HB 452 creates disclosure requirements, advertising restrictions, and privacy protections for the use of mental health chatbots.[ii] “Mental health chatbot” refers to AI technology that (1) uses generative AI to engage in conversations with a user, similar to communications one would have with a licensed therapist, and (2) a supplier represents, or a reasonable person would believe, can provide mental health therapy or help manage or treat mental health conditions. “Mental health chatbots” do not include AI technology that only provides scripted output (such as guided meditations or mindfulness exercises).
Disclosure Requirements
A mental health chatbot must clearly and conspicuously disclose that the mental health chatbot is an AI technology and not human. The disclosure must be made (1) before the user accesses features of the mental health chatbot, (2) at the beginning of any interaction with the user, if the user has not accessed the mental health chatbot within the previous 7 days, and (3) if asked or prompted by the user whether AI is being used.
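For suppliers building these triggers into a product, the timing rules reduce to a simple conditional check. The Python sketch below is purely illustrative (the function name, the keyword matching, and the disclosure string are our own assumptions, not statutory language); it shows one way the three triggers might be encoded:

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical encoding of HB 452's three disclosure triggers.
AI_DISCLOSURE = "You are interacting with an AI chatbot, not a human."
AI_QUESTIONS = ("are you ai", "are you a bot", "are you human")

def disclosure_required(first_access: bool,
                        last_interaction: Optional[datetime],
                        user_message: str,
                        now: datetime) -> bool:
    # (1) Before the user accesses features of the chatbot.
    if first_access:
        return True
    # (2) At the start of an interaction, if the user has not accessed
    #     the chatbot within the previous 7 days.
    if last_interaction is None or now - last_interaction > timedelta(days=7):
        return True
    # (3) If asked or prompted by the user whether AI is being used.
    return any(q in user_message.lower() for q in AI_QUESTIONS)
```

In practice, a supplier would surface the disclosure clearly and conspicuously whenever the check returns True; naive keyword matching alone would likely be too crude for trigger (3).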
Personal Information Protections
Mental health chatbot suppliers may not sell or share with any third party the individually identifiable health information (“IIHI”) or user input of a user. The prohibition does not apply to IIHI that (1) a health care provider requests with the user’s consent, (2) is provided to a health plan upon the request of the user, or (3) is shared by the supplier as a covered entity to a business associate to ensure effective functionality of the mental health chatbot and in compliance with the HIPAA Privacy and Security Rules.
Advertising Restrictions
A mental health chatbot cannot be used to advertise a specific product or service to a user in a conversation between the user and the mental health chatbot, unless the mental health chatbot clearly and conspicuously (1) identifies the advertisement as an advertisement and (2) discloses any sponsorship, business affiliation or agreement with a third party to promote or advertise the product or service. Suppliers of mental health chatbots may not use a user’s input to (1) determine whether to display advertisements to the user unless the advertisement is for the mental health chatbot itself, (2) customize how advertisements are presented, or (3) determine a product, service or category to advertise to the user.
Affirmative Defense
HB 452 establishes an affirmative defense to violations of the law, which requires, among other things, creating, maintaining, and implementing a policy for the mental health chatbot that meets specific requirements outlined in the law and filing that policy with the Utah Division of Consumer Protection.
Penalties
Violation of the law may result in administrative fines up to $2,500 per violation and court action by the Utah Division of Consumer Protection.
SB 226
SB 226 pares back UAIPA’s disclosure requirements applicable to a supplier that uses generative AI in a consumer transaction to when (1) there is a “clear and unambiguous” request from an individual to determine whether an interaction is with AI, rather than any request, and (2) an individual interacts with generative AI in the course of receiving regulated services that constitute a “high-risk” AI interaction, instead of any generative AI interaction in the provision of regulated services.[iii]
Disclosure Requirements
If an individual asks or prompts a supplier about whether AI is being used, a supplier that uses generative AI to interact with an individual in connection with a consumer transaction must disclose that the individual is interacting with generative AI and not a human. While this requirement also existed under the UAIPA, SB 226 clarifies that disclosure is only required when the individual’s prompt or question is a “clear and unambiguous request” to determine whether an interaction is with a human or AI.
The UAIPA also requires persons who provide services of a regulated occupation to prominently disclose when a person is interacting with generative AI in the provision of regulated services, regardless of whether the person inquires if they are interacting with generative AI. Under SB 226, such disclosure is only required if the use of generative AI constitutes a “high-risk artificial intelligence interaction.” The disclosure must be provided verbally at the start of a verbal conversation and in writing before the start of a written interaction. “Regulated occupation” means an occupation that is regulated by the Utah Department of Commerce and requires a license or state certification to practice the occupation, such as nursing, medicine, and pharmacy. “High-risk AI interaction” includes an interaction with generative AI that involves (1) the collection of sensitive personal information, such as health or biometric data and (2) the provision of personalized recommendations, advice, or information that could reasonably be relied upon to make significant personal decisions, including the provision of medical or mental health advice or services.
Safe Harbor
A person is not subject to an enforcement action for violating the disclosure requirements if the person’s generative AI clearly and conspicuously discloses, at the outset of and throughout an interaction in connection with a consumer transaction or the provision of regulated services, that it is (1) generative AI, (2) not human, or (3) an AI assistant.
Penalties
Violation of the law may result in administrative fines up to $2,500 per violation and a court action by the Utah Division of Consumer Protection.
SB 332
SB 332 extended the repeal date of the UAIPA from May 1, 2025 to July 1, 2027.[iv]
Looking Forward
Companies that offer mental health chatbots or generative AI in interactions with individuals in Utah should evaluate their products and processes to ensure compliance with the law. Furthermore, the AI regulatory landscape at the state level is rapidly changing as states attempt to govern the use of AI in an increasingly deregulatory federal environment. Healthcare companies developing and deploying AI should monitor state developments.
FOOTNOTES
[i] S.B. 149 (“Utah Artificial Intelligence Policy Act”), 65th Leg., 2024 Gen. Session (Utah 2024), available here.
[ii] H.B. 452, 66th Leg., 2025 Gen. Session (Utah 2025), available here.
[iii] S.B. 226, 66th Leg., 2025 Gen. Session (Utah 2025), available here.
[iv] S.B. 332, 66th Leg., 2025 Gen. Session (Utah 2025), available here.

5 Key Contracting Considerations for Digital Health Companies Working with AI Vendors

Artificial Intelligence (AI) is rapidly transforming digital health — from patient engagement to clinical decision-making, the changes are revolutionary. Contracting with AI vendors presents new legal, operational, and compliance risks. Digital health CEOs and legal teams must adapt traditional contracting playbooks to address the realities of AI systems handling sensitive and highly regulated health care data.
To help ensure optimal results, here are five critical areas for digital health companies to address when negotiating contracts with potential AI vendors:
1. Define AI Capabilities, Scope, and Performance
Your contract should explicitly:

Describe what the AI tool does, its limitations, integration points, and expected outcomes.
Establish measurable performance standards and incorporate them into service-level agreements.
Include user acceptance testing and remedies, such as service credits or termination if performance standards are not met. This protects your investment in AI-driven services and aligns vendor accountability with your operational goals.

2. Clarify Data Ownership and Usage Rights
AI thrives on data, so clarity around data ownership, access, and licensing is essential. The contract should state the specific data the vendor can access and use — including whether such data includes protected health information (PHI), other personal information, or operational data — and whether it can be used to train or improve the vendor’s models. Importantly, your contract should ensure that any vendor use of data aligns with HIPAA, state privacy laws, and your internal policies, including restricting reuse of PHI or other sensitive health data for purposes other than the vendor providing the services to your company or other purposes permitted by law. There is much greater flexibility to license de-identified data to the vendor for training or developing AI models, if the company has the appetite for such data licensing.
You should also scrutinize broad data licenses. Be careful not to assume liability for how a vendor repurposes your data unless the use case is clearly authorized in the contract.
3. Demand Transparency and Explainability
Regulators and patients expect transparency in AI-driven health care decisions. Require documentation that explains how the AI model works, the logic behind outputs, and what safeguards are in place to mitigate bias and inaccuracies.
Beware of vendors reselling or embedding third-party AI tools without sufficient knowledge or flow-down obligations. The vendor should be able to audit or explain the tools it licenses from third parties if those AI tools are handling your company’s sensitive health care data.
4. Address Liability and Risk Allocation
AI-related liability, especially from errors, hallucinations, or cybersecurity incidents, can have sizable consequences. Ensure the contract includes tailored indemnities and risk allocations based on the data sensitivity and function of the AI tool.
Watch out for vendors who exclude liability for AI-generated content. This may be acceptable for internal tools but not for outputs that reach patients, payors, or regulators. Low-cost tools with high data exposure can pose disproportionate liability risk, especially if liability caps are tied only to the contract fees.
5. Plan for Regulatory Compliance and Change
With evolving rules from federal and state privacy regulators, vendors must commit to ongoing compliance with current and future requirements. Contracts should allow flexibility for future changes in law or best practices, helping ensure that the AI tools your company relies on will not fall behind the regulatory curve — or worse, expose your company to enforcement risk due to noncompliance or outdated model behavior.
Incorporating this AI Vendor Contracting Checklist into your vendor selection process will help CEOs systematically manage risks, compliance, and innovation opportunities when engaging with AI vendors.
AI Vendor Contracting Checklist:

Define AI scope, capabilities, and performance expectations.
Clarify data ownership, access, and privacy obligations.
Require transparency and explainability of AI processes.
Set clear liability, risk, and compliance responsibilities.
Establish terms for updates, adaptability, and exit strategy.

AI solutions in the health care space continue to rapidly evolve. Thus, digital health companies should closely monitor any new developments and continue to take necessary steps towards protecting themselves during the contracting process.

BIS Issues Four Key Updates on Advanced Computing and AI Export Controls

On May 13, 2025, the U.S. Department of Commerce’s Bureau of Industry and Security (“BIS”) announced four significant policy developments under the Export Administration Regulations (“EAR”), affecting exports, reexports, and in-country transfers of certain advanced integrated circuits (“ICs”) and related computing items with artificial intelligence (“AI”) applications. These actions reflect the Trump administration’s first moves to address national security risks associated with exports of emerging technologies, and to prevent use of such items in a manner contrary to U.S. policy. Below is a summary of each development and its practical implications.
1. Initiation of Rescission of the “AI Diffusion Rule”
As explained in a press release, BIS has begun the process to rescind the so-called “AI Diffusion Rule,” issued in the closing days of the Biden administration and slated to go into effect on May 15. That rule would have imposed sweeping worldwide controls on specified ICs and set up a three-tiered system for access to such items by countries around the world. The rescission is intended to streamline U.S. export controls and avoid “burdensome new regulatory requirements” and strain on U.S. diplomatic relations. 
It will be important to monitor developments for BIS’s anticipated issuance of the formal rescission and for the control regime that BIS will likely implement in its place. In the meantime, all IC-related controls preceding the AI Diffusion Rule remain in effect. 
2. New End-Use Controls for Advanced Computing Items
BIS has issued a policy statement informing the public of new end-use controls targeting the training of large AI models. Specifically, the statement provides that the EAR may impose restrictions on the export, reexport, and in-country transfer of certain advanced ICs and computing items when there is knowledge or reason to know that the items will be used to train AI models for, or on behalf of, weapons of mass destruction or military-intelligence end uses in, or end users headquartered in, China and other countries in BIS Country Group D:5. Furthermore, U.S. persons are prohibited from knowingly supporting such activity.
This development underscores the importance of robust due diligence and end-use screening for companies involved in exports, re-exports, and transfers of such items, especially to Infrastructure as a Service providers.
3. Guidance to Prevent Diversion: Newly Specified Red Flags
To assist industry in preventing unauthorized diversion of controlled items to prohibited end-users or end-uses, BIS has published updated guidance identifying new “red flags” that may indicate a risk of such diversion. The guidance provides practical examples and scenarios, such as unusual purchasing patterns, requests for atypical technical specifications, or inconsistencies in end-user information. Companies are encouraged to review and update their compliance programs to incorporate these new red flags and to ensure that employees are trained to recognize and respond to potential diversion risks. 
4. Prohibition of Transactions Involving Certain Huawei “Ascend” Chips Under “General Prohibition Ten”
BIS has released guidance regarding the use of and transactions in certain Huawei “Ascend” chips meeting the parameters for control under Export Control Classification Number (“ECCN”) 3A090, clarifying the application to such activities of “General Prohibition Ten” under the EAR. This prohibition restricts all persons worldwide from engaging in a broad range of dealings in, and use of, specified Ascend chips that BIS alleges were produced in violation of the EAR.
Regarding due diligence in this context, BIS has provided the following guidance:
If a party intends to take any action with respect to a PRC 3A090 IC for which it has not received authorization from BIS, that party should, to ensure compliance with the EAR, confirm with its supplier prior to performing any of the activities identified in GP10 that authorization exists for the export, reexport, transfer (in-country), or export from abroad of (1) the production technology for that PRC 3A090 IC from its designer to its fabricator, and (2) the PRC 3A090 IC itself from the fabricator to its designer or other supplier.
Key Takeaways for Industry
It is important to keep in mind that the BIS actions focus on dealings in ICs and advanced computing items meeting the control parameters of ECCN 3A090 and related ECCNs. With that in mind, the following steps are recommended:

Review and update compliance programs: Impacted companies should promptly assess their export control policies and procedures in light of these developments, with particular attention to end-use and end-user screening.
Monitor regulatory changes: The rescission of the AI Diffusion Rule and the introduction of new end-use and General Prohibition Ten controls may require adjustments to licensing strategies.
Enhance employee training: Incorporate the newly specified red flags and guidance into training materials for relevant personnel.

BIS’s latest actions reflect a dynamic regulatory environment for national security regulation of advanced computing and AI technologies. Companies operating in these sectors should remain vigilant and proactive in managing compliance risks, as there are likely to be more developments in this area in the months ahead.

China’s State Council Releases 2025 Legislative Plan – Amended Trademark Law in the Works

On May 14, 2025, China’s State Council released their 2025 Legislative Plan (国务院2025年度立法工作计划) including considering several IP-related laws and regulations. Specifically, “[i]n terms of implementing the strategy of rejuvenating the country through science and education and building a socialist cultural power, the draft amendment to the Trademark Law will be submitted to the Standing Committee of the National People’s Congress for deliberation… and the regulations on the protection of new plant varieties will be revised… The implementing regulations of the Copyright Law, the collective management regulations of copyright, the Internet Information Service Management Measures, the implementation regulations of the Cultural Relics Protection Law, [and] the integrated circuit layout design protection regulations … will be revised. Legislation work on the healthy development of artificial intelligence will be promoted.”

A list of legislative projects for 2025 follows. The full text of the announcement is available here (Chinese only).
I. Bills to be submitted to the Standing Committee of the National People’s Congress for deliberation (16 items)
1. Draft National Development Planning Law (drafted by the National Development and Reform Commission)
2. Draft Amendment to the Foreign Trade Law (drafted by the Ministry of Commerce)
3. Draft Amendment to the Prison Law (drafted by the Ministry of Justice)
4. Draft Medical Insurance Law (drafted by the National Medical Insurance Administration)
5. Draft Social Assistance Law (drafted by the Ministry of Civil Affairs and the Ministry of Finance)
6. Draft Law on Protection and Quality Improvement of Cultivated Land (drafted by the Ministry of Natural Resources and the Ministry of Agriculture and Rural Affairs)
7. Draft Amendment to the Food Safety Law (drafted by the State Administration for Market Regulation)
8. Draft Amendment to the Banking Supervision and Administration Law (drafted by the Financial Supervision Administration)
9. Draft Amendment to the Tendering and Bidding Law (drafted by the National Development and Reform Commission)
10. Draft Amendment to the Certified Public Accountants Law (drafted by the Ministry of Finance)
11. Draft Amendment to the Road Traffic Safety Law (drafted by the Ministry of Public Security)
12. Draft Amendment to the Trademark Law (drafted by the National Intellectual Property Administration)
13. Draft Amendment to the Water Law (drafted by the Ministry of Water Resources)
14. Draft Law on National Fire and Rescue Personnel (drafted by the Ministry of Emergency Management and the National Fire and Rescue Administration)
15. Draft Amendment to the Law of the People’s Bank of China (drafted by the People’s Bank of China)
16. Draft Financial Law (drafted by the People’s Bank of China, the State Administration of Financial Supervision, the China Securities Regulatory Commission, and the State Administration of Foreign Exchange)
II. Administrative regulations to be formulated or amended (30 items)
1. Regulations on Securing Payments to Small and Medium-sized Enterprises (Revised) (drafted by the Ministry of Industry and Information Technology)
2. Provisions of the State Council on Regulating the Services Provided by Intermediary Institutions for Public Offering of Stocks by Companies (drafted by the Ministry of Justice, the Ministry of Finance, and the China Securities Regulatory Commission)
3. Regulations on the Protection of Ancient and Famous Trees (drafted by the Ministry of Natural Resources, the Ministry of Housing and Urban-Rural Development, and the National Forestry and Grassland Administration)
4. Housing Lease Regulations (drafted by the Ministry of Housing and Urban-Rural Development)
5. Interim Regulations on Express Delivery (Revised) (drafted by the Ministry of Justice, the Ministry of Transport, and the State Post Bureau)
6. Provisions for the Implementation of the Anti-Foreign Sanctions Law of the People’s Republic of China (drafted by the Ministry of Justice)
7. Provisions of the State Council on the Settlement of Foreign-Related Intellectual Property Disputes (drafted by the Ministry of Justice, the National Intellectual Property Administration and the Ministry of Commerce)
8. Regulations on the Protection of Important Military Facilities (drafted by the Ministry of Industry and Information Technology, the State Administration of Science, Technology and Industry for National Defense, and the Equipment Development Department of the Central Military Commission)
9. Marriage Registration Regulations (Revised) (drafted by the Ministry of Civil Affairs)
10. Measures for the Implementation of the Drug Administration Law of the People’s Republic of China by the Chinese People’s Liberation Army (Revised) (drafted by the Logistics Support Department of the Central Military Commission and the State Administration for Market Regulation)
11. Regulations on the Protection of New Plant Varieties (Revised) (drafted by the Ministry of Agriculture and Rural Affairs)
12. Measures for the External Use of the National Emblem (Revised) (drafted by the Ministry of Foreign Affairs)
13. Regulations on Government Data Sharing (drafted by the General Office of the State Council)
14. Rural Highway Regulations (drafted by the Ministry of Transport)
15. Miyun Reservoir Protection Regulations (drafted by the Ministry of Natural Resources)
16. Interim Measures for Compensation for the Use of Flood Storage and Detention Areas (Revised) (drafted by the Ministry of Water Resources)
17. Commercial Mediation Regulations (drafted by the Ministry of Justice)
18. Regulations on the Procedure for Formulating Administrative Regulations (Revised) (drafted by the Ministry of Justice)
19. Provisions on the Submission of Tax-Related Information by Internet Platform Enterprises (drafted by the State Administration of Taxation)
20. Regulations on the Management of Clinical Research and Clinical Transformation Application of New Biomedical Technologies (drafted by the National Health Commission)
21. Regulations on the Implementation of the Administrative Reconsideration Law (Revised) (drafted by the Ministry of Justice)
22. Regulations on Ecological Environment Monitoring (drafted by the Ministry of Ecology and Environment)
23. Regulations on Nature Reserves (Revised) (drafted by the Ministry of Natural Resources and the National Forestry and Grassland Administration)
24. Securities Company Supervision and Administration Regulations (Revised) (drafted by the China Securities Regulatory Commission)
25. Regulations on Promoting National Reading (drafted by the State Press and Publication Administration)
26. Regulations on Funeral and Interment Management (Revised) (drafted by the Ministry of Civil Affairs)
27. Urban Water Supply Regulations (Revised) (drafted by the Ministry of Housing and Urban-Rural Development)
28. Regulations for the Implementation of the Drug Administration Law (Revised) (drafted by the State Administration for Market Regulation and the National Medical Products Administration)
29. Regulations on Foundation Management (Revised) (drafted by the Ministry of Civil Affairs)
30. Regulations on Forest and Grassland Fire Prevention and Suppression (drafted by the Ministry of Emergency Management, the Ministry of Natural Resources, and the National Forestry and Grassland Administration)
Preparations are underway to revise … the Regulations for the Implementation of the Copyright Law, the Regulations on Collective Management of Copyrights, the Measures for the Administration of Internet Information Services, … the Regulations on the Protection of Integrated Circuit Layout Designs, … the Implementation Rules of the Counter-Espionage Law, …

UK Data (Use and Access) Bill Status Update

As the draft UK Data (Use and Access) Bill (the “DUA Bill”) reaches its final stages, the House of Commons and the House of Lords are still debating several key issues. On May 14, 2025, the House of Commons received a program motion, urging it to deliberate on the amendments proposed by the House of Lords on May 12, 2025. The latest amendments introduced by the House of Lords include:

Scientific Data: Limiting the scope of the ‘scientific data’ provision by setting a higher standard for the reasonableness test such that “scientific research must be conducted according to appropriate ethical, legal and professional frameworks, obligations and standards.” This amendment is contrary to the position taken by the House of Commons, which proposed expanding the scope of the ‘scientific data’ provision by removing the requirement for the processing of ‘scientific data’ to be conducted in the ‘public interest.’
AI Models: Introducing transparency requirements for business data used in relation to AI models. The amendment would require developers of AI models to publish all information used in the pre-training, training, fine-tuning, and retrieval-augmented generation of the AI model, and to provide a mechanism for copyright owners to identify any individual works they own that may have been used during such processes. The amendment also introduces transparency obligations in respect of “bots,” including the requirement to disclose (1) the name of the bot, (2) the legal entity responsible for the bot, and (3) the specific purpose for which each bot is used.
Sex Data: Introducing requirements for ‘sex data’ to be collected in the context of digital verification services.

The House of Commons will now consider such amendments. With the DUA Bill’s progress accelerating, it is anticipated that the DUA Bill will soon be finalized.
Read the latest amendments proposed by the House of Lords.
For more information, read our previous update on the DUA Bill.

California Civil Rights Council Finalizes Regulations Aimed to Curb Employment Discrimination in the Use of AI Tools

Recently, the California Civil Rights Council, which is the arm of the California Civil Rights Department that is responsible for promulgating regulations, voted to approve final “Employment Regulations Regarding Automated-Decision Systems” (“Regulations”). The Regulations attempt to curb discriminatory practices that can arise when using AI tools in the workplace. If they are approved by the Office of Administrative Law, the Regulations will become effective on July 1, 2025. The Regulations have undergone several revisions since they were initially proposed in May 2024, and their adoption would make California one of the first states to implement anti-discrimination regulations pertaining to automated-decision technology.
The updated Regulations define “Automated-Decision Systems” (ADS) as “[a] computational process that makes decisions or facilitates human decision making regarding an employment benefit,” that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” Examples of functions that ADS can perform include resume screening, computer-based assessments, and analysis of applicant or employee data from third parties.
Both employers and “agents” are covered under the Regulations. Agents are defined as “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity . . . .” Such functions could include applicant recruiting and screening, hiring, or decisions pertaining to leaves of absence or benefits.
The Regulations provide that it is unlawful for a covered entity to use an ADS that discriminates against an applicant, employee, or a class of applicants or employees based on a protected characteristic, and further indicate that discrimination based on accent, English proficiency, height, or weight is prohibited. In defending against such discrimination claims, an employer can point to any due diligence performed by the company, such as anti-bias testing; a lack of testing is likewise relevant to determining liability. Under the new Regulations, covered entities must retain personnel records and ADS data for four years.
Given the intense focus on the use of AI in employment in recent years, employers across the country who use AI tools should ensure that they understand how these tools work and whether they have been properly tested for bias. Employers should review their policies to ensure that the use of AI is adequately covered. Prudent employers will also review contracts with any third parties (such as AI developers or any consultants) to determine whether they are protected against liability arising from AI-related discrimination claims.

“Somebody’s Watching Me” – What You Need to Know About California’s Proposed AI Employee Surveillance Laws

California continues to police artificial intelligence (“AI”) in the workplace. Following proposed rulemaking on the use of AI for significant employment decisions, as we reported here, Assemblymember Isaac Bryan introduced Assembly Bill 1221 (“AB 1221”) this legislative session. The bill aims to regulate workplace surveillance tools, including AI, and use of employee data derived therefrom. Applicable to employers of all sizes, AB 1221 could present significant challenges for businesses.
Key Provisions of AB 1221
If enacted, AB 1221 would regulate workplace surveillance tools and the data they collect. The bill broadly defines a “workplace surveillance tool” to encompass any system or device that actively or passively collects, or facilitates the collection of, worker data, activities, communications, biometrics, and behaviors, and includes incremental time-tracking tools, geolocation, and photo-optical systems, among others. The bill has several key provisions that will impact businesses:

Notice:  Employers will be required to provide written notice to employees 30 days before using any surveillance tool, detailing the data collected, its purpose, frequency, storage, employment-related decisions, and workers’ rights to access and correct their data.
Data Security: The bill also sets forth robust measures to protect employee data, including required provisions in employer contracts with vendors they engage to analyze or interpret employee data.
Prohibited Technologies:  The bill bans the use of facial recognition, gait recognition, neural data collection, and emotion recognition technologies.
No Collection of Protected Characteristics: AB 1221 prohibits employers altogether from using surveillance tools to infer an employee’s immigration status, veteran status, ancestral history, religious or political beliefs, health or reproductive status, history, or plan, emotional or psychological state, neural data, sexual or gender orientation, disability status, criminal record, credit history, or any other status protected under California’s Fair Employment and Housing Act.
Limited Use in Disciplinary Actions: Employers are prohibited from relying exclusively on surveillance tools to make disciplinary decisions; to the extent they wish to rely on surveillance data at all for such decisions, employers must notify workers, allow data correction, and adjust personnel decisions within 24 hours if data challenged by the employee warrants it.
Penalties and Civil Liability: AB 1221 delegates enforcement to the California Labor Commissioner and provides for a $500 civil penalty per employer violation. In addition, AB 1221 would create a separate private right of action for employees, pursuant to which they could obtain damages, injunctive relief, punitive damages, and attorneys’ fees and costs.

Areas of Uncertainty
While AB 1221 aims to establish a framework for workplace surveillance, several aspects of the bill remain ambiguous. For instance, the requirement to provide notice for “significant updates or changes” to surveillance tools is not clearly defined. Additionally, the bill does not specify who is responsible for determining what constitutes an “up-to-date cybersecurity safeguard.”
Also, “injured” employees presumably would be able to recover their “noneconomic” damages for alleged “physical pain and mental suffering” associated with violations of this statute, which is a common remedy sought in employment cases that could substantially increase the liability for employers. These ambiguities could lead to inconsistent enforcement and legal challenges, creating costly uncertainty for employers.
Current Status
As of May 7, 2025, the bill is headed back to the Assembly Appropriations Committee. If passed and signed by Governor Newsom, AB 1221 would establish some of the broadest workplace privacy regulations in the nation. We will continue to monitor its progress.

Understanding the Scope of “Artificial Intelligence (AI) System” Definition: Key Insights From The European Commission’s Guidelines

With the entry into force of the AI Act (Regulation 2024/1689) in August 2024, a pioneering framework for the regulation of AI was established.
On February 2, 2025, the first provisions of the AI Act became applicable, including the AI system definition, AI literacy obligations, and a limited number of prohibited AI practices. In line with Article 96 of the AI Act, the European Commission released detailed guidelines on the application of the definition of an AI system on February 6, 2025.
These non-binding guidelines are of high practical relevance, as they seek to bring legal clarity to one of the most fundamental aspects of the act – what qualifies as an “AI system” under EU law. Their publication offers critical guidance for developers, providers, deployers and regulatory authorities aiming to understand the scope of the AI Act and assess whether specific systems fall within it.
“AI System” Definition Elements
Article 3(1) of the AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate output, such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The European Commission emphasizes that this definition is based on a lifecycle perspective, covering both the building phase (pre-deployment) and the usage phase (post-deployment). Importantly, not all definitional elements must always be present—some may only appear at one stage, making the definition adaptable to a wide range of technologies, in line with the AI Act’s future-proof approach.
Machine-based System
The guidelines reaffirm that all AI systems must operate through machines, comprising both hardware components (e.g., processors, memory, and interfaces) and software components (e.g., code, algorithms, and models). This includes not only traditional digital systems but also advanced platforms such as quantum computing and biological computing, provided they possess computational capacity.
Autonomy
Another essential requirement is autonomy, described as a system’s capacity to function with some degree of independence from human control. This does not necessarily imply full automation, but may include systems capable of operating based on indirect human input or supervision. Systems designed to operate solely with full manual human involvement and intervention fall outside the definition.
Adaptiveness
An AI system may, but is not required to, exhibit adaptiveness, meaning it can modify its behavior post-deployment based on new data or experiences. Importantly, adaptiveness is optional, and systems without learning capabilities can still qualify as AI if the other criteria are met. However, this characteristic is crucial in differentiating dynamic AI systems from static software.
Systems Objectives
AI systems are designed to achieve specific objectives, which can be either explicit (clearly programmed) or implicit (derived from training data or system behavior). These internal objectives are distinct from the intended purpose, which is externally defined by the system’s provider and the context of use.
Inferencing Capabilities
It is the capacity to infer how to generate output from input data that defines an AI system and distinguishes it from traditional rule-based or deterministic software. According to the guidelines, “inferencing” encompasses both the use phase, where outputs such as predictions, decisions, or recommendations are generated, and the building phase, where models or algorithms are derived using AI techniques.
Output That Can Influence Physical or Virtual Environments
The output of an AI system (predictions, content, recommendations or decisions) must be capable of influencing physical or virtual environments. This captures the wide functionality of modern AI, from autonomous vehicles and language models to recommendation engines. Systems that only process or visualize data without influencing any outcome fall outside the definition.
Environmental Interaction
Finally, AI systems must be able to interact with their environment, either physical (e.g., robotic systems) or virtual (e.g., digital assistants). This element underscores the practical impact of AI systems and further distinguishes them from purely passive or isolated software.
Systems Excluded from the AI System Definition
In addition to explaining the elements of the AI system definition at length, the guidelines clarify what is not considered AI under the AI Act, even where a system shows rudimentary inferencing traits:

Systems for improving mathematical optimization – Systems, such as certain machine learning tools, that are used purely to improve computational performance (e.g., to enhance simulation speeds or bandwidth allocation) fall outside the scope unless they involve intelligent decision-making.
Basic data processing tools – Systems that execute pre-defined instructions or calculations (e.g., spreadsheets, dashboards and databases) without learning, reasoning or modelling are not considered AI systems.
Classical heuristic systems – Rule-based problem-solving systems that do not evolve through data or experience, such as chess programs based solely on minimax algorithms, are also excluded (see the brief sketch following this list).
Simple prediction engines – Tools using basic statistical methods (e.g., average-based predictors) for benchmarking or forecasting, without complex pattern recognition or inference, do not meet the definition’s threshold.
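
To make the classical-heuristic exclusion concrete, consider the guidelines’ chess example. The Python sketch below is our own illustration (shrunk to tic-tac-toe for brevity, and not drawn from the guidelines themselves): it performs fully deterministic, rule-based minimax search and never learns from data, which is what would place it outside the AI system definition:

```python
# A deliberately simple minimax player for tic-tac-toe. The rules are
# fixed and nothing is learned from data, which is what places systems
# like this outside the AI Act's "AI system" definition under the
# classical-heuristic exclusion. Purely illustrative.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
             (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Exhaustive fixed-rule search; returns (score, best_move)."""
    w = winner(board)
    if w is not None:
        return (1, None) if w == "X" else (-1, None)
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # draw
    results = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = None
        results.append((score, m))
    # X always maximizes and O always minimizes: a fixed rule, no learning.
    return max(results) if player == "X" else min(results)

score, move = minimax([None] * 9, "X")
print(score, move)  # 0 and X's opening square: perfect play is a draw
```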

The European Commission concludes by highlighting the following aspects:

It must be noted that the definition of an AI system in the AI Act is broad and must be assessed based on how each system works in practice
There is no exhaustive list of what is considered AI; each case depends on the system’s features
Not all AI systems are subject to regulatory obligations and oversight under the AI Act
Only those that present higher risks, such as those covered by the rules on prohibited or high-risk AI, will be under legal obligations

These guidelines play an important role in supporting the effective implementation of the AI Act. By clarifying what is meant by an AI system, they provide greater legal certainty and help all relevant stakeholders such as regulators, providers and users understand how the rules apply in practice. Their functional and flexible approach reflects the diversity of AI technologies and offers a practical basis for distinguishing AI systems from traditional software. As such, the guidelines contribute to a more consistent and reliable application of the regulation across the EU.

Colorado’s Historic AI Law Survives Without Delay (So Far)

On May 17, 2024, Colorado Governor Jared Polis signed Colorado’s historic artificial intelligence (AI) consumer protection bill, SB 24-205, colloquially known as “Colorado’s AI Act” (“CAIA”), into law.
As we noted at the time, CAIA aims to prevent algorithmic discrimination in AI decision-making that affects “consequential decisions”—including those with a material legal or similarly significant effect with respect to health care services and employment decision-making. The law is scheduled to take effect February 1, 2026.
The same day he signed CAIA, however, Governor Polis addressed a “signing statement” letter to Colorado’s General Assembly articulating his reservations. He urged sponsors, stakeholders, industry leaders, and more to “fine tune” the measure over the next two years to sufficiently protect technology, competition, and innovation in the state.
As the local and national political climate steers toward a less restrictive AI policy, Governor Polis drafted another letter to the Colorado legislature. On May 5, 2025, Polis—along with Attorney General Phil Weiser, Denver Mayor Mike Johnston, and others—requested that CAIA’s effective date be delayed until January 2027.
“Over the past year, stakeholders and legislators together have worked to find the right path forward on Colorado’s first-in-the-nation artificial intelligence regulatory law,” the letter states, adding that the collaboration took “many months” and “brought many ideas, concerns, and priorities to the table from a wide range of communities.” Nevertheless, “it is clear that more time is needed to continue important stakeholder work to ensure that Colorado’s artificial intelligence regulatory law is effective and implementable.”
The letter came the same day that SB 25-318, a bill that would have amended CAIA, was postponed indefinitely by the state Senate and reportedly killed by its own sponsor. Colorado Senate Majority Leader Robert Rodriguez introduced SB 25-318, entitled “Artificial Intelligence Consumer Protections,” just one week earlier.
On May 6, 2025, the day before the legislative session in Colorado ended, House Democrats made an eleventh-hour attempt to postpone the effective date of CAIA by inserting the delay into another unrelated bill, but that attempt also failed.
Proponents of the delay are calling for a framework “that protects privacy and fairness without stifling innovation or driving business away from our state,” as the Polis letter states. Technology groups have urged Governor Polis to call a special legislative session to delay implementation of CAIA.
SB 25-318 Key Provisions
Despite SB 25-318’s failure to pass, several of its provisions remain noteworthy and are likely to remain part of the ongoing policy debate. Viewed as “thoughtful amendments” by some commentators, the legislation would have modified the consumer protections of CAIA, which requires developers and/or deployers of AI systems to implement a risk management program, conduct impact assessments, and make notifications to consumers. If passed, SB 25-318 would have delayed many requirements from February 1, 2026, to January 1, 2027, and included the following adjustments:
Definitions. SB 25-318 attempted to redefine “algorithmic discrimination” to mean the use of an AI system that results in a violation of any applicable federal, state, or local discrimination law. It also would have created exemptions to the definition of “developer” of an AI system and exempted certain technologies, such as those performing a narrow procedural task, or cybersecurity and data security systems, from the definition of “high-risk AI systems.”
Reasonable Care. The bill would have eliminated the duty of developers or deployers of a high-risk AI system to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination, and further would have eliminated the duty of deployers to notify the attorney general of such risks arising from intended uses or of discoveries that the system causes algorithmic discrimination.
Developer Disclosures. SB 25-318 sought to exempt developers from specified disclosure requirements if, for example, the systems make 10,000 or fewer consequential decisions in a year for 2027-2028, decreasing to 2,500 or fewer for 2029-2030. Other contemplated exemptions included instances where developers received less than $10,000 from investors, have annual revenues of less than $5,000,000, or have operated and generated revenue for less than 5 years. It sought to broaden disclosure requirement exemptions for deployers based on the number of full-time employees (500 instead of 50 between 2027 and 2028, decreasing to 100 in 2029). It further would have exempted developers with respect to the use of AI in hiring. A further exemption would have applied if the AI system produces or consists of a score, model, algorithm, or similar output that is a consumer report subject to the Fair Credit Reporting Act.
Impact Assessments. SB 25-318 sought to amend the requirement that deployers, or third parties contracted by deployers, complete impact assessments within 90 days of a substantial modification to instead require these impact assessments be completed before the first deployment or January 1, 2027, whichever comes first, and annually thereafter. SB 25-318 would have also required deployers to include in an impact assessment whether the system poses any known or reasonably foreseeable risks of limiting accessibility for certain individuals, an unfair or deceptive trade practice, a violation of state or federal labor laws, or a violation of the Colorado Privacy Act.
Disclosures to Consumers. SB 25-318 attempted to require deployers to provide additional information to consumers if a high-risk AI system makes, or is a substantial factor in making, a consequential decision. It further included a transparency requirement that consumer disclosures must include information on whether and how consumers can exercise their rights.
Documentation Requirements. SB 25-318 would have required developers and deployers to maintain required documentation, disclosures, and other records with respect to each high-risk AI system throughout the period during which the developer sells, markets, distributes, or makes available the high-risk AI system—and for at least three years following the last date on which the developer sells, markets, distributes, or makes available the high-risk AI system.
Takeaways
Because Colorado’s 2025 legislative session ended at midnight on Wednesday, May 7, the CAIA will go into effect as originally passed on February 1, 2026, unless Governor Polis calls a special session, or a new bill is introduced in time for the new legislative session on January 14. To the extent additional attempts to modify CAIA arise before February 1, 2026, we anticipate that they will revive certain issues addressed in SB 25-318 as part of such efforts.
Many outside of Colorado are also following this process closely, including other states that are using CAIA as a framework for their own laws and federal lawmakers whose efforts to pass comprehensive AI legislation through Congress have stalled. On Tuesday, May 13, the House Energy and Commerce Committee will mark up language for potential inclusion in the reconciliation package that would prevent states from passing and implementing such AI laws for 10 years, though this language may not pass.
As we noted last year, organizations should start to consider compliance issues including policy development, impact assessments, engagement with AI auditors, contract language in AI vendor agreements to reflect responsibilities and coordination, and more. Impact assessments, in particular, take time and resources to design and conduct, and therefore we recommend that businesses using high-risk AI systems in Colorado begin preparations to conduct these impact assessments now, rather than waiting for a speculative change to the law. If properly designed, impact assessments will be a useful tool for businesses to ensure that their AI systems are reliable and deliver expected outcomes while minimizing the risk of algorithmic discrimination. 

AI Governance Remains Critical Despite Political Pendulum Swings

Businesses increasingly rely on AI and generative AI for myriad uses. A new body of “AI law” is forming—and some legal requirements are now live. AI governance is a mandatory compliance function right now rather than next quarter or next year. 
AI law is a patchwork across jurisdictions and can be hard to pin down. While some jurisdictions are enacting new laws, others are pulling back. As the political pendulum continues to swing, regulatory retrenchment is among the key themes coming into focus in 2025.
Some hardline AI regulatory regimes that dominated headlines in 2024 are being walked back. For example, at the U.S. federal level, the Trump administration has undone Biden-era AI executive orders, and federal agencies are recalibrating enforcement priorities accordingly. Consistent with broader deregulation impacts, observers expect that the FTC, SEC, and other agencies will focus primarily on clear cases of fraud, rather than pursuing broader or more innovative regulatory actions. 
At the state level, the Colorado AI Act is under scrutiny for possible amendments, including through a new bill introduced in April 2025. Meanwhile, the governors of California and Virginia recently vetoed high-profile AI bills. And the U.S. House Energy and Commerce Committee proposed a 10-year moratorium on the enforcement of state AI laws in a recent draft budget reconciliation bill. Across the pond, the EU Commission recently withdrew the draft AI Liability Directive and is reportedly considering amendments to the EU AI Act to soften certain requirements.
But AI regulation is not dead. Newly enacted state laws in the U.S. (e.g., California, Illinois, New York, Utah) address algorithmic discrimination and automated decision-making; disclosure of AI use; impersonation, digital replicas, and deepfakes; watermarking of AI-generated content; data privacy and biometric data; and more. State attorneys general (e.g., California, New Jersey, Oregon) have reiterated that they will enforce existing laws against unlawful uses of AI. And, of course, the AI “copyright war”—testing the boundaries of copyright infringement and fair use for AI training and outputs—rages on in dozens of lawsuits in the U.S. and elsewhere.
The first requirements of the EU AI Act went live in February 2025. For example, companies using AI within the EU are now subject to the “AI literacy” requirement mandating “measures to ensure, to their best extent, a sufficient level of AI literacy” for employees or others who operate or use AI systems. The AI Act is extraterritorial. It applies to U.S. companies using AI systems within the EU or whose AI systems produce outputs intended for use in the EU. Employee training regarding the responsible use of AI is now mandatory for such companies.
Bottom line: while there may be a trend toward softening AI regulation in some areas, this is not a universal truth, and enterprise AI governance remains essential. Some new “AI law” requirements are now live, while others will be soon. In addition, regulators, state AGs, and plaintiffs will seek to apply existing laws to new technology. And, of course, there is the potential for self-inflicted wounds (like data leakage) and for reputational and public relations fallout from an AI-powered snafu.
Luckily, there are some common threads in the AI regulatory thicket, and established guidance may ease the governance burden. Voluntary AI compliance frameworks like the NIST AI RMF and ISO/IEC 42001:2023 not only provide useful, detailed guidance for responsible AI governance, but they also form the basis of statutory safe harbors or affirmative defenses under laws like the Colorado AI Act. They provide a wise starting point for compliance programs, in addition to choosing AI model providers, models, and use cases wisely.
