Exploring California’s Proposed AI Bill
California lawmakers have proposed new legislation to reshape the growing use of artificial intelligence (AI) in the workplace. While this bill aims to protect workers, employers have expressed concerns about how it might affect business efficiency and innovation.
What Does California’s Senate Bill 7 (SB 7) Propose?
SB 7, also known as the “No Robo Bosses Act,” introduces several key requirements and provisions restricting how employers use automated decision systems (ADS) powered by AI. These systems are used in making employment-related decisions, including hiring, promotions, evaluations, and terminations. The pending bill seeks to ensure that employers use these systems responsibly and that AI only assists in decision-making rather than replacing human judgment entirely.
The bill is significant for its privacy, transparency, and workplace safety implications, areas that are fundamental as technology becomes more integrated into our daily work lives.
Privacy and Transparency Protections
SB 7 includes measures to safeguard worker privacy and ensure that personal data is not misused or mishandled. The bill prohibits the use of ADS to infer or collect sensitive personal information, such as immigration status, religious or political beliefs, health data, sexual orientation or gender identity, or other statuses protected by law. These restrictions could significantly limit an employer’s ability to use ADS to streamline human resources administration, even if the ADS only assists but does not replace human decision-making. Notably, the California Consumer Privacy Act, which treats applicants and employees of covered businesses as consumers, permits the collection of such information.
Additionally, if the bill is enacted, employers and vendors will have to provide written notice to workers if an ADS is used to make employment-related decisions that affect them. The notice must provide a clear explanation of the data being collected and its intended use. Affected workers also must receive a notice after an employment decision is made with ADS. This focus on transparency aims to ensure that workers are aware of how their data is being used.
Workplace Safety
Beyond privacy, SB 7 also highlights workplace safety by prohibiting the use of ADS that could violate labor laws or occupational health and safety standards. Employers would need to make certain that ADS follow existing safety regulations, and that this technology does not compromise workplace health and safety. Additionally, ADS restrictions imposed by this pending bill could affect employers’ ability to proactively address or monitor potential safety risks with the use of AI.
Oversight and Enforcement
SB 7 prohibits employers from relying primarily on an ADS for significant employment-related decisions, such as hiring and discipline, and requires human involvement in the process. The bill grants workers the right to access and correct their data used by ADS, and they can appeal ADS employment-related decisions. A human reviewer must also evaluate the appeal. Employers cannot discriminate or retaliate against a worker for exercising their rights under this law.
The Labor Commissioner would be responsible for enforcing the bill, and workers may bring civil actions for alleged violations. Employers may face civil penalties for non-compliance.
What’s Next?
While SB 7 attempts to keep pace with the evolution of AI in the workplace, there will likely be ongoing debate about these proposed standards and which provisions will ultimately become law. Jackson Lewis will continue to monitor the status of SB 7.
CAFC Ruling Questions How and When Tools Built Using Machine Learning Are Patentable
CAFC affirms that applying generic machine learning to industry-specific problems is not enough for patent eligibility under §101, reinforcing the importance of how innovations are framed in patent applications — especially in emerging tech like AI.
In a decision with major implications for AI-related patent strategy, the U.S. Court of Appeals for the Federal Circuit (CAFC) held on April 18, 2025, that four patents related to machine learning in live-event scheduling and broadcasting were ineligible under 35 U.S.C. §101. The court affirmed a decision from the U.S. District Court for the District of Delaware, concluding that the innovation merely applied generic machine learning techniques in the data environment of the entertainment industry, without demonstrating any specific technological improvement or inventive concept.
This ruling has significant implications for innovators, businesses, and legal practitioners in the technology and intellectual property fields, particularly those working with machine learning and AI, because it further defines the boundaries of patent eligibility under judicially created exceptions to patent subject matter eligibility.
Background
These judicially created exceptions — categories of subject matter comprising laws of nature, natural phenomena, and abstract ideas — are not eligible for patent protection. Notably, the “abstract idea” category, which encompasses mathematical formulas and business methods, is often applied against software-implemented inventions. The judiciary and the United States Patent and Trademark Office (USPTO) treat such categories as the basic tools of scientific and technological work, which, if patentable, would stifle innovation and restrict access to knowledge.
In considering whether claims are eligible for patent protection, the USPTO or a federal court looks to whether the claims recite more than the subject matter deemed to be within the judicial exception. Under this framework, a claim that recites a judicial exception but also includes additional elements that transform the nature of the claim into a patentable application is typically considered eligible.
Notably, although the CAFC stated that “[m]achine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology,” patents that “do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied,” are nevertheless “patent ineligible under § 101.”
Still, it is worth noting that the patent owner repeatedly conceded that it was not claiming machine learning itself. The ruling thus highlights that applying known machine learning techniques to new contexts will not constitute a patent-eligible invention unless the claims disclose specific improvements to the machine learning technique itself. This approach extends the Supreme Court’s rulings in Alice Corp. v. CLS Bank International and Mayo v. Prometheus Laboratories, in that it effectively limits the eligibility analysis to whether machine learning itself is improved, rather than whether machine learning is used to build systems and methodologies that address technological problems in particular industrial applications.
Implications and Takeaways
The USPTO’s most recent patent eligibility guidance examples are fairly aligned with the CAFC’s recent decision. However, it remains to be seen whether such an approach effectively serves the underlying goal of enabling free use of known “basic tools” such as machine learning technology.
As a result, patent applicants should understand and anticipate that patent eligibility will hinge on both how an innovation is framed in the application text as well as the underlying technology used to generate that innovation.
Employment Law This Week: New Executive Order Targets Disparate Impact Claims Nationwide [Video, Podcast]
This week, we explore how key changes introduced by President Trump’s Executive Order 14281, “Restoring Equality of Opportunity and Meritocracy” (“EO 14281”), raise important questions for employers navigating compliance with varying federal, state, and local laws.
New Executive Order Targets Disparate Impact Claims Nationwide
EO 14281 poses significant challenges for employers because it seeks to limit disparate impact liability but clashes with established state and local regulations and laws, such as New York City’s law regarding the use of automated employment decision tools. This tension underscores the increasing complexity of managing artificial intelligence (AI)-driven decision-making in the workplace amid shifting legal standards.
This week’s key topics include:
the scope of EO 14281;
conflicts between EO 14281 and existing federal, state, and local laws; and
best practices to mitigate risks in AI employment decisions.
Epstein Becker Green attorneys Marc A. Mandelman and Nathaniel M. Glasser unpack these developments and provide employers with practical strategies to stay compliant and address critical workforce challenges.
AI in Job Postings: What Employers in Canada Need to Know
Artificial intelligence (AI) is rapidly changing the hiring landscape. Whether scanning resumes with machine learning tools or ranking candidates based on predictive models, employers in Canada may now want to ensure transparency when using AI during recruitment. This is no longer just a best practice—it is increasingly being reflected in legislative requirements.
Quick Hits
In Ontario, if AI is used to screen, assess, or select applicants, a disclosure may be required directly in the job posting.
Employers with fewer than twenty-five employees are exempt from Ontario’s requirement.
In Quebec, if a decision is made exclusively through automated processing (such as AI), employers need to inform the individual and offer a mechanism for human review.
Across Canada, privacy laws (in Quebec, Alberta, and British Columbia, and federally under PIPEDA) provide that individuals be informed of the purposes for collecting their personal data and impose openness requirements, which together suggest disclosing AI use.
In Quebec, individuals also have the right to know what personal data was used, the key factors behind an automated decision, and to request corrections.
With Ontario’s Working for Workers Four Act, 2024 (Bill 149) coming into force, the phased entry into force of Quebec’s Act to Modernize Legislative Provisions as Regards the Protection of Personal Information (Law 25), which began in 2022, and longstanding privacy obligations in Alberta, British Columbia, and under federal law, the Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (PIPEDA), employers may want to carefully review how AI is used in job postings and the broader hiring process.
Quebec—Regulatory Context
Quebec’s Act Respecting the Protection of Personal Information in the Private Sector, CQLR c. P-39.1, was significantly amended and is now in force. These provisions apply to all private sector organizations collecting, using, or disclosing personal information in Quebec. This includes employers hiring employees located in Quebec, regardless of where the employer is based, as long as they are considered to be doing business in Quebec.
Section 12.1 provides that any individual whose personal information is the subject of a decision made exclusively through automated processing must be informed of the decision, of the main factors and parameters that led to it, and of their right to have the decision reviewed by a person. Employers may therefore want to ensure that any system used for automated decision-making in Quebec is explainable, so that they can respond to a request about the factors and parameters that led to a decision.
Although the law is not specific about how such a request must be made, we assume that the section on access rights will apply. This means that employers would need to respond to a written request for information within thirty days.
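By way of illustration only, the sketch below shows one way an employer might record the main factors behind each automated screening decision at the time it is made, so that a later request under Section 12.1 can be answered within the thirty-day window. The class, field names, factor weights, and model identifier are invented assumptions, not anything the statute prescribes.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ScreeningDecision:
    """Record of one automated screening decision, retained so the
    employer can later explain the main factors and parameters behind it."""
    candidate_id: str
    outcome: str                    # e.g., "advanced" or "rejected"
    main_factors: dict[str, float]  # factor name -> weight used by the system
    model_version: str
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def explanation(self) -> str:
        """Plain-language summary suitable for answering an access request."""
        factors = ", ".join(f"{name} (weight {w})" for name, w in self.main_factors.items())
        return (f"Decision '{self.outcome}' made on {self.decided_at:%Y-%m-%d} by "
                f"automated system {self.model_version}; main factors: {factors}.")

# Hypothetical usage: store the record at decision time, retrieve it on request.
record = ScreeningDecision(
    candidate_id="cand-042",
    outcome="rejected",
    main_factors={"years_of_experience": 0.50, "skills_match": 0.35, "certifications": 0.15},
    model_version="screening-model-v1.2",
)
print(record.explanation())
```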
While the statute does not define “automated decision-making technology,” the language of the law may be interpreted broadly and may apply to a wide range of systems, including algorithmic and AI tools used in hiring. Based on the approach of the data protection regulatory authority in Quebec, the Commission d’accès à l’information (CAI), we can expect a broad interpretation of this concept as the CAI has recently taken the position that privacy laws in Quebec are quasi-constitutional in nature. (For a discussion of Quebec’s restrictive approach to data privacy, see our article, “Québec’s Restrictive Approach to Biometric Data Poses Challenges for Businesses Working on Security Projects.”)
The CAI has broad investigative and enforcement powers. These powers include conducting audits, issuing binding orders, and imposing administrative monetary penalties. Employers may want to monitor guidance from the CAI as the authority’s enforcement evolves.
Ontario—Regulatory Context
On March 21, 2024, the Working for Workers Four Act, 2024 (Bill 149) received Royal Assent in Ontario. Among other amendments, the act introduced a new provision under the Employment Standards Act, 2000, S.O. 2000, c. 41 (ESA) regarding employer disclosure of artificial intelligence use in hiring when a job is publicly advertised.
The implementing regulation, O. Reg. 476/24: Job Posting Requirements, defines artificial intelligence as:
“A machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”
The same regulation defines a “publicly advertised job posting” as:
“An external job posting that an employer or a person acting on behalf of an employer advertises to the general public in any manner.”
This requirement is set to take effect on January 1, 2026. It will not apply to general recruitment campaigns, internal hiring efforts, or employers with fewer than twenty-five employees at the time of the posting.
Employers may find it useful to assess what tools qualify as “artificial intelligence” or what constitutes “screening,” “assessing,” or “selecting” a candidate. The broad definition may include simple keyword filters or more complex machine learning systems, raising the potential for over- or under-disclosure.
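To illustrate why the definition may sweep broadly, here is a minimal sketch of a keyword-based screener; the keywords, threshold, and function name are invented for illustration. Whether a filter this simple “infers from the input it receives” to generate a recommendation, and so falls within O. Reg. 476/24, is precisely the interpretive question.

```python
# A deliberately simple screener. If its output counts as a "recommendation"
# inferred from input, even this tool may require disclosure in a job posting.
REQUIRED_KEYWORDS = {"python", "sql", "project management"}

def screen_resume(resume_text: str, threshold: int = 2) -> str:
    """Return a recommendation based on how many required keywords appear."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return "shortlist" if hits >= threshold else "reject"

print(screen_resume("Experienced in Python and SQL reporting."))  # shortlist
```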
Considerations for Employers: Human Rights and AI Bias
Employers in Canada may also want to consider their use of AI tools in conjunction with human rights legislation to ensure that their recruitment practices comply with legal standards. These laws prohibit discrimination based on grounds such as race, gender, age, disability, and other protected characteristics.
When implementing AI in hiring, employers may want to assess whether any of the tools used unintentionally promote bias or perpetuate discriminatory outcomes. AI systems, if not properly designed or monitored, can inadvertently reinforce bias by relying on historical data that may reflect past inequalities. For example, predictive models may favor certain demographic groups over others, which could lead to unintentional bias in hiring decisions.
Employers can play a key role in minimizing these risks by considering the following:
being involved in discussions about how AI tools work, and ensuring transparency about both the data being used and the potential for bias in decision-making;
choosing AI tools that are explainable—meaning the algorithms and their decision-making processes are understandable to humans—which can help employers detect and correct biases before they affect hiring decisions;
regularly auditing AI tools to identify and address any unintentional bias (one simple audit screen is sketched after this list), ensuring that these tools comply with both privacy and human rights obligations; and
for employers subject to PIPEDA or provincial privacy laws, providing clear, accessible notices explaining how personal information is collected, why it is collected, and who to contact with questions.
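As one illustration of what a periodic audit might check, the sketch below applies the four-fifths rule, a screen borrowed from U.S. adverse-impact practice rather than mandated by any Canadian statute: it flags a tool if any group’s selection rate falls below 80 percent of the highest group’s rate. The group labels and numbers are invented.

```python
def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> bool:
    """outcomes maps group -> (selected, total applicants).
    Returns False if any group's selection rate is below 80% of the
    highest group's rate, signalling potential adverse impact."""
    rates = {g: sel / total for g, (sel, total) in outcomes.items()}
    highest = max(rates.values())
    return all(rate / highest >= 0.8 for rate in rates.values())

# Illustrative numbers only: 0.25 / 0.40 = 0.625 < 0.8, so the check fails.
print(four_fifths_check({"group_a": (40, 100), "group_b": (25, 100)}))  # False
```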
AI is increasingly common in recruitment, and with this advancement comes increased scrutiny. The new laws are intended to support transparency, fairness, and human oversight. By using explainable AI, having strong internal mechanisms, and communicating across departments, employers may not only reduce legal risks but also foster trust in the hiring process, ensuring that all candidates are treated fairly and equitably.
Next Steps
Tips and considerations for responsible AI use include the following:
understanding the AI technology, verifying that it complies with the requirements of transparency under data privacy law, and understanding what the tool is doing to determine if it is necessary to indicate this information in a job posting;
communicating across the organization by having company-wide discussions about the implementation of AI tools to avoid the risk of a tool being used without being advertised in a job posting or privacy notice;
revising job posting templates to include AI-use disclosures where applicable;
creating plain-language descriptions of AI tools used in hiring, especially those that may lead to automated decisions;
implementing procedures that enable human review of AI decisions, as reflected under Quebec’s Law 25;
maintaining up-to-date privacy policies that explain AI usage, list contact information for privacy inquiries, and detail individual rights;
training hiring personnel on how AI tools function and how to respond to applicant questions related to privacy and automation;
limiting data collection to what is necessary and reasonable for recruitment purposes, in line with privacy obligations under applicable laws; and
verifying the applicability of exemptions. For example, Ontario’s AI disclosure requirement may not apply to employers with fewer than twenty-five employees, though privacy obligations may still be relevant.
AI is transforming the hiring process—and the legal landscape is evolving just as fast. Employers across Canada may want to proactively review their recruitment practices through the lens of employment standards, privacy laws, and human rights obligations. Embracing transparency doesn’t just reduce legal risk—it can build trust with candidates and unlock the full potential of AI while respecting individual rights.
Harnessing AI in Litigation: Techniques, Opportunities, and Risks [Video, Podcast]
What if the key to navigating your most complex legal challenges lies in the capabilities of artificial intelligence (AI)?
Join Epstein Becker Green attorneys Alkida Kacani and Christopher Farella as they sit down with Jonathan Murphy, Senior Manager of Forensics at BDO, to examine how AI is revolutionizing the practice of law.
Discover how advanced technologies are refining e-discovery, optimizing predictive analytics, and transforming document review processes. The discussion also takes a deep look into the ethical considerations of integrating AI into legal work, from safeguarding sensitive information to maintaining professional standards in a highly dynamic field.
How Artificial Intelligence Is Re-Shaping Litigation
Imagine sitting in court, getting ready to hear a victim impact statement during a sentencing hearing, but instead of hearing a family member deliver the impact statement about the decedent, you see a video. Not just any video, but an AI-generated video of the deceased themselves, in which the victim has been brought back to life to deliver an emotional impact statement. This is no longer a scenario from a sci-fi movie or a product of the imagination.
This is where litigation and trials are now. AI is rapidly embedding itself into the legal process and shows no signs of stopping. Recently in Arizona, Stacey Wales created an AI video of her brother, who was tragically killed in a road rage incident, delivering his victim impact statement at his killer’s sentencing hearing. While this was a creative and rather harmless use of AI in the legal setting, there have been more sinister uses of AI in the courtroom. Jerome Dewald, representing himself in an employment dispute before the New York State Supreme Court Appellate Division’s First Judicial Department, used an AI-generated “attorney” to deliver his oral arguments. Justice Sallie Manzanet-Daniels halted the presentation almost immediately when it dawned on her that the person on the screen was not real.
While those are the two most newsworthy moments of AI use in the courtroom in 2025 so far, it is a guarantee that more are to come. Additionally, while these two incidents involved non-attorneys, if this pattern continues, there is no doubt that it will start becoming the norm for attorneys to use AI in similar ways.
As of now, only outdated guidelines exist for the use of AI in the courtroom; these two incidents in 2025 alone are sufficient to demonstrate why we need stronger, more robust, universal rules for the use of AI in the litigation process. Furthermore, a large number of the existing rules revolve around disclosure of the use of AI, not the use of AI itself. Since Judge Brantley Starr of the U.S. District Court for the Northern District of Texas issued a standing order regarding AI disclosures in 2023, several other district courts have followed suit. However, the majority of these standing orders were drafted with the aim of preventing AI hallucinations in court filings and motions. Now, in 2025, hallucinations have decreased, making it all the more necessary to create a universal approach and guidelines for AI use throughout the litigation process.
Utah Law Aims to Regulate AI Mental Health Chatbots
Those in the tech world and in medicine alike see potential in the use of AI chatbots to support mental health—especially when human support is unavailable, or therapy is unwanted.
Others, however, see the risks—especially when chatbots designed for entertainment purposes can disguise themselves as therapists.
So far, some lawmakers agree with the latter. In April, U.S. Senators Peter Welch (D-Vt.) and Alex Padilla (D-Calif.) sent letters to the CEOs of three leading artificial intelligence (AI) chatbot companies asking them to outline, in writing, the steps they are taking to ensure that the human interactions with these AI tools “are not compromising the mental health and safety of minors and their loved ones.”
The concern was real: in October 2024, a Florida parent filed a wrongful death lawsuit in federal district court, alleging that her son committed suicide with a family member’s gun after interacting with an AI chatbot that enabled users to interact with “conversational AI agents, or ‘characters.’” The boy’s mental health allegedly declined to the point where his primary relationships “were with the AI bots which Defendants worked hard to convince him were real people.”
The Florida lawsuit also claims that the interactions with the chatbot became highly sexualized and that the minor discussed suicide with the chatbot, saying that he wanted a “pain-free death.” The chatbot allegedly responded, “That’s not a reason not to go through with it.”
Another lawsuit, this one in Texas, claims that a chatbot commiserated with a minor over a parent’s time-use limit for a phone, mentioning news headlines such as “child kills parents.”
In February 2025, the American Psychological Association urged regulators and legislators to adopt safeguards. In their April 2 letters described above, the senators informed the CEOs that the attention that users receive from the chatbots can lead to “dangerous levels of attachment and unearned trust stemming from perceived social intimacy.”
“This unearned trust can [lead], and has already [led], users to disclose sensitive information about their mood, interpersonal relationships, or mental health, which may involve self-harm and suicidal ideation—complex themes that the AI chatbots on your products are wholly unqualified to discuss,” the senators assert.
Utah’s Solution
States are taking note. In line with national objectives, Utah is embracing AI technology and innovation while still focusing on ethical use, protecting personal data/privacy, ensuring transparency, and more.
Several of these new Utah laws have broad-reaching implications across a variety of sectors. For example:
The Artificial Intelligence Policy Act (S.B. 149) establishes an “AI policy lab” and creates a number of protections for users and consumers of AI, including requirements for healthcare providers to prominently disclose any use of generative AI in patient treatment.
The AI Consumer Protection Amendments (S.B. 226) limit requirements regarding the use of AI to high-risk services.
The Unauthorized Artificial Intelligence Impersonation Amendments (S.B. 271) protect creators by prohibiting the unauthorized monetization of art and talent.
Utah’s latest AI-related initiatives also include H.B. 452, which took effect May 7 and which creates a new code section titled “Artificial Intelligence Applications Relating to Mental Health.” This new code section imposes significant restrictions on mental health chatbots using AI technology. Specifically, the new law:
establishes protections for users of mental health chatbots using AI technology;
prohibits certain uses of personal information by a mental health chatbot;
requires disclosures to users that a mental health chatbot is AI technology, as opposed to a human;
places enforcement authority in the state’s division of consumer protection;
contains requirements for creating and maintaining chatbot policies; and
contains provisions relating to suppliers who comply with policy requirements.
We summarize the key highlights below.
H.B. 452: Regulation of Mental Health Chatbots Using AI Technology
Definitions. Section 13-72a-101 defines a “mental health chatbot” as AI technology that:
Uses generative AI to engage in interactive conversations with a user, similar to the confidential communications that an individual would have with a licensed mental health therapist; and
A supplier represents, or a reasonable person would believe, can or will provide mental health therapy or help a user manage or treat mental health conditions.
“Mental health chatbot” does not include AI technology that only:
Provides scripted output (guided meditations, mindfulness exercises); or
Analyzes an individual’s input for the purpose of connecting the individual with a human mental health therapist.
Protection of Personal Information. Section 13-72a-201 provides that a supplier of a mental health chatbot may not sell to or share with any third party: 1) individually identifiable health information of a Utah user; or 2) the input of a Utah user. The law exempts individually identifiable health information—defined as any information relating to the physical or mental health of an individual—that is requested by a health care provider, with user consent, or provided to a health plan of a Utah user upon request.
A supplier may share individually identifiable health information necessary to ensure functionality of the chatbot if the supplier has a contract related to such functionality with another party, but both the supplier and the third party must comply with all applicable privacy and security provisions of 45 C.F.R. Part 160 and Part 164, Subparts A and E (see the Privacy Rule of the Health Insurance Portability and Accountability Act of 1996 (HIPAA)).
Advertising Restrictions. Section 13-72a-202 states that a supplier may not use a mental health chatbot to advertise a specific product or service absent clear and conspicuous identification of the advertisement as an advertisement, as well as any sponsorship, business affiliation, or third-party agreement regarding promotion of the product or service. The chatbot is not prohibited from recommending that the user seek assistance from a licensed professional.
Disclosure Requirements. Section 13-72a-203 provides that a supplier shall cause the mental health chatbot to clearly and conspicuously disclose to a user that the chatbot is AI and not human—before the chatbot features are accessed; before any interaction if the user has gone seven days without access; and any time a user asks or prompts the chatbot about whether AI is being used.
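A minimal sketch of how a supplier might encode those three triggers follows. The function, the seven-day constant, and the question-detection phrases are illustrative assumptions; real detection of a user asking about AI would need to be far more robust than substring matching.

```python
from datetime import datetime, timedelta
from typing import Optional

SEVEN_DAYS = timedelta(days=7)
# Invented examples of a user asking whether the chatbot is AI.
AI_QUESTION_HINTS = ("are you ai", "are you a robot", "are you human", "is this a bot")

def disclosure_required(last_access: Optional[datetime],
                        now: datetime,
                        user_message: str) -> bool:
    """Mirror the three disclosure triggers in Section 13-72a-203."""
    if last_access is None:              # before chatbot features are first accessed
        return True
    if now - last_access >= SEVEN_DAYS:  # user returning after seven or more days
        return True
    msg = user_message.lower()
    return any(hint in msg for hint in AI_QUESTION_HINTS)  # user asks about AI

# A user returning after ten days triggers a fresh disclosure.
print(disclosure_required(datetime(2025, 5, 1), datetime(2025, 5, 11), "hello"))  # True
```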
Affirmative Defense. Section 58-60-118 allows for an affirmative defense to liability in an administrative or civil action alleging a violation if the supplier demonstrates that it:
created, maintained, and implemented a written policy, filed with the state’s Division of Consumer Protection, which it complied with at the time of the violation; and
maintained documentation regarding the development and implementation of the chatbot that describes the foundation models, the training data, compliance with federal health privacy regulations, and user data collection and sharing practices.
The law also contains specific requirements regarding the policy and the filing.
Takeaways
A violation of the Utah statute carries an administrative fine of up to $2,500 per violation, and the state’s Division of Consumer Protection may bring an action in court to enforce the statute. The attorney general may also bring a civil action on behalf of the Division. As chatbots become more sophisticated, and more harms are realized in the context of mental health, other states are sure to follow Utah’s lead.
AI Drives Need for New Open Source Licenses – Linux Publishes the OpenMDW License
For many reasons, existing open source licenses are not a good fit for AI. Simply put, AI involves more than just software and most open source licenses are designed primarily for software. Much work has been done by many groups to assess the open source license requirements for AI. For example, the OSI has published its version of an AI open source definition – The Open Source AI Definition – 1.0. Recently, the Linux Foundation published a draft of the Open Model Definition and Weight (OpenMDW) License.
The OpenMDW License is a permissive license specifically designed for use with machine‑learning models and their related artifacts, collectively referred to as “Model Materials.” “Model Materials” include machine‑learning models (including architecture and parameters) along with all related artifacts—such as datasets, documentation, preprocessing and inference code, evaluation assets, and supporting tools—provided in the distribution. This inclusive definition purports to align with the OSI’s Open Source Definition and the Model Openness Framework, covering code, data, weights, metadata, and documentation without mandating that every component be released. The Model Openness Framework is a three-tiered ranked classification system that rates machine learning models based on their completeness and openness, following open science principles.
The OpenMDW License is a permissive license, akin to the Apache or MIT licenses. It grants a royalty-free, unrestricted license to use, modify, distribute, and otherwise “deal in” the Model Materials under all applicable intellectual‑property regimes—including copyright, patent, database, and trade‑secret rights. This broad grant is designed to eliminate ambiguity around the legal permissions needed to work with AI assets.
The primary substantive compliance obligation imposed by OpenMDW is preservation of the license itself. Any redistribution of Model Materials must include (1) a copy of the OpenMDW Agreement and (2) all original copyright and origin notices. Compliance is as easy as placing a single LICENSE file at the root of the repository. There are no copyleft or share‑alike requirements, ensuring that derivative works and integrations remain as unconstrained as possible.
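As a rough illustration of how light that obligation is, a distributor might run a pre-release check like the sketch below before shipping Model Materials. The file names are assumptions; the license requires that the agreement and the original notices accompany the distribution, not that they live at these exact paths.

```python
from pathlib import Path

# Assumed names: the license text plus a file preserving original notices.
REQUIRED_FILES = ("LICENSE", "NOTICE")

def missing_notice_files(repo_root: str) -> list[str]:
    """Return the required notice files absent from the distribution root."""
    root = Path(repo_root)
    return [name for name in REQUIRED_FILES if not (root / name).is_file()]

missing = missing_notice_files(".")
print("OK to ship" if not missing else f"Missing: {missing}")
```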
There is, however, a patent‑litigation‑termination clause. If a licensee initiates litigation alleging that the Model Materials infringe its patents—except as a defensive response to a suit first brought against it—all rights granted to that licensee under the OpenMDW are terminated. This provision serves to discourage aggressive patent actions that could undermine open collaboration.
Any outputs generated by using the Model Materials are free of license restrictions or obligations. The license also disclaims all warranties and liabilities “to the greatest extent permissible under applicable law,” placing responsibility for due diligence and rights clearance squarely on the licensee.
We all know that AI will be transformative, but we do not yet know all the ways in which it will be so. One of the transformations that AI will undoubtedly drive is a redefinition of what it means to be “open source” and of the types of open source licenses AI requires. As a leader of my firm’s Open Source Team and its AI Team, I find the intersection of these areas near and dear to my heart. While many lawyers and developers may not yet have focused on this, it will be a HUGE issue. If you have not yet done so, now is a good time to start.
One of the core issues is that traditionally, under an open source license, the source code is made available so others can copy, inspect, modify, and redistribute software based on it. With AI, the code alone is often not enough to accomplish those purposes. In many cases, other things are or may be necessary, such as the training data, model weights, and other non-code aspects that are important to AI. This issue is significant in many ways. So much so that, as mentioned above, the Open Source Initiative, stewards of the Open Source Definition, developed the Open Source AI Definition 1.0 to REDEFINE the meaning of open source in the context of AI. To learn more about these issues, check out the OSI Deep Dive initiative.
New Kansas Law Will Presume Nonsolicitation Agreements Enforceable
Kansas Governor Laura Kelly recently signed a bill into law that deems certain nonsolicitation agreements with business owners and employees to be presumptively enforceable and not a restraint on trade. While generally consistent with existing Kansas case law, the legislation comes as many states are moving to limit or ban the use and enforceability of restrictive covenants in employment and reaffirms Kansas’s status as a relatively employer-friendly jurisdiction for the enforcement of well-tailored restrictive covenant agreements.
Quick Hits
Kansas recently enacted a law to make certain written agreements not to solicit customers or employees “conclusively presumed” to be enforceable.
The legislation applies to nonsolicitation agreements between businesses and their owners, which are limited to four years after the end of their business relationship, and agreements with employees, which are limited to two years following employment.
The legislation will take effect on July 1, 2025.
Kansas Senate Bill (SB) 241, which was signed on April 9, 2025, clarifies guidelines for what constitutes reasonable and enforceable nonsolicitation agreements and noninterference agreements regarding employers’ customers and employees under the Kansas Restraint of Trade Act.
Unlike the trend of scrutinizing restrictive covenants in employment, SB 241 sets forth a more employer-friendly approach, deeming certain types of restrictive covenants in writing to be “conclusively presumed to be enforceable and not a restraint of trade.” (Emphasis added).
Enforceable Covenants
SB 241 applies to certain nonsolicitation agreements “in writing” between businesses and/or the business’s owners and employees regarding interference with the business’s employees and/or customers.
Owner Nonsolicitation of Employees—Covenants in which an owner agrees not to recruit or otherwise interfere with employees or owners of a business entity for up to four years after their business relationship ends.
Owner Nonsolicitation of Customers—Covenants in which an owner agrees not to solicit a business entity’s “material contact customers” for up to four years after their business relationship ends.
Employee Nonsolicitation of Employees—Covenants between a business and one or more of its employees where an employee agrees not to solicit employees or owners of the business. The agreement must either: (1) seek to “protect confidential or trade secret business information or customer or supplier relationships, goodwill or loyalty,” or (2) not last for more than two years after employment.
Employee Nonsolicitation of Customers—Covenants in which an employee agrees not to solicit or interfere with a business entity’s “material contact customers” for up to two years after their employment ends.
Owner Notice Provisions—Provisions requiring an owner to provide prior notice before terminating, selling, or disposing of their ownership interest in a business entity.
SB 241 defines “material contact customer” as “any customer or prospective customer that is solicited, produced or serviced, directly or indirectly, by the employee or owner at issue or any customer or prospective customer about whom the employee or owner, directly or indirectly, had confidential business or proprietary information or trade secrets in the course of the employee’s or owner’s relationship with the customer.”
Modification and Interpretation
Under the Kansas Restraint of Trade Act, the act’s provisions for covenants presumed to be enforceable control even if they conflict with federal court decisions on U.S. antitrust law. SB 241 adds that “[i]f a covenant that is not presumed to be enforceable … is determined to be overbroad or otherwise not reasonably necessary to protect a business interest of the business entity seeking enforcement of the covenant” courts must “modify the covenant” and “enforce the covenant as modified,” granting “only the relief reasonably necessary to protect such interests.”
Despite the “presumption of enforceability,” SB 241 will allow employees or owners to “assert any applicable defense available at law or in equity” in a court’s consideration of a written covenant.
Next Steps
Restrictive covenants have come under scrutiny in recent years. In 2024, the Federal Trade Commission (FTC) finalized a rule that sought to ban nearly all noncompete agreements in employment, but that effort was struck down in court. The Trump administration has since asked to halt appeals while the administration considers whether to drop the FTC’s rule. Still, the FTC under the Trump administration has indicated it will scrutinize restrictive covenants that unreasonably harm competition in labor markets, even if it is unlikely to do so through formal rulemaking. Moreover, at the state level, Virginia and Wyoming enacted restrictions on noncompete agreements in 2025.
However, Kansas’s SB 241, while not applying to noncompete agreements, goes against the broader scrutiny of restrictive covenants in employment. Instead, the law presumes certain nonsolicitation agreements to be enforceable, providing guidelines for employers to craft reasonable and enforceable agreements to protect legitimate business interests and trade secrets. The law is set to take effect on July 1, 2025.
Live from Workplace Horizons 2025 – Emerging AI + Related Tech Issues in the Workplace [Video, Podcast]
Welcome to this special edition of We get work®. Over 500 representatives from 260 companies gathered to share valuable insights and best practices on workplace law issues impacting their businesses today.
Nanterre Court of Justice Issues First Decision About Introduction of AI in the Workplace in France
For the first time, a French court has ruled on the implementation of artificial intelligence (AI) processes within a company.
Quick Hits
For the first time, a French court has ruled on the implementation of AI processes within a company, emphasizing the necessity of works council consultation even during experimental phases.
The Nanterre Court of Justice determined that the deployment of AI applications in a pilot phase required prior consultation with the works council, leading to the suspension of the project and a fine for the company.
The ruling highlights the importance for employers of carefully assessing the scope of AI tools experimentation to ensure compliance with consultation obligations and avoid legal penalties.
More specifically, the Nanterre Court of Justice was called upon to determine the prerogatives of the works council when AI technologies are introduced into the workplace.
In this case, in January 2024, a company had presented to its works council a project to deploy new computer applications using artificial intelligence processes.
The works council had asked to be consulted on the matter and had issued an injunction against the company to open the consultation and suspend the implementation of the new tools.
The company had finally initiated the works council consultation, even though it considered that a mere experimentation with AI tools could not fall within the scope of the works council consultation process.
However, the works council, considering that it did not have enough time to study the project and lacked sufficient information about it, took legal action to obtain an extension of the consultation period, together with suspension of the project under penalty of a fine of €50,000 per day and per offense, as well as €10,000 in damages for infringement of its prerogatives, on the grounds that the AI applications submitted for its consultation had been implemented without waiting for its opinion.
On this point, it should be noted that in France, the works council, which is an elected body representing the company’s staff, has prerogatives that in some cases oblige the employer to inform it, but also to consult it, before being able to make a final decision. The consultation process means that the works council renders an opinion about the project before any implementation. This opinion is not binding, which means the employer can deploy the project even if the works council renders a negative opinion.
However, in the absence of consultation prior to the implementation of the project, the works council may take legal action to request the opening of the consultation and the suspension of the implementation of the project under penalty. The works council may also consider that failure to consult infringes its proper functioning, which is a criminal offense.
Indeed, in application of Article L.2312-15 of the French Labor Code,
[t]he social and economic committee issues opinions and recommendations in the exercise of its consultative powers. To this end, it has sufficient time for examination and precise, written information transmitted or made available by the employer, and the employer’s reasoned response to its own observations. […] If the committee considers that it does not have sufficient information, it may refer the matter to the president of the court, who will rule on the merits of the case in an expedited procedure, so that he may order the employer to provide the missing information.
Within the area of new technologies, the prerogatives relating to consultation of the works council are numerous and variable, as it is stipulated that in companies with at least fifty employees, the works council must be:
informed and consulted, particularly when introducing new technologies and any significant change affecting health and safety or working conditions (Article L.2312-8 of the Labor Code);
informed, prior to their introduction into the company, about automated personnel management processes and any changes to them, and consulted, prior to the decision to implement them in the company, about the means or techniques enabling the monitoring of employees’ activity (Article L.2312-38 of the Labor Code); and
consulted where a type of processing, particularly one using new technologies, is likely, taking into account the nature, scope, context, and purposes of the processing, to result in a high risk to the rights and freedoms of natural persons, in which case the controller must carry out, prior to the processing, an analysis of the impact of the envisaged processing operations on the protection of personal data (Article 35(9) of the European Union’s General Data Protection Regulation (GDPR)).
In addition, regarding AI applications, it is worth noting that the EU’s regulation of June 13, 2024, on AI (Regulation (EU) 2024/1689) provides in its Recital 92 that in certain cases the
Regulation is without prejudice to obligations for employers to inform or to inform and consult workers or their representatives under Union or national law and practice, including Directive 2002/14/EC of the European Parliament and of the Council, on decisions to put into service or use AI systems. It remains necessary to ensure information of workers and their representatives on the planned deployment of high-risk AI systems at the workplace where the conditions for those information or information and consultation obligations in other legal instruments are not fulfilled. Moreover, such information right is ancillary and necessary to the objective of protecting fundamental rights that underlies this Regulation. Therefore, an information requirement to that effect should be laid down in this Regulation, without affecting any existing rights of workers.
In the case at hand, the company considered that the works council consultation was irrelevant as the AI tools were in the process of being tested and had not yet been implemented within the company.
However, the Nanterre Court of Justice, in a decision of February 14, 2025 (N° RG 24/01457), ruled that the deployment of the AI applications had been in a pilot phase for several months, involving the use of the AI tools, at least partially, by all the employees concerned.
To reach this conclusion, the court relied on the fact that certain software programs, such as Finovox, had been made available to all employees reporting to the chief operating officer (COO) and that the employees of the communications department had all been trained in the Synthesia software program. As such, the employer could not validly claim that such an implementation was experimental since so many employees had been trained and allowed to use AI tools.
The court, therefore, considered that the pilot phase could not be regarded as a simple experiment but should instead be analyzed as an initial implementation of the AI applications subject to the prior consultation of the works council.
The court therefore ordered:
the suspension of the project until the end of the works council consultation period, subject to a penalty of €1,000 per day per violation observed for ninety days; and
the payment of damages amounting to €5,000 to the works council.
Key Takeaways
In light of the Nanterre Court of Justice’s ruling, employers in France may want to remain cautious before deploying AI tools, even if it is worth noting that:
the ruling is only a summary decision, i.e., an emergency measure pending a decision on the merits of the case; and
this decision confirms that an experimental implementation of AI might be feasible, provided that it is followed by information and consultation of the works council prior to a complete deployment of the AI tools. However, the range and scope of the experimentation should be assessed with care, because a court might consider that the experiment actually demonstrates that a decision to implement AI had already been irrevocably made.
How BMW and Romania Built AIconic, a Powerful AI Supply Chain System

When most people think of BMW, they picture sleek design, precision engineering, and the roar of a finely tuned engine. But behind the scenes, far from the showroom floor, something entirely different is unfolding, something that doesn’t run on fuel, but on data. […]