Human Authorship Required: AI Isn’t an Author Under Copyright Act

 
The US Court of Appeals for the District of Columbia Circuit upheld a district court ruling that affirmed the US Copyright Office's (CO) denial of a copyright application for artwork created by artificial intelligence (AI), reaffirming that human authorship is necessary for copyright registration. Thaler v. Perlmutter, Case No. 23-5233 (D.C. Cir. Mar. 18, 2025) (Millett, Wilkins, Rogers, JJ.).
Stephen Thaler, PhD, created a generative AI system that he named the Creativity Machine. The machine created a picture that Thaler titled "A Recent Entrance to Paradise." Thaler applied to the CO for copyright registration for the artwork, listing the Creativity Machine as the author and Thaler as the copyright owner.
The CO denied Thaler’s application because “a human being did not create the work.” Thaler twice sought reconsideration of the application, which the CO denied because the work lacked human authorship. Thaler subsequently sought review in the US District Court for the District of Columbia, which affirmed the CO’s denial of registration. The district court concluded that “[h]uman authorship is a bedrock requirement of copyright.” Thaler appealed.
The DC Circuit reaffirmed that the Creativity Machine could not be considered the author of a copyrighted work. The Copyright Act of 1976 mandates that to be eligible for copyright, a work must be initially authored by a human being. The Court highlighted key provisions of the Copyright Act that only make sense if “author” is interpreted as referring to a human being. For instance:

A copyright is a property right that immediately vests in the author. Since AI cannot own property, it cannot hold copyright.
Copyright protection lasts for the author’s lifetime, but machines do not have lifespans.
Copyright is inheritable, but machines have no surviving spouses or heirs.
Transferring a copyright requires a signature, and machines cannot provide signatures.
Authors of unpublished works are protected regardless of their nationality or domicile, yet machines do not have a domicile or national identity.
Authors have intentions, but machines lack consciousness and cannot form intentions.

The DC Circuit concluded that the statutory provisions, as a whole, make human activity a necessary condition for authorship under the Copyright Act.
The DC Circuit noted that the human authorship requirement is not new, referencing multiple judicial decisions, including those from the Seventh and Ninth Circuits, where appellate courts have consistently ruled that authors must be human.
Practice Note: Only humans, not their tools, can author copyrightable works of art. Images generated autonomously by AI are not eligible for copyright. However, works created by humans who used AI may be eligible for copyright depending on the circumstances, how the AI tool operates, and to what degree the AI tool was used to create the final work. Authors whose works are assisted by AI should seek advice of counsel to determine whether their works are copyrightable.

Oregon’s Privacy Law: Six Month Update, With Six Months to End of Cure Period

Oregon’s Attorney General released a new report this month, summarizing the outcomes since Oregon’s “comprehensive” privacy law took effect six months ago. A six-month report isn’t new: Connecticut released a six-month report in February of last year to assess how consumers and businesses were responding to its privacy law.
The report summarizes business obligations under the law and highlights differences between the Oregon law and other, similar state laws. It also summarizes the education and outreach efforts conducted by the state’s Department of Justice, including a “living document” set of FAQs answering questions about the law. The report also summarizes the 110 consumer complaints received to date and the enforcement actions the Privacy Unit has taken since the law went into effect. On the enforcement side, Oregon reports that it has initiated and closed 21 privacy enforcement matters, with companies taking prompt steps to cure the issues raised.
As a reminder, these actions are being brought during the law’s “cure” period, which gives companies a 30-day period to fix violations after receiving the Privacy Unit’s notice. The Oregon cure provision sunsets on January 1, 2026. Other states with a cure period are Delaware, Indiana, Iowa, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Tennessee, Texas, Utah, and Virginia. (Of these, the cure periods in Minnesota, New Hampshire, New Jersey, Oregon, Delaware, Maryland, and Montana will expire, with varying expiration dates between December 31, 2025 (Delaware) and April 1, 2027 (Maryland).) States without a cure period, or where the cure period has expired, are California, Colorado, Connecticut, and Rhode Island. For an overview of US state “comprehensive” privacy laws, visit our tracker.
Common business deficiencies identified by Oregon in the enforcement notices included:

Disclosure issues: This included not giving consumers notice of their rights under the law. Also of concern was insufficient information for Oregon consumers about their rights under the law, specifically the list of third parties to whom their data has been sold.
Confusing privacy notices: By way of example, Oregon pointed to, as confusing, notices that name some states in the “your state rights” section of the privacy policy but do not specifically name Oregon. This, the report posits, gives consumers the impression that privacy rights are available only to people who live in those named states.
Lacking or burdensome rights mechanisms: For example, failing to include a clear and conspicuous link to a webpage enabling consumers to opt out or exercise their privacy rights, or imposing inappropriately difficult authentication requirements.

Putting it into Practice: This report is a reminder to companies to look at their disclosures around consumer rights. It also sets out the state’s expectations around drafting notices that are “clear” and “accessible” to the “average consumer.” Companies have six months before the cure period in Oregon sunsets.

CIBC Signs Voluntary Code of Conduct for Responsible AI

CIBC Sets a New Standard for Responsible AI Adoption. The Canadian Imperial Bank of Commerce (CIBC) has solidified its position as a leader in responsible artificial intelligence (AI) use by signing the federal government’s voluntary code of conduct for generative AI. As the first major Canadian bank to adopt this code, CIBC is demonstrating its […]

D.C. Circuit Denies Copyright to AI Artwork – What Humans Have and Artificial Intelligence Does Not

Can a non-human machine be an author under the Copyright Act of 1976? In a March 18, 2025 precedential opinion, a D.C. Circuit panel affirmed prior determinations from the D.C. District Court and the Copyright Office that an original artwork created solely by artificial intelligence (AI) is not eligible for copyright registration, because human authorship is required for copyright protection.
Dr. Stephen Thaler created a generative AI named DABUS (or Device for the Autonomous Bootstrapping of Unified Sentience), also referred to as the “Creativity Machine,” which made a picture that Thaler titled “A Recent Entrance to Paradise.” In the copyright registration application to the U.S. Copyright Office, Thaler listed the Creativity Machine as the artwork’s sole author and himself as just the work’s owner.
Writing for the panel, D.C. Circuit Judge Patricia A. Millett opined that “the Copyright Act requires all work to be authored in the first instance by a human being,” including those who make work for hire. The court noted the Copyright Act’s language compels human authorship as it limits the duration of a copyright to the author’s lifespan or to a period that approximates how long a human might live. “All of these statutory provisions collectively identify an ‘author’ as a human being. Machines do not have property, traditional human lifespans, family members, domiciles, nationalities, mentes reae, or signatures,” the court concluded.
In rejecting Thaler’s copyright claim of entirely autonomous AI authorship, the court did not consider whether Thaler is entitled to authorship on the basis that he made and used the Creativity Machine, because Thaler waived such argument in the underlying proceedings. The court also declined to rule on whether or when an AI creation could give rise to copyright protection. However, citing the guidance from the Copyright Office, the court noted that whether a work made with AI is registrable depends on the circumstances, particularly how the AI tool operates and how much it was used to create the final work.  In general, a string of recent rulings from the Copyright Office concerning “hybrid” AI-human works have allowed copyright registration as to the human-created portions of such works.
The D.C. Circuit’s statutory text-based analysis and holding stands in parallel with the counterpart U.S. patent doctrine that human inventorship is required for patent protection, provided in Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), cert. denied, and reflected in the USPTO’s Inventorship Guidance for AI-Assisted Inventions issued February 12, 2024.
Underlying the judicial rulings requiring human authorship and inventorship for copyright and patent protection is the concept that only humans can “create” art or conceive an invention: that there is something special and important about human creativity, which is what intellectual property law aims to protect. This underpinning of human creativity in the authorship and inventorship requirements was addressed in detail in a White Paper published last summer by Mammen and a multidisciplinary group of scholars at the University of Oxford. The White Paper explains that creativity includes three core elements: (a) an external component (expressed ideas or made artifacts that reflect novelty, value, and surprisingness); (b) a mental component (a person’s thought process: the interplay of divergent (daydreaming) and convergent (task-focused) thinking, plus recognition of salience (relevance)); and (c) a social context (for example, what society considers new, valuable, and surprising, and thus “creative”). IP doctrines require all three core elements. Generative AI does not presently exhibit the equivalent of the mental component that is key to human creativity.
In fact, as the White Paper discusses, there is some evidence that Generative AI can negatively impact even human creativity. First, using AI to produce creative products involves working in a way that emphasizes speed and instant answers, as well as becoming the passive consumer of such answers, rather than self-reflection or toggling between convergent and divergent thinking, which is key to creativity. Second, humans interacting with AIs tend to lose confidence in their own creative skills, and start to restrict the range of their own creative repertoire in favor of creating “mash-ups” of what AI provides. 
In analyzing the causal impact of generative AI on the production of short stories where some writers obtained story ideas from a large language model (LLM), Doshi and colleagues reported that access to generative AI made the stories of less creative writers more creative, better written, and more enjoyable, while such AI help had no effect for highly creative writers. However, the stories produced after using an LLM for just a few minutes showed significantly reduced diversity of ideas, leading to greater homogeneity among the stories as compared to stories written by humans alone. Thus, generative AI augmented less creative individuals’ creativity and quality of work but decreased collective novelty and diversity among writers, suggesting degradation of collective human creativity through use of generative AI.
To be sure, the questions raised by Dr. Thaler and DABUS are testing the boundaries and rationales for existing IP doctrines.  Dr. Thaler argued that judicial opinions from the Gilded Age could not settle the question of whether computer generated works are copyrightable today.  But as reflected in the White Paper and affirmed by the courts, it is not enough merely to suggest that the outputs of Generative AI warrant IP protection because they are “just as good as” human-created outputs that are entitled to protection. Moreover, in most instances of AI-created work or invention, a human factor appears to be present to some extent, either in creating the AI, desiring certain goals and outputs, commanding the AI to generate a goal-oriented output, evaluating and selecting the AI-generated output, modifying the AI-generated output, or owning the AI for the purpose of using the AI-generated output. As the capabilities of AI continue to evolve, the border between human creativity and AI capability may blur further, posing an evolving set of challenges at the frontier of IP law. 

The Trump Factor, Jobs, Laws, and the Workplace [Podcast]

What does the new administration mean for the future of employment law? As new orders and laws are signed at the federal level, The Employment Strategists David T. Harmon and Mariya Gonor break down what policies have been removed or revised, including:

Stay-or-pay provisions
Student-athlete rights
Rolling back on DE&I initiatives
Amended pregnancy accommodations, including abortion protections
Deprioritizing discrimination policies around race, gender, and sexual orientation
The use of AI in hiring

Whether you’re an employer trying to stay compliant with state and federal mandates or an employee wanting to better understand your rights, this conversation is one you won’t want to miss.

The AI Workplace: A Guide on AI Policy Essentials [Podcast]

In this episode of our new podcast series, The AI Workplace, where we explore the latest advancements in integrating artificial intelligence (AI) into the workplace, Sam Sedaei (associate, Chicago) shares his insights on crafting and implementing effective AI policies. Sam, who is a member of the firm’s Cybersecurity and Privacy and Technology practice groups, discusses the rapid rise of generative AI tools and highlights their potential to boost productivity, spark innovation, and deliver valuable insights. He also addresses the critical risks associated with AI, such as inaccuracies, bias, privacy concerns, and intellectual property issues, while emphasizing the importance of legal and regulatory guidance to ensure the responsible and effective use of AI in various workplace functions. Join us for a compelling discussion on navigating the AI-driven future of work.

California Legislature Introduces Several Employment Law Bills for 2025

California lawmakers introduced numerous bills early in the 2025 legislative session that could affect California employment law in significant ways. Although it is too soon to predict which bills, if any, will advance, the proposed bills could substantially affect California employers.

Quick Hits

California legislators have proposed bills in the 2025 legislative session that address pay transparency, automated decision systems, workplace surveillance, paid family leave, and employee training.
The legislative session in California will end on September 12, 2025.
The governor will have until October 12, 2025, to sign or veto bills passed by the state legislature.

California legislators have introduced the following employment law-related bills this session:

SB 642 would require pay scales provided in job ads to be no more than 10 percent above or below the mean pay rate within the salary or hourly wage range. The bill revises language to make clear that employers cannot pay an employee less than they pay employees of “another” sex, rather than “the opposite” sex, for substantially similar work.
AB 1018 would regulate the development and deployment of automated decision systems to make employment-related decisions, including hiring, promotion, performance evaluation, discipline, termination, and setting pay and benefits. The bill applies to machine learning, statistical modeling, data analytics, and artificial intelligence. It would require that employers allow workers to opt out of the automated decision system.
AB 1331 would place limits on workplace surveillance, including devices used for video or audio recording, electronic work pace tracking, location monitoring, electromagnetic tracking, and photoelectronic tracking. The bill would prohibit employers from using surveillance tools during off-duty hours or in private, off-duty areas, such as bathrooms, locker rooms, changing areas, breakrooms, and lactation spaces. The bill also would prohibit employers from monitoring a worker’s residence or personal vehicle.
SB 590 would expand eligibility for benefits under the state’s paid family leave program to include individuals who take time off work to care for a seriously ill designated person, meaning any individual whose association with the employee is the equivalent of a family relationship. State law already permits paid leave to care for a seriously ill child, stepchild, foster child, spouse, parent, sibling, grandparent, or grandchild.
AB 1371 would permit employees to refuse to perform a task assigned by an employer if the assigned task would violate safety standards, or if the employee has a reasonable fear that the assigned task would result in injury or illness to the employee or others. The bill would prohibit employers from disciplining or retaliating against an employee for refusing to perform the assigned task.
AB 1234 would revise the process for the state labor commissioner to investigate and hear wage theft complaints. The bill would require the labor commissioner to set a hearing date and establish procedures for the hearing, known as a Berman hearing, within ninety days after issuing a formal complaint. It would require the labor commissioner to issue an order, decision, or award within fifteen days of the hearing.
AB 1015 would authorize employers to satisfy the state’s workplace discrimination and harassment training requirements by demonstrating that the employee possesses a certificate of completion within the past two years.
SB 261 would establish a civil penalty for employers that fail to pay a court judgment awarded for nonpayment of work performed.

Next Steps
Most of these bills are in the committee review stages and have not yet passed either the California Senate or Assembly. To advance, the bills must pass both legislative bodies. The last day for the legislature to pass bills is September 12, 2025. The governor will have until October 12, 2025, to sign or veto bills passed by the legislature.
California employers may wish to stay abreast of legislative action on state bills related to pay transparency, workplace surveillance, paid family leave, automated decision systems, employee training, and other employment law topics.

OECD Report on Data Scraping and AI – What Companies Can Do Now as Policymakers Consider the Issues

The power of large language models (LLMs) that enables generative AI derives from vast quantities of data. Much of this data comes from scraping all forms of content from the internet. Despite the benefits, this practice raises numerous legal issues, some of which implicate IP issues. Dozens of pending lawsuits in the US alone include claims involving IP issues with data scraping. The recent OECD report titled “Intellectual Property Issues in AI Trained on Scraped Data” (Report) explores the intricate relationship between AI and IP rights, particularly focusing on data scraping practices used in AI training. It aims to provide policymakers with insights into the legal challenges posed by data scraping and potential policy approaches to address these issues. The following is an overview of the Report.
Data scraping involves the automated extraction of data from websites, databases, or social media platforms without coordination with the data host. Techniques include web scraping, web crawling, and screen scraping.
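To make the "screen scraping" technique described above concrete, here is a minimal sketch using only the Python standard library. The HTML snippet is a hypothetical stand-in for a fetched page; a real scraper would first download the page over HTTP, which is precisely the step that implicates the legal issues the Report discusses.

```python
# Minimal illustration of screen scraping: extracting the text content
# of <p> tags from raw HTML, discarding the surrounding markup.
from html.parser import HTMLParser


class TextScraper(HTMLParser):
    """Collects the text inside <p> tags, ignoring all other markup."""

    def __init__(self):
        super().__init__()
        self._in_paragraph = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_paragraph = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_paragraph = False

    def handle_data(self, data):
        if self._in_paragraph:
            self.paragraphs[-1] += data


# Hypothetical page content; in practice this would be fetched from a
# website, database, or social media platform without the host's
# coordination -- the defining feature of data scraping.
html_doc = (
    "<html><body><h1>Title</h1>"
    "<p>First post.</p><p>Second post.</p>"
    "</body></html>"
)

scraper = TextScraper()
scraper.feed(html_doc)
print(scraper.paragraphs)  # the extracted paragraph texts
```

At scale, the same pattern is applied across millions of pages by web crawlers, which is how the training corpora at issue in the pending lawsuits are assembled.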
Scraping raises concerns about IP rights infringement, especially when copyrighted materials are involved. The issues can include copyright infringement, right of publicity/misuse of name, image, or likeness (NIL), database rights, trademarks, and trade secrets, among others. Legal disputes over data scraping to train AI are increasing worldwide. The legal issues are complicated by the variation in the relevant laws across jurisdictions. As one example, the US has a “fair use” exception to copyright law, while the European Union (EU) has a “text and data mining” (TDM) exception. Some jurisdictions recognize sui generis (unique) rights to protect specific types of materials under IP law; for example, the EU provides sui generis database rights. Other jurisdictions, such as Japan, have proclaimed that training AI on copyrighted material will not be deemed an infringement. In the US, this is a critical issue pending in many lawsuits, with no clear answer yet. Other legal issues arise with the output of AI, including whether it is an infringement and, if so, who is liable (the tool provider or the user), and whether it is copyright protectable.
The Report further notes that contracts, including website terms of service (TOS) and end user license agreements (EULAs) can help govern how data can be scraped or used as between the parties to the agreement, including specific provisions on permitted uses, attribution requirements, and liability allocations. However, it further notes that the enforceability of TOS and EULAs and their interaction with IP laws can vary significantly across jurisdictions. This further complicates matters.
As the Report notes, the term data scraping is often conflated with “data mining.” The latter refers to computational processes for identifying patterns, trends, and correlations in data. The Report highlights the inconsistencies in definitions and proposes a broad working definition for data scraping.
The Report further notes that the data scraping ecosystem includes research institutions and academia, AI data aggregators, technology companies and platform operators. AI data aggregators reportedly make scraped data available to third parties, often without clear licensing terms or clear disclosure of data provenance. This exacerbates the IP and other legal issues. Different legal issues may apply to different entities, depending on their role in the AI ecosystem.
The Report indicates that policymakers are increasingly considering codes of conduct and other forms of voluntary commitments by business to address challenges such as data scraping. It discusses various issues that should be addressed in these codes for AI data aggregators and users. Voluntary commitments are easy to adopt, but only work if uniformly adopted. Bad actors are unlikely to comply.
Another part of a potential solution includes policymakers encouraging the development of standard and widely accessible technical tools that protect IP rights, enable rights holders to control access to their data more easily, and support licensing mechanisms. Such tools can help with data access control and rights management to streamline compliance and enhance transparency.
The Report also suggests developing appropriate standard contract terms, including common terminology. However, given the cross-border nature of AI and the territorial nature of IP rights, this process would need to consider technical, legal and/or other terms that may already exist in different jurisdictions.
Lastly, the Report advocates raising awareness of data scraping legal issues by educating stakeholders about their rights and responsibilities in the AI data ecosystem. This includes informing stakeholders about legal implications and best practices for data usage in AI. This is an area where many companies that are just getting up to speed can benefit from assistance of knowledgeable legal counsel.
In summary, the Report underscores the need for coordinated international policy approaches to address the challenges posed by AI data scraping, balancing innovation with the protection of IP rights. It notes that by adopting voluntary codes of conduct, developing technical tools, and implementing standard contract terms, policymakers can promote responsible AI development while safeguarding intellectual property. However, this will not happen overnight, and companies are using AI now.
In the interim, there are things that companies can do now to ensure responsible use of AI and minimize legal liability. Many companies are seeking help from legal counsel with in-house legal training and responsible AI policy development. For some of the issues with training and developing AI policies, see Why Companies Need AI Legal Training and Must Develop AI Policies and 5 Things Corporate Boards Need to Know About Generative AI Risk Management.

New Workplace Policies Employers Should Consider

As the workplace landscape continues to evolve, employers must stay ahead of emerging challenges by implementing thoughtful and proactive policies. 
In 2025, three key areas stand out as critical for fostering a positive and productive work environment: promoting collaboration and respect, supporting employee well-being, and responsibly integrating artificial intelligence.  In this article, we’ll explore how well-crafted policies in these areas can enhance workplace culture, ensure compliance, and boost employee satisfaction.
Policies Promoting Collaboration, Respect, and Opportunity
Diversity, Equity, and Inclusion (“DEI”) is a key employment topic to prioritize in 2025.  On January 21, 2025, President Trump signed Executive Order 14173, Ending Illegal Discrimination and Restoring Merit-Based Opportunity, which encourages private employers to end “illegal DEI discrimination and preferences.”  This executive action directs federal agencies to promote “the policy of individual initiative, excellence, and hard work” in the private sector, and it directs the Attorney General to submit a report containing recommendations for enforcement measures to end “illegal discrimination and preferences.” 
Despite the recent executive action, employers may still implement a policy addressing collaboration, respect, and opportunity in the workplace.  In implementing this policy, employers should strike a balance between an initiative that aims to ensure fair treatment and equal opportunity for all, regardless of background, and one that invites claims of discrimination.  An effective policy on collaboration, respect, and opportunity can foster a positive working environment, promote a sense of belonging and satisfaction, boost morale, and drive innovation.
Well-Being in the Workplace 
Workplace well-being has transitioned from a perk to a necessity.  Often, when an employee’s well-being deteriorates, so does their job performance.  According to the National Alliance on Mental Illness (last updated April 2023) (“NAMI”): 

Approximately 1 in 5 adults in the United States experience mental illness each year; and
Approximately 1 in 20 adults in the United States experience a serious mental illness each year. 

Additionally, Equal Employment Opportunity Commission (“EEOC”) data shows that charges of discrimination based on mental health conditions (including substance use disorders) are substantial.  Well-being not only has a physical and social impact on the individual employee, but it also has a financial impact on the employer as employee well-being impacts productivity levels and healthcare costs.  
A comprehensive well-being in the workplace policy provides guidance on collaboration between employees, encourages healthy habits through on-site initiatives, provides access to mental health resources, and implements strategies designed to promote social engagement (for example, a well-being in the workplace policy may offer days off for volunteering activities).  An effective well-being in the workplace policy can reduce the stigma surrounding mental health and stress, cultivate a sense of purpose and accomplishment in the workplace, and ultimately enhance job satisfaction and productivity.
AI in the Workplace
The rapid integration of artificial intelligence (“AI”) into the workplace, and the easy access to AI tools, necessitates clear employment policies.  There are several accessible (and often free) AI tools for work that assist employees in drafting emails, preparing summary notes, drafting work materials, and preparing presentations.  However, an array of legal issues may arise when employees use AI tools to perform their job duties.  While these tools may be useful in promoting efficiency, they also generate legal risks if used improperly, primarily around confidentiality.  A comprehensive AI policy should address AI usage guidelines (including clearly defining “AI usage,” listing permitted and prohibited uses, and implementing protocols for human oversight), ethical considerations, data privacy, and mandatory training.  An effective AI policy can cultivate responsible innovation, build trust, and assist in a smooth transition into an AI-driven work environment.
Conclusion
Although employers typically update their employee handbooks either at the end or beginning of the calendar year, there is never a bad time to implement new policies that address significant issues in the workplace.  These policies are only three examples of proactive steps an employer can take to improve their workplace culture and compliance with important laws.  

New York Attorney General Proposes Bill to Expand Consumer Protection Law

On March 13, New York Attorney General Letitia James announced the introduction of the Fostering Affordability and Integrity through Reasonable Business Practices Act (FAIR Business Practices Act). The proposed legislation seeks to extend the state’s existing ban on deceptive business practices to also prohibit unfair and abusive practices, aligning New York with 42 other states. 
The bill, introduced in both the state Senate and Assembly, would enhance enforcement capabilities for the Office of the Attorney General (OAG) and private consumers, including the ability to seek civil penalties and restitution for violations involving unfair, deceptive, or abusive acts and practices (UDAAP). According to Attorney General James, the legislation is needed to tackle a host of consumer harms, including: 

Subscription cancellations. Preventing companies from making it unreasonably difficult for consumers to cancel recurring payments.
Debt collection abuses. Prohibiting debt collectors from improperly seizing Social Security benefits or nursing homes from suing relatives of deceased residents for unpaid bills.
Auto dealer practices. Prohibiting car dealerships from withholding a customer’s photo identification until a sale is finalized. 
Student loan servicing misconduct. Restricting student loan servicers from steering borrowers into costlier repayment plans. 
Exploitation of limited English proficiency consumers. Addressing deceptive practices targeting non-English-speaking consumers. 
Junk fees and hidden costs. Reducing unnecessary and deceptive charges in various industries, including healthcare and lending. 
Artificial intelligence (AI) scams and online fraud. Strengthening enforcement against AI-driven scams, phishing schemes, and deceptive digital marketing practices. 

The proposal has garnered support from former CFPB director Rohit Chopra and former FTC Chair Lina Khan, both of whom have emphasized the need for stronger state-level enforcement against deceptive and abusive business practices. 
Putting It Into Practice: New York’s proposed legislation is the latest example of a growing trend among states taking a more active role in consumer protection enforcement (previously discussed here and here). This also highlights how some states are proactively responding to the CFPB’s state-level consumer protection recommendations from January, which encourage the adoption of the “abusive” standard (previously discussed here). With ongoing uncertainty surrounding the future of the CFPB, more states are likely to step in to fill the regulatory void by expanding their own consumer protection laws. 

Virginia Moves to Regulate High-Risk AI with New Compliance Mandates

On February 20, the Virginia General Assembly passed the High-Risk Artificial Intelligence Developer and Deployer Act. If signed into law, Virginia would become the second state, after Colorado, to enact comprehensive regulation of “high-risk” artificial intelligence systems used in critical consumer-facing contexts, such as employment, lending, housing, and insurance.
The bill aims to mitigate algorithmic discrimination and establishes obligations for both developers and deployers of high-risk AI systems. 

Scope of Coverage. The Act applies to entities that develop or deploy high-risk AI systems used to make, or that are a “substantial factor” in making, consequential decisions affecting consumers. Covered contexts include education enrollment or opportunity, employment, healthcare services, housing, insurance, legal services, financial or lending services, and decisions involving parole, probation, or pretrial release. 
Risk Management Requirements. AI deployers must implement risk mitigation programs, conduct impact assessments, and provide consumers with clear disclosures and explanation rights. 
Developer Obligations. Developers must exercise “reasonable care” to protect against known or foreseeable risks of algorithmic discrimination and provide deployers with key system usage and limitation details. 
Transparency and Accountability. Both developers and deployers must maintain records sufficient to demonstrate compliance. Developers must also publish a summary of the types of high-risk AI systems they have developed and the safeguards in place to manage risks of algorithmic discrimination. 
Enforcement. The Act authorizes the Attorney General to enforce its provisions and seek civil penalties of up to $7,500 per violation. 
Safe Harbor. The Act includes a safe harbor from enforcement for entities that adopt and implement a nationally or internationally recognized risk management framework that reasonably addresses the law’s requirements. 

So how does this compare to Colorado’s law? Virginia defines “high-risk” more narrowly, limiting coverage to systems that are a “substantial factor” in making a consequential decision, whereas the Colorado law applies to systems that serve as a “substantial” or “sole” factor. Colorado’s law also includes more prescriptive requirements around bias testing and impact assessment content, and provides broader exemptions for small businesses. 
Putting It Into Practice: If enacted, the Virginia AI law will add to the growing patchwork of state-level AI regulations. In 2024, at least 45 states introduced AI-related bills, with 31 states enacting legislation or adopting resolutions. States such as California, Connecticut, and Texas have already enacted AI-related statutes. Given this trend, additional states can be expected to introduce and enact comprehensive AI regulations in the near future. 

Key Considerations Before Negotiating Healthcare AI Vendor Contracts

The integration of artificial intelligence (AI) tools in healthcare is revolutionizing the industry, bringing efficiencies to the practice of medicine and benefits to patients. However, negotiating for third-party AI tools requires a nuanced understanding of the tool’s application, implementation, risks, and contractual pressure points. Before entering the negotiation room, consider the following key insights:
I. The Expanding Role of AI in Healthcare
AI’s role in healthcare is rapidly expanding, offering a wide range of applications including real-time patient monitoring, streamlined clinical note-taking, evidence-based treatment recommendations, and population health management. Moreover, AI is transforming healthcare operations by automating staff tasks, optimizing operational and administrative processes, and providing guidance in surgical care. These technological advancements can not only improve efficiency but also enhance the quality of care provided. AI-driven customer support tools are also enhancing patient experiences by offering timely responses and personalized interactions. Even in employment recruiting, AI is being leveraged to identify and attract top talent in the healthcare sector.
With such a wide array of applications, it is crucial for stakeholders to understand the specific AI service offering when negotiating a vendor contract and implementing the new technology. This knowledge ensures that the selected AI solution aligns with the organization’s goals and can be effectively integrated into existing systems, while minimizing each party’s risk.
II. Pre-Negotiation Strategies
Healthcare AI arrangements are complex, often involving novel technologies and products, a wide range of possible applications, important data use and privacy considerations, and the potential to significantly impact patient care and patient satisfaction. Further, the regulatory landscape is developing and can be expected to evolve significantly in the coming years. Vendors and customers should consider the following when approaching a negotiation:
Vendor Considerations:

Conduct a Comprehensive Assessment: Understand the problem the product is addressing, expected users, scope, proposed solutions, data involved, potential evolution, and risk level.
Engage Stakeholders: Schedule kick-off calls with the customer’s privacy, IT, compliance, and clinical or administrative teams.
Documentation: Maintain summary documentation detailing model overview, value proposition, processing activities, and privacy/security controls.
Collaborate with Sales: Develop strategies with the sales team and consider trial periods or pilot programs. Plan for the progression of these programs. For example, even if a pilot program is free, data usage terms should still apply.

Customer Considerations:

Evaluate Within AI Governance Scope: Don’t treat an AI contract like a normal tech engagement. Instead, approach this arrangement within a larger AI governance scope, including accounting for the introduction of ethical frameworks, data governance practices, monitoring and evaluation systems, and related guardrails to work in tandem with the product’s applications.
Engage Stakeholders: Collaborate with legal, privacy, IT, compliance, and other relevant stakeholders from the outset.
Consider AI-Specific Contracts: Use AI-specific riders or MSAs and review standard vendor forms to streamline negotiations.
Assess Upstream Contract Requirements: Ensure upstream requirements can be appropriately reflected downstream.
Perform Vendor Due Diligence: As with any nascent industry, some vendors will not survive or may significantly change their focus or products, which might impact support or the long-term viability of the service. Learn about your vendor and ask questions about its financial stability and its privacy and security posture.

III. AI Governance and Risk Assessment
Evaluating AI-related risk requires understanding risk across the full lifecycle of an AI product, including its model architecture, training methods, data types, model access, and specific application context. In the healthcare space, this includes understanding the impact on operations, the effect on clinical care and any other impact on patients, the amount of sensitive information involved, and the degree of visibility and/or control the organization has over the model.[1] For example, the risk is much greater for AI used to assist clinical decision-making in diagnostics (e.g., assessing static imaging in radiology), whereas technology used for limited administrative purposes carries a comparatively smaller risk. Here are three resources that healthcare organizations can use to evaluate and address AI-related risks:
A. HEAT Map
A HEAT map can be a helpful tool for evaluating the severity of risks associated with AI systems. It categorizes risks into different “heat” levels (e.g., informational, low, medium, high, and critical). This high-level visual representation can be particularly helpful when a healthcare organization is initially deciding whether to engage a vendor for a new AI product or platform. It can help the organization identify the risk associated with rolling out a given product and prioritize risk management strategies if it moves forward in negotiating an agreement with that vendor.
For example, both the customer and the vendor might consider (and categorize within the HEAT map) what data the vendor will require to perform its services, why the vendor needs it, who will receive the data, what data rights the vendor might be asking for, how that data is categorized, whether any federal, state, or global rules impact the acceptance of that data, and what mitigations are necessary to account for data privacy.
B. NIST AI Risk Management Framework
The National Institute of Standards and Technology (NIST) has created the NIST AI Risk Management Framework to guide organizations in identifying and managing AI-related risks.[2] This framework offers an example of a risk tiering system that can be used to understand and assess the risk profile of a given AI product, and ultimately guide organizations in the creation of risk policies and protocols, evaluation of ongoing AI rollouts, and resolution of any issues that arise. Whether healthcare organizations choose to adopt this risk tiering approach or apply their own, this framework reminds organizations of the many tools at their disposal to manage risk during the rollout of an AI tool, including data protection and retention policies, education of users, incident response protocols, auditing and assessment practices, changes to management controls, secure software development practices, and stakeholder engagement.
C. Attestations and Certifications
Attestations and certifications (e.g., HITRUST, ISO 27001, SOC 2) can also help your organization ensure compliance with industry-standard security and data protection practices. Specifically, HITRUST focuses on compliance with healthcare data protection standards, reducing the risk of breaches and ensuring that AI systems handling health data are secure; ISO 27001 provides a framework for managing information security, helping organizations safeguard AI data against unauthorized access and breaches; and SOC 2 assesses and verifies a service organization’s controls related to security, availability, processing integrity, confidentiality, and privacy, in order to ensure AI services are trustworthy. By engaging in the process to meet these certification standards, the organization will be better equipped to issue-spot potential problems and implement corrective measures. These certifications can also demonstrate to the public that the organization takes AI risks seriously, thereby strengthening trust and credibility among its patients and business partners.
IV. Contract Considerations
Once parties have assessed their organizational needs, engaged applicable stakeholders/collaborators, and reviewed their risk exposure from an AI governance perspective, they can move forward in negotiating the specific terms of the agreement. Here’s a high-level checklist of the terms and conditions that each party will want to pay careful attention to in negotiations, along with a deeper dive into the considerations surrounding data use and intellectual property (IP) issues:
A. Key Contracting Provisions:

Third-party terms
Privacy and security
Data rights
Performance and IP warranties
Service level agreements (SLAs)
Regulatory compliance
Indemnification (IP infringement, data breaches, etc.)
Limitations of liability and exclusion of damages
Insurance and audit rights
Termination rights and effects

B. Data Use and Intellectual Property Issues
When negotiating the terms and conditions related to data use, ownership, and other intellectual property (IP) issues, each party will typically aim to achieve the following objectives:
Customer Perspective:

Ensure the customer will own all inputs, outputs, and derivatives of its data used in the application of the AI model;
Confirm data usage will be restricted to service-related purposes;
Confirm the customer’s right to access data stored by vendor or third-party as needed. For example, the customer might want to require that the vendor provide any relevant data and algorithms in the event of a DOJ investigation or plaintiff lawsuit;[3]
Aim for broad, protective IP liability and indemnity provisions; and
Where patient health information is involved, ensure that it is being used in compliance with HIPAA. Vendors often want to train their algorithms on PHI. Unless the algorithm is being trained solely for the benefit of the HIPAA-regulated entity and fits within a healthcare operations exception, a HIPAA authorization from the data subject will typically be required to train the algorithm for broader purposes.

Vendor Perspective:

Ensure vendor owns all services, products, documentation, and enhancements thereto;
Access customer data sources for training and improving machine learning models; and
Retain ownership over outputs. From the vendor’s perspective, any customer data input into the vendor’s model is modified by that model or product, resulting in a blending of information owned by both sides. One potential solution to this shared-ownership issue is for the vendor to grant the customer a long-term license to use that output.

V. Conclusion
Negotiating contracts for AI tools in healthcare demands a comprehensive understanding of the technology, data use, and risks and liabilities, among other considerations. By preparing effectively and engaging the right stakeholders and collaborators, both vendors and customers can successfully navigate these negotiations.

FOOTNOTES
[1] UC AI Council Risk Assessment Guide.
[2] NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (July 2024).
[3] Paul W. Grimm et al., Artificial Intelligence as Evidence, 19 Northwestern J. of Tech. and Intellectual Prop. 1, 9 (2021).