SEC Creates New Tech-Focused Enforcement Team
On February 20, the SEC announced the creation of its Cyber and Emerging Technologies Unit (CETU) to address misconduct involving new technologies and strengthen protections for retail investors. The CETU replaces the SEC’s former Crypto Assets and Cyber Unit and will be led by SEC enforcement veteran Laura D’Allaird.
According to the SEC, the CETU will focus on rooting out fraud that leverages emerging technologies, including artificial intelligence and blockchain, and will coordinate closely with the Crypto Task Force established earlier this year (previously discussed here). The unit comprises approximately 30 attorneys and specialists across multiple SEC offices and will target conduct that misuses technological innovation to harm investors and undermine market confidence.
The CETU will prioritize enforcement in the following areas:
Fraud involving the use of artificial intelligence or machine learning;
Use of social media, the dark web, or deceptive websites to commit fraud;
Hacking to access material nonpublic information for unlawful trading;
Takeovers of retail investor brokerage accounts;
Fraud involving blockchain technology and crypto assets;
Regulated entities’ noncompliance with cybersecurity rules and regulations; and
Misleading disclosures by public companies related to cybersecurity risks.
In announcing the CETU, Acting Chairman Mark Uyeda emphasized that the unit is designed to align investor protection with market innovation. The move signals a recalibration of the SEC’s enforcement strategy in the cyber and fintech space, with a stronger focus on misconduct that directly affects retail investors.
Putting It Into Practice: Formation of the CETU follows Commissioner Peirce’s statement on creating a regulatory environment that fosters innovation and “excludes liars, cheaters, and scammers” (previously discussed here). The CETU is intended to reflect that approach, redirecting enforcement resources toward clearly fraudulent conduct involving emerging technologies like AI and blockchain.
Virginia Governor Vetoes Artificial Intelligence Bill HB 2094: What the Veto Means for Businesses
Virginia Governor Glenn Youngkin has vetoed House Bill (HB) No. 2094, a bill that would have created a new regulatory framework for businesses that develop or use “high-risk” artificial intelligence (AI) systems in the Commonwealth.
The High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) had passed the state legislature and was poised to make Virginia the second state, after Colorado, with a comprehensive AI governance law.
Although the governor’s veto likely halts this effort in Virginia, at least for now, HB 2094 represents a growing trend of state regulation of AI systems nationwide. For more information on the background of HB 2094’s requirements, please see our prior article on this topic.
Quick Hits
Virginia Governor Glenn Youngkin vetoed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, citing concerns that its stringent requirements would stifle innovation and economic growth, particularly for startups and small businesses.
The veto maintains the status quo for AI regulation in Virginia, but businesses contracting with state agencies still must comply with AI standards under Virginia’s Executive Order No. 30 (2024), and any standards relating to the deployment of AI systems that are issued pursuant to that order.
Private-sector AI bills are currently pending in twenty states. So, regardless of Governor Youngkin’s veto, companies may want to continue proactively refining their AI governance frameworks to stay prepared for future regulatory developments.
Veto of HB 2094: Stated Reasons and Context
Governor Youngkin announced his veto of HB 2094 on March 24, 2025, just ahead of the bill’s deadline for approval. In his veto message, the governor emphasized that while the goal of ethical AI is important, it was his view that HB 2094’s approach would ultimately do more harm than good to Virginia’s economy. In particular, he stated that the bill “would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology in the Commonwealth of Virginia.”
A key concern was the compliance burden HB 2094 would have imposed. Industry analysts estimated the legislation would saddle AI developers with nearly $30 million in compliance costs, which could be especially challenging for startups and smaller tech firms. Governor Youngkin, echoing industry concerns that such costs and regulatory hurdles might deter new businesses from innovating or investing in Virginia, stated, “HB 2094’s rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.”
Virginia Executive Order No. 30 and Ongoing AI Initiatives
Governor Youngkin’s veto of HB 2094 does not create an AI regulatory vacuum in Virginia. Last year, Governor Youngkin signed Executive Order No. 30 on AI, establishing baseline standards and guidelines for the use of AI in Virginia’s state government. This executive order directed the Virginia Information Technologies Agency (VITA) to publish AI policy standards and IT standards for all executive branch agencies. VITA published the policy standards in June 2024. Executive Order No. 30 also created the Artificial Intelligence Task Force, currently comprised of business and technology nonprofit executives, former public servants, and academics, to develop further “guardrails” for the responsible use of AI and to provide ongoing recommendations.
Executive Order No. 30 requires that any AI technologies used by state agencies—including those provided by outside vendors—comply with the new AI standards for procurement and use. In practice, this requires companies supplying AI software or services to Virginia agencies to meet certain requirements with regard to transparency, risk mitigation, and data protection defined by VITA’s standards. Those standards draw on widely accepted AI ethical principles (for instance, requiring guardrails against bias and privacy harms in agency-used AI systems). Executive Order No. 30 thus indirectly extends some AI governance expectations to private-sector businesses operating in Virginia via contracting. Companies serving public-sector clients in Virginia may want to monitor the state’s AI standards for anticipated updates in this quickly evolving field.
Looking Forward
Had HB 2094 become law, Virginia would have joined Colorado as one of the first states with a broad AI statute, potentially adding a patchwork compliance burden for firms operating across state lines. In the near term, however, Virginia law will not explicitly require the preparation of algorithmic impact assessments, preparation and implementation of new disclosure methods, or the formal adoption of the prescribed risk-management programs that HB 2094 would have required.
Nevertheless, companies in Virginia looking to embrace or expand their use of AI are not “off the hook,” as general laws and regulations still apply to AI-driven activities. For example, antidiscrimination laws, consumer protection statutes, and data privacy regulations (such as Virginia’s Consumer Data Protection Act) continue to govern the use of personal information (including through AI) and the outcomes of automated decisions. Accordingly, if an AI tool yields biased hiring decisions or unfair consumer outcomes, companies could face liability under existing legal theories regardless of Governor Youngkin’s veto.
Moreover, businesses operating in multiple jurisdictions should remember that Colorado’s AI law is already on the books and that similar bills have been introduced in many other states. There is also ongoing discussion at the federal level about AI accountability (through agency guidance, federal initiatives, and the National Institute of Standards and Technology AI Risk Management Framework). In short, the regulatory climate around AI remains in flux, and Virginia’s veto is just one part of a larger national picture that warrants careful consideration. Companies will want to remain agile and informed as the landscape evolves.
Building the Blueprint: The Foundation of South Florida’s Tech Evolution Part 3 [Podcast]
In part 3 of the Building the Blueprint podcast miniseries, host Jaret Davis, Senior Vice President of Greenberg Traurig and Co-Managing Shareholder of the Miami office, is joined by GT Shareholder Joshua Forman, who leads the firm’s team in Miami focused on digital infrastructure and data centers. Together, they explore the critical role of data centers in Miami’s tech growth and their broader impact on the global tech ecosystem.
As demand for data continues to grow – fueled by AI, IoT, and remote work – Miami and South Florida are poised to play a key role in this evolving industry. This episode offers insight into the advancements shaping the future of data centers and examines the intersection of tech infrastructure, investment, and innovation in one of the fastest-growing tech hubs in the U.S. Tune in!
Privacy Tip #437 – 23andMe Files for Bankruptcy—What to Do If It Has Your Genetic Information
Genetic testing company 23andMe has filed for Chapter 11 bankruptcy protection, and its CEO has resigned. It is seeking to sell “substantially all of its assets” through a reorganization plan that will have to be approved by a federal bankruptcy judge.
Mark Jensen, Chair and member of the Special Committee of the Board of Directors, stated: “We are committed to continuing to safeguard customer data and being transparent about the management of user data going forward, and data privacy will be an important consideration in any potential transaction.” The company has also stated that the buyer must comply with applicable law in using the data.
That said, privacy professionals are concerned about the sale of the data in 23andMe’s possession, including the sensitive genetic information of over 15 million people. People often assume that the information is protected by HIPAA or the Genetic Information Nondiscrimination Act, but as my students know, neither applies to genetic information collected and used by a private company. State laws may apply, and consumers could be offered the ability to request the deletion of their data.
The company has said that customers can delete their data and terminate their accounts. The California Attorney General “urgently” suggests that consumers request the deletion of their data and destruction of the genetic materials in its possession and offers a step-by-step guide on how to do so.
Apparently, so many people have followed the suggestion that the 23andMe website crashed. The site is now back up and running, so 23andMe customers may wish to log in and request the deletion of their data and termination of their accounts.
AI Governance: Steps to Adopt an AI Governance Program
There are many factors to consider when assisting clients with assessing the use of artificial intelligence (AI) tools in an organization and developing and implementing an AI Governance Program. Although adopting an AI Governance Program is a no-brainer, no single form of governance program fits every organization. Each organization has to evaluate how it will use AI tools, whether (and how) it will develop its own, whether it will allow third-party tools to be used with its data, the associated risks, and what guardrails and guidance to provide to employees about their use.
Many organizations don’t know where to start when thinking about an AI Governance Program. I came across a guide that I thought might be helpful in kickstarting your thinking about the process: Syncari’s “The Ultimate AI Governance Guide: Best Practices for Enterprise Success.”
Although the article scratches the surface of how to develop and implement an AI Governance Program, it is a good start to the internal conversation regarding some basic questions to ask and risks that may be present with AI tools. Although the article mentions AI regulations, including the EU AI Act and GDPR, it is important to consider the state AI regulations being introduced and passed daily in the U.S. In addition, when considering third-party AI tools, it is important to question the third party about how it collects, uses, and discloses company data, and whether company data is being used to train the AI tool.
Now is the time to start discussing how you will develop and implement your AI Governance Program. Your employees are probably already using AI tools, so assess the risks and put some guardrails around their use.
Virginia Governor Vetoes High-Risk Artificial Intelligence Developer and Deployer Act
On March 24, 2025, Virginia Governor Glenn Youngkin vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”), which had been passed by the Virginia legislature. The Act would have imposed accountability and transparency requirements with respect to the development and deployment of “high-risk” AI systems. In explaining the veto, the Governor stated that the Act’s “rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” The Governor also noted that Virginia’s existing laws “protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more,” and that an executive order issued by the Governor’s Administration had “establish[ed] safeguards and oversight for AI use, and assembl[ed] a highly skilled task force comprised of industry leading experts to work closely with [the] Administration on key AI governance issues.”
FERC’s Co-Location Conundrum: Balancing Grid Reliability with Data Center Development as PJM’s Tariff Faces Scrutiny
Key Points
FERC’s Order Signals Transformative Change While Navigating Jurisdictional Limits: While FERC recognizes the urgent need to address co-location arrangements (particularly given the AI/data center boom), the intricate interplay of federal and state authority means any solution must carefully navigate jurisdictional boundaries. The Order reflects FERC’s attempt to maximize its impact within the framework of the Federal Power Act’s cooperative federalism.
Cost Allocation and Reliability Concerns Drive Reform: FERC’s primary concerns center on preventing cost-shifting to other ratepayers and ensuring grid reliability. The current Tariff’s lack of clear provisions for ancillary services, different co-location configurations, and sudden load shifts poses risks that FERC seeks to address through this proceeding.
Industry Response Suggests High Stakes for Multiple Stakeholders: The approximately 100 intervention motions filed indicate that stakeholders view this proceeding as potentially industry-reshaping. The outcome will likely influence how data center developers approach power supply strategies and could affect the viability of co-location as a solution to grid connection challenges.
Last year, the Federal Energy Regulatory Commission (“FERC”) convened a technical conference to discuss issues related to large loads being co-located with generating facilities (Docket No. AD24-11-000), which we summarized in the following client alert. In a related development late last year, Constellation Energy Generation, LLC (“Constellation”) filed a complaint against PJM Interconnection, LLC (“PJM”) pursuant to Section 206 of the Federal Power Act (“FPA”), arguing that PJM’s Open Access Transmission Tariff is “unjust, unreasonable and unduly discriminatory” due to the absence of guidance on co-located configurations where the generating asset is completely isolated from the grid (Docket No. EL25-20-000).
The importance of this topic is underscored by nearly daily announcements of new data center projects, such as the $500 billion proposed investment on AI infrastructure by OpenAI, SoftBank and Oracle highlighted by President Trump on the day after his inauguration. The massive power demands from both training and inference applications of AI are anticipated to place significant strains on power grids, while grid operators are contending with lengthy interconnection queues and insufficient buildout of transmission networks. In order to secure power supply for their projects, many data center developers are exploring co-location opportunities with new and existing generating facilities.
On February 20, 2025, FERC issued an order (the “Order”) consolidating the two dockets mentioned above and instituting show cause proceedings under Section 206 of the FPA, finding that PJM’s tariff appears to be unjust, unreasonable, unduly discriminatory or preferential (Docket No. EL25-49-000). FERC ordered PJM and the relevant transmission owners to either:
“show cause as to why the Open Access Transmission Tariff, the Amended and Restated Operating Agreement of PJM, and Reliability Assurance Agreement Among Load Serving Entities in the PJM Region (the “Tariff”) remains just and reasonable and not unduly discriminatory or preferential without provisions addressing with sufficient clarity or consistency the rates, terms and conditions of service that apply to co-location arrangements; or
explain what changes to the Tariff would remedy the identified concerns if [FERC] were to determine that the Tariff has in fact become unjust and unreasonable or unduly discriminatory or preferential and, therefore, proceeds to establish a replacement Tariff.”
On March 24, 2025, PJM and the transmission owners filed their responses to the Order; both PJM and a joint answer submitted on behalf of a significant majority of the transmission owners argued that the Tariff remains just and reasonable. The transmission owners urged FERC to clarify that co-located load served by generation interconnected to the transmission or distribution system is network load for the purposes of the Tariff. PJM presented a number of different configurations under the existing Tariff, while noting jurisdictional concerns based on shared federal/state jurisdiction and differences in regulation among the states.
Interested parties may respond with comments by April 23, 2025. Approximately 100 entities have filed motions to intervene, which is indicative of the significance industry players are placing on these proceedings and FERC’s ultimate resolution.
FERC’s Analysis
Although the Order relates specifically to the complaint initiated by Constellation under Section 206 of the FPA, FERC is clearly conscious of many policy considerations that need to be addressed in the context of co-located large load configurations.
Jurisdiction. Although FERC has indicated that it is aware of the nationwide importance of co-located large load configurations, particularly with respect to the national security interests identified in facilitating the rapid buildout of AI infrastructure, it is also plainly conscious of its jurisdictional limitations. The Order highlights that the FPA only allocates jurisdiction to FERC for transmission and wholesale sales of electricity in interstate commerce, whereas retail sales, intrastate transmission and wholesaling, as well as siting authority, are all subject to state jurisdiction. Accordingly, there are jurisdictional limits to how transformative FERC’s guidance can be on this issue. The Order invites comments on when and under what circumstances co-located load should be considered as interconnected to the transmission system in interstate commerce. Specifically, FERC asks whether fully isolated load should be understood as being connected to the transmission system, and if so, what characteristics would result in such a determination.1
Tariff Provisions. The Order makes a determination that the Tariff is “unjust and unreasonable or unduly discriminatory or preferential” due to its lack of clarity and consistency regarding rates and terms of use. For example, FERC comments that the Tariff does not account for costs associated with ancillary services that the co-located generator would be unable to provide, such as black start capabilities and load following services. There is also significant discussion about how the Tariff does not account for different co-location configurations, and specifically, how those differences may impact overall costs. Due to the ambiguities in the Tariff, FERC seems to be acutely concerned with the potential for parties to a co-location arrangement to shift costs to other ratepayers.
Reliability and Resource Adequacy. The concerns raised in the Order with respect to reliability and resource adequacy were identified and thoroughly discussed at the technical conference. For example, in the event a generator co-located with a large load customer temporarily goes offline, the large load customer could suddenly be drawing from the grid, thus potentially impacting overall network performance. Grid operators would be better placed if they had the ability to model such scenarios. Further, concerns relating to the removal of existing generating assets from capacity markets, and thus increasing rates of other consumers (at least in the short-term), were raised by many participants to the technical conference and restated in the Order. On the other hand, FERC notes that many of the concerns raised by serving large load customers would be present even if the customer is treated as network load rather than in a behind-the-meter configuration.
Questions. The Order stipulates that PJM and the transmission owners must include responses to a number of questions relating to: 1) transmission service, 2) ancillary or other wholesale services, 3) interconnection procedures and cost allocation, 4) the PJM capacity market, reliability and resource adequacy, and 5) general and miscellaneous questions which do not fall under any of these headings. The responses to these questions will assist FERC in framing its analysis of how revisions can be made to the Tariff to ensure it is just, reasonable and not unduly discriminatory.
Final Thoughts
Electricity infrastructure is already being built out at a rapid pace in the United States. This trend is set to continue, particularly to meet the needs of increased electrification across numerous sectors such as industry and transportation, along with the anticipated expansion of the data center fleet. Developers have pursued co-located arrangements as a potential means of reducing time frames for getting projects online. FERC’s guidance will result in greater certainty for developers on the costs and timing associated with co-location, which should clarify the role of co-location in the ongoing data center build-out.
1 Key questions about jurisdiction, cost allocation, and reliability turn on what it means for load to be isolated from the grid. For example, in the Complaint, Constellation describes “Fully Isolated Co-Located Loads” as behind-the-meter load with system protection facilities designed to ensure power does not flow from the grid to the load, while the PJM transmission owners refer to “fully isolated” load as load where both the load and the generator serving it are islanded from the transmission and distribution systems. PJM saw the nuances of different co-location arrangements as risking the introduction of regulatory gaps between federal and state jurisdiction.
Human Authorship Required: AI Isn’t an Author Under Copyright Act
The US Court of Appeals for the District of Columbia upheld a district court ruling that affirmed the US Copyright Office’s (CO) denial of a copyright application for artwork created by artificial intelligence (AI), reaffirming that human authorship is necessary for copyright registration. Thaler v. Perlmutter, Case No. 23-5233 (D.C. Cir. Mar. 18, 2025) (Millett, Wilkins, Rogers, JJ.)
Stephen Thaler, PhD, created a generative AI system that he named the Creativity Machine. The machine created a picture that Thaler titled, “A Recent Entrance to Paradise.” Thaler applied to the CO for copyright registration for the artwork, listing the Creativity Machine as the author and Thaler as the copyright owner.
The CO denied Thaler’s application because “a human being did not create the work.” Thaler twice sought reconsideration of the application, which the CO denied because the work lacked human authorship. Thaler subsequently sought review in the US District Court for the District of Columbia, which affirmed the CO’s denial of registration. The district court concluded that “[h]uman authorship is a bedrock requirement of copyright.” Thaler appealed.
The DC Circuit reaffirmed that the Creativity Machine could not be considered the author of a copyrighted work. The Copyright Act of 1976 mandates that to be eligible for copyright, a work must be initially authored by a human being. The Court highlighted key provisions of the Copyright Act that only make sense if “author” is interpreted as referring to a human being. For instance:
A copyright is a property right that immediately vests in the author. Since AI cannot own property, it cannot hold copyright.
Copyright protection lasts for the author’s lifetime, but machines do not have lifespans.
Copyright is inheritable, but machines have no surviving spouses or heirs.
Transferring a copyright requires a signature, and machines cannot provide signatures.
Authors of unpublished works are protected regardless of their nationality or domicile, yet machines do not have a domicile or national identity.
Authors have intentions, but machines lack consciousness and cannot form intentions.
The DC Circuit concluded that the statutory provisions, as a whole, make human activity a necessary condition for authorship under the Copyright Act.
The DC Circuit noted that the human authorship requirement is not new, referencing multiple judicial decisions, including those from the Seventh and Ninth Circuits, where appellate courts have consistently ruled that authors must be human.
Practice Note: Only humans, not their tools, can author copyrightable works of art. Images generated autonomously by AI are not eligible for copyright. However, works created by humans who used AI may be eligible for copyright depending on the circumstances, how the AI tool operates, and the degree to which the AI tool was used to create the final work. Authors whose works are assisted by AI should seek the advice of counsel to determine whether their works are copyrightable.
Oregon’s Privacy Law: Six Month Update, With Six Months to End of Cure Period
Oregon’s Attorney General released a new report this month, summarizing the outcomes since Oregon’s “comprehensive” privacy law took effect six months ago. A six-month report isn’t new: Connecticut released a six-month report in February of last year to assess how consumers and businesses were responding to its privacy law.
The report summarizes business obligations under the law, and highlights differences between the Oregon law and other, similar state laws. It also summarizes the education and outreach efforts conducted by the state’s Department of Justice. This includes a “living document” set of FAQs answering questions about the law. The report also summarizes the 110 consumer complaints received to date, and the enforcement actions the Privacy Unit has taken since the law went into effect. On the enforcement side, Oregon reports that it has initiated and closed 21 privacy enforcement matters, with companies taking prompt steps to cure the issues raised.
As a reminder, these actions are being brought during the law’s “cure” period, which gives companies a 30-day period to fix violations after receiving the Privacy Unit’s notice. The Oregon cure provision sunsets on January 1, 2026. Other states with a cure period are Delaware, Indiana, Iowa, Kentucky, Maryland, Minnesota, Montana, Nebraska, New Hampshire, New Jersey, Tennessee, Texas, Utah, and Virginia. (Of these, the cure periods in Minnesota, New Hampshire, New Jersey, Oregon, Delaware, Maryland, and Montana will expire, with expiration dates ranging from December 31, 2025 (Delaware) to April 1, 2027 (Maryland).) States without a cure period, or where the cure period has expired, are California, Colorado, Connecticut, and Rhode Island. For an overview of US state “comprehensive” privacy laws, visit our tracker.
Common business deficiencies identified by Oregon in the enforcement notices included:
Disclosure issues: This included not giving consumers a notice of their rights under the law. Also of concern was insufficient information for Oregon consumers about their rights under the law, specifically the list of third parties to whom their data has been sold.
Confusing privacy notices: By way of example, Oregon pointed to notices that name some states in the “your state rights” section of the privacy policy but do not specifically name Oregon, calling them confusing. This, the report posits, gives consumers the impression that privacy rights are only available to people who live in those named states.
Lacking or burdensome rights mechanisms: In other words, not including a clear and conspicuous link to a webpage enabling consumers to opt out or exercise their privacy rights, or imposing inappropriately difficult authentication requirements.
Putting it into Practice: This report is a reminder to companies to look at their disclosures around consumer rights. It also sets out the state’s expectations around drafting notices that are “clear” and “accessible” to the “average consumer.” Companies have six months before the cure period in Oregon sunsets.
CIBC Signs Voluntary Code of Conduct for Responsible AI

CIBC Sets a New Standard for Responsible AI Adoption. The Canadian Imperial Bank of Commerce (CIBC) has solidified its position as a leader in responsible artificial intelligence (AI) use by signing the federal government’s voluntary code of conduct for generative AI. As the first major Canadian bank to adopt this code, CIBC is demonstrating its […]
D.C. Circuit Denies Copyright to AI Artwork – What Humans Have and Artificial Intelligence Does Not
Can a non-human machine be an author under the Copyright Act of 1976? In a March 18, 2025 precedential opinion, a D.C. Circuit panel affirmed prior determinations from the D.C. District Court and the Copyright Office that an original artwork created solely by artificial intelligence (AI) is not eligible for copyright registration, because human authorship is required for copyright protection.
Dr. Stephen Thaler created a generative AI that he calls the “Creativity Machine,” which made a picture that Thaler titled “A Recent Entrance to Paradise.” In the copyright registration application to the U.S. Copyright Office, Thaler listed the Creativity Machine as the artwork’s sole author and himself as just the work’s owner.
Writing for the panel, D.C. Circuit Judge Patricia A. Millett opined that “the Copyright Act requires all work to be authored in the first instance by a human being,” including those who make work for hire. The court noted the Copyright Act’s language compels human authorship as it limits the duration of a copyright to the author’s lifespan or to a period that approximates how long a human might live. “All of these statutory provisions collectively identify an ‘author’ as a human being. Machines do not have property, traditional human lifespans, family members, domiciles, nationalities, mentes reae, or signatures,” the court concluded.
In rejecting Thaler’s copyright claim of entirely autonomous AI authorship, the court did not consider whether Thaler is entitled to authorship on the basis that he made and used the Creativity Machine, because Thaler waived such argument in the underlying proceedings. The court also declined to rule on whether or when an AI creation could give rise to copyright protection. However, citing the guidance from the Copyright Office, the court noted that whether a work made with AI is registrable depends on the circumstances, particularly how the AI tool operates and how much it was used to create the final work. In general, a string of recent rulings from the Copyright Office concerning “hybrid” AI-human works have allowed copyright registration as to the human-created portions of such works.
The D.C. Circuit’s text-based statutory analysis and holding parallel the counterpart U.S. patent doctrine that human inventorship is required for patent protection, announced in Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), cert. denied, and reflected in the USPTO’s Inventorship Guidance for AI-Assisted Inventions issued February 12, 2024.
Underlying the judicial rulings requiring human authorship and inventorship for copyright and patent protection is the idea that only humans can “create” art or conceive an invention – that there is something special and important about human creativity, which intellectual property law aims to protect. This underpinning of human creativity in the authorship and inventorship requirements was addressed in detail in a White Paper published last summer by Mammen and a multidisciplinary group of scholars at the University of Oxford. The White Paper explains that creativity comprises three core elements: (a) an external component (expressed ideas or made artifacts that reflect novelty, value, and surprisingness); (b) a mental component (a person’s thought process, i.e., the interplay of divergent (daydreaming) thinking, convergent (task-focused) thinking, and recognition of salience (relevance)); and (c) a social context (for example, what society considers new, valuable, and surprising, and thus “creative”). IP doctrines require all three core elements. Generative AI does not presently exhibit the equivalent of the mental component that is key to human creativity.
In fact, as the White Paper discusses, there is some evidence that generative AI can negatively affect human creativity itself. First, using AI to produce creative products emphasizes speed and instant answers, and encourages passive consumption of those answers, rather than self-reflection or toggling between convergent and divergent thinking, which is key to creativity. Second, humans interacting with AIs tend to lose confidence in their own creative skills and begin to restrict the range of their own creative repertoire in favor of creating “mash-ups” of what the AI provides.
In analyzing the causal impact of generative AI on the production of short stories, in a study where some writers obtained story ideas from a large language model (LLM), Doshi and colleagues reported that access to generative AI made stories more creative, better written, and more enjoyable for less creative writers, while AI assistance had no effect for highly creative writers. However, stories produced after using an LLM for just a few minutes exhibited significantly reduced diversity of ideas, leading to greater homogeneity among the stories compared with stories written by humans alone. Thus, generative AI augmented less creative individuals’ creativity and the quality of their work, but decreased collective novelty and diversity among writers, suggesting that use of generative AI may degrade collective human creativity.
To be sure, the questions raised by Dr. Thaler and DABUS are testing the boundaries of, and rationales for, existing IP doctrines. Dr. Thaler argued that judicial opinions from the Gilded Age could not settle the question of whether computer-generated works are copyrightable today. But as reflected in the White Paper and affirmed by the courts, it is not enough merely to suggest that the outputs of generative AI warrant IP protection because they are “just as good as” human-created outputs that are entitled to protection. Moreover, in most instances of AI-created works or inventions, a human factor appears to be present to some extent – in creating the AI, desiring certain goals and outputs, commanding the AI to generate a goal-oriented output, evaluating and selecting the AI-generated output, modifying the AI-generated output, or owning the AI for the purpose of using the AI-generated output. As the capabilities of AI continue to evolve, the border between human creativity and AI capability may blur further, posing an evolving set of challenges at the frontier of IP law.
The Trump Factor, Jobs, Laws, and the Workplace [Podcast]
What does the new administration mean for the future of employment law? As new orders and laws are signed at the federal level, The Employment Strategists David T. Harmon and Mariya Gonor break down what policies have been removed or revised, including:
Stay-or-pay provisions
Student-athlete rights
Rollbacks of DE&I initiatives
Amended pregnancy accommodations, including abortion protections
Deprioritizing discrimination policies around race, gender, and sexual orientation
The use of AI in hiring
Whether you’re an employer trying to stay compliant with state and federal mandates or an employee wanting to better understand your rights, this conversation is one you won’t want to miss.