No Infringement of Nonfiction Work by Makers of Tetris Film – Court Uses Wrong Analysis to Reach the Right Result

Ackerman v. Pink asks how much of a written history can be claimed as proprietary by the author of that history. The answer: Not much. It is black-letter law that the author of a non-fiction work cannot prevent others from using historical facts in some other work – even if those historical facts are known only because of the author’s independent research. Copyright covers only the author’s expression of the research, not the research itself.
Ackerman is interesting because the analysis the court uses to separate historical fact from the author’s expression is misplaced – even though the court appears to have reached the right conclusion.
Daniel Ackerman, the author, sued several entities claiming that the film Tetris (the “Film”) infringed his copyright in a book he wrote about the video game Tetris – The Tetris Effect: The Game that Hypnotized the World (the “Book”). The Book purports to be a non-fiction history of the development of the video game Tetris. The Film also purports to tell the same story – albeit with some dramatic embellishments. 
The opinion by district judge Katherine Polk Failla recites the relatively vanilla proposition that historical facts cannot be claimed by the author of a non-fiction work of authorship. So far, so good. The problem arises where the court compares certain purportedly historical scenes in the Book with how they are portrayed fictionally in the Film in an effort to separate the facts from the expression.
The court recounts several factual differences between the portrayal of historical scenes in the Book and the dramatization in the Film to hold that the expression of the former is not copied by the latter. For example, the Book describes one of the characters pitching Tetris to Sega in Japan. In the Film, however, there are several differences, including that the scene takes place in Seattle. Given that the historical facts described in the Book are not protected by copyright, whether the Film changed some of those facts should be immaterial to the court’s analysis. To the extent the author of the Book described certain scenes exactly as they happened, the Film was within its rights to portray those same facts as they happened. Nonetheless, the court’s analysis, to some degree, depends on these factual differences to hold that the Film does not copy the Book author’s expression.
The court went on to compare the “total concept and overall feel of the two works” by focusing on the organization and focus of the Book and the Film. This framework does seem to separate the Book author’s expression from the unprotected historical facts. For example, the court noted that “the Book jumps through time to provide as much background and context as possible for the people and events it portrays, the Film proceeds largely chronologically.” The court, however, could find no evidence that the Film somehow misappropriated the way the author “selected, coordinated, and arranged the facts in his Book.”
While Judge Polk Failla reached the right result (in our view), her focus on the differences in the facts unnecessarily muddies the waters on separating protected expression from the public-domain material underlying that expression. A subsequent author who wishes to write a book or make a movie about an historical event should not be required to change the historical facts to avoid a finding that they copied the original author’s expression – even though the makers of the Film chose to do so here.

If You Are Uptight About AI, This May Relax You

While AI has many people uptight, Aescape has developed technology to help you relax – AI robotic massage. Aescape touts that it combines the timeless art of massage with robotics and artificial intelligence to deliver an exceptional massage experience every time. The “Aertable” (i.e., the massage table) has bolsters, headrests, and armrests that are all adjustable to provide a customized fit during each session. It also provides continuous feedback, which allows for real-time adjustments to optimize comfort. The “Aerscan” system captures 1.2 million data points, precisely mapping your body’s muscle structure to create a unique blueprint for a highly personalized massage experience. “Aerpoints” replicate the seven touch techniques of a skilled therapist, simulating the knuckle, thumb, cupped hand, blade of hand, palm, forearm, and elbow. The “Aerview” provides personal control so you can adjust the pressure, manage the music, or customize the display to create a session tailored to your preferences, needs and mood. The company has also developed “Aerwear,” a high-compression performance fabric that enhances body detection for the system and allows the Aerpoints to move smoothly over your body; wearing it is mandatory during the massage. The tables are equipped with advanced safety features, including force sensors and pause and emergency-stop functions, to prevent or abate issues if things go wrong.
Despite mixed feelings among some therapists about this technology, the $19B massage industry faces significant challenges, at least some of which can be addressed by AI robotic massage. Among the challenges: delivering a consistent, high-quality experience (client satisfaction can vary from one session to the next), a shortage of skilled therapists in some locations, and therapists’ limited working hours. AI robotic massage can address many of these issues through its consistency and 24/7 availability.
Aescape seems to be gaining traction. After launching with Equinox in New York City, Aescape reported exceptional consumer adoption with high utilization and repeat rates, driving a notable spike in gym memberships. This led to a national expansion to 60 Equinox locations. Aescape has also had success with leading hospitality brands (e.g., luxury hotels) and some NBA and NFL teams.
By now many questions are likely going through your mind. Can robots really replace the “human touch” aspect of massages? Will this technology replace massage therapists, leading to job loss? Can AI assist human massage therapists? It is beyond the scope of this post to cover all of these and other valid questions. But if you are interested, these topics are well-covered in the following articles – Will AI Impact the Employment of Massage Therapists? and 10 Ways Massage Therapists Can Use AI.
Aescape is a classic example of an application of AI and robotics that will interact with humans. We will see many more such AI robotic personal services applications from this point forward. While Aescape seems to have anticipated some of the potential problems that can arise, any AI robotic application that interacts with humans has the potential for a variety of legal issues. The following are some of the general legal issues that may be relevant to AI robotic applications that interact with humans. But the actual issues will vary by application.
Liability and malpractice: One often-raised concern with autonomous applications is their safety and reliability. Despite best efforts to anticipate problems and provide failsafes, technology malfunctions can cause physical harm, raising potential liability issues. Will harmed clients have a claim in the nature of “malpractice,” product liability, or both? To complicate matters further, for harms resulting from AI-assisted human massage, how should liability be allocated between the technology provider and the therapist? In some cases, it may be difficult to obtain insurance for such applications, especially for new, unproven AI technology. If you are a location (spa, health club, etc.) deploying the technology, it would be wise to ensure you have an effective indemnity.
Privacy and data protection: To optimize the personalization of AI-driven applications, a lot of personal data is needed. Aescape’s system claims to scan and store detailed body data, mapping over 1 million 3D data points of a person’s body. Massage therapists often inquire whether the client has any injuries, recent surgeries or medical conditions. More generally, AI robotic massage technology can employ a database to analyze a client’s physical condition, medical history, preferences and other personal information to create a customized massage tailored to their individual needs. All of this raises privacy concerns about how this sensitive personal information, including information typically covered by HIPAA, is stored, used, and protected. From a practical perspective, some clients may be less willing to share their sensitive personal information and medical history with a machine. While privacy is always important, there may be unique considerations in crafting a privacy policy in these cases, and it will be prudent to prioritize transparency and obtain explicit consents from clients before incorporating AI into their sessions. There may be legal questions about whether clients are fully informed about the nature of the robotic massage, the data collected and how it is used, and whether clients can provide informed consent, especially given the novelty of the technology.
Professional licensing: Massage therapists require licenses. Will AI systems need to be “licensed” in a manner similar to human massage therapists? If so, how would this be implemented? Or will certain jurisdictions prohibit unlicensed, non-human providers altogether? And while most massages are not deemed to be medical treatment, some can be. To the extent AI robotic massage crosses that line, it could involve the unauthorized practice of medicine.
Regulatory compliance: As a new technology, AI robotic massage systems may face challenges in meeting existing regulations for massage therapy and medical devices, where applicable. There could be a need for new regulatory frameworks to address this emerging field.
Consumer protection and marketing: There could be legal issues related to ensuring the safety and effectiveness of the robotic massage systems, as well as truthful marketing of their capabilities. The FTC has warned companies about overstating the capabilities of AI technology.
Intellectual Property: As with any new technology, there may be patent disputes or copyright issues related to the AI algorithms and robotic designs used in these systems. It is prudent to work with IP counsel to protect your IP and assess whether you might be infringing on any third-party IP.
These are some of the potential issues in the complex legal landscape that AI robotic applications may face. Other issues will undoubtedly arise.

TIME OUT!: NFL Team Tampa Bay Buccaneers Hit With the Latest in a Series of Time-Restriction TCPA Class Actions

So TCPAWorld has been reporting on the clear trend of TCPA class action suits against companies (primarily retailers) that deploy text clubs – particularly suits arising out of the timing limitations in the TCPA and state statutes.
Well, the NFL’s Tampa Bay Buccaneers are the latest to fall victim to this trend with a new TCPA class action filed in Florida against the team’s ownership today.
Plaintiff Andrew Leech claims he was texted by the Buccaneers at 9:24 pm his time – he claims to live in Palm Beach County, Florida, so not sure what happened there.
Plaintiff seeks to represent a class consisting of:
All persons in the United States who from four years prior to the filing of this action through the date of class certification (1) Defendant, or anyone on Defendant’s behalf, (2) placed more than one marketing text message within any 12-month period; (3) where such marketing text messages were initiated before the hour of 8 a.m. or after 9 p.m. (local time at the called party’s location)
Notably, the Plaintiff does not say whether he agreed to be texted by the Buccaneers to begin with. As I have previously reported, the TCPA’s timing regulations likely do NOT apply to consented calls, but there is very little case law on the issue.
The case is brought by the Law Offices of Jibrael S. Hindi – the same firm behind a number of similar timing cases. (He is apparently a Dolphins fan…)
Again, until this trend abates, companies deploying SMS need to be EXTREMELY cautious to ensure timing limitations are complied with!
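For what it is worth, the timing check itself is trivial to implement – the hard part is knowing the called party’s actual local time, since area codes do not reliably indicate location. Here is a minimal, illustrative Python sketch; the function name, the subscriber record, and the send step are our own hypothetical assumptions, not any platform’s actual API:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# TCPA quiet hours: no marketing texts before 8 a.m. or after 9 p.m.,
# local time at the called party's location.
ALLOWED_START_HOUR = 8   # 8:00 a.m., inclusive
ALLOWED_END_HOUR = 21    # 9:00 p.m., exclusive

def within_tcpa_window(recipient_tz: str) -> bool:
    """Return True if a text sent now lands inside the 8 a.m.-9 p.m.
    window in the recipient's local timezone (an IANA name such as
    "America/New_York"). Determining the correct timezone for each
    subscriber is the hard part and is not solved here."""
    local_now = datetime.now(ZoneInfo(recipient_tz))
    return ALLOWED_START_HOUR <= local_now.hour < ALLOWED_END_HOUR

# Gate every marketing send on the check (hypothetical names):
# if within_tcpa_window(subscriber.timezone):
#     send_marketing_text(subscriber)
```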

Social Casino Sweepstakes Model is Under Fire – What Game Companies, Payment Processors and App Stores Need to Know

Social casino games remain incredibly popular and profitable. This success has drawn attacks from plaintiffs’ class action lawyers in the form of gambling loss recovery lawsuits and other consumer-based actions. Some have been successful, mostly in Washington state, netting plaintiffs hundreds of millions of dollars. This has created an incentive for more lawsuits. Some social casino game companies have evolved their business model to include a dual-currency sweepstakes model. This model has also drawn a substantial number of lawsuits and a call to action by the American Gaming Association (AGA). Last August, the AGA published a position paper imploring gaming regulators and state Attorneys General to investigate companies or platforms that offer games under the “sweepstakes” model to determine whether these operators are in compliance with their respective laws and regulations, and to take appropriate action if not. Some regulators have responded by initiating actions against both the game companies and payment processors. Numerous private lawsuits have also been filed against game companies, and some have even included payment processors and app stores. In response, some social casino game operators have ceased operations in certain states. Because many legal misconceptions surround this model, game companies, payment processors and app store operators need to understand the evolving legal risks associated with the social casino sweepstakes model, and the steps they can take to mitigate those risks. This post addresses those issues.
For more information, see here.

Some Implications of the EU AI Act on Video Game Developers

This blog post provides a brief overview of the impact on video game developers of Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonized rules on artificial intelligence (“AI Act”). More and more developers are integrating AI systems into their video games, including to generate backgrounds, non-player characters (NPCs), and histories of objects to be found in the game. Some of these use cases are regulated under specific circumstances and create obligations under the AI Act.
The AI Act entered into force on 1st August 2024 and will gradually apply over the next two years. The application of the provisions of the AI Act depends predominantly on two factors: the role of the video game developer, and the AI risk level.
The role of the video game developer
Article 2 of the AI Act delimits the scope of the regulation, specifying who may be subject to it. Video game developers are most likely to fall under two of the categories it identifies:

Providers of AI systems, i.e., those who develop an AI system and place it on the EU market or put it into service under their own name or trademark, whether for payment or free of charge (Article 3(3) AI Act).
Deployers of AI systems, i.e., those who use an AI system in the course of a professional activity, provided they are established in the EU or have users of the AI system based in the EU (Article 3(4) AI Act).

Thus, video game developers will be considered (i) providers if they develop their own AI system, and (ii) deployers if they integrate an existing AI system made by a third party into their video games.
The AI risk level and related obligations
The AI Act classifies AI systems into four categories based on the risk associated with them. Obligations on economic operators vary depending on the level of risk posed by the AI systems used:

AI systems with unacceptable risks are prohibited (Article 5 AI Act). In the video game sector, the most relevant prohibitions concern the provision or use of AI systems that deploy manipulative techniques or exploit people’s vulnerabilities and thereby cause significant harm. For example, it is prohibited to use AI-generated NPCs to manipulate players towards increased spending in a game.
High-risk AI systems (Articles 6, 7 and Annex III AI Act) trigger strict obligations for providers and, to a lesser extent, for deployers (Sections 2 and 3 AI Act). The high-risk AI systems most relevant to video games are those that pose a significant risk of harm to the health, safety or fundamental rights of natural persons, given their intended purpose, and in particular AI systems used for emotion recognition (Annex III(1)(c) AI Act). These could, for example, be used to make exchanges between players and NPCs more fluid and natural, producing strong emotions in players, who might feel genuine empathy, compassion, or even anger towards virtual characters.

The list of obligations for providers of high-risk AI systems includes implementing quality and risk management systems; adopting appropriate data governance and management practices; preparing technical documentation; ensuring transparency and providing information to deployers; keeping documentation; ensuring resilience against unauthorized alterations; and cooperating with competent authorities.
Deployers of high-risk AI systems must, among other things, operate the system in accordance with the instructions given, ensure human oversight, monitor the operation of the high-risk AI system, and inform the provider and the relevant market surveillance authority of any incident or any risk to the health, safety, or fundamental rights of persons.

AI systems with specific transparency risks include chatbots, AI systems generating synthetic content or deep fakes, and emotion recognition systems. They trigger more limited obligations, listed in Article 50 AI Act.

Providers of chatbots must ensure that they are developed in such a way that players are informed that they are interacting with an AI system (unless this is obvious to a reasonably well-informed person). Providers of content-generating AI must ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated (see the illustrative sketch below).
Deployers of emotion recognition systems must inform players of the operation of the system and process personal data in accordance with Regulation 2016/679 (GDPR), which applies alongside the AI Act. Deployers of deep-fake-generating AI must disclose that the content has been artificially generated or manipulated.
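By way of illustration only – the AI Act requires machine-readable marking but does not prescribe a particular format (industry standards such as C2PA are one emerging option) – a provider of content-generating AI could tag generated images with provenance metadata. Below is a minimal sketch using the Pillow library; the file names, metadata keys, and model name are hypothetical assumptions:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Load a generated asset (file name is hypothetical).
img = Image.open("npc_portrait.png")

# Attach machine-readable provenance markers as PNG text chunks.
# The key names are illustrative; the AI Act mandates that outputs be
# detectable as artificially generated, not any specific scheme.
meta = PngInfo()
meta.add_text("ai_generated", "true")
meta.add_text("generator", "example-image-model-v1")  # hypothetical model name

img.save("npc_portrait_marked.png", pnginfo=meta)
```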

AI systems with minimal risk are not regulated under the AI Act. This category includes all other AI systems that do not fall into the aforementioned categories.

The European Commission has stated that, in principle, AI-enabled video games face no obligations under the AI Act, although companies may voluntarily adopt additional codes of conduct (see AI Act | Shaping Europe’s digital future). It should be borne in mind, however, that in specific cases such as those described in this section, the AI Act will apply. Moreover, the AI literacy obligation applies regardless of the system’s level of risk, including minimal risk.
The AI literacy obligation
The AI literacy obligation applies from February 2025 (Article 113(a) AI Act) to both providers and deployers (Article 4 AI Act), regardless of the AI’s risk level. AI literacy is defined as the skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness of the opportunities and risks of AI and the possible harm it can cause.
The ultimate purpose is to ensure that video game developers’ staff are able to take informed decisions in relation to AI, taking into account their technical knowledge, experience, education and training, the context in which the AI system is to be used, and the persons or groups of persons on whom the AI system is to be used.
The AI Act does not detail how providers and deployers should comply with the AI literacy obligation. In practice, various steps can be taken to achieve AI literacy:

Determining how and which employees currently use or plan to use or develop AI in the near future;
Assessing employees’ current AI knowledge to identify gaps (e.g. through surveys or quiz sessions);
Providing training activities and materials to the employees using AI, covering AI basics and, at a minimum, the concepts, rules and obligations relevant to their roles.

Conclusion
The regulation of AI systems in the EU potentially has a significant impact on video game developers, depending on the way AI systems are used within particular video games. It is early days for the AI Act, and we are watching this space carefully, particularly as the AI Act evolves to adapt to new technologies.

Spilling Secrets Podcast: Trade Secrets in Hollywood: Lessons from Oscar-Nominated Films [Podcast]

In this episode of Spilling Secrets, Epstein Becker Green attorneys Daniel R. Levy, Aime Dempsey, and George Carroll Whipple, III, explore trade secrets through the lens of Oscar-nominated films, offering insights into protecting sensitive information in today’s competitive landscape.
Whether looking at a magical spellbook from Wicked or groundbreaking architectural designs in The Brutalist, the discussion underscores how trade secrets intertwine with innovation, employee training, and organizational culture. Discover how Hollywood’s biggest stories offer practical lessons for safeguarding your business’s most valuable assets.

Employment Law This Week: Trade Secrets in Hollywood – Lessons from Oscar-Nominated Films [Podcast] [Video]

As featured in #WorkforceWednesday: This week, on our Spilling Secrets podcast series, our panelists dig into trade secrets lessons employers can learn from hit movies.

Indian Music Industry Enters the Global Copyright Debate Over AI

The legal battles surrounding generative AI and copyright continue to escalate with prominent players in the Indian music industry now seeking to join an existing lawsuit against OpenAI, the creator of ChatGPT. On February 13, 2025, industry giants such as Saregama, T-Series, and the Indian Music Industry (IMI) presented their concerns in a New Delhi court, arguing that OpenAI’s methods for training its AI models involve extracting protected song lyrics, music compositions, and recordings without proper licensing or compensation. This development follows a broader trend of copyright holders challenging generative AI companies, as evidenced by similar claims in the U.S. and Europe.
This case was originally filed by Asian News International (ANI), a leading Indian news agency, which alleged that OpenAI had used its copyrighted content without permission to train its AI models. Since then, the lawsuit has drawn interest from music companies, book publishers, and news organizations, all highlighting the alleged economic harm and intellectual property concerns stemming from these practices in India. The proceedings emerge amid a global backlash against the use of copyrighted materials in AI training. In November 2024, GEMA, Germany’s music licensing body, filed a lawsuit against OpenAI, alleging that the company reproduced protected lyrics without authorization. In parallel, lawsuits from authors and publishers in the U.S. have accused OpenAI and other AI platforms of improperly using copyrighted materials as training data.
The unfolding litigation raises critical questions about the boundaries and applicability of ‘fair use’ within the context of AI in the digital age. While OpenAI maintains that its reliance on publicly available data falls within fair use principles, commentators warn that a ruling against the tech giant could set a precedent that reshapes AI training practices not only in India but worldwide—given the global nature of AI development and jurisdiction-specific nuances of copyright law. As courts grapple with these complex issues, both creative industries and the broader tech community are watching closely to understand how emerging precedent and legal frameworks around the world might influence future AI development and deployment.
As legal challenges mount globally, this litigation is another reminder for businesses developing AI models or integrating AI technologies to proactively assess data privacy and sourcing practices, secure appropriate licenses for copyrighted content, and thoroughly review existing agreements and rights to identify any issues or ambiguities regarding the scope of permitted AI use cases. Beyond obtaining necessary licenses, companies should implement targeted risk mitigation strategies, such as maintaining comprehensive records of data sources and corresponding licenses, establishing internal and (where appropriate) external policies that define acceptable AI use and development, and conducting regular audits to ensure compliance. For any company seeking to unlock AI solutions and monetization opportunities while safeguarding its business interests, engaging qualified local legal counsel early in the process is essential for effectively navigating the evolving global patchwork of fair use, intellectual property laws, and other relevant regulations.

Guangdong Higher People’s Court: 107 Million RMB Settlement in Pokémon Copyright and Unfair Competition Case

On February 21, 2025, the Guangdong Higher People’s Court announced a 107 million RMB settlement in favor of The Pokémon Company for copyright infringement and unfair competition. The Pokémon Company had sued Guangzhou Mai Network Technology Co., Ltd., Huo Network Technology Co., Ltd. and others for copyright infringement and unfair competition over the game “Pokémon: Remastered” in December 2021. The Pokémon Company requested 500 million RMB and was awarded 107 million RMB in the first instance at the Shenzhen Intermediate People’s Court. The Guangdong Higher People’s Court then mediated the appeal to reach the current settlement.

The Shenzhen Intermediate People’s Court held in the first instance that core elements of the accused game – its Pokémon characters, game protagonists, maps, and the like – correspond one-to-one with, and are similar to, the corresponding elements of The Pokémon Company’s games. The multiple element systems formed by the combination of game elements are highly similar or even completely consistent, and many numerical system designs are the same. Therefore, the specific story expressions of the accused game and The Pokémon Company’s games are substantially similar, and the accused game infringes the copyright in The Pokémon Company’s games (including the reproduction rights, information network dissemination rights and adaptation rights stipulated in Article 10, paragraph 1, items 5, 12 and 14 of the Copyright Law).

In addition, the Court also determined that the operation and promotion of the game in question violated Article 2 and Article 8, Paragraph 1 of the Anti-Unfair Competition Law, and constituted unfair competition.

The Guangdong Higher People’s Court stated:

Against the backdrop of industrial policies helping Chinese games go global and strengthening the export of Chinese culture, equal protection under the law of the intellectual property rights in Chinese and foreign digital entertainment products is an important aspect of adhering to cultural self-confidence and self-reliance and of building an intellectual property powerhouse. This case involves rights-protection litigation over “Pokémon,” one of the world’s top game and animation IPs; it attracted a high level of social attention, involved a large first-instance damages award, and was tried openly in the second instance, giving it educational and guiding significance for the innovative and standardized development of related industries.

In addition, relevant industries and practitioners should be reminded that copying other people’s game products to gain traction and influence among the relevant player groups, and thereby obtain undeserved commercial benefits, may infringe copyright and other intellectual property rights, and may also constitute unfair competition. Competition in the game product market cannot rest on “copying”; only innovation that deeply empowers elements such as gameplay can truly benefit the healthy, long-term development of the game industry.

The full text of the announcement is available here (Chinese only).

U.S. Senate Advances KOSMA Bill Targeting Social Media Use by Minors

Varnum Viewpoints:
KOSMA Restrictions: The Kids Off Social Media Act (KOSMA) aims to ban social media for kids under 13 and limit targeted ads for users under 17.
Bipartisan Support & Opposition: While KOSMA has bipartisan backing, critics argue it could infringe on privacy and First Amendment rights.
Business Impact: KOSMA could affect companies targeting minors, requiring compliance with new privacy regulations alongside existing laws like COPPA.

While COPPA 2.0 and KOSA are discussed more frequently when it comes to protecting the privacy of minors online, the U.S. Senate is advancing new legislation aimed at regulating social media use by those 17 and under. In early February, the Senate Committee on Commerce, Science and Transportation voted to advance the Kids Off Social Media Act (KOSMA), bringing it closer to a full Senate vote.
KOSMA Restrictions
KOSMA would prohibit children under 13 from accessing social media. Additionally, social media companies would be prohibited from leveraging algorithms to promote targeted advertising or personalized content to users under 17. Further, schools receiving federal funding would be required to limit the use of social media on their networks. The bill would also grant enforcement authority to the Federal Trade Commission and state attorneys general.
Bipartisan Support & Opposition
KOSMA has received bipartisan support, with advocates such as Senator Brian Schatz (D-HI), who introduced the bill in January, citing the growing mental health crisis among minors attributed to social media use. Supporters argue that while existing laws like COPPA protect children’s data, they do not adequately address the considerations of social media because they predate the platforms. However, much like similar state laws that have come before it, KOSMA faces substantial opposition as well. Opponents argue that this type of regulation could erode privacy and impose unconstitutional restrictions on young people’s ability to engage online. Instituting a ban rather than mandating appropriate safeguards, opponents argue, infringes on First Amendment rights.
Business Impact
Although KOSMA only applies to “social media platforms,” that term could be interpreted broadly, potentially sweeping many companies that publish user-generated content into the scope of KOSMA’s restrictions. KOSMA identifies specific types of companies that would be exempt from the definition of social media platforms, such as teleconferencing platforms or news outlets. If KOSMA were to go into effect, companies across the country that knowingly collect data from minors or target them with personalized content or advertising would face an additional layer of regulatory consideration when assessing their privacy practices pertaining to the processing of minors’ data – on top of existing federal and state laws.

NLRB Acting GC: Student-Athletes Are Not Employees

On February 18, 2025, National Labor Relations Board Acting General Counsel William Cowen rescinded a September 2021 memorandum in which former Board General Counsel Jennifer Abruzzo declared college athletes should be considered employees under the National Labor Relations Act. This was one of many memoranda he rescinded that had been issued by his Biden-administration predecessor.
Acting General Counsel Cowen’s withdrawal of the memorandum is the latest in a series of defeats for pro-employee advocates who had hoped to designate collegiate student-athletes as “employees” under the Act.
The first was the December 2024 withdrawal of an unfair labor practice charge filed by the National College Players Association (NCPA) against the NCAA, the Pac-12 Conference, and a private university in the Los Angeles area. The NCPA’s executive director stated the charge had been withdrawn considering the rise of “name, image, and likeness” (NIL) payments to players, as well as the shift in attitude on the subject under the new Trump Administration.
The second blow to proponents of the concept that student-athletes be deemed “employees” was the January 2025 decision by Service Employees International Union (SEIU), Local 560 to withdraw its petition to represent an Ivy League university’s men’s basketball players. In February 2024, a Regional Director for the Board took the historic step of determining that the university’s men’s basketball players should be considered employees under the Act. The case was filed in September 2023 after all 15 members of the men’s basketball team signed a petition to join Local 560 of the SEIU. At the time, the Regional Director determined the university’s level of control over the players was sufficient to qualify the players as employees under Section 2(3) of the Act. The Regional Director found that traditional “team” activities, including the university’s ability to control the players’ academic schedules and the team’s regimented schedules for home and away games, weighed heavily in favor of an employment relationship. With the petition withdrawn for now, the university’s basketball players will remain non-unionized.
Given these developments, the window for student-athletes being deemed employees under the Act appears to be closed for the time being. With the uncertainty surrounding NIL and other issues around collegiate athletics, this area of law will need to be monitored for additional developments. In the interim, private collegiate institutions should be aware that they may face charges or petitions filed with the Board. Such filings must be treated seriously in light of the Regional Director decision discussed above.
Jackson Lewis’ Education and Collegiate Sports Group is available to assist universities, conferences, and other stakeholders in dealing with matters before the Board or otherwise involving the appropriate classification of student-athletes.

Clarifying the Copyrightability of AI-Assisted Works

The U.S. Copyright Office’s long-awaited second report assessing the issues raised by artificial intelligence (AI) makes clear that purely AI-generated works cannot be copyrighted, and that the copyrightability of AI-assisted works depends on the level of human creative authorship integrated into the work.
With the rise of mainstream generative AI platforms, clarity has been sought by creators, artists, producers, and technology companies concerning whether works created with AI may be entitled to copyright protection. In its most recent report, the Copyright Office concludes that existing copyright legislation and principles are well-suited for the issue of AI outputs’ copyrightability and suggests that AI may be used in the creation of copyrighted works as long as there is the requisite level of human creative expression. The Copyright Office’s report also makes clear that copyright protection will not extend to purely computer-generated works. Instead, copyrightability must be assessed on a case-by-case basis analyzing whether a work has the necessary human creative expression and originality to be copyrightable. Such intensive analysis equips existing U.S. copyright law to adapt to works made with emerging technologies.
In the process of crafting the report, the Copyright Office considered input from over 10,000 stakeholders seeking clarity on the protection of works for licensing and infringement purposes. This report does not address issues relating to fair use in training AI systems or copyright liability associated with the use of AI systems; these topics are expected to be covered in separate publications.
Report on Copyrightability of AI Outputs
The Copyright Office’s report examines the threshold question of copyrightability – whether a work can be protected and endowed with rights enforceable against subsequent copiers – which raises important policy questions about the incentives of copyright law and the history of emerging technologies. Overall, the Copyright Office makes clear that tangential use of AI technology will not disqualify a resulting work of authorship from protection, but rather that the level of protection hinges on the nature and extent of the human expression added to the work.
I. Scope of the Report
The Copyright Office sought to clarify several overarching questions on the copyrightability of AI outputs, including:

Whether the Copyright Clause of the Constitution protects AI-generated works.
Whether AI can be the author of a copyrighted work.
Whether additional protection for AI-generated material is recommended, and if so, what form it should take.
Whether revisions to the human authorship standard are necessary.

II. The Copyrightability Standard and Current AI Technology
Human Authorship – Only a low level of human creativity, or “authorship,” is needed to create copyright protection in a work, and the Copyright Office believes existing legal frameworks are adequate for assessing AI-generated outputs. Specifically, the Copyright Office believes that whether the authorship standard for copyrightability has been met depends on the level of human expressive intervention in the work.
For example, a photographer’s arrangement, lighting, timing, and post-production editing are all indications of the human expression required for copyright protection, even though, technologically, the camera “assists” to capture the photo.[1] On the other hand, photos taken by animals do not create authorship in the animal because of their non-human status.[2] Similarly, “divine messages” from alleged spirits do not contain the requisite human creativity to amount to authorship.[3] In the context of AI, like a photographer using a camera, the use of new technology does not default to a lack of authorship, but like a monkey taking a picture, non-human machines cannot be authors and therefore the expressions created solely by AI platforms cannot be copyrighted.
Assistive AI – The report further comments on the incorporation of AI into creative tasks, like aging actors on film, adding or removing objects to a scene, or finding errors in software code, and concludes that protection of works using such technology would depend on how the system is being used by a human author and whether a human’s expression is captured by the resulting work.
Protection of Prompts – The Copyright Office concludes that prompts alone do not form a basis for claiming copyright protection in AI-generated outputs (no matter how complex they may seem), unless the prompt itself involves a copyrightable work. At its core, copyright law does not protect ideas because copyright seeks to promote the free flow of ideas and thought. Rather, copyright law protects unique human expressions of the underlying ideas which are fixed in some tangible medium. The Copyright Office explains that prompts do not provide sufficient human control to make AI users authors. Instead, prompts function as instructions that reflect a user’s conception of an idea but do not control the expression of that idea. Above all, the gaps between prompts and their resulting outputs demonstrate the lack of control a user has over the expression of those ideas.
Expressive Inputs – The Copyright Office uses two examples in its report to illustrate this point. In the first, a prompt detailing the subject matter and composition of a cat smoking a pipe yielded an output considered uncopyrightable because the AI system fills in the gaps of the user’s prompt. The prompt does not specify the breed or coloring of the cat, its size, its pose, or what clothes it should be wearing underneath the robe. Without these particular instructions, the AI system nonetheless generated an image, relying on its own internal algorithm to fill in the gaps and thus stripping expressive control away from the user.

In contrast, in the second example, a prompt asking the AI system to generate a photorealistic graphic of a human-drawn sketch produced an output considered copyrightable because the original elements of the sketch were retained in the AI-generated output. In assessing copyrightability, the Copyright Office pointed to the copyright in the original elements of the sketch as evidence of authorship, and any output depicting identifiable elements of the sketch (directed by the human author) was viewed by the Office as a derivative work of the sketch. The artist’s protection in the AI output would overlap with the protectable elements in the original sketch, and like other derivative rights, the AI output would require a license to the original sketch.
In sum, where a human inputs their own copyrightable work into an AI system, they will be the author of that portion of the work still perceptible in the output; the individual elements must be identifiable and traceable to the initial human expression.

The Copyright Office views the current use of prompts as largely containing unprotectable (or public domain) ideas but notes that extensive human expression could potentially make prompts protectable, just not with currently available technology. Additionally, the Copyright Office notes that current technology is unpredictable and inconsistent, often producing vastly different outputs from the same prompts, which in its view shows that prompts lack the requisite clear direction of expression to rise to the level of human authorship.
Arrangement and Modification of AI Works – The Copyright Office also concludes that human authorship can be shown by the additions to, or arrangement of, AI outputs, including the use of AI adaptive tools. For example, a comic book “illustrated” with AI but with added original text by a human author was granted protection in the arrangement and expression of the images in addition to any copyrightable text because the work is the product of creative human choices. The same reasoning applies to AI-based editing tools that allow users to select and regenerate regions of an image with a modified prompt. Unlike prompts, the use of these tools enables users to control the expression of specific creative elements, but the Copyright Office clarifies that assessing the copyrightability of these modifications depends on a case-by-case determination.

III. International AI Copyright Decisions
In its review of international responses to AI copyright questions, the U.S. Copyright Office notes the general consensus of applying existing human authorship requirements to determine copyrightability of AI works.
Instructions from Japan’s Cultural Council underline the case-by-case assessment necessary for determining copyrightability and note examples of human input into AI generation that may rise to a copyrightable level. These include the number and type of prompts given, the number of attempts to generate the ideal work, selection by the user, and any later changes to the work.
A court in China found that over 150 prompts, along with retouches and modifications to the AI’s output, resulted in sufficient human expression to gain copyright protection.
In the European Union, most member states agree that current copyright policy is equipped to cover the use of AI, and similar to the U.S., most member states require significant human input into the creative process to qualify for copyright protection.
Canada and Australia have both expressed a lack of clarity on the issue of AI, but neither has taken steps to change legislation.
Unlike other countries, some commonwealth jurisdictions such as the United Kingdom, India, New Zealand, and Hong Kong enacted laws before the advent of modern generative AI that allow copyright protection for works created entirely by computers. With recent developments in technology, the United Kingdom has considered changing this law, but the other countries have yet to clarify whether their existing laws would apply to AI-generated works.
IV. Policy Implications for Additional Protection
Incentives – One of the key purposes of copyright policy, as written in the U.S. Constitution, is to “promote the Progress of Science and useful Arts.” Comments to the Copyright Office varied on whether providing protection for AI-generated work would incentivize authorship; proponents of increasing copyright protection argued that it would promote emerging technologies, while opponents noted that the rapid expansion of these technologies shows incentivization is not necessary. The Copyright Office finds the current legal framework sufficiently balanced, stating that additional laws are not needed to incentivize AI creation because the existing threshold requirement of human creativity already protects and incentivizes the works of human authorship that copyright law seeks to promote.
Staying Internationally Competitive – Commentators noted that, without underlying copyright protection for AI-generated works, U.S. creators could be left with weaker protection than their foreign counterparts. The Copyright Office counters that similar protections are available worldwide and align with the U.S. standard of human authorship.
Clarity on AI-Generated Protection – Commentators petitioned Copyright Office officials for legal certainty that works created with AI could be licensed to other parties and registered with the Copyright Office. The Copyright Office’s report provides assurance that works made with assistance from AI platforms may be registered under existing copyright laws and notes the difficulty of providing further clarity given the case-by-case nature of copyright analysis.
Conclusion and Considerations[4]
The foundations of U.S. copyright law have been applied consistently to emerging technologies, and the Copyright Office believes those doctrines will apply equally well to AI technologies. With the Copyright Office’s affirmation that purely AI-generated works cannot be copyrighted, and that AI-assisted works must involve meaningful human authorship, businesses leveraging AI systems must consider several key legal and strategic factors:

Maintain detailed records of human prompts and modifications, such as arranging, adapting, or refining AI outputs (a minimal record-keeping sketch follows this list).
Focus on enhancing human-made, copyrightable works with AI systems rather than generating works solely through uncopyrightable prompts.
For companies commissioning AI-assisted work, specify in contracts that employees or contractors provide sufficient human control, arrangement, or modification of AI works to ensure copyrightability.
For companies offering AI-assisted work as part of their services, consider mitigating risks by excluding AI generated works from standard IP representations/warranties, and further disclaiming any liability in relation to the use of such works.
Consider variations in international AI copyright laws to assess the impact on global IP strategies.
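To make the record-keeping point concrete, here is a minimal, hypothetical Python sketch of what a per-work provenance log entry might look like – the field names and file names are our own illustration, not a format suggested by the Copyright Office:

```python
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class AIWorkRecord:
    """One log entry documenting human contribution to an AI-assisted work."""
    work_id: str
    model: str            # AI system used (hypothetical field)
    prompts: list         # ordered prompts supplied by the human author
    human_inputs: list    # e.g., sketches or other expressive inputs
    modifications: list   # post-generation edits, arrangement, selection
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIWorkRecord(
    work_id="cover-art-042",
    model="example-image-model",  # hypothetical model name
    prompts=["photorealistic render of my attached pencil sketch"],
    human_inputs=["sketch_v3.png (original human-drawn sketch)"],
    modifications=["cropped output", "repainted background by hand"],
)

# Append one JSON line per work to a provenance log.
with open("ai_provenance.jsonl", "a") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```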

Given the unique analysis copyright cases require, and the existing precedent requiring human input for protection, copyright law is well prepared to face the challenges posed by AI platforms. Due to the unique facts of each case, creators are encouraged to check with an experienced copyright attorney who can help evaluate whether an individual AI-assisted work includes enough human intervention to be protectable.

[1] Burrow-Giles Litho. Co. v. Sarony, 111 U.S. 53, 55–57 (1884).
[2] Naruto v. Slater, No. 15-cv-04324, 2016 U.S. Dist. LEXIS 11041, at *10 (N.D. Cal. Jan. 28, 2016) (finding animals are not “authors” within the meaning of the Copyright Act).
[3] Urantia Found. v. Kristen Maaherra, 114 F.3d 955, 957–59 (9th Cir. 1997) (holding that copyright law does not intend to protect divine beings, and protects the arrangement of otherworldly messages, but not the messages’ content).
[4] As noted above, the Copyright Office’s report does not address issues relating to fair use in training AI systems or copyright liability associated with the use of AI systems; these topics are expected to be covered in a separate publication.