D.C. Circuit Affirms Decision That Copyright Statute Requires Some Amount of Human Authorship, Leaves More Difficult Questions for Another Day

Does copyright law require that a human create a work? Yesterday the D.C. Circuit in Thaler v. Perlmutter held that it does and that a machine (such as a computer operating a generative AI program) cannot be designated as the author of the work. However, the D.C. Circuit refrained from saying more for now, leaving other questions about the use of AI when creating works for another day.
Dr. Stephen Thaler, a computer scientist who works with artificial intelligence, submitted a copyright application in 2019 for an image he titled “A Recent Entrance to Paradise.” On the application, Thaler identified himself as the claimant, while designating a generative AI platform he created, called the “Creativity Machine,” as the author. To explain how the copyright was transferred from the machine as author to himself as claimant, Thaler stated that his “ownership of the machine” caused the transfer. He would later argue that some form of work-for-hire arrangement transferred ownership to him.
The Copyright Office denied registration, holding that copyright law requires a human author. Thaler appealed the decision to the District Court for the District of Columbia, which affirmed. As part of his case before the district court, Thaler raised, for the first time, the argument that he was in fact the author based on his creation of the Creativity Machine. However, because he had claimed on the record that the machine was the author throughout the proceedings before the Copyright Office, the district court held that he had waived this argument. Thaler then appealed to the D.C. Circuit Court of Appeals.
The D.C. Circuit’s decision yesterday affirmed both the Copyright Office’s and the district court’s decisions refusing Thaler’s application for registration. On the key issue of copyright authorship, the court held that the text and structure of the Copyright Act require a human author. Section 201 of the Copyright Act states that ownership “vests initially in the author or authors” of a work. While “author” is undefined, the court looked to at least seven other provisions throughout the Copyright Act that require various acts or states of mind of the author. These include terms measured by the author’s life, references to the author’s widow or widower and children, the signature required for a copyright transfer, and the intent needed to create a joint work of authorship. The court deemed none of these requirements applicable to a machine “author.” The court also relied on the Copyright Office’s long-standing policy of refusing registration to nonhuman authors and on decisions of the Seventh and Ninth Circuits rejecting claims of copyright authorship inhering in nature, “otherworldly entities,” or animals. Finally, the court held that Thaler’s work-for-hire claim failed at least because the machine had not signed any document designating the work as made for hire, and that Thaler had waived any claim of personal authorship by failing to raise it before the Copyright Office. The D.C. Circuit therefore affirmed the denial of registration of the work with the Creativity Machine designated as the author.
While this case is the first to address the question of copyright authorship in the context of generative artificial intelligence, its holding is not unexpected. As briefly referenced above, other appellate courts have addressed the question of nonhuman authorship in other scenarios and come to the same conclusion. Therefore, while Thaler is important for extending the same holding to the context of AI creations, the requirement of human authorship is neither new nor unusual. As the Ninth Circuit held in Naruto v. Slater (a case involving the famous “monkey selfie” photograph), there is a strong presumption in the law that statutes are written with humans as the subjects of rights, not animals or machines. The numerous textual references to the lives, acts, and intentions of authors in the Copyright Act made it easy for the D.C. Circuit to leave that presumption undisturbed here. The D.C. Circuit also held that it did not need to decide whether the U.S. Constitution requires a human author in the case before it. That issue is left for a future litigant to contend with.
Moreover, Thaler presented his case for machine authorship in the most extreme form possible: a claim that the author was solely the “Creativity Machine,” with no human input at all. As the court notes late in the decision, the Copyright Office has now denied registration, in whole or in part, to at least four applications in which the applicant used AI to create or edit a work while also claiming human authorship based on human input. The D.C. Circuit wisely decided to let that issue await a future ruling. As a result of that caution, however, litigants should recognize the limits of the Thaler decision.
Finally, both Thaler and the other cases coming through the Copyright Office only concern AI creation of visual works of art. They do not concern other fields of creative works, such as written works or music. While the main holding of Thaler — that copyright protection cannot be granted where a machine is the sole creator — will certainly apply in these other fields, the permissible contours of AI-human interaction and their effect on authorship in these other categories of works are even more unclear given the lack of disputes arising to date.

Deep Legal: Transform Corporate Legal Practice with Client-Integrated, Real-Time Risk Monitoring Systems

The traditional practice of law has long been characterized by its reactive nature: clients call when problems arise, documents need review, or litigation looms. But what if the practice of law could be fundamentally reimagined? What if, instead of firefighters arriving after the blaze, attorneys could design sophisticated sprinkler systems that activate at the first sign of smoke?
This transformation is now possible. The advent of AI-powered legal research and analysis tools that can process vast amounts of legal information in seconds is enabling a paradigm shift from reactive counsel to proactive legal architecture. For large law firms serving enterprise clients, this represents perhaps the most significant opportunity in decades to redefine their value proposition.
The legal profession stands at an inflection point. For centuries, attorneys have served as expert navigators brought in to chart a course through troubled waters. Today, they can become architects of sophisticated systems that continuously monitor the legal seaworthiness of their clients’ operations before storms arrive. This new paradigm involves establishing what might best be described as legal security systems: integrated monitoring frameworks that continuously scan client operations for emerging legal risks, flag potential issues before they mature into problems, and provide real-time guidance on mitigation.
Much like cybersecurity systems that monitor networks for intrusions, these legal security systems vigilantly watch for potential regulatory violations, contractual exposures, compliance gaps, and litigation risks. When properly implemented, they transform the attorney’s role from crisis responder to strategic risk manager. The implications of this shift are profound. Law firms can deepen client relationships, develop more predictable revenue streams, and deliver measurably better outcomes. Clients benefit from reduced legal emergencies, lower overall legal spending, and the ability to operate with greater confidence in increasingly complex regulatory environments.
1. The Current Gap in Corporate Legal Protection
Despite significant investments in compliance programs and legal departments, most corporations operate with substantial blind spots in their legal risk management. These gaps persist for several interrelated reasons that technology is now positioned to address.
First, the volume and complexity of regulations governing modern business have expanded exponentially. A global corporation might be subject to tens of thousands of regulatory requirements across dozens of jurisdictions, many of which change frequently. No human team, regardless of size or expertise, can maintain perfect awareness of all applicable legal obligations.
Second, legal risks emerge from the everyday operations of business: contractual commitments made by sales teams, representations in marketing materials, HR decisions, operational changes, or strategic pivots. These activities occur continuously across organizational silos, often without legal review until problems surface.
Third, traditional compliance frameworks rely heavily on periodic audits, manually updated policies, and training programs that quickly become outdated. These approaches, while valuable, cannot keep pace with the dynamic nature of modern business operations.
Finally, corporate clients increasingly expect their outside counsel to function as business partners rather than specialized service providers. They seek attorneys who understand their operations intimately and who proactively identify risks before they materialize, an expectation that traditional service models struggle to fulfill.
An additional factor favoring outside counsel in this evolution is the matter of economies of scale. While in-house legal departments face continual budgetary constraints and typically focus on a single industry or company context, law firms can distribute the investment in sophisticated monitoring systems across multiple clients. By developing expertise and technical infrastructure that serves many clients in similar sectors, outside counsel can offer capabilities that would be prohibitively expensive for any single corporate legal department to build independently. 
This scale advantage extends beyond technology to collective intelligence. Outside firms working across an industry accumulate insights about emerging risks, regulatory trends, and effective mitigation strategies that no single company could develop internally. These insights, when encoded into monitoring systems, create a network effect that benefits all clients served by the firm, creating an offering that in-house teams simply cannot replicate.
The result of all these factors is a protection gap that leaves even well-resourced organizations vulnerable to preventable legal challenges. The cost of this gap is measurable not just in litigation expenses and regulatory penalties, but in operational disruptions, reputational damage, and missed business opportunities due to legal uncertainty.
2. Architecting a Legal Security System
Creating an effective legal security system requires a thoughtful architecture that integrates technology, legal expertise, and client operations. While the specific design will vary based on client needs, industry context, and risk profile, certain fundamental components remain consistent.
At its core, a legal security system must establish continuous monitoring capabilities across key risk vectors. These typically include regulatory compliance, contractual obligations, intellectual property protection, employment practices, corporate governance, and industry-specific risk areas. For each vector, the system must map the sources of potential risk to the indicators that suggest emerging issues.
The foundation of this monitoring capability is a comprehensive legal knowledge base tailored to the client’s specific operations. This knowledge base must encode not just applicable laws and regulations, but how they intersect with the client’s business model, organizational structure, and strategic objectives. It must be continuously updated as laws change and as the client’s operations evolve.
Upon this foundation, firms can implement real-time scanning of client activities against the knowledge base. This might involve reviewing internal communications, analyzing contract terms, monitoring regulatory announcements, or scanning public records for potential litigation risks. Advanced systems might incorporate predictive analytics to identify patterns that historically precede legal problems.
The most sophisticated implementations integrate directly with client systems, enabling real-time legal guidance within operational workflows. For example, a sales contract management system might automatically flag problematic terms before agreements are finalized, or a product development platform might identify potential regulatory hurdles early in the design process.
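To make the contract-flagging idea concrete, here is a minimal sketch of what rule-based scanning might look like. Everything in it is a hypothetical illustration: the rule names, regex patterns, and severity scores are invented for demonstration rather than drawn from any actual firm’s knowledge base, and a production system would apply far more sophisticated analysis than regular expressions.

```python
# Minimal rule-based contract scanning sketch. All rule names, patterns,
# and severity scores are hypothetical illustrations, not a real rule set.
import re
from dataclasses import dataclass

@dataclass
class Rule:
    name: str       # short label for the risk being watched
    pattern: str    # regex whose match suggests the risk is present
    severity: int   # 1 (informational) through 5 (escalate immediately)

# A toy slice of a knowledge base: clause patterns that often warrant review.
RULES = [
    Rule("unlimited-liability", r"\bunlimited liability\b", 5),
    Rule("auto-renewal", r"\bautomatic(?:ally)?\s+renew", 3),
    Rule("unilateral-termination", r"\bterminate\b.{0,40}\bsole discretion\b", 4),
]

def scan_contract(text: str) -> list[tuple[str, int]]:
    """Return (rule name, severity) for every rule the draft trips."""
    return [
        (rule.name, rule.severity)
        for rule in RULES
        if re.search(rule.pattern, text, flags=re.IGNORECASE)
    ]

draft = "This Agreement shall automatically renew for successive one-year terms."
print(scan_contract(draft))  # [('auto-renewal', 3)]
```

The design point is that the rules live in a maintained knowledge base separate from the scanning machinery, so attorneys can add or retire patterns as the client’s risk profile evolves.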
Critically, these systems must balance comprehensiveness with practicality. Flagging every theoretical legal risk would quickly overwhelm both attorneys and clients with false positives. Effective systems must establish appropriate thresholds for escalation based on risk magnitude, company risk tolerance, and operational context.
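As a sketch of how such thresholds might be encoded, the snippet below weighs a finding’s severity against a client-specific risk tolerance before anyone is alerted. The severity scale, tolerance scale, and dispositions are illustrative assumptions, not recommendations.

```python
# Illustrative escalation logic: severity is weighed against a
# client-specific risk tolerance before anyone is paged.
def route_finding(severity: int, client_risk_tolerance: int) -> str:
    """Map a flagged finding to a disposition.

    severity: 1-5 score assigned by the scanning rules.
    client_risk_tolerance: 1 (very conservative) to 5 (risk-seeking),
    agreed with the client during the initial risk assessment.
    """
    if severity >= 5:
        return "escalate-now"        # always reaches an attorney immediately
    if severity >= client_risk_tolerance:
        return "attorney-review"     # queued for human review
    return "log-only"                # retained for trend analysis, no alert

# A conservative client (tolerance 2) sees more items reach an attorney.
print(route_finding(severity=3, client_risk_tolerance=2))  # attorney-review
print(route_finding(severity=3, client_risk_tolerance=4))  # log-only
```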
3. Implementation Strategies for Outside Counsel
Implementing a legal security system requires a structured approach that balances technological capabilities with the practicalities of client relationships and operations. For large law firms, the implementation process typically unfolds in stages, beginning with a comprehensive risk assessment and culminating in a fully integrated monitoring system.
The first step involves conducting a thorough legal risk assessment for the client organization. This goes beyond traditional legal audits to examine not just current compliance status but the dynamic processes through which legal risks emerge in day-to-day operations. The assessment should identify both the most significant risk areas and the operational contexts in which they typically arise.
Based on this assessment, firms can develop a tailored monitoring framework that prioritizes the most critical risk vectors. This framework should define what will be monitored, how frequently, using what data sources, and with what thresholds for intervention. It should also establish clear protocols for escalation when potential issues are identified.
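One way to pin those choices down is as an explicit, reviewable configuration. The sketch below is hypothetical: the vector names, cadences, data sources, thresholds, and contacts are invented placeholders, and a real framework would be negotiated with the client during the risk assessment.

```python
# Hypothetical monitoring-framework configuration. Vector names, cadences,
# data sources, thresholds, and contacts are invented placeholders.
from dataclasses import dataclass

@dataclass
class MonitoringVector:
    name: str                   # the risk vector being watched
    cadence: str                # "real-time", "daily", "weekly", ...
    data_sources: list[str]     # feeds this vector scans
    escalation_threshold: int   # minimum severity that triggers human review
    escalation_contact: str     # who is alerted when the threshold is met

FRAMEWORK = [
    MonitoringVector(
        name="regulatory-compliance",
        cadence="daily",
        data_sources=["agency-rss-feeds", "federal-register"],
        escalation_threshold=3,
        escalation_contact="regulatory-team@firm.example",
    ),
    MonitoringVector(
        name="contractual-obligations",
        cadence="real-time",
        data_sources=["client-contract-management-system"],
        escalation_threshold=4,
        escalation_contact="commercial-team@firm.example",
    ),
]
```

Making the framework a data object rather than tribal knowledge also gives the client something concrete to sign off on when escalation protocols are agreed.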
With the monitoring framework defined, firms can begin building the necessary technological infrastructure. This typically involves a combination of existing legal technology platforms, custom-developed tools, and integrations with client systems. The specific technology stack will vary based on client needs and firm capabilities, but it should enable automated scanning, intelligent analysis, and structured escalation.
Throughout implementation, firms must work closely with key stakeholders across the client organization. This includes not just the general counsel’s office but operational leaders whose activities will be monitored. Engaging these stakeholders early helps ensure the system addresses real-world risks, integrates with existing workflows, and gains the organizational buy-in necessary for successful adoption.
The most effective implementations follow an iterative approach, beginning with focused monitoring of high-priority risk areas and expanding over time. This allows for continuous refinement based on feedback and results, while demonstrating immediate value to clients through early wins.
4. Transforming the Business Model: From Billable Hours to Recurring Value
Perhaps the most profound implication of legal security systems is how they reshape the economics of legal practice. As AI dramatically accelerates research and analysis capabilities, many firms are facing an uncomfortable reality: traditional billable hour models increasingly put firm interests at odds with client demands for efficiency. When a task that once took ten hours can be completed in minutes (or seconds), how do firms maintain revenue while passing efficiency gains to clients?
Legal security systems offer a compelling answer. By shifting from discrete billable transactions to ongoing monitoring and risk management, firms can establish subscription-based revenue models that align incentives between counsel and client. Rather than selling time, firms sell outcomes: the maintenance of legal health and the early detection of potential issues before they become costly problems.
This model recognizes that legal expertise is most valuable when applied preventatively and continuously, not just during crises. Clients gain predictable legal costs and better outcomes, while firms secure more stable revenue streams and deeper client relationships. The subscription approach also reflects the significant upfront investment required to build effective monitoring systems: the expertise, knowledge-base development, and technical infrastructure that make real-time legal guidance possible.
For firms accustomed to hourly billing, this transition requires both strategic vision and practical execution. Most successful implementations begin with hybrid approaches: maintaining hourly billing for certain services while establishing subscription components for continuous monitoring and preventative counsel. Over time, as both firms and clients grow comfortable with the new model, the subscription elements can expand to encompass broader aspects of the relationship.
Closing Thoughts
The question isn’t whether AI will transform how we deliver legal services to our corporate clients; it’s whether your firm will lead this transformation or struggle to catch up. Legal security systems represent more than just an innovation; they embody a fundamental reimagining of the attorney-client relationship.
Throughout my career, I’ve watched countless innovations promise to revolutionize legal practice, but few have offered such clear and compelling benefits to both law firms and their clients. By embedding our expertise within the daily operations of our clients, we not only protect them more effectively but elevate our own practice from transactional service provider to indispensable strategic partner. The firms that master this approach will define the next generation of legal excellence.
The path to implementation doesn’t require massive infrastructure investments or wholesale practice redesigns. It starts with identifying a single high-value area where continuous monitoring could demonstrably benefit a key client. Perhaps it’s tracking regulatory changes affecting a specific business unit, monitoring contractual compliance across a supply chain, or providing real-time guidance for recurring transaction types. Start small, demonstrate value, and build from there.
I encourage you to take that first step this quarter. Identify one client relationship where this approach could strengthen your position as trusted counsel. Arrange a conversation about their most pressing legal concerns and explore how continuous monitoring might address them more effectively than traditional approaches. You may be surprised by how receptive clients are to this evolution; after all, they’ve been waiting for their law firms to embrace the same data-driven approach that has transformed their own operations. Exceed client expectations and deepen your relationships with “Deep Legal.”

The Symbiotic Future of Quantum Computing and AI

Quantum computing has the potential to revolutionize various fields, but practical deployments capable of solving real-world problems face significant headwinds due to the fragile nature of quantum systems. Qubits, the fundamental units of quantum information, are inherently unstable and susceptible to decoherence—a process by which interactions with the environment cause them to lose their quantum properties. External noise from thermal fluctuations, vibrations, or electromagnetic fields exacerbates this instability, necessitating extreme isolation and control, often achieved by maintaining qubits at ultra-low temperatures. Preserving quantum coherence long enough to perform meaningful computations remains one of the most formidable obstacles, particularly as systems scale.
Another major challenge is ensuring the accuracy and reliability of quantum operations, or “gates.” Quantum gates must manipulate qubits with extraordinary precision, yet hardware imperfections introduce errors that accumulate over time, jeopardizing the integrity of computations. While quantum error correction techniques offer potential solutions, they demand enormous computational resources, dramatically increasing hardware requirements. These physical and technical limitations present fundamental hurdles to building scalable, practical quantum computers.
The Intersection With Neural Networks
One promising approach to mitigating these issues lies in the unexpected ability of classical neural networks to approximate quantum states. As discussed in When Can Classical Neural Networks Represent Quantum States? (Yang et al., 2024), certain neural network architectures, such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs), can be trained to represent certain quantum states. This insight suggests that instead of relying entirely on fragile physical qubits, classical neural networks could serve as an intermediary computational layer, learning and simulating quantum behaviors in ways that reduce the burden on quantum processors. The paper further proposes that classical deep learning models may be able to efficiently learn and encode quantum correlations, allowing them to predict and correct errors dynamically and thereby improve fault tolerance without requiring an excessive number of physical qubits.
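For readers who want a flavor of what “a classical neural network representing a quantum state” means in code, below is a toy sketch of a restricted-Boltzmann-machine ansatz, one of the earliest and simplest neural-quantum-state architectures in the literature. It is a generic illustration only: it is not the RNN or CNN constructions analyzed by Yang et al., and its parameters are arbitrary rather than trained.

```python
# Toy neural-quantum-state sketch (assumes only numpy). An RBM-style
# ansatz assigns a complex amplitude to each computational basis state:
#   psi(s) = exp(a . s) * prod_j 2*cosh(b_j + (s @ W)_j),  s in {-1,+1}^n
import numpy as np

class NeuralQuantumState:
    def __init__(self, n_visible: int, n_hidden: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Complex parameters let the network encode both amplitude and phase.
        def cplx(*shape):
            return 0.1 * (rng.standard_normal(shape)
                          + 1j * rng.standard_normal(shape))
        self.a = cplx(n_visible)
        self.b = cplx(n_hidden)
        self.W = cplx(n_visible, n_hidden)

    def amplitude(self, s: np.ndarray) -> complex:
        """Unnormalized amplitude for one basis state s (entries are +/-1)."""
        return np.exp(self.a @ s) * np.prod(2.0 * np.cosh(self.b + s @ self.W))

# Enumerate all 2^3 basis states of a 3-qubit toy system and normalize
# the squared amplitudes into Born-rule probabilities.
nqs = NeuralQuantumState(n_visible=3, n_hidden=6)
states = [np.array([2 * int(bit) - 1 for bit in f"{k:03b}"]) for k in range(8)]
probs = np.array([abs(nqs.amplitude(s)) ** 2 for s in states])
print(probs / probs.sum())
```

In a real application, the parameters would be trained (for example, by variational Monte Carlo) so that the network’s amplitudes approximate a target quantum state, which is what makes such networks useful as a classical stand-in for fragile qubits.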
Neural networks capable of representing quantum states could also enable new forms of hybrid computing. Instead of viewing artificial intelligence (AI) and quantum computing as separate domains, recent research suggests a future where they complement one another. Classical AI models could handle optimization, control, and data preprocessing, while quantum systems tackle computationally intractable problems.
Ultimately, the interplay between quantum mechanics and AI will most likely reshape our approach to computation. While quantum computers remain in their infancy, AI could provide a bridge to unlock their potential. By harnessing classical neural networks to mimic quantum properties, the scientific community may overcome the current limitations of quantum hardware and accelerate the development of practical, scalable quantum systems. The boundary between classical and quantum computation may not be as rigid as once thought.

Chinese Court Again Rules AI-Generated Images Are Eligible for Copyright Protection

On March 7, 2025, the Changshu People’s Court announced that it had ruled that images generated with Artificial Intelligence (AI) are eligible for copyright protection. This is believed to be the second such case involving AI-generated images; the Beijing Internet Court ruled similarly in late 2023. In the instant case, Lin XX generated an image of a half heart on a city waterfront using Midjourney and then edited the image in Photoshop. An unnamed Changsha real estate company used the image in a WeChat posting and built a three-dimensional installation based on the image at one of its developments.
The court explained that it first reviewed the user agreement of the AI software involved in the case, confirming that under the Midjourney user agreement the assets and rights in images produced with the service belong to the user. The court then logged into the creation platform in open court to review the login process, the user information, and the image iteration history, including the modification of prompts. It held that Lin’s modification of the prompts and his further editing of the image in image-processing software reflected his unique selection and arrangement, so the resulting image was original and constituted a work protected by the Copyright Law. The two defendants infringed the copyright by disseminating the image on the Internet without the copyright owner’s permission. At the same time, the court determined that Lin’s copyright was limited to the image itself: the three-dimensional installation was merely based on the image, so the real estate company’s design and construction of the installation did not infringe Lin’s copyright. The court then ordered that: 1. the infringing party publicly apologize to the plaintiff Lin on its Xiaohongshu [Red Note] account for three consecutive days; 2. the infringing party compensate Lin 10,000 RMB for economic losses and reasonable expenses; and 3. Lin’s other claims be rejected. Neither the plaintiff nor the defendant appealed, and the first-instance judgment has taken legal effect.
This is the opposite of the result reached by the U.S. Copyright Office in Zarya of the Dawn (Registration # VAu001480196), which declined to recognize copyright in the AI-generated images.
The original announcement can be found here (Chinese only).

AppLovin & Its AI: A Lesson in Accuracy

Last week, we explored a recent data breach class action and the litigation risk such lawsuits present. Companies need to be aware of litigation risk arising not only from data breaches, but also from shareholder class actions related to privacy concerns.
On March 5, 2025, a class action securities lawsuit was filed against AppLovin Corporation and its Chief Executive Officer and Chief Financial Officer (collectively, the defendants). AppLovin is a mobile advertising technology business that operates a software-based platform connecting mobile game developers to new users. AppLovin offers a software platform and an app. In the lawsuit, the plaintiff alleges that the defendants misled investors regarding AppLovin’s artificial intelligence (AI)-powered digital ad platform, AXON.
According to the complaint, the defendants made material representations through press releases and statements on earnings calls about how an upgrade to AppLovin’s AXON AI platform would provide improvements over the platform’s earlier version. The complaint further alleged that the defendants made numerous statements indicating that AppLovin’s financial growth in 2023 and 2024 was driven by improvements to the AXON technology. The defendants reportedly stated that AppLovin’s increases in net revenue per installation of the mobile app and in the volume of installations were a result of the improved AXON technology.
The complaint further states that on February 25, 2025, two short-seller reports were published that attributed AppLovin’s digital ad platform growth not to AXON, but to exploitative app permissions that carried out “backdoor” installations without users noticing. According to the reports, AppLovin used code that purportedly allowed it to bind to consumers’ permissions for AppHub, Android’s centralized Google repository where app developers can upload and distribute their apps. The complaint claims that by passing off AppHub’s one-click direct installations as its own, AppLovin directly downloaded apps onto consumers’ phones without their knowledge.
The research reports also state that AppLovin was reverse-engineering advertising data from Meta platforms and using manipulative practices, such as having ads click on themselves and forcing shadow downloads, to inflate its installation and profit figures. One of the research reports states that AppLovin was “intentionally vague about how its AI technology actually works,” and that the company used its upgraded AXON technology as a “smokescreen to hide the true drivers of its mobile gaming and e-commerce initiatives, neither of which have much to do with AI.” The reports further assert that the company’s “recent success in mobile gaming stems from the systematic exploitation of app permissions that enable advertisements themselves to force-feed silent, backdoor app installations directly onto users’ phones.” The complaint details the findings from the reports and alleges that AppLovin’s misrepresentations led to artificially inflated stock prices, which materially declined because of the research report findings.
In a company blog post responding to the research reports, the CEO wrote that “every download [of AppLovin] results from an explicit user choice—whether via the App Store or our Direct Download experience.”
As organizations begin integrating AI into their operations, they should be cautious in making representations about AI as a profitability driver. Executive leaders responsible for issuing press releases and leading earnings calls about a company’s technology practices should also understand how these technologies function and ensure that any statements they make are accurate. Whether or not such allegations are true, litigation over materially false representations can prove costly to an organization, from both a financial and a reputational perspective.

Privacy Tip #435 – Threat Actors Go Retro: Using Snail Mail for Scams

We have educated our readers about phishing, smishing, QRishing, and vishing scams, and now we’re warning you about what we have dubbed “snailing.” Yes, believe it or not, threat actors have gone retro and are using snail mail to try to extort victims. TechRadar is reporting that, according to GuidePoint Security, an organization received several letters in the mail, allegedly from the BianLian cybercriminal gang, stating:
“I regret to inform you that we have gained access to [REDACTED] systems and over the past several weeks have exported thousands of data files, including customer order and contact information, employee information with IDs, SSNs, payroll reports, and other sensitive HR documents, company financial documents, legal documents, investor and shareholder information, invoices, and tax documents.”

The letter alleges that the recipient’s network “is insecure and we were able to gain access and intercept your network traffic, leverage your personal email address, passwords, online accounts and other information to social engineer our way into [REDACTED] systems via your home network with the help of another employee.” The threat actors then demand $250,000-$350,000 in Bitcoin within ten days. They even offer a QR code in the letter that directs the recipient to the Bitcoin wallet.
It’s comical that the letters have a return address of an actual Boston office building.
GuidePoint Security says the letters and attacks mentioned in them are fake and are inconsistent with BianLian’s ransom notes. Apparently, these days, even threat actors get impersonated. Now you know—don’t get scammed by a snailing incident.

MS-ISAC Loses Funding and Cooperative Agreement with CIS

The Cybersecurity and Infrastructure Security Agency (CISA) confirmed on Tuesday, March 11, 2025, that the Multi-State Information Sharing and Analysis Center (MS-ISAC) will lose its federal funding and cooperative agreement with the Center for Internet Security. MS-ISAC’s mission “is to improve the overall cybersecurity posture of U.S. State, Local, Tribal, and Territorial (SLTT) government organizations through coordination, collaboration, cooperation, and increased communication.”
According to its website, MS-ISAC is a cybersecurity partner for 17,000 State, Local, Tribal, and Territorial (SLTT) government organizations, and offers its “members incident response and remediation support through our team of security experts” and develops “tactical, strategic, and operational intelligence, and advisories that offer actionable information for improving cyber maturity.” The services also include a Security Operations Center, webinars addressing recent threats, evaluations of cybersecurity maturity, advisories and notifications, and weekly top malicious domain reports.
All of these services assist governmental organizations that lack adequate resources to respond to cybersecurity threats. Information sharing has been essential to preventing government entities from becoming victims, and state and local governments have relied on it for resilience. Dismantling MS-ISAC will make it harder for governmental entities to obtain timely information about cybersecurity threats, eliminating an organized venue for sharing information about cyber threats and attacks and for learning from others’ experiences.
According to CISA, dismantling MS-ISAC will save $10 million. But state and local governments rely on the information MS-ISAC shares, and responding to cyberattacks against them still expends taxpayer dollars. Any modest federal savings will be dwarfed by the cost of future attacks weathered without MS-ISAC’s assistance. This shift is a short-sighted strategy by the administration, one that will leave state and local governments in the dark and at increased risk.

Protecting Your Business: AI Washing and D&O Insurance

Artificial intelligence (AI) is in vogue. As it rapidly reshapes industries, companies are racing to integrate and market AI-driven solutions and products. But how much is too much? Some companies are finding out the hard way.
The legal risks associated with AI, especially those facing corporate leadership, are growing as quickly as the technology itself. As we explained in a recent post, directors and officers risk personal liability, both for disclosing and failing to disclose how their businesses are using AI. Two recent securities class action lawsuits illustrate the risks associated with AI-related misrepresentations, underscoring the need for management to have a clear and accurate understanding of how the business is using AI and the importance of ensuring adequate insurance coverage for AI-related liabilities.
AI Washing: A Growing Legal Risk
Built on the same premise as “greenwashing,” AI washing is on the rise. In its simplest terms, AI washing refers to the practice of exaggerating or misrepresenting the role AI plays in a company’s products or services. Just last week, two more securities lawsuits were filed against corporate executives based on alleged misstatements about how their companies were using AI technologies. These latest lawsuits, much like the Innodata and Telus lawsuits we previously wrote about, serve as early warnings for companies navigating AI-related disclosure issues.
Cesar Nunez v. Skyworks Solutions, Inc.
On March 4, 2025, a plaintiff shareholder filed a putative securities class action lawsuit against semiconductor products manufacturer Skyworks Solutions and certain of its directors and officers in the U.S. District Court for the Central District of California. See Cesar Nunez v. Skyworks Solutions, Inc. et al., Docket No. 8:25-cv-00411 (C.D. Cal. Mar. 4, 2025).
Among other things, the lawsuit alleges that Skyworks misrepresented its position and ability to capitalize on AI in the smartphone upgrade cycle, leading investors to purchase the company’s securities at “artificially inflated prices.”
Quiero v. AppLovin Corp.
A similar lawsuit was filed the next day against mobile technology company AppLovin and certain of its executives. See Quiero v. AppLovin Corp. et al., Docket No. 4:25-cv-02294 (N.D. Cal. Mar. 5, 2025).
The AppLovin complaint alleges, among other things, that the company misled investors by touting its use of “cutting-edge AI technologies” “to more efficiently match advertisements to mobile games, in addition to expanding into web-based marketing and e-commerce.” According to the complaint, these misleading statements coincided with the reporting of “impressive financial results, outlooks, and guidance to investors, all while using dishonest advertising practices.”
Risk Mitigation and the Role of D&O Insurance
Our recent posts have shown how AI can implicate coverage under all lines of commercial insurance. The Skyworks and AppLovin lawsuits underscore the specific importance of comprehensive D&O liability insurance as part of any corporate risk management solution.
As we discussed in a previous post, companies may wish to assess their D&O programs from multiple angles to maximize protection against AI-washing lawsuits. Key considerations include:

Policy Review: Ensuring that AI-related losses are covered and not barred by cyber, technology, or similar exclusions.
Regulatory Coverage: Confirming that policies provide coverage not only for shareholder claims but also regulator claims and government investigations.
Coordinating Coverages: Evaluating liability coverages, especially D&O and cyber insurance, holistically to avoid or eliminate gaps in coverage.
AI-Specific Policies: Considering the purchase of AI-focused endorsements or standalone policies for additional protection.
Executive Protection: Verifying adequate coverage and limits, including “Side A” only or difference-in-condition coverage, to protect individual officers and directors, particularly if corporate indemnification is unavailable.
New “Chief AI Officer” Positions: Chief information security officers (CISOs) remain critical in monitoring cyber-related risks, but they are not the only emerging positions that must fit into existing insurance programs. Although the “chief AI officer” is not a traditional C-suite position, more and more companies are creating one to manage the multi-faceted and evolving use of AI technologies. Ensuring that these positions are included within the scope of D&O and management liability coverage is essential to affording protection against AI-related liabilities.

In sum, a proactive approach—especially when placing or renewing policies—can help mitigate the risk of coverage denials and enhance protection against AI-related legal challenges. Engaging experienced insurance brokers and coverage counsel can further strengthen policy terms, close potential gaps and facilitate comprehensive risk coverage in the evolving AI landscape.

Navigating the AI Frontier: Why Information Governance Matters More Than Ever

Artificial Intelligence (AI) is rapidly transforming the legal landscape, offering unprecedented opportunities for efficiency and innovation. However, this powerful technology also introduces new challenges to established information governance (IG) processes. Ignoring these challenges can lead to significant risks, including data breaches, compliance violations, and reputational damage.
“AI Considerations for Information Governance Processes,” a recent paper published by Iron Mountain, delves into these critical considerations, providing a framework for law firms and legal departments to adapt their IG strategies for the age of AI.
Key Takeaways:

AI Amplifies Existing IG Risks: AI tools, especially machine learning algorithms, often require access to and process vast amounts of sensitive data to function effectively. This makes robust data security, privacy measures, and strong IG frameworks absolutely paramount. Any existing vulnerabilities or weaknesses in your current IG framework can be significantly amplified by the introduction and use of AI, potentially leading to data breaches, privacy violations, and regulatory non-compliance.
Data Lifecycle Management is Crucial: From the initial data ingestion and collection stage, through data processing, storage, and analysis, all the way to data archival or disposal, a comprehensive understanding and careful management of the AI’s entire data lifecycle is essential for maintaining data integrity and ensuring compliance. This includes knowing exactly how data is used for training AI models, for analysis and generating insights, and for any other purposes within the AI system.
Vendor Due Diligence is Non-Negotiable: If you’re considering using third-party AI vendors or cloud-based AI services, conducting rigorous due diligence on these vendors is non-negotiable. This due diligence should focus heavily on evaluating their data security practices, their compliance with relevant industry standards and certifications, and their contractual obligations and guarantees regarding data protection and privacy.
Transparency and Explainability are Key: “Black box” AI systems that make decisions without any transparency or explainability can pose significant risks. It’s crucial to understand how AI algorithms make decisions, especially those that impact individuals, to ensure fairness, accuracy, non-discrimination, and compliance with ethical guidelines and legal requirements. This often requires techniques like model interpretability and explainable AI.
Proactive Policy Development is Essential: Organizations need to proactively develop clear policies, procedures, and guidelines for AI usage within their specific context. These should address critical issues such as data access and authorization controls, data retention and storage policies, data disposal and deletion protocols, as well as model training, validation, and monitoring practices.

The Time to Act is Now:
AI is not a future concern; it’s a present reality. Law firms and legal departments must proactively adapt their information governance processes to mitigate the risks associated with AI and unlock its full potential.

What is AI Washing and Why are Companies Getting Sued?

With the proliferation of artificial intelligence (AI) usage over the last two years, companies are developing AI tools at an astonishing rate. When pitching their AI tools, these companies make claims about what their products can do, often promising more than the tools deliver and exaggerating their capabilities. AI washing “is a marketing tactic companies employ to exaggerate the amount of AI technology they use in their products. The goal of AI washing is to make a company’s offerings seem more advanced than they are and capitalize on the growing interest in AI technology.”
Isn’t this mere puffery? No, according to the Federal Trade Commission (FTC), Securities and Exchange Commission (SEC), and investors.
The FTC released guidance in 2023 outlining questions companies can ask themselves to determine whether they are AI washing. It urges companies to determine whether they are overpromising what the algorithm or AI tool can deliver. According to the FTC, “You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
In March 2024, the SEC charged two investment advisers with AI washing for making “false and misleading statements about their use of artificial intelligence.” The two cases settled for a combined $400,000. The SEC found the two companies had “marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not.”
Investors are joining the hunt as well. In February and March 2025, investors filed securities suits against two companies alleging AI washing. In the first case, the company allegedly made statements to investors about its AI capabilities and reported “impressive financial results, outlooks and guidance.” It subsequently became the subject of short-seller reports alleging that it used “manipulative practices” to inflate its installation numbers and profitability. The litigation alleged that the company’s shares declined as a result.
In the second case, the named plaintiff in the class action alleged that the company overstated “its position and ability to capitalize on AI in the smartphone upgrade cycle,” causing investors to purchase shares at an artificially inflated price.
Lessons learned from these examples? Look at the FTC’s guidance and assess whether your sales and marketing plan takes AI washing into consideration.

Artists Protest AI Copyright Proposal in the U.K.

British Prime Minister Keir Starmer wants to turn the U.K. into an artificial intelligence (AI) superpower to help grow the British economy by using policies that he describes as “pro-innovation.” One of these policies proposed relaxing copyright protections. Under the proposal, initially unveiled in December 2024, AI companies could freely use copyrighted material to train their models unless the owner of the copyrighted material opted out.
Although some members of Parliament called the proposal an effective compromise between copyright holders and AI companies, over a thousand musicians released a “silent album” to protest the proposed changes to U.K. copyright laws. The album, currently streaming on Spotify, includes 12 tracks of only ambient sound. According to the musicians, the silent tracks evoke empty recording studios and represent the impact they “expect the government’s proposals would have on musicians’ livelihoods.” To further convey their unhappiness with the proposed changes, the titles of the 12 tracks, when combined, read, “The British government must not legalize music theft to benefit AI companies.”
High-profile artists like Elton John, Paul McCartney, Dua Lipa, and Ed Sheeran have also signed a letter urging the British government to avoid implementing these proposed changes. According to the artists, implementing the new rule would effectively give artists’ rights away to big tech companies. 
The British government launched a consultation that sought comments on the potential changes to the copyright laws. The U.K. Intellectual Property Office received over 13,000 responses before the consultation closed at the end of February 2025, which the government will now review as it seeks to implement a final policy.

Skin360 App Can’t Escape Scrutiny under Illinois Biometric Law

A federal district court has denied a motion by Johnson & Johnson Consumer Inc. (JJCI) to dismiss a second amended complaint alleging it violated the Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric information through its Neutrogena Skin360 beauty app without consumers’ informed consent or knowledge. The plaintiffs also allege that the biometric information collected through the app is then linked to their names, birthdates, and other personal information.
Plaintiffs alleged that the Skin360 app is depicted as “breakthrough technology” that provides personalized at-home skin assessments by scanning faces and analyzing skin to identify concerns like wrinkles, fine lines, and dark spots. The app then uses that data to recommend certain Neutrogena products to eliminate those concerns. JJCI argued that the Skin360 app recommends products designed to improve skin health, which means that consumers should be considered patients in a healthcare setting, making BIPA inapplicable.
However, the court disagreed, citing Marino v. Gunnar Optiks LLC, 2024 Ill. App. (1st) 231826 (Aug. 30, 2024), which held that a customer trying on non-prescription sunglasses using an online “try-on” tool is not a patient in a healthcare setting. In Marino, the court defined a patient as an individual currently awaiting or receiving treatment or care from a medical professional. By contrast, Skin360 uses artificial intelligence software to compare a consumer’s skin to a database of images and provides an assessment based on that comparison. Notably, JJCI did not dispute that no medical professionals are involved in providing the service through the Skin360 app.
The court stated that “[e]ven assuming Skin360 provides users with this AI assistant and ‘science-backed information’ the court finds it a reach to consider these services ‘medical care’ under BIPA’s health care exemption; [i]ndeed, Skin360 only recommends Neutrogena products to users of the technology, which suggests it is closer to a marketing and sales strategy rather than to the provision of informed medical care or treatment.”