New Workplace Policies Employers Should Consider
As the workplace landscape continues to evolve, employers must stay ahead of emerging challenges by implementing thoughtful and proactive policies.
In 2025, three key areas stand out as critical for fostering a positive and productive work environment: promoting collaboration and respect, supporting employee well-being, and responsibly integrating artificial intelligence. In this article, we’ll explore how well-crafted policies in these areas can enhance workplace culture, ensure compliance, and boost employee satisfaction.
Policies Promoting Collaboration, Respect, and Opportunity
Diversity, Equity, and Inclusion (“DEI”) is a key employment topic to prioritize in 2025. On January 21, 2025, President Trump signed Executive Order 14173, Ending Illegal Discrimination and Restoring Merit-Based Opportunity, which encourages private employers to end “illegal DEI discrimination and preferences.” This executive action directs federal agencies to promote “the policy of individual initiative, excellence, and hard work” in the private sector, and it directs the Attorney General to submit a report containing recommendations for enforcement measures to end “illegal discrimination and preferences.”
Despite the recent executive action, employers may still implement a policy addressing collaboration, respect, and opportunity in the workplace. In crafting such a policy, employers should strike a balance: the initiative should aim to ensure fair treatment and equal opportunity for all employees, regardless of background, without inviting claims of discrimination. An effective policy on collaboration, respect, and opportunity can foster a positive working environment, promote a sense of belonging and satisfaction, boost morale, and drive innovation.
Well-Being in the Workplace
Workplace well-being has transitioned from a perk to a necessity. Often, when an employee’s well-being deteriorates, so does their job performance. According to the National Alliance on Mental Illness (“NAMI”) (data last updated April 2023):
Approximately 1 in 5 adults in the United States experience mental illness each year; and
Approximately 1 in 20 adults in the United States experience a serious mental illness each year.
Additionally, U.S. Equal Employment Opportunity Commission (“EEOC”) data show that charges of discrimination based on mental health conditions (including substance use disorders) are substantial. Well-being not only has a physical and social impact on the individual employee; it also has a financial impact on the employer, as employee well-being affects productivity levels and healthcare costs.
A comprehensive well-being in the workplace policy provides guidance on collaboration between employees, encourages healthy habits through on-site initiatives, provides access to mental health resources, and implements strategies designed to promote social engagement (for example, a well-being in the workplace policy may offer days off for volunteering activities). An effective well-being in the workplace policy can reduce the stigma surrounding mental health and stress, cultivate a sense of purpose and accomplishment in the workplace, and ultimately enhance job satisfaction and productivity.
AI in the Workplace
The rapid integration and easy accessibility of artificial intelligence (“AI”) in the workplace necessitate clear employment policies. Several accessible (and often free) AI tools assist employees in drafting emails, preparing summary notes, drafting work materials, and preparing presentations. While these tools may promote efficiency, an array of legal risks can arise when employees use them improperly, chief among them the loss of confidentiality. A comprehensive AI policy should address AI usage guidelines (including clearly defining “AI usage,” listing permitted and prohibited uses, and implementing protocols for human oversight), ethical considerations, data privacy, and mandatory training. An effective AI policy can cultivate responsible innovation, build trust, and assist in a smooth transition into an AI-driven work environment.
Conclusion
Although employers typically update their employee handbooks at the end or beginning of the calendar year, there is never a bad time to implement new policies that address significant workplace issues. The policies above are just three examples of proactive steps an employer can take to improve its workplace culture and its compliance with important laws.
New York Attorney General Proposes Bill to Expand Consumer Protection Law
On March 13, New York Attorney General Letitia James announced the introduction of the Fostering Affordability and Integrity through Reasonable Business Practices Act (FAIR Business Practices Act). The proposed legislation seeks to extend the state’s existing ban on deceptive business practices to also prohibit unfair and abusive practices, aligning New York with 42 other states.
The bill, introduced in both the state Senate and Assembly, would enhance enforcement capabilities for the Office of the Attorney General (OAG) and private consumers, including the ability to seek civil penalties and restitution for violations involving unfair, deceptive, or abusive acts and practices (UDAAP). According to Attorney General James, the legislation is needed to tackle a host of consumer harms, including:
Subscription cancellations. Preventing companies from making it unreasonably difficult for consumers to cancel recurring payments.
Debt collection abuses. Prohibiting debt collectors from improperly seizing Social Security benefits or nursing homes from suing relatives of deceased residents for unpaid bills.
Auto dealer practices. Prohibiting car dealerships from withholding a customer’s photo identification until a sale is finalized.
Student loan servicing misconduct. Restricting student loan servicers from steering borrowers into costlier repayment plans.
Exploitation of limited English proficiency consumers. Addressing deceptive practices targeting non-English-speaking consumers.
Junk fees and hidden costs. Reducing unnecessary and deceptive charges in various industries, including healthcare and lending.
Artificial intelligence (AI) scams and online fraud. Strengthening enforcement against AI-driven scams, phishing schemes, and deceptive digital marketing practices.
The proposal has garnered support from former CFPB director Rohit Chopra and former FTC Chair Lina Khan, both of whom have emphasized the need for stronger state-level enforcement against deceptive and abusive business practices.
Putting It Into Practice: New York’s proposed legislation is the latest example of a growing trend among states taking a more active role in consumer protection enforcement (previously discussed here and here). This also highlights how some states are proactively responding to the CFPB’s state-level consumer protection recommendations from January, which encourage the adoption of the “abusive” standard (previously discussed here). With ongoing uncertainty surrounding the future of the CFPB, more states are likely to step in to fill the regulatory void by expanding their own consumer protection laws.
Virginia Moves to Regulate High-Risk AI with New Compliance Mandates
On February 20, the Virginia General Assembly passed the High-Risk Artificial Intelligence Developer and Deployer Act. If signed into law, Virginia would become the second state, after Colorado, to enact comprehensive regulation of “high-risk” artificial intelligence systems used in critical consumer-facing contexts, such as employment, lending, housing, and insurance.
The bill aims to mitigate algorithmic discrimination and establishes obligations for both developers and deployers of high-risk AI systems.
Scope of Coverage. The Act applies to entities that develop or deploy high-risk AI systems used to make, or that are a “substantial factor” in making, consequential decisions affecting consumers. Covered contexts include education enrollment or opportunity, employment, healthcare services, housing, insurance, legal services, financial or lending services, and decisions involving parole, probation, or pretrial release.
Risk Management Requirements. AI deployers must implement risk mitigation programs, conduct impact assessments, and provide consumers with clear disclosures and explanation rights.
Developer Obligations. Developers must exercise “reasonable care” to protect against known or foreseeable risks of algorithmic discrimination and provide deployers with key system usage and limitation details.
Transparency and Accountability. Both developers and deployers must maintain records sufficient to demonstrate compliance. Developers must also publish a summary of the types of high-risk AI systems they have developed and the safeguards in place to manage risks of algorithmic discrimination.
Enforcement. The Act authorizes the Attorney General to enforce its provisions and seek civil penalties of up to $7,500 per violation.
Safe Harbor. The Act includes a safe harbor from enforcement for entities that adopt and implement a nationally or internationally recognized risk management framework that reasonably addresses the law’s requirements.
So how does this compare to Colorado’s law? Virginia defines “high-risk” more narrowly, limiting coverage to systems that are a “substantial factor” in making a consequential decision, whereas the Colorado law applies to systems that serve as a “substantial” or “sole” factor. Colorado’s law also includes more prescriptive requirements around bias testing and impact assessment content, and it provides broader exemptions for small businesses.
Putting It Into Practice: If enacted, the Virginia AI law will add to the growing patchwork of state-level AI regulations. In 2024, at least 45 states introduced AI-related bills, with 31 states enacting legislation or adopting resolutions. States such as California, Connecticut, and Texas have already enacted AI-related statutes. Given this trend, it is anticipated that additional states will introduce and enact comprehensive AI regulations in the near future.
Key Considerations Before Negotiating Healthcare AI Vendor Contracts
The integration of artificial intelligence (AI) tools in healthcare is revolutionizing the industry, bringing efficiencies to the practice of medicine and benefits to patients. However, negotiating third-party AI tools requires a nuanced understanding of the tool’s application, implementation, risks, and contractual pressure points. Before entering the negotiation room, consider the following key insights:
I. The Expanding Role of AI in Healthcare
AI’s role in healthcare is rapidly expanding, offering a wide range of applications including real-time patient monitoring, streamlined clinical note-taking, evidence-based treatment recommendations, and population health management. Moreover, AI is transforming healthcare operations by automating staff tasks, optimizing operational and administrative processes, and providing guidance in surgical care. These technological advancements can not only improve efficiency but also enhance the quality of care provided. AI-driven customer support tools are also enhancing patient experiences by offering timely responses and personalized interactions. Even in employment recruiting, AI is being leveraged to identify and attract top talent in the healthcare sector.
With such a wide array of applications, it is crucial for stakeholders to understand the specific AI service offering when negotiating a vendor contract and implementing the new technology. This knowledge ensures that the selected AI solution aligns with the organization’s goals and can be effectively integrated into existing systems, while minimizing each party’s risk.
II. Pre-Negotiation Strategies
Healthcare AI arrangements are complex, often involving novel technologies and products, a wide range of possible applications, important data use and privacy considerations, and the potential to significantly impact patient care and patient satisfaction. Further, the regulatory landscape is developing and can be expected to evolve significantly in the coming years. Vendors and customers should consider the following when approaching a negotiation:
Vendor Considerations:
Conduct a Comprehensive Assessment: Understand the problem the product is addressing, expected users, scope, proposed solutions, data involved, potential evolution, and risk level.
Engage Stakeholders: Schedule kick-off calls with the customer’s privacy, IT, compliance, and clinical or administrative teams.
Documentation: Maintain summary documentation detailing model overview, value proposition, processing activities, and privacy/security controls.
Collaborate with Sales: Develop strategies with the sales team and consider trial periods or pilot programs. Plan for the progression of these programs. For example, even if a pilot program is free, data usage terms should still apply.
Customer Considerations:
Evaluate Within AI Governance Scope: Don’t treat an AI contract like a normal tech engagement. Instead, approach this arrangement within a larger AI governance scope, including accounting for the introduction of ethical frameworks, data governance practices, monitoring and evaluation systems, and related guardrails to work in tandem with the product’s applications.
Engage Stakeholders: Collaborate with legal, privacy, IT, compliance, and other relevant stakeholders from the outset.
Consider AI-Specific Contracts: Use AI-specific riders or MSAs and review standard vendor forms to streamline negotiations.
Assess Upstream Contract Requirements: Ensure upstream requirements can be appropriately reflected downstream.
Perform Vendor Due Diligence: As with any nascent industry, some vendors will not survive or may significantly change their focus or products, which might impact support or the long-term viability of the service. Learn about your vendor and ask questions about its financial stability and its privacy and security posture.
III. AI Governance and Risk Assessment
Evaluating AI-related risk requires understanding risk across the full lifecycle of an AI product, including its model architecture, training methods, data types, model access, and specific application context. In the healthcare space, this includes understanding the impact to operations, the effect on clinical care and any other impact to patients, the amount of sensitive information involved, and the degree of visibility and/or control the organization has over the model.[1] For example, the risk is much greater for AI used to assist clinical decision-making for diagnostics (e.g., assessing static imaging in radiology), whereas technology used for limited administrative purposes carries a comparatively smaller risk. Here are three resources that healthcare organizations can use to evaluate and address AI-related risks:
A. HEAT Map
A HEAT map can be a helpful tool for evaluating the severity of risks associated with AI systems. It categorizes risks into different “heat” levels (e.g., informational, low, medium, high, and critical). This high-level visual representation can be particularly helpful when a healthcare organization is initially deciding whether to engage a vendor for a new AI product or platform. It can help the organization identify the risk associated with rolling out a given product and prioritize risk management strategies if it moves forward in negotiating an agreement with that vendor.
For example, both the customer and the vendor might consider (and categorize within the HEAT map) what data the vendor will require to perform its services, why the vendor needs it, who will receive the data, what data rights the vendor is asking for, how that data is categorized, whether any federal, state, or global rules affect the acceptance of that data, and what mitigations are necessary to account for data privacy.
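As a loose illustration of how such a categorization might be tracked, the sketch below scores a few hypothetical vendor-data risks by likelihood and impact and maps them to heat levels. The risk names, scores, and thresholds are all assumptions for illustration, not part of any standard HEAT methodology.

```python
# Illustrative HEAT-map scoring sketch. Categories, scores, and thresholds
# are hypothetical; an organization would calibrate its own.
HEAT_LEVELS = ["informational", "low", "medium", "high", "critical"]

def heat_level(likelihood: int, impact: int) -> str:
    """Map 1-5 likelihood and impact scores to a heat level."""
    score = likelihood * impact  # ranges from 1 to 25
    if score <= 2:
        return "informational"
    if score <= 6:
        return "low"
    if score <= 12:
        return "medium"
    if score <= 19:
        return "high"
    return "critical"

# Hypothetical risks identified during vendor diligence: (likelihood, impact)
risks = {
    "PHI shared with vendor for model training": (4, 5),
    "Vendor retains data rights beyond the service term": (3, 4),
    "Administrative scheduling tool using de-identified data": (2, 1),
}

for name, (likelihood, impact) in risks.items():
    print(f"{name}: {heat_level(likelihood, impact)}")
```

A visual HEAT map would plot these same scores on a likelihood-by-impact grid; the point here is simply that each identified risk gets an explicit, comparable severity before contract negotiations begin.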
B. NIST AI Risk Management Framework
The National Institute of Standards and Technology (NIST) has created the NIST AI Risk Management Framework to guide organizations in identifying and managing AI-related risks.[2] This framework offers an example of a risk tiering system that can be used to understand and assess the risk profile of a given AI product, and ultimately guide organizations in the creation of risk policies and protocols, evaluation of ongoing AI rollouts, and resolution of any issues that arise. Whether healthcare organizations choose to adopt this risk tiering approach or apply their own, this framework reminds organizations of the many tools at their disposal to manage risk during the rollout of an AI tool, including data protection and retention policies, education of users, incident response protocols, auditing and assessment practices, changes to management controls, secure software development practices, and stakeholder engagement.
C. Attestations and Certifications
Attestations and certificates (e.g., HITRUST, ISO 27001, SOC-2) can also help your organization ensure compliance with industry standard security and data protection practices. Specifically, HITRUST focuses on compliance with healthcare data protection standards, reducing the risk of breaches and ensuring AI systems that handle health data are secure; ISO 27001 provides a framework for managing information security, helping organizations to safeguard AI data against unauthorized access and breaches; and SOC-2 assesses and verifies a service organization’s controls related to security, availability, processing integrity, confidentiality, and privacy, in order to ensure AI services are trustworthy. By engaging in the process to meet these certification standards, the organization will be better equipped to issue-spot potential problems and implement corrective measures. Also, these certifications can demonstrate to the public that the organization takes AI risks seriously, thereby strengthening trust and credibility amongst its patients and business partners.
IV. Contract Considerations
Once parties have assessed their organizational needs, engaged applicable stakeholders/collaborators, and reviewed their risk exposure from an AI governance perspective, they can move forward in negotiating the specific terms of the agreement. Here’s a high-level checklist of the terms and conditions that each party will want to pay careful attention to in negotiations, along with a deeper dive into the considerations surrounding data use and intellectual property (IP) issues:
A. Key Contracting Provisions:
Third-party terms
Privacy and security
Data rights
Performance and IP warranties
Service level agreements (SLAs)
Regulatory compliance
Indemnification (IP infringement, data breaches, etc.)
Limitations of liability and exclusion of damages
Insurance and audit rights
Termination rights and effects
B. Data Use and Intellectual Property Issues
When negotiating the terms and conditions related to data use, ownership, and other intellectual property (IP) issues, each party will typically aim to achieve the following objectives:
Customer Perspective:
Ensure customer will own all inputs, outputs, and derivatives of its data used in the application of the AI model;
Confirm data usage will be restricted to service-related purposes;
Confirm the customer’s right to access data stored by vendor or third-party as needed. For example, the customer might want to require that the vendor provide any relevant data and algorithms in the event of a DOJ investigation or plaintiff lawsuit;[3]
Aim for broad, protective IP liability and indemnity provisions; and
Where patient health information is involved, ensure that it is being used in compliance with HIPAA. Vendors often want to train their algorithms on protected health information (PHI). Unless the algorithm is being trained solely for the benefit of the HIPAA-regulated entity and fits within a healthcare operations exception, a HIPAA authorization from the data subject will typically be required to train the algorithm for broader purposes.
Vendor Perspective:
Ensure vendor owns all services, products, documentation, and enhancements thereto;
Access customer data sources for training and improving machine learning models; and
Retain ownership over outputs. From the vendor’s perspective, any customer data inputted into the vendor’s model is modified by that model or product, resulting in a blending of information owned by both sides. One potential solution to this shared-ownership issue is for the vendor to grant the customer a long-term license to use the output.
V. Conclusion
Negotiating contracts for AI tools in healthcare demands a comprehensive understanding of the technology, data use, risks, and liabilities, among other considerations. By preparing effectively and engaging the right stakeholders and collaborators, both vendors and customers can successfully navigate these negotiations.
FOOTNOTES
[1] UC AI Council Risk Assessment Guide.
[2] NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (July 2024).
[3] Paul W. Grimm et al., Artificial Intelligence as Evidence, 19 Northwestern J. of Tech. and Intellectual Prop. 1, 9 (2021).
Joint Alert Warns of Medusa Ransomware
On March 12, 2025, the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, and the Multi-State Information Sharing and Analysis Center issued a joint cybersecurity advisory describing the tactics, techniques, and procedures (TTPs) and indicators of compromise (IOCs) that companies can use to protect themselves against Medusa ransomware.
According to the advisory:
Medusa is a ransomware-as-a-service (RaaS) variant first identified in June 2021. As of February 2025, Medusa developers and affiliates have impacted over 300 victims from a variety of critical infrastructure sectors with affected industries including medical, education, legal, insurance, technology, and manufacturing. The Medusa ransomware variant is unrelated to the MedusaLocker variant and the Medusa mobile malware variant per the FBI’s investigation.
The advisory provides technical details on how Medusa gains access to systems, including phishing campaigns as the primary method for stealing credentials. The group also exploits unpatched software vulnerabilities, which reinforces the importance of timely patching.
The threat actors exfiltrate the victim’s data and then deploy the encryptor, gaze.exe, while disabling Windows Defender and other antivirus tools. Encrypted files receive the .medusa file extension. The actors then contact the victim within 48 hours and use a .onion data leak site for communication.
The advisory lists the IOCs and TTPs used in the attacks; IT professionals should review them and apply the recommended mitigations, which are lengthy but worth consulting.
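For illustration only, a minimal sweep like the one below could flag files bearing the .medusa extension the advisory describes. A real detection program would rely on EDR/SIEM tooling and the advisory's full IOC list; the function name and approach here are our own sketch, not part of the advisory.

```python
# Minimal sketch: find files carrying the ".medusa" extension that the
# Medusa encryptor appends to encrypted files. Illustrative only; real
# detection should use EDR tooling and the advisory's complete IOC list.
from pathlib import Path

def find_medusa_artifacts(root: str) -> list[Path]:
    """Return files under `root` whose name ends with the .medusa extension."""
    return sorted(p for p in Path(root).rglob("*.medusa") if p.is_file())
```

Finding such files indicates encryption has already occurred, so a sweep like this is a post-incident triage aid, not a substitute for the preventive mitigations (patching, phishing defenses) the advisory emphasizes.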
Insider Threats: Potential Signs and Security Tips
In recent news, New York’s Stram Center for Integrative Medicine reported a security incident involving an employee misusing a patient’s payment card information. According to a breach report filed with the U.S. Department of Health and Human Services Office for Civil Rights, the incident may have involved 15,263 patients’ information, even though the bad actor misused only one patient’s payment card. The individual has been arrested and is no longer employed. According to the Stram Center, Social Security numbers were not involved, but it is offering complimentary credit monitoring and identity protection services to affected individuals.
When we hear “data breach,” we’re likely to think of ransomware incidents, business email compromises, and other cyberattacks from external threats. However, according to a Cybersecurity Insiders report, 83% of organizations reported at least one insider attack in 2024. According to IBM’s 2024 Cost of a Data Breach report, data breaches resulting from insider threats were the costliest, at $4.99 million on average. While insider threats may not make headlines as frequently, organizations should take measures to mitigate the risks surrounding insider data incidents. Insider threats include unintentional errors, such as emailing personal information to the wrong recipient, misplacing documents, and discussing personal information with those who lack authorized access. They also include malicious insiders, such as disgruntled employees.
Organizations should monitor for several signs that may signal a malicious insider threat:
Timing of access – Malicious insiders may access the network and systems at unusual times. If an employee typically works only night shifts but their access logs suddenly reflect daytime activity, this could indicate potential malicious activity.
Unexpected spikes in network traffic – Atypical spikes in network traffic might reflect that a user is downloading or copying large volumes of data.
Unusual requests – If a user is requesting access to applications or information that are beyond the scope of their role or unusual for team members in similar roles, this could signal malicious intent.
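The "timing of access" signal above can be sketched programmatically. The schedule table, user name, and log format below are hypothetical, and production programs would rely on UEBA or SIEM tooling rather than hand-rolled checks.

```python
# Illustrative check for logins outside an employee's usual working window.
# The schedule data is hypothetical; real monitoring belongs in UEBA/SIEM tools.
from datetime import datetime

# Usual working window per user as (start_hour, end_hour); a window with
# start > end wraps past midnight (e.g., a 22:00-06:00 night shift).
USUAL_HOURS = {"night_shift_user": (22, 6)}
DEFAULT_WINDOW = (9, 17)  # assume standard business hours otherwise

def is_unusual_login(user: str, ts: datetime) -> bool:
    """Return True when a login timestamp falls outside the user's window."""
    start, end = USUAL_HOURS.get(user, DEFAULT_WINDOW)
    hour = ts.hour
    if start > end:  # window wraps midnight
        in_window = hour >= start or hour < end
    else:
        in_window = start <= hour < end
    return not in_window
```

A single out-of-window login proves nothing on its own; as the list above suggests, it is one signal to correlate with others, such as traffic spikes or unusual access requests.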
Several security practices can help organizations reduce the risk of insider attacks:
Endpoint monitoring – Constant endpoint monitoring can help organizations analyze user and entity behavior, scan networks, and detect potential early signs of insider activity.
Role-based access – Employees should only have access to the information that they need to fulfill their job responsibilities. Providing employees access on a least-privilege basis helps minimize the risk of unauthorized access and misuse.
Culture of awareness – Regular cybersecurity training, including on best practices such as locking one’s computer and maintaining proper password hygiene, can help minimize unauthorized insider access.
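The role-based, least-privilege model described above can be illustrated with a simple allow-list check; the roles and resources below are hypothetical examples for a healthcare setting.

```python
# Sketch of role-based, least-privilege access control. Roles and resources
# are hypothetical; each role maps to the minimum set it needs for the job.
ROLE_PERMISSIONS = {
    "billing_clerk": {"payment_records"},
    "nurse": {"patient_charts", "scheduling"},
    "it_admin": {"system_logs", "user_accounts"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only when the resource is in the role's allowed set."""
    return resource in ROLE_PERMISSIONS.get(role, set())
```

Note the default: an unknown role, or a resource outside the role's set, is denied. Denying by default is what keeps a billing clerk out of patient charts even if no one thought to forbid it explicitly.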
Since malicious insiders often already have some level of existing access to an organization’s systems and knowledge of business practices and organization policies, such threats can cause significant harm. Insider threat prevention should be an integral component of all organizations’ overall cybersecurity posture.
AI Governance: The Problem of Shadow AI
If you hang out with CISOs like I do, you know shadow IT has always been a difficult problem. Shadow IT refers to “information technology (IT) systems deployed by departments other than the central IT department, to bypass limitations and restrictions that have been imposed by central information systems. While it can promote innovation and productivity, shadow IT introduces security risks and compliance concerns, especially when such systems are not aligned with corporate governance.”
Shadow IT has been a longstanding problem because IT professionals cannot implement security measures and guidelines for systems they do not know are in use.
Now that artificial intelligence (AI) is widely used at work, it is imperative that organizations govern its use, just as they have long governed employees’ use of IT assets. Otherwise, employees will use AI tools without the organization’s knowledge and outside its acceptable use policies, exacerbating the problem of shadow AI in the organization.
A recent TechRadar article concluded that “you almost certainly have a shadow AI problem.” The risks of having shadow AI in the organization include: “the leakage of sensitive or proprietary data, which is a common issue when employees upload documents to an AI service such as ChatGPT, for example, and its contents become available to users outside of the company. But it could also lead to serious data quality problems where incorrect information is retrieved from an unapproved AI source which may then lead to bad business decisions.” And don’t forget about the problem of hallucinations.
Implementing an AI Governance Program is one way to address the shadow AI problem. AI Governance programs differ depending on business needs, but all of them address who owns the program, which AI tools are sanctioned, how those tools may be used, guardrails around the risks of data loss, data integrity, and accuracy, and user training and education. Governing the use of AI tools in an organization is similar to governing the use of IT assets. The most important thing is to get started before shadow AI gets out of hand.
DC Circuit Affirms Decision That Copyright Statute Requires Some Amount of Human Authorship, Leaves More Difficult Questions for Another Day
Does copyright law require that a human create a work? Yesterday the D.C. Circuit in Thaler v. Perlmutter held that it does and that a machine (such as a computer operating a generative AI program) cannot be designated as the author of the work. However, the D.C. Circuit refrained from saying more for now, leaving other questions about the use of AI when creating works for another day.
Dr. Stephen Thaler, a computer scientist who works with artificial intelligence, submitted a copyright application in 2019 for the image below, which he titled “A Recent Entrance to Paradise.” On the application, Thaler identified himself as the claimant, while designating a generative AI platform that he created and called the “Creativity Machine” as the author. To explain how the copyright transferred from the machine as author to himself as claimant, Thaler stated that his “ownership of the machine” caused the transfer. He would later argue that some form of work-for-hire transferred ownership to himself.
The Copyright Office denied registration, holding that copyright law requires a human author. Thaler appealed the decision to the District Court for the District of Columbia, which affirmed. As part of his case before the district court, Thaler raised, for the first time, the argument that he was in fact the author based on his creation of the Creativity Machine. However, because he had claimed on the record that the machine was the author throughout the proceedings before the Copyright Office, the district court held that he had waived this argument. Thaler then appealed to the D.C. Circuit Court of Appeals.
The D.C. Circuit’s decision yesterday affirmed both the Copyright Office’s and the district court’s decisions refusing Thaler’s copyright application for registration. On the key issue of copyright authorship, the court held that the text and structure of the Copyright Act require a human author. Section 201 of the Copyright Act states that ownership “vests initially in the author or authors” of a work. While “author” is undefined, the court looked to at least seven other provisions throughout the Copyright Act that require various acts or states of mind of the author. These included reliance on the author’s life, references to the author’s widow or widower and children, the act of signature required for copyright transfer, and the intent needed to create a joint work of authorship. However, the court deemed none of these requirements applicable to a machine “author.” The court also relied on the Copyright Office’s long-standing policy of refusing registration to nonhuman authors, as well as appellate decisions by the Seventh and Ninth Circuits that refused claims of copyright authorship inhering in nature, “otherworldly entities,” or animals. Finally, the court held that Thaler’s work-for-hire claim failed at least because the machine had not signed any document designating the work as being made for hire, and that he had waived any claim of personal authorship because he had failed to raise it before the Copyright Office. Therefore, the D.C. Circuit affirmed the denial of registration of the work with the Creativity Machine designated as the author.
While this case is the first to address the question of copyright authorship in the context of generative artificial intelligence, its holding is not unexpected. As briefly referenced above, other appellate courts have addressed the question of nonhuman authorship in other scenarios and come to the same conclusion. Therefore, while Thaler is important for extending the same holding to the context of AI creations, the requirement of human authorship is neither new nor unusual. As the Ninth Circuit held in Naruto v. Slater (a case involving the famous “monkey selfie” photograph), there is a strong presumption in the law that statutes are written with humans as the subjects of rights, not animals or machines. The numerous textual references to the lives, acts, and intentions of authors in the Copyright Act made it easy for the D.C. Circuit not to overturn that presumption here. The D.C. Circuit also held that it did not need to address whether the U.S. Constitution requires a human author under the current case. That issue is left for a future litigant to contend with.
Moreover, Thaler presented his case for machine authorship in the most extreme form possible — a claim that the author was solely the “Creativity Machine,” with no human input at all. As the court notes late in the decision, at least four copyright applications have now been denied registration in whole or in part by the Copyright Office because the author used AI in creating or editing a work while also relying on human input and claiming human authorship. The D.C. Circuit wisely decided to let that issue await a future ruling. Yet, as a result of that caution, litigants should recognize the limitations of the Thaler decision.
Finally, both Thaler and the other cases coming through the Copyright Office only concern AI creation of visual works of art. They do not concern other fields of creative works, such as written works or music. While the main holding of Thaler — that copyright protection cannot be granted where a machine is the sole creator — will certainly apply in these other fields, the permissible contours of AI-human interaction and their effect on authorship in these other categories of works are even more unclear given the lack of disputes arising to date.
Deep Legal: Transform Corporate Legal Practice with Client-Integrated, Real-Time Risk Monitoring Systems
The traditional practice of law has long been characterized by its reactive nature: clients call when problems arise, documents need review, or litigation looms. But what if the practice of law could be fundamentally reimagined? What if, instead of firefighters arriving after the blaze, attorneys could design sophisticated sprinkler systems that activate at the first sign of smoke?
This transformation is now possible. The advent of AI-powered legal research and analysis tools that can process vast amounts of legal information in seconds is enabling a paradigm shift from reactive counsel to proactive legal architecture. For large law firms serving enterprise clients, this represents perhaps the most significant opportunity in decades to redefine their value proposition.
The legal profession stands at an inflection point. For centuries, attorneys have served as expert navigators brought in to chart a course through troubled waters. Today, they can become architects of sophisticated systems that continuously monitor the legal seaworthiness of their clients’ operations before storms arrive. This new paradigm involves establishing what might best be described as legal security systems: integrated monitoring frameworks that continuously scan client operations for emerging legal risks, flag potential issues before they mature into problems, and provide real-time guidance on mitigation.
Much like cybersecurity systems that monitor networks for intrusions, these legal security systems vigilantly watch for potential regulatory violations, contractual exposures, compliance gaps, and litigation risks. When properly implemented, they transform the attorney’s role from crisis responder to strategic risk manager. The implications of this shift are profound. Law firms can deepen client relationships, develop more predictable revenue streams, and deliver measurably better outcomes. Clients benefit from reduced legal emergencies, lower overall legal spending, and the ability to operate with greater confidence in increasingly complex regulatory environments.
1. The Current Gap in Corporate Legal Protection
Despite significant investments in compliance programs and legal departments, most corporations operate with substantial blind spots in their legal risk management. These gaps persist for several interrelated reasons that technology is now positioned to address.
First, the volume and complexity of regulations governing modern business have expanded exponentially. A global corporation might be subject to tens of thousands of regulatory requirements across dozens of jurisdictions, many of which change frequently. No human team, regardless of size or expertise, can maintain perfect awareness of all applicable legal obligations.
Second, legal risks emerge from the everyday operations of business: contractual commitments made by sales teams, representations in marketing materials, HR decisions, operational changes, or strategic pivots. These activities occur continuously across organizational silos, often without legal review until problems surface.
Third, traditional compliance frameworks rely heavily on periodic audits, manually updated policies, and training programs that quickly become outdated. These approaches, while valuable, cannot keep pace with the dynamic nature of modern business operations.
Finally, corporate clients increasingly expect their outside counsel to function as business partners rather than specialized service providers. They seek attorneys who understand their operations intimately and who proactively identify risks before they materialize, an expectation that traditional service models struggle to fulfill.
An additional factor favoring outside counsel in this evolution is the matter of economies of scale. While in-house legal departments face continual budgetary constraints and typically focus on a single industry or company context, law firms can distribute the investment in sophisticated monitoring systems across multiple clients. By developing expertise and technical infrastructure that serves many clients in similar sectors, outside counsel can offer capabilities that would be prohibitively expensive for any single corporate legal department to build independently.
This scale advantage extends beyond technology to collective intelligence. Outside firms working across an industry accumulate insights about emerging risks, regulatory trends, and effective mitigation strategies that no single company could develop internally. These insights, when encoded into monitoring systems, create a network effect that benefits all clients served by the firm, creating an offering that in-house teams simply cannot replicate.
The result of all these factors is a protection gap that leaves even well-resourced organizations vulnerable to preventable legal challenges. The cost of this gap is measurable not just in litigation expenses and regulatory penalties, but in operational disruptions, reputational damage, and missed business opportunities due to legal uncertainty.
2. Architecting a Legal Security System
Creating an effective legal security system requires a thoughtful architecture that integrates technology, legal expertise, and client operations. While the specific design will vary based on client needs, industry context, and risk profile, certain fundamental components remain consistent.
At its core, a legal security system must establish continuous monitoring capabilities across key risk vectors. These typically include regulatory compliance, contractual obligations, intellectual property protection, employment practices, corporate governance, and industry-specific risk areas. For each vector, the system must map the sources of potential risk to the indicators that suggest emerging issues.
The foundation of this monitoring capability is a comprehensive legal knowledge base tailored to the client’s specific operations. This knowledge base must encode not just applicable laws and regulations, but how they intersect with the client’s business model, organizational structure, and strategic objectives. It must be continuously updated as laws change and as the client’s operations evolve.
Upon this foundation, firms can implement real-time scanning of client activities against the knowledge base. This might involve reviewing internal communications, analyzing contract terms, monitoring regulatory announcements, or scanning public records for potential litigation risks. Advanced systems might incorporate predictive analytics to identify patterns that historically precede legal problems.
The most sophisticated implementations integrate directly with client systems, enabling real-time legal guidance within operational workflows. For example, a sales contract management system might automatically flag problematic terms before agreements are finalized, or a product development platform might identify potential regulatory hurdles early in the design process.
Critically, these systems must balance comprehensiveness with practicality. Flagging every theoretical legal risk would quickly overwhelm both attorneys and clients with false positives. Effective systems must establish appropriate thresholds for escalation based on risk magnitude, company risk tolerance, and operational context.
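To make the escalation idea concrete, here is a minimal sketch in Python. The clause patterns, severity scores, and threshold are hypothetical placeholders, not a real rule set; an actual system would derive them from the client-specific knowledge base and risk tolerance described above.

```python
from dataclasses import dataclass

# Hypothetical clause patterns and severity scores; a real system would
# draw these from the client-specific legal knowledge base.
RISK_RULES = {
    "unlimited liability": 9,
    "auto-renewal": 4,
    "unilateral termination": 7,
    "indemnify": 6,
}

ESCALATION_THRESHOLD = 6  # tuned to the client's risk tolerance

@dataclass
class Finding:
    clause: str
    severity: int
    escalate: bool

def scan_contract(text: str) -> list[Finding]:
    """Flag risky clause language, marking only high-severity hits for attorney review."""
    lowered = text.lower()
    findings = []
    for phrase, severity in RISK_RULES.items():
        if phrase in lowered:
            findings.append(Finding(phrase, severity, severity >= ESCALATION_THRESHOLD))
    return findings

contract = "Vendor shall indemnify Customer; this agreement has an auto-renewal term."
for f in scan_contract(contract):
    print(f"{f.clause}: severity {f.severity}, escalate={f.escalate}")
```

The point of the threshold is exactly the balance described above: every match is recorded, but only findings above the client-specific severity cutoff generate an escalation, keeping false positives from overwhelming attorneys.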
3. Implementation Strategies for Outside Counsel
Implementing a legal security system requires a structured approach that balances technological capabilities with the practicalities of client relationships and operations. For large law firms, the implementation process typically unfolds in stages, beginning with a comprehensive risk assessment and culminating in a fully integrated monitoring system.
The first step involves conducting a thorough legal risk assessment for the client organization. This goes beyond traditional legal audits to examine not just current compliance status but the dynamic processes through which legal risks emerge in day-to-day operations. The assessment should identify both the most significant risk areas and the operational contexts in which they typically arise.
Based on this assessment, firms can develop a tailored monitoring framework that prioritizes the most critical risk vectors. This framework should define what will be monitored, how frequently, using what data sources, and with what thresholds for intervention. It should also establish clear protocols for escalation when potential issues are identified.
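As a concrete (and entirely hypothetical) illustration, a monitoring framework of this kind can be declared as structured configuration, with each risk vector specifying its data sources, scan frequency, and escalation protocol. The vector names, sources, and thresholds below are invented for illustration only.

```python
# Illustrative monitoring framework definition; every name and value here
# is a placeholder that would be tailored to a specific client engagement.
MONITORING_FRAMEWORK = {
    "regulatory_compliance": {
        "sources": ["agency_bulletins", "federal_register_feed"],
        "frequency": "daily",
        "escalation_threshold": "any_new_obligation",
        "escalate_to": "relationship_partner",
    },
    "contractual_obligations": {
        "sources": ["contract_repository", "renewal_calendar"],
        "frequency": "hourly",
        "escalation_threshold": "deadline_within_30_days",
        "escalate_to": "client_gc_office",
    },
    "litigation_risk": {
        "sources": ["court_dockets", "demand_letters_inbox"],
        "frequency": "daily",
        "escalation_threshold": "new_filing_naming_client",
        "escalate_to": "litigation_team",
    },
}

def vectors_monitored_at(frequency: str) -> list[str]:
    """List the risk vectors scanned at a given cadence."""
    return [name for name, cfg in MONITORING_FRAMEWORK.items()
            if cfg["frequency"] == frequency]

print(vectors_monitored_at("daily"))
```

Keeping the framework declarative in this way makes the "what, how often, from which sources, escalated to whom" decisions explicit and auditable, which is useful both for client sign-off and for iterating on the framework over time.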
With the monitoring framework defined, firms can begin building the necessary technological infrastructure. This typically involves a combination of existing legal technology platforms, custom-developed tools, and integration with client systems. The specific technology stack will vary based on client needs and firm capabilities, but it should enable automated scanning, intelligent analysis, and structured escalation.
Throughout implementation, firms must work closely with key stakeholders across the client organization. This includes not just the general counsel’s office but operational leaders whose activities will be monitored. Engaging these stakeholders early helps ensure the system addresses real-world risks, integrates with existing workflows, and gains the organizational buy-in necessary for successful adoption.
The most effective implementations follow an iterative approach, beginning with focused monitoring of high-priority risk areas and expanding over time. This allows for continuous refinement based on feedback and results, while demonstrating immediate value to clients through early wins.
4. Transforming the Business Model: From Billable Hours to Recurring Value
Perhaps the most profound implication of legal security systems is how they reshape the economics of legal practice. As AI dramatically accelerates research and analysis capabilities, many firms are facing an uncomfortable reality: traditional billable hour models increasingly put firm interests at odds with client demands for efficiency. When a task that once took ten hours can be completed in minutes (or seconds), how do firms maintain revenue while passing efficiency gains to clients?
Legal security systems offer a compelling answer. By shifting from discrete billable transactions to ongoing monitoring and risk management, firms can establish subscription-based revenue models that align incentives between counsel and client. Rather than selling time, firms sell outcomes: specifically, the maintenance of legal health and the early detection of potential issues before they become costly problems.
This model recognizes that legal expertise is most valuable when applied preventatively and continuously, not just during crises. Clients gain predictable legal costs and better outcomes, while firms secure more stable revenue streams and deeper client relationships. The subscription approach also reflects the significant upfront investment required to build effective monitoring systems: the expertise, knowledge-base development, and technical infrastructure that make real-time legal guidance possible.
For firms accustomed to hourly billing, this transition requires both strategic vision and practical execution. Most successful implementations begin with hybrid approaches: maintaining hourly billing for certain services while establishing subscription components for continuous monitoring and preventative counsel. Over time, as both firms and clients grow comfortable with the new model, the subscription elements can expand to encompass broader aspects of the relationship.
Closing Thoughts
The question isn’t whether AI will transform how we deliver legal services to our corporate clients; it’s whether your firm will lead this transformation or struggle to catch up. Legal security systems represent more than just an innovation; they embody a fundamental reimagining of the attorney-client relationship.
Throughout my career, I’ve watched countless innovations promise to revolutionize legal practice, but few have offered such clear and compelling benefits to both law firms and their clients. By embedding our expertise within the daily operations of our clients, we not only protect them more effectively but elevate our own practice from transactional service provider to indispensable strategic partner. The firms that master this approach will define the next generation of legal excellence.
The path to implementation doesn’t require massive infrastructure investments or wholesale practice redesigns. It starts with identifying a single high-value area where continuous monitoring could demonstrably benefit a key client. Perhaps it’s tracking regulatory changes affecting a specific business unit, monitoring contractual compliance across a supply chain, or providing real-time guidance for recurring transaction types. Start small, demonstrate value, and build from there.
I encourage you to take that first step this quarter. Identify one client relationship where this approach could strengthen your position as trusted counsel. Arrange a conversation about their most pressing legal concerns and explore how continuous monitoring might address them more effectively than traditional approaches. You may be surprised by how receptive clients are to this evolution; after all, they’ve been waiting for their law firms to embrace the same data-driven approach that has transformed their own operations. Exceed client expectations and deepen your relationships with “Deep Legal.”
The Symbiotic Future of Quantum Computing and AI
Quantum computing has the potential to revolutionize various fields, but practical deployments capable of solving real-world problems face significant headwinds due to the fragile nature of quantum systems. Qubits, the fundamental units of quantum information, are inherently unstable and susceptible to decoherence—a process by which interactions with the environment cause them to lose their quantum properties. External noise from thermal fluctuations, vibrations, or electromagnetic fields exacerbates this instability, necessitating extreme isolation and control, often achieved by maintaining qubits at ultra-low temperatures. Preserving quantum coherence long enough to perform meaningful computations remains one of the most formidable obstacles, particularly as systems scale.
Another major challenge is ensuring the accuracy and reliability of quantum operations, or “gates.” Quantum gates must manipulate qubits with extraordinary precision, yet hardware imperfections introduce errors that accumulate over time, jeopardizing the integrity of computations. While quantum error correction techniques offer potential solutions, they demand enormous computational resources, dramatically increasing hardware requirements. These physical and technical limitations present fundamental hurdles to building scalable, practical quantum computers.
The Intersection With Neural Networks
One promising approach to mitigating these issues lies in the unexpected ability of classical neural networks to approximate quantum states. As discussed in When Can Classical Neural Networks Represent Quantum States? (Yang et al., 2024), certain neural network architectures—such as recurrent neural networks (RNNs) and convolutional neural networks (CNNs)—can be trained to represent certain classes of quantum states. This insight suggests that instead of relying entirely on fragile physical qubits, classical neural networks could serve as an intermediary computational layer, learning and simulating quantum behaviors in ways that reduce the burden on quantum processors. Yang et al. further propose that classical deep learning models may be able to efficiently learn and encode quantum correlations, allowing them to predict and correct errors dynamically, thereby improving fault tolerance without the need for excessive physical qubits.
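The general idea of a neural network representing a quantum state can be illustrated with a minimal sketch. The example below uses a restricted-Boltzmann-machine-style ansatz, a classic neural-quantum-state form, rather than the specific RNN/CNN architectures discussed in the cited paper, and its parameters are random and untrained, so it only demonstrates the representational form: a network mapping each spin configuration to a wavefunction amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 8  # 4 spins, 8 hidden units (illustrative sizes)

# Random network parameters; in practice these would be trained (e.g., by
# variational Monte Carlo) so the network approximates a target state.
a = rng.normal(scale=0.1, size=n_visible)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n_visible))  # couplings

def amplitude(s: np.ndarray) -> float:
    """Unnormalized amplitude psi(s) for a spin configuration s in {-1,+1}^n."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

# Enumerate all 2^4 spin configurations and normalize to get probabilities.
configs = np.array([[1 if (i >> k) & 1 else -1 for k in range(n_visible)]
                    for i in range(2 ** n_visible)])
psi = np.array([amplitude(s) for s in configs])
probs = psi ** 2 / np.sum(psi ** 2)
print(f"probabilities sum to {probs.sum():.6f}")
```

The appeal of such representations is compactness: the network stores a polynomial number of parameters while implicitly defining amplitudes over an exponentially large configuration space, which is what makes classical networks a candidate intermediary layer for simulating or error-correcting quantum systems.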
Neural networks capable of representing quantum states could also enable new forms of hybrid computing. Instead of viewing artificial intelligence (AI) and quantum computing as separate domains, recent research suggests a future where they complement one another. Classical AI models could handle optimization, control, and data preprocessing, while quantum systems tackle computationally intractable problems.
Ultimately, the interplay between quantum mechanics and AI will most likely reshape our approach to computation. While quantum computers remain in their infancy, AI could provide a bridge to unlock their potential. By harnessing classical neural networks to mimic quantum properties, the scientific community may overcome the current limitations of quantum hardware and accelerate the development of practical, scalable quantum systems. The boundary between classical and quantum computation may not be as rigid as once thought.
Chinese Court Again Rules AI-Generated Images Are Eligible for Copyright Protection
On March 7, 2025, the Changshu People’s Court announced that it had ruled that images generated with artificial intelligence (AI) are eligible for copyright protection. This is believed to be the second case regarding AI-generated images; the Beijing Internet Court ruled similarly in late 2023. In the instant case, Lin XX generated an image of a half heart on a city waterfront using Midjourney and then edited the image in Photoshop. An unnamed Changsha real estate company subsequently used the image in a WeChat posting and built a three-dimensional installation based on the image at one of its developments.
The Court explained that it first reviewed the user agreement of the AI software involved in the case, confirming that under the Midjourney user agreement the assets and rights in images produced with the service belong to the user. The court also logged into the creation platform during the proceedings to review the login process, user information, and the image iteration history, including the modification of the prompts. The court held that Lin’s modification of the prompts and subsequent editing of the image in image-processing software reflected his unique selection and arrangement, and that the resulting image was therefore original and a work protected by the Copyright Law. The two defendants infringed the copyright by disseminating the image on the Internet without the copyright owner’s permission. At the same time, the court determined that Lin’s copyright was limited to the image itself; because the three-dimensional installation was merely based on the image, the real estate company’s design and construction of the installation did not infringe Lin’s copyright. The court then ruled that: 1. the infringing party must publicly apologize to the plaintiff Lin on its Xiaohongshu [Red Note] account for three consecutive days; 2. the infringing party must compensate the plaintiff Lin for economic losses and reasonable expenses totaling 10,000 RMB; and 3. the plaintiff Lin’s other claims were rejected. After the first-instance judgment, neither the plaintiff nor the defendant appealed, and the judgment has taken legal effect.
This contrasts with the decision reached by the U.S. Copyright Office in Zarya of the Dawn (Registration # VAu001480196), which declined to recognize copyright in the AI-generated images in that work.
The original announcement can be found here (Chinese only).
AppLovin & Its AI: A Lesson in Accuracy
Last week, we explored a recent data breach class action and the litigation risk of such lawsuits. Companies need to be aware of litigation risk arising not only from data breaches, but also from shareholder class actions related to privacy concerns.
On March 5, 2025, a class action securities lawsuit was filed against AppLovin Corporation and its Chief Executive Officer and Chief Financial Officer (collectively, the defendants). AppLovin is a mobile advertising technology business that operates a software-based platform connecting mobile game developers to new users. AppLovin offers a software platform and an app. In the lawsuit, the plaintiff alleges that the defendants misled investors regarding AppLovin’s artificial intelligence (AI)-powered digital ad platform, AXON.
According to the complaint, the defendants made material representations through press releases and statements on earnings calls about how an upgrade to AppLovin’s AXON AI platform would provide improvements over the platform’s earlier version. The complaint further alleged that the defendants made numerous statements indicating that AppLovin’s financial growth in 2023 and 2024 was driven by improvements to the AXON technology. The defendants reportedly stated that AppLovin’s increases in net revenue per installation of the mobile app and in the volume of installations were the result of the improved AXON technology.
The complaint further states that on February 25, 2025, two short seller reports were published that linked AppLovin’s digital ad platform growth not to AXON, but to exploitative app permissions that carried out “backdoor” installations without users noticing. According to the reports, AppLovin used code that purportedly allowed it to bind to consumers’ permissions for AppHub, Android’s centralized Google repository where app developers can upload and distribute their apps. The complaint claims that by attaching to AppHub’s one-click direct installations as its own, AppLovin directly downloaded apps onto consumers’ phones without their knowledge.
The research reports also state that AppLovin was reverse-engineering advertising data from Meta platforms and using manipulative practices, such as having ads click on themselves and forcing shadow downloads, to inflate its installation and profit figures. One of the research reports states that AppLovin was “intentionally vague about how its AI technology actually works,” and that the company used its upgraded AXON technology as a “smokescreen to hide the true drivers of its mobile gaming and e-commerce initiatives, neither of which have much to do with AI.” The reports further assert that the company’s “recent success in mobile gaming stems from the systematic exploitation of app permissions that enable advertisements themselves to force-feed silent, backdoor app installations directly onto users’ phones.” The complaint details the findings from the reports and alleges that AppLovin’s misrepresentations led to artificially inflated stock prices, which materially declined because of the research report findings.
On a company blog post in response to the research reports, the CEO wrote that “every download [of AppLovin] results from an explicit user choice—whether via the App Store or our Direct Download experience.”
As organizations begin integrating AI into their operations, they should be cautious in making representations regarding AI as a profitability driver. Executive leaders responsible for issuing press releases and leading earnings calls relating to a company’s technology practices should also understand how these technologies function and ensure that any statements they make are accurate. Whether such allegations are true or not, litigation over materially false representations can prove costly to an organization, from both a financial and a reputational perspective.