Navigating the AI Frontier: Why Information Governance Matters More Than Ever

Artificial Intelligence (AI) is rapidly transforming the legal landscape, offering unprecedented opportunities for efficiency and innovation. However, this powerful technology also introduces new challenges to established information governance (IG) processes. Ignoring these challenges can lead to significant risks, including data breaches, compliance violations, and reputational damage.
“AI Considerations for Information Governance Processes,” a recent paper published by Iron Mountain, delves into these critical considerations, providing a framework for law firms and legal departments to adapt their IG strategies for the age of AI.
Key Takeaways:

AI Amplifies Existing IG Risks: AI tools, especially machine learning algorithms, often require access to and process vast amounts of sensitive data to function effectively. This makes robust data security, privacy measures, and a strong IG framework paramount. Any vulnerabilities in your current IG framework can be significantly amplified by the introduction of AI, potentially leading to data breaches, privacy violations, and regulatory non-compliance.
Data Lifecycle Management is Crucial: From initial data ingestion and collection through processing, storage, and analysis, to archival or disposal, a comprehensive understanding and careful management of the AI’s entire data lifecycle is essential for maintaining data integrity and ensuring compliance. This includes knowing exactly how data is used for training AI models, for analysis and generating insights, and for any other purpose within the AI system.
Vendor Due Diligence is Non-Negotiable: If you’re considering using third-party AI vendors or cloud-based AI services, conducting rigorous due diligence on these vendors is non-negotiable. This due diligence should focus heavily on evaluating their data security practices, their compliance with relevant industry standards and certifications, and their contractual obligations and guarantees regarding data protection and privacy.
Transparency and Explainability are Key: “Black box” AI systems that make decisions without any transparency or explainability can pose significant risks. It’s crucial to understand how AI algorithms make decisions, especially those that impact individuals, to ensure fairness, accuracy, non-discrimination, and compliance with ethical guidelines and legal requirements. This often requires techniques like model interpretability and explainable AI.
Proactive Policy Development is Essential: Organizations need to proactively develop clear policies, procedures, and guidelines for AI usage within their specific context. These should address critical issues such as data access and authorization controls, data retention and storage policies, data disposal and deletion protocols, as well as model training, validation, and monitoring practices.

The Time to Act is Now:
AI is not a future concern; it’s a present reality. Law firms and legal departments must proactively adapt their information governance processes to mitigate the risks associated with AI and unlock its full potential.

Artists Protest AI Copyright Proposal in the U.K.

British Prime Minister Keir Starmer wants to turn the U.K. into an artificial intelligence (AI) superpower to help grow the British economy by using policies that he describes as “pro-innovation.” One of these policies proposed relaxing copyright protections. Under the proposal, initially unveiled in December 2024, AI companies could freely use copyrighted material to train their models unless the owner of the copyrighted material opted out.
Although some Parliament members called the proposal an effective compromise between copyright holders and AI companies, over a thousand musicians released a “silent album” to protest the proposed changes to U.K. copyright laws. The album, currently streaming on Spotify, includes 12 tracks of only ambient sound. According to the musicians, the silent tracks depict empty recording studios and represent the impact they “expect the government’s proposals would have on musicians’ livelihoods.” To further convey their unhappiness with the proposed changes, the titles of the twelve tracks, when combined, read, “The British government must not legalize music theft to benefit AI companies.”
High-profile artists like Elton John, Paul McCartney, Dua Lipa, and Ed Sheeran have also signed a letter urging the British government to avoid implementing these proposed changes. According to the artists, implementing the new rule would effectively give artists’ rights away to big tech companies. 
The British government launched a consultation that sought comments on the potential changes to the copyright laws. The U.K. Intellectual Property Office received over 13,000 responses before the consultation closed at the end of February 2025, which the government will now review as it seeks to implement a final policy.

Skin360 App Can’t Escape Scrutiny under Illinois Biometric Law

A federal district court has denied a motion by Johnson & Johnson Consumer Inc. (JJCI) to dismiss a second amended complaint alleging it violated the Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric information through its Neutrogena Skin360 beauty app without consumers’ informed consent or knowledge. The plaintiffs also allege that the biometric information collected through the app is then linked to their names, birthdates, and other personal information.
Plaintiffs alleged that the Skin360 app is depicted as “breakthrough technology” that provides personalized at-home skin assessments by scanning faces and analyzing skin to identify concerns like wrinkles, fine lines, and dark spots. The app then uses that data to recommend certain Neutrogena products to the consumer to address those concerns. JJCI argued that the Skin360 app recommends products designed to improve skin health, which means that the consumers should be considered patients in a healthcare setting, making BIPA inapplicable.
However, the court disagreed and cited Marino v. Gunnar Optiks LLC, 2024 IL App (1st) 231826 (Aug. 30, 2024), which held that a customer trying on non-prescription sunglasses using an online “try-on” tool is not considered a patient in a healthcare setting. In Marino, the court defined a patient as an individual currently waiting for or receiving treatment or care from a medical professional. By contrast, Skin360 uses artificial intelligence software to compare a consumer’s skin to a database of images and provides an assessment based on a comparison of these images. Notably, JJCI did not dispute that no medical professionals are involved in providing the service through the Skin360 app.
The court stated that “[e]ven assuming Skin360 provides users with this AI assistant and ‘science-backed information’ the court finds it a reach to consider these services ‘medical care’ under BIPA’s health care exemption; [i]ndeed, Skin360 only recommends Neutrogena products to users of the technology, which suggests it is closer to a marketing and sales strategy rather than to the provision of informed medical care or treatment.”

The Big Six Items That Family Offices Need to Consider in 2025

Across all industries, family offices and their owners and management teams face rapidly evolving challenges, opportunities, and risks in the dynamic environment that is 2025. Here are six issues that family offices should consider and be mindful of this year.
1. Impending Sunset After December 31, 2025, of the Temporarily Doubled Federal Estate, Gift and Generation-Skipping Transfer Tax Exemption — or Maybe Not?
In 2025, the Internal Revenue Service (IRS) increased the lifetime estate and gift tax exemption to $13.99 million per individual ($27.98 million per married couple). Clients who maximized their previous exemption ($13.61 million per individual in 2024) can now make additional gifts of up to $380,000 ($760,000 per married couple) in 2025 without triggering gift tax. Clients who have not used all (or any) of their exemption to date should be particularly motivated to make lifetime gifts because, under current law, the lifetime exemption is scheduled to sunset.
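As a quick arithmetic check of the gift figures above, the additional 2025 capacity is simply the difference between the two years’ exemption amounts:

\[
\begin{aligned}
\$13{,}990{,}000 - \$13{,}610{,}000 &= \$380{,}000 \ \text{per individual} \\
2 \times \$380{,}000 &= \$760{,}000 \ \text{per married couple}
\end{aligned}
\]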
Since the 2017 Tax Cuts and Jobs Act, the lifetime exemption has been indexed for inflation each year. Understandably, clients have grown accustomed to the steady and predictable increase in their exemption. However, absent congressional action, the lifetime estate and gift tax (and generation-skipping transfer tax) exemption will be cut in half to approximately $7.2 million per individual ($14.4 million per married couple) at the start of 2026. That being said, as a result of the Republican trifecta in the 2024 election, it is very plausible that the temporarily doubled exemption may be extended for some additional period as part of the budget reconciliation process, which allows action by majority vote in the Senate (with the vice president casting the deciding vote in the event of a tie), in contrast to the ordinary rules of procedure that require 60 votes out of 100 in the Senate. But there are no assurances that such an extension will occur, and any legislation may not be enacted (if at all) until very late in the year.
To ensure that no exemption is forfeited, clients should consider reaching out to their estate planning and financial advisors to confirm they have taken full advantage of their lifetime exemption. If the exemption decreases at the start of 2026, unused exemption will be lost. Indeed, absent congressional action to extend the temporarily doubled exemption, this is a use-it-or-lose-it situation.
2. Buy-Sell Agreements and Their Role in Business Succession Planning
The death, disability, or retirement of a controlling owner in a family-controlled business can wreak havoc on the entity that the owner may have spent a lifetime building from scratch. If not adequately planned for, such events can lead to the forced sale of the business out of family hands to an unrelated third party. 
A buy-sell agreement is an agreement between the owners of a business, or among the owners of the business and the entity, that provides for the mandatory purchase (or right of first refusal) of an owner’s equity interest, by the other owners or by the business itself (or some combination of the two), upon the occurrence of specified triggering events described in the agreement. Such triggering events can include the death, disability, retirement, withdrawal or termination of employment, bankruptcy and sometimes even the divorce of an owner. Buy-sell agreements may be adapted for use by all types of business entities, including C corporations, S corporations, partnerships, and limited liability companies. 
Last June, in Connelly v. United States, the US Supreme Court affirmed a decision of the Eighth Circuit Court of Appeals in favor of the government concerning the estate tax treatment of life insurance proceeds that are used to fund a corporate redemption obligation under a buy-sell agreement. The specific question presented was whether, in determining the fair market value of the corporate shares, there should be any offset to take into account the redemption obligation to the decedent’s estate under a buy-sell agreement. The Supreme Court concluded that there should be no such offset. In doing so, the Supreme Court resolved a conflict that had existed among the federal circuit courts of appeal on this offset issue. 
As a result of the Supreme Court’s decision, buy-sell agreements that are structured as redemption agreements should be reviewed by business owners who expect to have taxable estates. In many cases, it may be desirable instead to structure the buy-sell agreement as a cross-purchase agreement.
For further information, please see our article that addresses the Connelly decision and its implications: US Supreme Court Affirms the Eighth Circuit’s Decision in Favor of the Government Concerning the Estate Tax Treatment of Life Insurance Proceeds Used to Fund a Corporate Redemption Obligation. 
3. Be Very Careful in Planning With Family Limited Partnerships and Family Limited Liability Companies
The September 2024 Tax Court memorandum decision in Estate of Fields v. Commissioner, T.C. Memo. 2024-90, provides a cautionary tale of a bad-facts family limited partnership (FLP) that caused estate tax inclusion of the property transferred to the FLP under both sections 2036(a)(1) and (2) of the Internal Revenue Code, with loss of discounts for lack of control and lack of marketability. The court applied the Tax Court’s 2017 holding in Estate of Powell v. Commissioner, 148 T.C. 392 (2017) — that the ability of the decedent as a limited partner to join with other partners to liquidate the FLP constitutes a section 2036(a)(2) estate tax trigger — and raised the specter of accuracy-related penalties that may loom where section 2036 applies.
Estate of Fields illustrates that, if not carefully structured and administered, planning with family entities can potentially render one worse off than not doing any such planning at all. 
4. The IRS Gets Aggressive in Challenging Valuation Issues 
The past year and a half has seen the IRS become very aggressive in challenging valuation issues for gift tax purposes.
First, in Chief Counsel Advice (CCA) 202352018, the IRS’s National Office, providing advice to an IRS examiner in the context of a gift tax audit, addressed the gift tax consequences of modifying a grantor trust to add a tax reimbursement clause, finding there to be a taxable gift. The facts of this CCA involved an affirmative consent by the beneficiaries to a trust modification to allow the trustee to reimburse the grantor for the income taxes attributable to the trust’s grantor trust status. Significantly, the IRS admonished that its principles could also apply in the context of a beneficiary’s failure to object to a trustee’s actions, or in the context of a trust decanting.
Next, in a pair of 2024 Tax Court decisions — the Anenberg and McDougall cases — the IRS challenged early terminations of qualified terminable interest property (QTIP) marital trusts in favor of the surviving spouse that were then followed by the surviving spouse’s sale of the distributed trust property to irrevocable trusts established for children. While the court in neither case found there to be a gift by the surviving spouse, the Tax Court in McDougall determined that the children made a gift to the surviving spouse by surrendering their remainder interests in the QTIP trust. 
5. The Show Continues: The CTA No Longer Applicable to US Citizens and Domestic Companies
After an on-again-off-again pause of three months beginning in late 2024, the Corporate Transparency Act (CTA) is back in effect, but only for foreign reporting companies. On March 2, the US Department of the Treasury (Treasury) announced it will not enforce reporting requirements for US citizens or domestic companies (or their beneficial owners).
Pursuant to Treasury’s announcement, the CTA will now only apply to foreign entities registered to do business in the United States. These “reporting companies” must provide beneficial ownership information (BOI) and company information to the Financial Crimes Enforcement Network (FinCEN) by specified dates and are subject to ongoing reporting requirements regarding changes to previously reported information. To learn more about the CTA’s specific requirements, please see our prior client alert (note that the CTA no longer applies to domestic companies or US citizens, and the deadlines mentioned in the alert have since been modified, as detailed in the following paragraph).
On February 27, FinCEN announced it would not impose fines or penalties, nor take other enforcement measures against reporting companies that fail to file or update BOI by March 21. FinCEN also stated it will publish an interim final rule with new reporting deadlines but did not indicate when the final rule can be expected. Treasury’s March 2 announcement indicates that the government is expecting to issue a proposed rule to narrow the scope of CTA reporting obligations to foreign reporting companies only. No further details are available at this time, but domestic reporting companies may consider holding off on filing BOI reports until the government provides additional clarity on reporting requirements. Foreign reporting companies should consider assembling required information and being prepared to file by the March 21 deadline, while remaining vigilant about further potential changes to reporting requirements in the meantime.  
On the legislative front, the US House of Representatives passed the Protect Small Businesses from Excessive Paperwork Act of 2025 (H.R. 736) on February 10 in an effort to delay the CTA’s reporting deadline. The bill aims to extend the BOI reporting deadline for companies formed before January 1, 2024, until January 1, 2026. The bill is currently before the US Senate, but it is unclear whether it will pass in light of the latest updates.
6. Ethical and Practical Use of AI in Estate Planning
The wave of innovative and exciting artificial intelligence (AI) tools has taken the legal community by storm. While AI opens possibilities for all lawyers, advisors in the estate planning and family office space should carefully consider whether, and when, to integrate AI into their practice. 
Estate planning is a human-centered field. To effectively serve clients, advisors develop relationships over time, provide secure and discreet services, and make recommendations based on experience, compassion, and intuition.
Increasingly, AI tools have emerged that are marketed towards estate planning and family office professionals. These tools can (1) assist planners with summarizing complex estate planning documents and asset compilations, (2) generate initial drafts of standard estate planning documents, and (3) translate legal jargon into client-friendly language. Though much of the technology is in the initial stages, the possibilities are exciting. 
While estate planning and family office professionals should remain optimistic and open about the emerging AI technology, the following recommendations should be top of mind: 

First, advisors must scrutinize the data privacy policies of all AI tools. To protect the privacy of their clients, advisors should be cautious when engaging with any AI program that requires the input of sensitive or confidential documents.
Next, advisors should stay up to date on the statutory and case law developments, as the legal industry is still developing its stance on AI. 
Finally, advisors should honor and prioritize the personal and human nature of estate planning and family advising. Over-automating one’s practice can come at the expense of building strong client relationships. 


AI in Business: The Risks You Can’t Ignore

Artificial Intelligence (AI) is revolutionizing business operations, offering advancements in efficiency, decision-making, and customer engagement. However, its rapid integration into business processes brings forth a spectrum of legal and financial risks that enterprises must navigate to ensure compliance and maintain trust.
The Broad Legal Definition of AI and Its Implications
In the United States, the legal framework defines AI far more expansively than the average person might expect, potentially encompassing a wide array of software applications. Under 15 U.S.C. § 9401(3), AI is any machine-based system that:

makes predictions, recommendations, or decisions,
uses human-defined objectives, and
influences real or virtual environments.

This broad definition implies that even commonplace tools like Excel macros could be subject to AI regulations. As Neil Peretz of Enumero Law notes, such an expansive definition means that businesses across various sectors must now reappraise all of their software usage to ensure compliance with new AI laws.
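To illustrate how broad that definition is, consider a hypothetical few-line screening script (the function, names, and threshold below are invented): it makes a recommendation, pursues a human-defined objective, and influences a real-world outcome once acted on, so it arguably satisfies each prong of § 9401(3) despite containing no machine learning at all.

```python
# Hypothetical illustration only: a trivial rule-based screener with no
# machine learning that nevertheless arguably fits each prong of the
# statutory definition of AI in 15 U.S.C. § 9401(3).

def recommend_credit_limit(annual_income: float, existing_debt: float) -> str:
    """Makes a 'recommendation' (prong 1) toward a human-defined
    objective, limiting default risk (prong 2)."""
    debt_ratio = existing_debt / max(annual_income, 1.0)
    if debt_ratio < 0.35:  # human-defined threshold
        return "approve higher limit"
    return "keep current limit"

# The recommendation 'influences a real environment' (prong 3) once a
# lender acts on it.
print(recommend_credit_limit(annual_income=80_000, existing_debt=20_000))
```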
Navigating the Evolving Regulatory Landscape
The regulatory environment for AI is rapidly evolving. The European Union’s AI Act, for instance, classifies AI systems into risk categories, imposing strict compliance requirements on high-risk applications. In the United States, various states are introducing AI laws, requiring companies to stay abreast of changing regulations.
According to Jonathan Friedland, a partner with Much Shelist, P.C., who represents boards of directors of PE-backed and other privately owned companies, developments in artificial intelligence are happening so quickly that many companies of even modest size are spending significant time developing compliance programs to ensure adherence to applicable laws.
One result, according to Friedland, is that “[a]s one might expect, the sheer number of certificate programs, online courses, and degrees now offered in AI is exploding. Everyone seems to be getting into the game,” Friedland continues. “For example, the International Association of Privacy Professionals, a global organization previously focused on privacy and data protection, recently started offering its ‘Artificial Intelligence Governance Professional’ certification.” The challenge for companies, according to Friedland, is “to invest appropriately without overdoing it.”
Navigating Bias and Discrimination in AI Systems
Legal challenges over algorithmic bias and accountability typically claim that the historical data used to train AI reflects societal inequalities, which AI systems can then perpetuate.
Sean Griffin, of Longman & Van Grack, highlights cases where AI tools have led to allegations of discrimination, such as a lawsuit against Workday, where an applicant claimed the company’s AI system systematically rejected Black and older candidates. Similarly, Amazon discontinued an AI recruiting tool after discovering it favored male candidates, revealing the potential for AI to reinforce societal biases.
To mitigate these risks, businesses should implement regular audits of their AI systems to identify and address biases. This includes diversifying training data and establishing oversight mechanisms to ensure fairness in AI-driven decisions.
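One common starting point for such an audit is the EEOC’s four-fifths rule: compare each group’s selection rate to the most-favored group’s rate and flag any ratio below 0.8. Below is a minimal sketch, with invented counts and group labels; a real audit would of course be far more rigorous.

```python
# Minimal sketch of an adverse impact audit using the four-fifths rule.
# All counts and group labels are invented for illustration.

from collections import Counter

# (group, was_selected) outcomes from a hypothetical screening tool
outcomes = ([("A", True)] * 48 + [("A", False)] * 52
            + [("B", True)] * 30 + [("B", False)] * 70)

applicants = Counter(group for group, _ in outcomes)
selected = Counter(group for group, chosen in outcomes if chosen)

rates = {g: selected[g] / applicants[g] for g in applicants}
best_rate = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best_rate
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"group {group}: selection rate {rate:.2f}, ratio {ratio:.2f} -> {flag}")
```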
Addressing Data Privacy Concerns
AI’s reliance on vast datasets, often containing personal and sensitive information, raises significant data privacy issues. AI-powered tools might be able to infer sensitive information, such as health risks from social media activity, potentially bypassing traditional privacy safeguards.
Because AI systems potentially have access to a wide range of data, compliance with data protection regulations like the GDPR and CCPA is crucial. Businesses must ensure that data used in AI systems is collected and processed lawfully, with explicit consent where necessary. Implementing robust data governance frameworks and anonymizing data can help mitigate privacy risks.
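As a simple illustration of one such safeguard, direct identifiers can be replaced with salted hashes before records enter an AI pipeline. Note that this is pseudonymization rather than true anonymization, since anyone holding the salt can re-link records; the field names below are invented.

```python
# Sketch: replacing direct identifiers with salted hashes before records
# enter an AI pipeline. This is pseudonymization, not anonymization:
# whoever holds the salt can re-link the records.

import hashlib
import secrets

SALT = secrets.token_bytes(16)  # store separately, under strict access control

def pseudonymize(identifier: str) -> str:
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

record = {"email": "jane@example.com", "purchase_total": 142.50}
safe_record = {"user_key": pseudonymize(record["email"]),
               "purchase_total": record["purchase_total"]}
print(safe_record)
```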
Ensuring Transparency and Explainability
The complexity of AI models, particularly deep learning systems, often results in ‘black box’ scenarios where decision-making processes are opaque. This lack of transparency can lead to challenges in accountability and trust. Businesses should be mindful of the risks associated with engaging third parties to develop or operate their AI solutions. In many areas of decision-making, explainability is required, and a black-box approach will not suffice. For example, when denying someone consumer credit, specific adverse action reasons must be provided to the applicant.
To address this, businesses should strive to develop AI models that are interpretable and can provide clear explanations for their decisions. This not only aids in regulatory compliance but also enhances stakeholder trust.
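As a simplified illustration (the model, weights, and feature names below are invented), an interpretable linear score makes it straightforward to surface the main drivers of a denial, which is the kind of output adverse action notices call for:

```python
# Sketch: deriving adverse-action-style reason codes from an interpretable
# linear scoring model. Feature names, weights, and threshold are invented.

WEIGHTS = {"debt_to_income": -40.0, "years_of_credit": 5.0,
           "recent_delinquencies": -25.0}
BASELINE = 50.0   # score an average applicant would receive
THRESHOLD = 60.0  # minimum score for approval

def score_with_reasons(applicant: dict):
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BASELINE + sum(contributions.values())
    # The most negative contributions become candidate adverse-action reasons.
    reasons = [f for f, c in sorted(contributions.items(), key=lambda kv: kv[1])
               if c < 0][:2]
    return score, reasons

score, reasons = score_with_reasons(
    {"debt_to_income": 0.6, "years_of_credit": 4, "recent_delinquencies": 1})
if score < THRESHOLD:
    print(f"declined (score {score:.0f}); principal reasons: {reasons}")
```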
Managing Cybersecurity Risks
AI systems are both targets and tools in cybersecurity. Alex Sharpe points out that cybercriminals are leveraging AI to craft sophisticated phishing attacks and automate hacking attempts. Conversely, businesses can employ AI for threat detection and rapid incident response.
The legal risks associated with AI, particularly in regulated sectors like financial services, highlight the importance of managing cybersecurity risks. Implementing robust cybersecurity measures, such as encryption, access controls, and continuous monitoring, is essential to protect AI systems from threats. Regular security assessments and updates can further safeguard against vulnerabilities.
Considering Insurance as a Risk Mitigation Tool
Given the multifaceted risks associated with AI, businesses should evaluate the extent to which certain types of insurance can help them manage and reduce risks. Policies such as commercial general liability, cyber liability, and errors and omissions insurance can offer protection against various AI-related risks.
Businesses can benefit from auditing business-specific AI risks and considering insurance as a risk mitigation tool. Regularly reviewing and updating insurance coverage ensures that it aligns with the evolving risk landscape associated with AI deployment.
Conclusion
While AI offers transformative potential for businesses, it also introduces significant legal and financial risks. By proactively addressing issues related to bias, data privacy, transparency, cybersecurity, and regulatory compliance, enterprises can harness the benefits of AI while minimizing potential liabilities.
AI tends to tell the prompter what they want to hear, whether it’s true or not, underscoring the importance of governance, accountability, and oversight in its adoption. Organizations that establish clear policies and risk management strategies will be best positioned to navigate the AI-driven future successfully.

To learn more about this topic, view the webinar Corporate Risk Management / Remembering HAL 9000: Thinking about the Risks of Artificial Intelligence to an Enterprise. The quoted remarks referenced in this article were made either during this webinar or shortly thereafter during post-webinar interviews with the panelists. Readers may also be interested in other articles about risk management and technology.

Data Processing Evaluation and Risk Assessment Requirements Under California’s Proposed CCPA Regulations

As we have previously detailed here, the latest generation of regulations under the California Consumer Privacy Act (CCPA), drafted by the California Privacy Protection Agency (CPPA), has advanced beyond public comment and is closer to becoming final. These include regulations on automated decision-making technology (ADMT), data processing evaluation and risk assessment requirements, and cybersecurity audits.
Assessments and Evaluations Overview
The new ADMT notice, opt-out, and access and appeal obligations and rights take effect immediately upon the regulations’ effective date, which follows California Office of Administrative Law (OAL) approval and will fall either on the quarterly regulatory implementation schedule in the Government Code or, as has been the case with prior CCPA regulations, immediately on OAL sign-off. We will not know whether the CPPA will again seek a variance from the schedule until it submits the final rulemaking package.
Moving on to evaluations and risk assessments, the draft regulations do propose a phase-in, but only in part. Evaluations must be undertaken beginning on the regulations’ effective date. Assessment requirements likewise apply to practices commencing on the effective date, but there is a 24-month period to complete the assessments, file certifications and abridged versions, and make the full assessments available for inspection.
However, Colorado (which, like California, has very detailed requirements for conducting and documenting assessments), New Hampshire, Oregon, Texas, Montana, Nebraska, and New Jersey already require assessments; Delaware and Minnesota will this summer; and Indiana, Rhode Island, and Kentucky will by the new year. Query, then, whether the California phase-in is of much use. Of the 20 state consumer privacy laws, all but Utah and Iowa require assessments.
Further, without at least a cursory assessment, how can you determine if the notice, opt-out and access and appeal rights apply?
So, what is the difference between an evaluation and an assessment?
First, they are required by different provisions. Evaluations are required by Section 7201, and risk assessments by Section 7150.
Next, there is no phase-in of evaluations as with risk assessments.
Risk assessments are much more complex and prescribed; they sit at the core of a risk-benefit judgment, must be available for inspection, and require abridged summaries to be filed.
The content of the evaluation, which need not be published or made subject to an inspection demand outside of discovery, need only address whether the process and technology are effective (in other words, materially error-free) and whether they discriminate against a protected class (in other words, free of material bias). As such, evaluations have similarities to assessments under the Colorado AI Act, effective next year but likely to be amended before then, and the recently passed Virginia HB 2094 AI bill that may or may not get signed by Governor Youngkin.
Thus, an evaluation alone won’t help you determine whether the ADMT notice, opt-out, and access and appeal rights apply, nor will it satisfy the risk assessment requirements. While it is a separate analysis, it can be incorporated into assessments, assuming a company begins those immediately.
Also, unlike assessments, evaluations are not required for the selling of personal information (PI) or the processing of sensitive PI; and assessments are required for identification processing only to the extent AI is trained for it, whereas any processing for identification is subject to an evaluation. Since cross-context behavioral advertising (CCBA) is part of behavioral advertising, which is part of extensive profiling, sharing needs to be addressed in both evaluations and assessments.
Finally, under Section 7201, a business must implement policies, procedures, and training to ensure that the physical or biological identification or profiling works as intended for the business’s proposed use and does not discriminate based on protected classes.
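In practice, the two questions such an evaluation asks (is the technology materially error-free, and is it free of material bias) reduce to measurable checks. Below is a minimal sketch, with invented data and no pretension to the rigor an actual evaluation requires.

```python
# Sketch of the two checks a Section 7201-style evaluation asks about:
# (1) effectiveness (materially error-free) and (2) material bias.
# Groups, predictions, and outcomes are invented for illustration.

records = [  # (group, predicted_match, actual_match)
    ("A", True, True), ("A", False, False), ("A", True, False), ("A", True, True),
    ("B", True, True), ("B", False, True), ("B", False, True), ("B", False, False),
]

def error_rate(rows):
    return sum(pred != actual for _, pred, actual in rows) / len(rows)

overall = error_rate(records)
print(f"overall error rate: {overall:.2f}")  # effectiveness check

by_group = {}
for g in {group for group, _, _ in records}:
    by_group[g] = error_rate([r for r in records if r[0] == g])

gap = max(by_group.values()) - min(by_group.values())
print(f"per-group error rates: {by_group}, gap: {gap:.2f}")  # bias check
```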
So, on to assessments: what activities need to be assessed?
First, selling or sharing. All 18 states that require assessments require them for this, though for the non-California states the trigger is processing for targeted advertising rather than “sharing,” which is broader than sharing for CCBA; the California regulations catch up via the new concept of behavioral advertising.
Next, processing of sensitive personal information. The same 18 states require assessments for the processing of sensitive data, with differing definitions. For instance, what is considered children’s personal data differs considerably. Notably, the California draft Regulation amendments would raise the age from 13 to 16, and Florida is under 18. There is also variation in the definition of health data. 
Note, while the Nevada and Washington (and potential New York) consumer health laws do not explicitly require assessments, they are practically needed, and Vermont’s data broker law requires initial risk assessments and a process for evaluating and improving the effectiveness of safeguards.
Other Risk Assessment Triggers
Assessments are mandatory before using ADMT to make or assist in making a significant decision, which is “profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” This is a General Data Protection Regulation (GDPR) and European Data Protection Board (EDPB) inspired provision. The other states that require assessments have a similar obligation, although the definitions may differ somewhat. In California, “decisions that produce legal or similarly significant effects concerning a consumer” means decisions that result in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, healthcare services, or access to essential goods or services. Only California gives guidance on what counts as essential goods or services, by means of a parenthetical: “(e.g., groceries, medicine, hygiene products, or fuel).” Critics are concerned that this limits algorithmic custom pricing, sometimes derogatively referred to as surveillance pricing, or even AI analysis of consumer behavior to decide where to open or close stores, though aggregate or de-identified data should suffice for that. There is considerable guidance out of the EU, which can be looked to, though it is clearly not binding. The EU approach is quite broad.
Speaking of looking to the EU, beware that the California and Colorado regulations diverge considerably from what is required under GDPR assessments, and keep in mind the material differences between GDPR with its lawful basis and legitimate interest tests and the US laws with opt-out concepts.
Uniquely amongst the states, California proposes the concept of extensive profiling, which covers any:
1) work or educational profiling;
2) public profiling; or
3) behavioral advertising.
Note, however, that while behavioral advertising is said to include CCBA, it is broader and is defined as “the targeting of advertising to a consumer based on the consumer’s personal information obtained from the consumer’s activity—both across businesses, distinctly-branded websites, applications, or services, (i.e., CCBA) and within the business’s own distinctly-branded websites, applications, or services.” Significantly, this closes the gap between CCBA and the non-California regulation of targeted advertising by including entirely first-party behavioral advertising.
There is a carve-out for “nonpersonalized advertising” as defined by CCPA Section 1798.140(t), which means advertising and marketing that is based solely on a consumer’s personal information derived from the consumer’s current interaction with the business, with the exception of the consumer’s precise geolocation. Note, though, that the exception is specifically limited to where the PI is not disclosed to third parties (i.e., not a processor or contractor). This has led some to argue that this guts the carve-out. However, if personal data was disclosed to a third party, that would likely be a sale, especially given the breadth of the concept of “other valuable consideration” in the eyes of the regulators. So the approach really is not inconsistent with the current treatment of contextual advertising.
PI to train ADMT or AI
Assessments are also proposed to be required for processing of PI to train ADMT or AI. This is another uniquely California concept, at least under state consumer privacy laws, and the California Chamber of Commerce and others, including some members of the legislature, have argued that, like other aspects of the proposed regulation’s treatment of ADMT, it goes beyond the Agency’s statutory authority. It is interesting to note that one of the topics included in the US House Committee on Energy and Commerce’s request for information to inform federal privacy legislation this week is the role of privacy and consumer protection standards in AI regulation, and specifically the impact of state privacy law regulation of ADMT and profiling on US AI leadership. Another topic of focus is “the degree to which US privacy protections are fragmented at the state level and costs associated with fragmentation,” which seems to be inviting a preemption scope debate. So, by the time at least this part of the regulation requires action, it may have been curtailed by federal law. That said, evaluations and assessments are practically necessary to guide compliance and information governance, and to date repeated attempts at federal consumer privacy legislation have been unsuccessful.
Assessment Details
Most state laws do not have any specifics regarding how to conduct or document risk assessments, with the notable exception of Colorado. When it started assessment rulemaking, the Agency stated that it would try to create interoperability with Colorado and would also look to the guidance of the EDPB. While both can be seen to have influenced California’s proposed requirements, California adds to these.
Some of the content requirements are factual, such as purposes of processing and categories of PI. Others are more evaluative, such as the quality of the PI and the expected benefits and potential negative impacts of the processing, and how safeguards may mitigate those risks of harm. Nine examples are included in Section 7152(a)(5) to guide analysis.
Section 7152(a)(3) calls for analysis of specific operational elements for the processing.
Focus on Operational Elements
These operational elements are listed here[1] and can be seen as not only getting under the hood of the processing operations but also informing consumer expectations, and the risks and benefit analysis that is the heart of an assessment. Note, in particular, the inquiries into retention and logic, the latter meaning ‘built-in’ assumptions, limitations, and parameters that inform, power or constrain the processing, particularly as concerns ADMT.
Analysis and Conclusions
The assessment must document not only those processing details and the risk / benefit and risk mitigation analysis, but also the conclusions and what was approved and/or disapproved.
The draft regulations call for participation by all relevant stakeholders, who must be specifically named, as must the person responsible for the analysis and conclusions.
Filing and Certification
California diverges from the other states with respect to reporting requirements. Annually, a responsible executive must certify to the CPPA that the business assessed all applicable processing activities, and an abridged assessment must be filed for each processing activity actually initiated. This will make it very apparent which businesses are not conducting assessments.
Further, the draft regulations limit what is required in the abridged assessments to largely factual statements:

The triggering processing activity;
The purposes;
The categories of personal information, including any sensitive categories; and
The safeguards undertaken.

Note that the risk / benefit analysis summary is not a part of the filing.
Inspection and Constitutional and Privilege Issues
Contrast that with the detailed risk / benefit analysis required by the full assessment, which, like all of the other states that require or will require assessments, is subject to inspection upon request.
This GDPR-inspired approach to showing how you made decisions calls for publication of value judgments, which, as I have opined in a prior article (see a synopsis here), is likely unconstitutional compelled speech. While the 9th Circuit in the X Corp and NetChoice cases struck down harm assessment and transparency requirements in the context of children’s online safety, the Court distinguished compelling disclosure of subjective opinions about a company’s products and activities from requiring disclosure of mere product facts. There is no 1st Amendment in GDPR-land, so we will have to wait and see if the value judgment elements of assessments can really be compelled for inspection.
Inspections also raise serious questions about attorney-client and work product privilege. Some states specifically provide that inspection of assessments is not a waiver of privilege, and/or that they will be maintained as confidential and/or are not subject to public records access requests. The draft regulations do not; however, the CCPA itself provides that the Act shall not operate to infringe on evidentiary privileges. In any event, consider labeling legal analysis and advice of counsel as such and maintaining them apart from what is maintained for inspection.[2]

[1] Planned method for using personal information; disclosures to the consumer about the processing; retention period for each category of personal information; categories of third parties with access to consumers’ personal information; relationship with the consumer; technology to be used in the processing; number of consumers whose personal information will be processed; and the logic used.
[2] Note – Obtaining educational materials from Squire Patton Boggs Services Ireland, Limited, or our resellers, does not create an attorney-client relationship with any Squire Patton Boggs entity and should be used under the direction of legal counsel of your choice.

Are Workforce Reductions Coming to the Private Sector? And, if so, How Should Companies Handle Them?

Massive federal workforce reductions (once a rare event) have been featured prominently in the news lately, along with reports of criticism about the way they are occurring. Will private companies follow suit? Some economic signs, such as the continuing low unemployment rate, do not point in that direction. However, layoffs increased 28% in January as compared to the previous month, and WARN filings, as well as increasing company announcements of projected future layoffs, tell a different story.
Moreover, short-term causes, such as the impact of tariffs and cuts in government contracting, as well as slightly longer-term developments such as the effects of artificial intelligence, suggest more workforce reductions may be coming in the near future.
While no one likes a reduction in force (RIF), there is a right way — and a wrong way — to conduct one. RIFs require meticulous planning and execution. While each job action must be analyzed according to its unique facts and circumstances, the following steps can promote fairness, minimize disruption, stabilize morale, and reduce legal risk in conducting a RIF:

Continue to Employ Good Performance Management: This may reduce the number of necessary reductions and make the decisions easier or harder (e.g., if everyone is rated “excellent” on annual performance reviews, they do not function as useful tools in the RIF context).
Consider Other Options: Is a voluntary program an option? Can the company achieve the cost savings by other means, such as eliminating contractors rather than employees?
Once: Try to make the reductions occur at one time, rather than as a staggered series of actions. Among other things, doing so can help reduce uncertainty and anxiety among the remaining workforce.
WARN: If you meet the federal Worker Adjustment and Retraining Notification (WARN) Act layoff thresholds, or the typically lower-threshold state law criteria if applicable, issue a WARN Act notice at least 60 days before the layoffs.
Process: Employ a consistent process with defined and fair reduction criteria that are aligned with the collective bargaining agreement (if unionized) and company policy.
Documentation: Memorialize decisions, using template documents that are easy for managers to complete, and retain those documents.
Avoid Discrimination: Make sure that your process is devoid of unlawful discrimination. Be especially careful about age discrimination, claims of which are statistically on the rise, and avoid using criteria that are likely to negatively impact older workers.
Legal Review: Work with legal to conduct a privileged disparate impact statistical analysis (to ensure that the layoffs do not disproportionately impact employees on the basis of protected characteristics) as well as a legal review to reduce risk; a simplified illustration of such an analysis appears after this list.
Severance and Compliant Agreements: Pay severance whenever possible and obtain a release. It is worth it to eliminate risk and stabilize morale. Comply with the Older Workers’ Benefit Protection Act’s disclosure, review, and revocation requirements. Make sure to revise your standard agreements to account for recent state law changes (e.g., precluding non-disparagement and confidentiality clauses prohibiting discrimination-related communications).
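As referenced in the Legal Review item above, a common first-pass disparate impact screen compares layoff rates between protected and comparison groups. The sketch below uses a two-proportion z-test with invented counts; any real analysis should be run under privilege with counsel.

```python
# Sketch of a first-pass disparate impact check for a RIF: a two-proportion
# z-test comparing layoff rates for workers 40+ vs. under 40.
# Counts are invented for illustration; run real analyses under privilege.

import math

laid_off_40plus, total_40plus = 30, 200
laid_off_under40, total_under40 = 15, 220

p1 = laid_off_40plus / total_40plus        # 0.150
p2 = laid_off_under40 / total_under40      # ~0.068
pooled = (laid_off_40plus + laid_off_under40) / (total_40plus + total_under40)

se = math.sqrt(pooled * (1 - pooled) * (1 / total_40plus + 1 / total_under40))
z = (p1 - p2) / se

print(f"layoff rate 40+: {p1:.3f}, under 40: {p2:.3f}, z = {z:.2f}")
# |z| greater than ~1.96 suggests the disparity is unlikely to be chance
# alone and warrants closer review of the selection criteria.
```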

If You Are Uptight About AI, This May Relax You

While AI has many people uptight, Aescape has developed technology to help you relax – AI robotic massage. Aescape touts that it combines the timeless art of massage with robotics and artificial intelligence to deliver an exceptional massage experience every time. The “Aertable” (i.e., the massage table) has bolsters, headrests, and armrests that are all adjustable to provide a customized fit during each session. It also provides continuous feedback that allows for real-time adjustments to optimize comfort. The “Aerscan” system captures 1.2 million data points, precisely mapping your body’s muscle structure to create a unique blueprint for a highly personalized massage experience. “Aerpoints” replicate the seven touch techniques of a skilled therapist, simulating the knuckle, thumb, cupped hand, blade of hand, palm, forearm, and elbow. The “Aerview” display provides personal control so you can adjust the pressure, manage the music, or customize the display to create a session tailored to your preferences, needs, and mood. The company has also developed “Aerwear,” a high-compression performance fabric that enhances body detection for the system and allows Aerpoints to move smoothly over your body; wearing it is mandatory during the massage. The tables are equipped with advanced safety features, including force sensors and pause and emergency stop functions to prevent or abate issues if things go wrong.
Despite mixed feelings among some therapists about this technology, the $19B massage industry faces significant challenges, at least some of which can be addressed by AI robotic massage: delivering consistent, high-quality experiences is difficult, client satisfaction can vary from one session to another, some locations have a shortage of skilled therapists, and therapists work limited hours. AI robotic massage can address many of these issues through its consistency and 24/7 availability.
Aescape seems to be gaining traction. After launching with Equinox in New York City, Aescape reported exceptional consumer adoption with high utilization and repeat rates, driving a notable spike in gym memberships. This led to a national expansion to 60 Equinox locations. Aescape has also had success with leading hospitality brands (e.g., luxury hotels) and some NBA and NFL teams.
By now many questions are likely going through your mind. Can robots really replace the “human touch” aspect of massages? Will this technology replace massage therapists, leading to job loss? Can AI assist human massage therapists? It is beyond the scope of this post to cover all of these and other valid questions. But if you are interested, these topics are well-covered in the following articles – Will AI Impact the Employment of Massage Therapists? and 10 Ways Massage Therapists Can Use AI.
Aescape is a classic example of an application of AI and robotics that will interact with humans. We will see many more such AI robotic personal services applications from this point forward. While Aescape seems to have anticipated some of the potential problems that can arise, any AI robotic application that interacts with humans has the potential for a variety of legal issues. The following are some of the general legal issues that may be relevant to AI robotic applications that interact with humans. But the actual issues will vary by application.
Liability and malpractice: One often raised concern with autonomous applications is their safety and reliability. Despite best efforts to anticipate and provide failsafes for potential problems, this remains a risk. Technology malfunctions can cause physical harm. This raises concerns about potential liability issues. Will harmed clients have a claim in the nature of “malpractice” or product liability, or both? To complicate matters further, for harms resulting from AI-assisted, human massage, how should the liability be allocated between the technology provider and the therapist? In some cases, it may be difficult to obtain insurance for such applications, especially for new, unproven AI technology. If you are a location (spa, health club, etc.) deploying the technology, it would be wise to ensure you have an effective indemnity.
Privacy and data protection: To optimize the personalization of AI-driven applications, a lot of personal data is needed. Aescape’s system claims to scan and store detailed body data, mapping over 1 million 3D data points of a person’s body. Massage therapists often inquire whether the client has any injuries, recent surgeries, or medical conditions. More generally, AI robotic massage technology can employ a database to analyze a client’s physical condition, medical history, preferences, and other personal information to create a customized massage tailored to their individual needs. All of this raises privacy concerns about how this sensitive personal information, including information typically covered by HIPAA, is stored, used, and protected. From a practical perspective, some clients may be less willing to share their sensitive personal information and medical history with a machine. While privacy is always important, there may be unique considerations in crafting a privacy policy in these cases, and it will be prudent to prioritize transparency and obtain explicit consent from clients before incorporating AI into their sessions. There may be legal questions on whether clients are fully informed about the nature of the robotic massage, the data collected and how it is used, and whether clients can provide informed consent, especially given the novelty of the technology.
Professional licensing: Massage therapists require licenses. Will AI systems need to be “licensed” in a manner similar to human massage therapists? If so, how would this be implemented? Or will certain jurisdictions prohibit unlicensed, non-human providers? And while most massages are not deemed to be medical treatment, some can be. To the extent AI robotic massage crosses that line, it could involve the unauthorized practice of medicine.
Regulatory compliance: As a new technology, AI robotic massage systems may face challenges in meeting existing regulations for massage therapy and medical devices, where applicable. There could be a need for new regulatory frameworks to address this emerging field.
Consumer protection and marketing: There could be legal issues related to ensuring the safety and effectiveness of the robotic massage systems, as well as truthful marketing of their capabilities. The FTC has warned companies about overstating the capabilities of AI technology.
Intellectual Property: As with any new technology, there may be patent disputes or copyright issues related to the AI algorithms and robotic designs used in these systems. It is prudent to work with IP counsel to protect your IP and assess whether you might be infringing on any third-party IP.
These are some of the potential issues in the complex legal landscape that AI robotic applications may face. Other issues will undoubtedly arise.

The Ever-Evolving Landscape with Artificial Intelligence and Employment

Long before the recent mainstream popularization of ChatGPT and generative Artificial Intelligence (AI) caught the public eye, private companies – as well as government agencies – had already been quick to incorporate AI tools into their business. From housing to finances to hiring, AI permeated the pores of business because it satisfied the one thing businesses aim to accomplish – maximizing efficiency to increase profit margins. According to a 2023 article from the ACLU, approximately 70% of companies and 99% of Fortune 500 companies had already implemented some form of AI and automation tools to increase “efficiency” in the hiring process. The global market size of the AI recruitment industry was around $618 million as of 2024 and is expected to surge to $1,053 million by 2032.
The use of AI in the hiring process can help reduce the workload of recruiters by scanning thousands of resumes and filling positions faster. While in theory this sounds promising, in practice it can lend itself to various types of illegal discrimination that employers should be aware of.
Employment Discrimination and AI
In 2022, the EEOC filed a case against iTutorGroup, a Shanghai, China-based English-language tutoring company, over its use of software programmed to automatically reject female candidates over the age of 55 and male candidates over the age of 60 for tutoring roles. The EEOC alleged a violation of the Age Discrimination in Employment Act (ADEA). Although the tutors were classified as independent contractors rather than employees, which would place them outside the ADEA’s purview, in 2023 the U.S. District Court for the Eastern District of New York ordered iTutorGroup to pay $365,000 to over 200 job candidates who were rejected as a result of the automatic screening by its software. Furthermore, the Court approved the EEOC’s consent decree outlining the required antidiscrimination training iTutorGroup must undergo as a form of injunctive relief.
More recently, in 2024, in Mobley v. Workday, Inc., the plaintiff alleged that Workday’s AI screening tools violated federal and California anti-discrimination laws, including Title VII, the Age Discrimination in Employment Act (ADEA), and the Americans with Disabilities Act (ADA). Derek Mobley, an African American man over 40 with anxiety and depression, claimed that Workday’s AI tools rejected his applications to over 100 jobs without a single offer. The Northern District of California allowed the disparate impact discrimination claims to proceed, recognizing Workday as an “agent” of the employers using its AI tools. This case is still pending resolution.
The Tale of Two Administrations
The increased use of AI, as expected, has drawn attention from the federal government, specifically the executive branch. Both the Biden and Trump Administrations have taken keen interest in AI.
On October 30, 2023, President Joe Biden signed Executive Order 14110, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” This executive order aimed to establish a comprehensive national approach to governing AI technologies. EO 14110’s main goals were to promote competition in the AI industry, prevent AI-enabled threats to civil liberties and national security, and ensure U.S. global competitiveness in AI. The order emphasized the need to govern AI technologies to realize their potential benefits while mitigating associated risks.
In turn, newly elected President Donald Trump signed an EO on January 23, 2025, titled “Removing Barriers to American Leadership in Artificial Intelligence.” In addition to revoking the Biden Administration’s EO, this order focuses on removing federal constraints on AI, giving businesses more freedom to innovate. By shifting power to employers, the EO encourages them to address job security and prioritize upskilling and reskilling their workforces.
In essence, Biden’s order focused on protecting workers and ensuring ethical AI use, while Trump’s order emphasizes deregulation and innovation, placing more responsibility on employers to manage AI’s impact on employment.
The Plan Moving Forward for Employers
Given the developing legal precedents and the influence of the executive branch, employers may consider eliminating AI from recruitment processes, whether built in-house or by third parties. Elimination in its entirety may not be the solution; however, it is essential for key stakeholders and decision makers to understand the data, training methods, and programming used by their vendors and the AI tool developers. Employers must understand the data pool the program has been trained on and what the limitations and exclusions of that data pool are. If there are indications from a vendor that the AI tool will likely exclude applicants of a protected class, the employer should avoid its use entirely to protect itself from litigation.
Employers should also consider auditing AI tools to confirm that the algorithmic screening complies with federal and state discrimination laws, regardless of whether applicable law already requires an audit. Furthermore, employers should assemble a team to oversee the tool’s use for any biases that may become apparent.
Further consideration should be given to protecting businesses from the expected wave of litigation challenging AI in employment, particularly because Trump’s executive order puts the onus on employers to manage AI innovation. Although executive orders are not “the law” per se, they carry influence, and that influence may prompt employees and applicants to claim discrimination in connection with their employment opportunities.
Consulting an attorney who specializes in employment litigation can provide further guidance on how the interplay between AI and employment law affects your business.

China’s Supreme People’s Court Work Report – Number of Punitive Damages Awards Up 44%

On March 8, 2025, at the third session of the 14th National People’s Congress, Zhang Jun, President of the Supreme People’s Court (SPC), delivered the SPC’s annual work report. President Zhang highlighted the increase in punitive damages awarded in intellectual property cases, including a 640 million RMB award in an electric vehicle trade secret case (although only the non-monetary portion of the judgment has been satisfied, as the unnamed defendant is bankrupt). Punitive damages were awarded in 460 cases of malicious infringement with serious circumstances, an increase of 44.2% year-on-year. Overall, 494,000 intellectual property cases were concluded, an increase of 0.9% year-on-year.

Excerpts related to intellectual property from the work report follow.
II. Serving high-quality development with strict and fair justice
Serve the construction of a unified national market. Strengthen anti-monopoly and anti-unfair competition judicial work. The SPC issued judicial interpretations on anti-monopoly civil litigation, and monopoly conduct was found in 31 cases, a 2.1-fold year-on-year increase. 10,000 unfair competition cases, involving conduct such as trade secret infringement and bid rigging, were concluded, a year-on-year increase of 0.7%. In one case, Sun and others covertly set up a competing company and misappropriated their former employer’s technical secrets for more than 10 years; the court applied punitive damages and ordered the company, Sun, and the others to pay joint compensation of 160 million RMB.
Serve the development of new quality productivity. 494,000 intellectual property cases were concluded, an increase of 0.9% year-on-year. Serve key core technology research and industrial development. Strengthen judicial protection of intellectual property rights in fields such as new-generation information technology, high-end equipment, biomedicine, and new materials, and promote the transformation of innovative achievements. In the six years since its establishment, the Intellectual Property Court of the Supreme People’s Court has concluded nearly 20,000 technical intellectual property appeals; the number and proportion of cases involving strategic emerging industries have increased year by year, reaching 1,233 in 2024, or 32.3% of the total. Properly handle artificial intelligence disputes in accordance with the law: support the lawful application of artificial intelligence, punish infringements that use artificial intelligence technology, and promote standardized and orderly development. Strictly protect innovation in accordance with the law. Punitive damages were applied in 460 cases of malicious infringement with serious circumstances, an increase of 44.2% year-on-year. The “new energy vehicle chassis” technical secret infringement case was heard in accordance with the law: punitive damages of 640 million RMB were awarded, the infringer was ordered to stop the infringement, and the standard for calculating delayed-performance fees was clarified to encourage voluntary compliance. Explore an information disclosure mechanism for related cases and punish those who disrupt the order of innovation in the name of rights protection. One company maliciously registered and hoarded trademarks in bulk, copying business names and the main identifying elements of trademarks previously used by others, and profited by suing those parties for infringement; the Hunan court dismissed its lawsuit and imposed a punitive fine of 100,000 RMB.
Serving the expansion of high-level opening-up.
…In a patent infringement case, acting on an urgent application by a domestic science and technology enterprise, the court issued China’s first behavior preservation ruling in the intellectual property field in the form of an anti-anti-suit injunction, supporting the right holder’s legitimate protection of its rights. The Chinese and foreign parties subsequently reached a package settlement of 16 lawsuits pending before 6 courts at home and abroad.
Work Plan for 2025
First, seek progress while maintaining stability, and perform duties and responsibilities in line with the promotion of Chinese-style modernization.
…Strengthen anti-monopoly and anti-unfair competition justice. Guarantee innovation and creation in accordance with the law, and help develop new quality productivity. 
Third, strict management and loving care, forging a loyal, clean and responsible iron army of the courts in the new era.
…Build a pool of professional trial personnel for intellectual property, finance, bankruptcy, foreign-related, maritime, environmental resources, and other cases; improve the medium- and long-term training plan for outstanding personnel; and jointly cultivate socialist rule-of-law talent with universities.

The full text is available here (Chinese only).

Leveraging Artificial Intelligence to Reach Favorable Settlement Outcomes

“We’ve been sued” — words few want to hear. Being served with an unexpected lawsuit can throw your entire organization into disarray. Even expected litigation can leave you scrambling to figure out the next steps. Yet, these days litigation seems to be a cost of doing business. And, as with any expense, you strive to minimize those costs — whether financial or operational.
The goal? Resolve a dispute swiftly and in the most favorable, cost-effective way possible. The game plan? Analyze relevant law and facts to assess the merits of the claims and form an appropriate strategy, which often includes determining whether an early settlement is feasible. Artificial intelligence (AI) can help.
Knowledge: The Key to Successful Settlement
It’s no secret that the vast majority of lawsuits eventually settle. But that often means settling on the eve of trial, only after the parties have engaged in prolonged and expensive discovery (as our colleagues recently discussed in greater detail). All things being equal, most parties would prefer to resolve their disputes sooner, limiting the costs they incur. That doesn’t mean you should roll over and agree to the other side’s demands. Instead, to place yourself in the best position to negotiate, you need to understand your case, both its strengths and, perhaps more importantly, its weaknesses. Otherwise, you risk coming to the bargaining table unprepared.
Identifying and marshaling key facts is therefore crucial, and you want to do it quickly. This, of course, requires ingesting and analyzing potentially large amounts of information. Fortunately, technology continues to make this easier than ever. Long gone are the days of unloading trucks of documents for teams of associates to scour page by page. For years, discovery has instead been characterized by varying forms of electronically stored information (ESI), like e-mails, Word documents, text messages, and business messaging applications. As ESI proliferated, so did the availability of technology-assisted review (TAR) software. While some variation of TAR can be adapted to virtually every case, it still requires significant training by attorneys, and these systems ultimately do little to expedite the identification of “key” documents. Enter artificial intelligence.
Artificial Intelligence: The Document Super-Reviewer
AI tools can now transform what would typically be a few weeks of document review into just a few days. Fed a “prompt” outlining the case’s key legal and factual issues, an AI program can process and analyze tens of thousands of documents overnight (a simplified sketch of such a prompt appears below). Although it’s not quite as simple as flipping a switch, the time and cost advantages are enormous and provide a leg up in developing your litigation strategy and assessing potential settlement. The benefits of an AI program are amplified when implemented by experienced counsel who know how to write effective prompts and then manage and validate the results.
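To make the “prompt” concrete, the Python sketch below shows the general shape of prompt-driven triage. It is illustrative only: the case issues are invented, and call_model is a hypothetical placeholder rather than any specific product’s API.

```python
# Minimal sketch of prompt-driven document triage. `call_model` is a
# hypothetical placeholder for whatever generative AI service a given
# review platform exposes; the shape of the workflow is the point.

CASE_PROMPT = """You are assisting with document review in a breach-of-contract
dispute. The key issues are: (1) whether the June amendment was executed, and
(2) communications about delivery delays. For the document below, state whether
it is relevant (yes/no), which issue it touches, and a one-sentence summary."""

def call_model(prompt: str) -> str:
    # Placeholder: wire this to your review platform's AI service.
    return "relevant: unknown (placeholder response)"

def triage(documents: list[str]) -> list[str]:
    """Run every document through the same case-specific prompt."""
    return [call_model(f"{CASE_PROMPT}\n\n--- DOCUMENT ---\n{doc}")
            for doc in documents]

# Hypothetical usage with two stand-in documents
results = triage([
    "Email from operations: shipment slipping to July...",
    "Scanned signature page of the June amendment...",
])
print(results)
```

In practice, counsel iterate on the prompt against a sample of documents and validate the output by hand, which is where experienced reviewers add the most value.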
Foley has been at the forefront of implementing AI in high-stakes commercial litigation and can attest to its benefits firsthand. Just last year, for example, we were among the first firms to integrate Relativity’s new generative AI software, Relativity aiR, into our review platforms and deploy it in active litigation. There, our client faced an expedited discovery schedule and the prospect of a truly business-altering injunction. Working in tandem with our highly skilled litigation support and discovery experts, we were able to implement aiR into our case workflow quickly and effectively. This expedited our document review and rapidly identified key documents, which we ultimately used to undermine the plaintiff’s case and negotiate a quick, favorable settlement.
And while it’s true we can use AI to help secure favorable settlements early in litigation, we are also equipped to wield it as a powerful tool throughout all phases of your case, even if a settlement doesn’t materialize or isn’t advisable. The benefits of AI include:

Identification of Key Documents: AI tools can predict and identify documents crucial to the merits of your case. This includes helpful and harmful documents.
Increased Efficiency: Instead of assembling a team of dozens of attorneys to review thousands of documents over the span of weeks or months, AI can be trained to rapidly analyze the same number of documents, potentially overnight.
Increased Security: Rather than distributing and outsourcing large document reviews to third-party vendors, the review process is kept securely within the confines of your trusted outside counsel. Fewer eyes on documents means fewer confidentiality concerns.
Lower Litigation Costs: Fewer attorneys reviewing documents leads to a lower bill. These savings can be reallocated to other aspects of case strategy or reinvested across other parts of your organization.
Standardized Accuracy and Effectiveness: AI minimizes potential inconsistencies that are more common with large attorney review teams. For example, what constitutes a “key” document to one reviewer may be overlooked by another. Human error, as much as we try to avoid it, still happens. AI operates more objectively, treating similar documents consistently.
Generation of Plain Language Document Summaries: Day-to-day business documents and correspondence can sometimes be challenging to follow, especially if they are full of unfamiliar technical jargon. AI can be used to generate high-level summaries of key documents or important issues in your case. These summaries are written in plain, readily accessible language, which facilitates seamless analysis and counseling.
Reallocation of Attorney Resources: The less time (and money) spent on document review, the better (for attorneys and clients alike). This allows attorneys to spend more time reviewing and analyzing highly relevant information and developing successful case strategies.
Enhanced Legal Research: Online legal research services, like the Westlaw and LexisNexis services used by Foley, also have embraced the power of AI. Attorneys can now input their research questions in everyday language, and the AI programs respond quickly with concise, well-reasoned answers with citations to supporting authorities.

Settling on Your Terms
Every litigator knows that, even when hundreds of thousands of documents are produced in discovery, often only a handful of “key” or “hot” documents actually drive a case’s outcome. With a properly implemented AI platform, parties can identify these documents more quickly than ever before. Armed with this information, parties can take control of their lawsuit and implement successful litigation strategies, including effective settlement positions where appropriate.
For example, imagine you’ve been sued by a former employee for wrongful termination, but you found (through your AI platform) a series of documents that irrefutably show that the employee, in fact, willingly resigned to pursue other opportunities. You can now use those documents as the centerpiece of your settlement discussions and obtain a quick, satisfactory resolution without breaking the bank. Alternatively, imagine that you found “bad” documents that corroborated the plaintiff’s allegations. Rather than engage in prolonged discovery in a case that you are unlikely to win, you’re now able to pursue and secure a fair settlement that avoids unnecessary time and litigation expense.
Ultimately, the role of AI will continue to grow in all aspects of life and business. The legal industry is no exception, and litigants should embrace it. We have. By leveraging emerging AI technology, we can help your organization take control of its legal battles.
This article is one in a series dedicated to legal topics at the intersection of counseling and the courtroom. To read our last article, click here.