Privacy Tip #435 – Threat Actors Go Retro: Using Snail Mail for Scams

We have educated our readers about phishing, smishing, QRishing, and vishing scams, and now we’re warning you about what we have dubbed “snailing.” Yes, believe it or not, threat actors have gone retro and are using snail mail to try to extort victims. TechRadar is reporting that, according to GuidePoint Security, an organization received several letters in the mail, allegedly from the BianLian cybercriminal gang, stating:
“I regret to inform you that we have gained access to [REDACTED] systems and over the past several weeks have exported thousands of data files, including customer order and contact information, employee information with IDs, SSNs, payroll reports, and other sensitive HR documents, company financial documents, legal documents, investor and shareholder information, invoices, and tax documents.”

The letter alleges that the recipient’s network “is insecure and we were able to gain access and intercept your network traffic, leverage your personal email address, passwords, online accounts and other information to social engineer our way into [REDACTED] systems via your home network with the help of another employee.” The threat actors then demand $250,000 to $350,000 in Bitcoin within ten days. The letter even includes a QR code that directs the recipient to the Bitcoin wallet.
It’s comical that the letters have a return address of an actual Boston office building.
GuidePoint Security says the letters and attacks mentioned in them are fake and are inconsistent with BianLian’s ransom notes. Apparently, these days, even threat actors get impersonated. Now you know—don’t get scammed by a snailing incident.

MS-ISAC Loses Funding and Cooperative Agreement with CIS

The Cybersecurity and Infrastructure Security Agency (CISA) confirmed on Tuesday, March 11, 2025, that the Multi-State Information Sharing and Analysis Center (MS-ISAC) will lose its federal funding and cooperative agreement with the Center for Internet Security. MS-ISAC’s mission “is to improve the overall cybersecurity posture of U.S. State, Local, Tribal, and Territorial (SLTT) government organizations through coordination, collaboration, cooperation, and increased communication.”
According to its website, MS-ISAC is a cybersecurity partner for 17,000 State, Local, Tribal, and Territorial (SLTT) government organizations, and offers its “members incident response and remediation support through our team of security experts” and develops “tactical, strategic, and operational intelligence, and advisories that offer actionable information for improving cyber maturity.” The services also include a Security Operations Center, webinars addressing recent threats, evaluations of cybersecurity maturity, advisories and notifications, and weekly top malicious domain reports.
All of these services assist governmental organizations that do not have adequate resources to respond to cybersecurity threats. Information sharing has been essential to prevent government entities from becoming victims. State and local governments have relied on this information sharing for resilience. Dismantling MS-ISAC will make it harder for governmental entities to obtain timely information about cybersecurity threats for preparedness. It is an organized place for governmental entities to share information about cyber threats and attacks and to learn from others’ experiences.
According to CISA, dismantling MS-ISAC will save $10 million. That savings is minimal, and state and local governments rely heavily on the information MS-ISAC shares. When those governments are left more vulnerable and become victims of cyberattacks, the savings will be dwarfed by the cost of responding to future attacks without MS-ISAC’s assistance, a response that still expends taxpayer dollars. The shift will leave state and local governments in the dark and at increased risk; it is a short-sighted strategy by the administration.

Protecting Your Business: AI Washing and D&O Insurance

Artificial intelligence (AI) is in vogue. As it rapidly reshapes industries, companies are racing to integrate and market AI-driven solutions and products. But how much is too much? Some companies are finding out the hard way.
The legal risks associated with AI, especially those facing corporate leadership, are growing as quickly as the technology itself. As we explained in a recent post, directors and officers risk personal liability, both for disclosing and failing to disclose how their businesses are using AI. Two recent securities class action lawsuits illustrate the risks associated with AI-related misrepresentations, underscoring the need for management to have a clear and accurate understanding of how the business is using AI and the importance of ensuring adequate insurance coverage for AI-related liabilities.
AI Washing: A Growing Legal Risk
Built on the same premise as “greenwashing,” AI washing is on the rise. In its simplest terms, AI washing refers to the practice of exaggerating or misrepresenting the role AI plays in a company’s products or services. Just last week, two more securities lawsuits were filed against corporate executives based on alleged misstatements about how their companies were using AI technologies. These latest lawsuits, much like the Innodata and Telus lawsuits we previously wrote about, serve as early warnings for companies navigating AI-related disclosure issues.
Cesar Nunez v. Skyworks Solutions, Inc.
On March 4, 2025, a plaintiff shareholder filed a putative securities class action lawsuit against semiconductor products manufacturer Skyworks Solutions and certain of its directors and officers in the US District Court for the Central District of California. See Cesar Nunez v. Skyworks Solutions, Inc. et al. Docket No. 8:25-cv-00411 (C.D. Cal. Mar. 4, 2025).
Among other things, the lawsuit alleges that Skyworks misrepresented its position and ability to capitalize on AI in the smartphone upgrade cycle, leading investors to purchase the company’s securities at “artificially inflated prices.”
Quiero v. AppLovin Corp.
A similar lawsuit was filed the next day against mobile technology company AppLovin and certain of its executives. See Quiero v. AppLovin Corp. et al. Docket No. 4:25-cv-02294 (N.D. Cal. Mar. 5, 2025).
The AppLovin complaint alleges, among other things, that the company misled investors by touting its use of “cutting-edge AI technologies” “to more efficiently match advertisements to mobile games, in addition to expanding into web-based marketing and e-commerce.” According to the complaint, these misleading statements coincided with the reporting of “impressive financial results, outlooks, and guidance to investors, all while using dishonest advertising practices.”
Risk Mitigation and the Role of D&O Insurance
Our recent posts have shown how AI can implicate coverage under all lines of commercial insurance. The Skyworks and AppLovin lawsuits underscore the specific importance of comprehensive D&O liability insurance as part of any corporate risk management solution.
As we discussed in a previous post, companies may wish to assess their D&O programs from multiple angles to maximize protection against AI-washing lawsuits. Key considerations include:

Policy Review: Ensuring that AI-related losses are covered and not barred by exclusions such as cyber or technology exclusions.
Regulatory Coverage: Confirming that policies provide coverage not only for shareholder claims but also regulator claims and government investigations.
Coordinating Coverages: Evaluating liability coverages, especially D&O and cyber insurance, holistically to avoid or eliminate gaps in coverage.
AI-Specific Policies: Considering the purchase of AI-focused endorsements or standalone policies for additional protection.
Executive Protection: Verifying adequate coverage and limits, including “Side A” only or difference-in-condition coverage, to protect individual officers and directors, particularly if corporate indemnification is unavailable.
New “Chief AI Officer” Positions: Chief information security officers (CISOs) remain critical in monitoring cyber-related risks but are not the only emerging positions to fit into existing insurance programs. Although the role is not yet a traditional C-suite position, more and more companies are creating “chief AI officer” positions to manage the multi-faceted and evolving use of AI technologies. Ensuring that these positions are included within the scope of D&O and management liability coverage is essential to affording protection against AI-related claims.

In sum, a proactive approach—especially when placing or renewing policies—can help mitigate the risk of coverage denials and enhance protection against AI-related legal challenges. Engaging experienced insurance brokers and coverage counsel can further strengthen policy terms, close potential gaps and facilitate comprehensive risk coverage in the evolving AI landscape.

What is AI Washing and Why are Companies Getting Sued?

With the proliferation of artificial intelligence (AI) usage over the last two years, companies are developing AI tools at an astonishing rate. When pitching their AI tools, these companies make promises about what their products can do and sometimes exaggerate their capabilities. AI washing “is a marketing tactic companies employ to exaggerate the amount of AI technology they use in their products. The goal of AI washing is to make a company’s offerings seem more advanced than they are and capitalize on the growing interest in AI technology.”
Isn’t this mere puffery? No, according to the Federal Trade Commission (FTC), Securities and Exchange Commission (SEC), and investors.
The FTC released guidance in 2023, outlining questions companies can ask to determine whether they are AI washing. It urges companies to determine whether they are overpromising what the algorithm or AI tool can deliver. According to the FTC, “You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
In March 2024, the SEC charged two investment advisers with AI washing by making “false and misleading statements about their use of artificial intelligence.” The two cases settled for a combined $400,000. The SEC found the two companies had “marketed to their clients and prospective clients that they were using AI in certain ways when, in fact, they were not.”
Investors are joining the hunt as well. In February and March 2025, investors filed securities litigation against two companies alleging AI washing. In the first case, the company allegedly made statements to investors about its AI capabilities and reported “impressive financial results, outlooks and guidance.” It subsequently became the subject of short-seller reports alleging that it used “manipulative practices” that inflated its numbers and profitability. The litigation alleged that, as a result, the company’s share price declined.
In the second case, the class action named plaintiff alleged that the company overstated “its position and ability to capitalize on AI in the smartphone upgrade cycle,” which caused investors to invest at an artificially inflated price.
Lessons learned from these examples? Look at the FTC’s guidance and assess whether your sales and marketing plan takes AI washing into consideration.

Navigating the AI Frontier: Why Information Governance Matters More Than Ever

Artificial Intelligence (AI) is rapidly transforming the legal landscape, offering unprecedented opportunities for efficiency and innovation. However, this powerful technology also introduces new challenges to established information governance (IG) processes. Ignoring these challenges can lead to significant risks, including data breaches, compliance violations, and reputational damage.
“AI Considerations for Information Governance Processes,” a recent paper published by Iron Mountain, delves into these critical considerations, providing a framework for law firms and legal departments to adapt their IG strategies for the age of AI.
Key Takeaways:

AI Amplifies Existing IG Risks: AI tools, especially machine learning algorithms, often require access to and process vast amounts of sensitive data to function effectively. This makes robust data security, privacy measures, and strong information governance (IG) frameworks absolutely paramount. Any existing vulnerabilities or weaknesses in your current IG framework can be significantly amplified by the introduction and use of AI, potentially leading to data breaches, privacy violations, and regulatory non-compliance.
Data Lifecycle Management is Crucial: From the initial data ingestion and collection stage, through data processing, storage, and analysis, all the way to data archival or disposal, a comprehensive understanding and careful management of the AI’s entire data lifecycle is essential for maintaining data integrity and ensuring compliance. This includes knowing exactly how data is used for training AI models, for analysis and generating insights, and for any other purposes within the AI system.
Vendor Due Diligence is Non-Negotiable: If you’re considering using third-party AI vendors or cloud-based AI services, conducting rigorous due diligence on these vendors is non-negotiable. This due diligence should focus heavily on evaluating their data security practices, their compliance with relevant industry standards and certifications, and their contractual obligations and guarantees regarding data protection and privacy.
Transparency and Explainability are Key: “Black box” AI systems that make decisions without any transparency or explainability can pose significant risks. It’s crucial to understand how AI algorithms make decisions, especially those that impact individuals, to ensure fairness, accuracy, non-discrimination, and compliance with ethical guidelines and legal requirements. This often requires techniques like model interpretability and explainable AI.
Proactive Policy Development is Essential: Organizations need to proactively develop clear policies, procedures, and guidelines for AI usage within their specific context. These should address critical issues such as data access and authorization controls, data retention and storage policies, data disposal and deletion protocols, as well as model training, validation, and monitoring practices.

The Time to Act is Now:
AI is not a future concern; it’s a present reality. Law firms and legal departments must proactively adapt their information governance processes to mitigate the risks associated with AI and unlock its full potential.

Artists Protest AI Copyright Proposal in the U.K.

British Prime Minister Keir Starmer wants to turn the U.K. into an artificial intelligence (AI) superpower to help grow the British economy by using policies that he describes as “pro-innovation.” One of these policies proposed relaxing copyright protections. Under the proposal, initially unveiled in December 2024, AI companies could freely use copyrighted material to train their models unless the owner of the copyrighted material opted out.
Although some Parliament members called the proposal an effective compromise between copyright holders and AI companies, over a thousand musicians released a “silent album” to protest the proposed changes to U.K. copyright laws. The album, currently streaming on Spotify, includes twelve tracks of only ambient sound. According to the musicians, the silent tracks illustrate empty recording studios and represent the impact they “expect the government’s proposals would have on musicians’ livelihoods.” To further convey their unhappiness with the proposed changes, the titles of the twelve tracks, read together, spell out, “The British government must not legalize music theft to benefit AI companies.”
High-profile artists like Elton John, Paul McCartney, Dua Lipa, and Ed Sheeran have also signed a letter urging the British government to avoid implementing these proposed changes. According to the artists, implementing the new rule would effectively give artists’ rights away to big tech companies. 
The British government launched a consultation that sought comments on the potential changes to the copyright laws. The U.K. Intellectual Property Office received over 13,000 responses before the consultation closed at the end of February 2025, which the government will now review as it seeks to implement a final policy.

Skin360 App Can’t Escape Scrutiny under Illinois Biometric Law

A federal district court has denied a motion by Johnson & Johnson Consumer Inc. (JJCI) to dismiss a second amended complaint alleging it violated the Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric information through its Neutrogena Skin 360 beauty app without consumers’ informed consent or knowledge. The plaintiffs also allege that the biometric information collected through the app is then linked to their names, birthdates, and other personal information.
Plaintiffs alleged that the Skin360 app is depicted as “breakthrough technology” that provides personalized at-home skin assessments by scanning faces and analyzing skin to diagnose concerns like wrinkles, fine lines, and dark spots. The app then uses that data to recommend certain Neutrogena products for the consumer to eliminate those concerns. JJCI argued that the Skin360 app recommends products designed to improve skin health, which means that the consumers should be considered patients in a healthcare setting, making BIPA inapplicable.
However, the court disagreed, citing Marino v. Gunnar Optiks LLC, 2024 Ill. App. (1st) 231826 (Aug. 30, 2024), which held that a customer trying on non-prescription sunglasses using an online “try-on” tool is not considered a patient in a healthcare setting. In Marino, the court defined a patient as an individual currently waiting for or receiving treatment or care from a medical professional. Skin360, by contrast, uses artificial intelligence software to compare a consumer’s skin to a database of images and provides an assessment based on that comparison. Notably, JJCI did not dispute that no medical professionals are involved in providing the service through the Skin360 app.
The court stated that “[e]ven assuming Skin360 provides users with this AI assistant and ‘science-backed information’ the court finds it a reach to consider these services ‘medical care’ under BIPA’s health care exemption; [i]ndeed, Skin360 only recommends Neutrogena products to users of the technology, which suggests it is closer to a marketing and sales strategy rather than to the provision of informed medical care or treatment.”

The Big Six Items That Family Offices Need to Consider in 2025

Across all industries, family offices and their owners and management teams face rapidly evolving challenges, opportunities, and risks in the dynamic environment that is 2025. Here are six issues that family offices should consider and be mindful of this year.
1. Impending Sunset after December 31 of Temporarily Doubled Federal Estate, Gift and Generation-Skipping Transfer Tax Exemption — or Maybe Not?
In 2025, the Internal Revenue Service (IRS) increased the lifetime estate and gift tax exemption to $13.99 million per individual ($27.98 million per married couple). Clients who maximized their previous exemption ($13.61 million per individual in 2024) can now make additional gifts of up to $380,000 ($760,000 per married couple) in 2025 without triggering gift tax. Clients who have not used all (or any) of their exemption to date should be particularly motivated to make lifetime gifts because, under current law, the lifetime exemption is scheduled to sunset.
Since the 2017 Tax Cuts and Jobs Act, the lifetime exemption has been indexed for inflation each year. Understandably, clients have grown accustomed to the steady and predictable increase in their exemption. However, absent congressional action, if the exemption lapses, the lifetime estate and gift tax (and generation-skipping transfer tax) exemption will be cut in half to approximately $7.2 million per individual ($14.4 million per married couple) at the start of 2026. That being said, as a result of the Republican trifecta in the 2024 election, it is very plausible that the temporarily doubled exemption may be extended for some additional period of time as part of the budget reconciliation process, which allows action by majority vote in the Senate (with the vice president casting the deciding vote in the event of a tie). This is in contrast to the ordinary rules of procedure that require 60 votes out of 100 in the Senate for Congressional action. But there are no assurances that such an extension will occur, and any extension may not be enacted, if at all, until very late in the year.
To ensure that no exemption is forfeited, clients should consider reaching out to their estate planning and financial advisors to ensure they have taken full advantage of their lifetime exemption. If the exemption decreases at the start of 2026, unused exemption will be lost. Indeed, absent Congressional action to extend the temporarily doubled exemption, this is a use-it-or-lose-it situation. 
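The exemption arithmetic described above can be sketched as a quick check. This is an illustrative calculation only, not tax advice, using the figures cited in this section ($13.99 million for 2025, $13.61 million for 2024):

```python
# Figures cited above, per individual, in dollars.
exemption_2025 = 13_990_000
exemption_2024 = 13_610_000

# Additional tax-free gift available in 2025 to a client who
# fully used the 2024 exemption.
additional_individual = exemption_2025 - exemption_2024
additional_couple = 2 * additional_individual  # married couple

print(additional_individual)  # 380000
print(additional_couple)      # 760000
```

The same subtraction explains why the section calls this a use-it-or-lose-it situation: if the exemption is roughly halved to $7.2 million in 2026, any unused portion above that level simply disappears.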
2. Buy-Sell Agreements and Their Role in Business Succession Planning
The death, disability, or retirement of a controlling owner in a family-controlled business can wreak havoc on the entity that the owner may have spent a lifetime building from scratch. If not adequately planned for, such events can lead to the forced sale of the business out of family hands to an unrelated third party. 
A buy-sell agreement is an agreement between the owners of a business, or among the owners of the business and the entity, that provides for the mandatory purchase (or right of first refusal) of an owner’s equity interest, by the other owners or by the business itself (or some combination of the two), upon the occurrence of specified triggering events described in the agreement. Such triggering events can include the death, disability, retirement, withdrawal or termination of employment, bankruptcy and sometimes even the divorce of an owner. Buy-sell agreements may be adapted for use by all types of business entities, including C corporations, S corporations, partnerships, and limited liability companies. 
Last June, in Connelly v. United States, the US Supreme Court affirmed a decision of the Eighth Circuit Court of Appeals in favor of the government concerning the estate tax treatment of life insurance proceeds that are used to fund a corporate redemption obligation under a buy-sell agreement. The specific question presented was whether, in determining the fair market value of the corporate shares, there should be any offset to take into account the redemption obligation to the decedent’s estate under a buy-sell agreement. The Supreme Court concluded that there should be no such offset. In doing so, the Supreme Court resolved a conflict that had existed among the federal circuit courts of appeal on this offset issue. 
As a result of the Supreme Court’s decision, buy-sell agreements that are structured as redemption agreements should be reviewed by business owners that expect to have taxable estates. In many cases it may be desirable instead to structure the buy-sell agreement as a cross-purchase agreement. 
For further information, please see our article that addresses the Connelly decision and its implications: US Supreme Court Affirms the Eighth Circuit’s Decision in Favor of the Government Concerning the Estate Tax Treatment of Life Insurance Proceeds Used to Fund a Corporate Redemption Obligation. 
3. Be Very Careful in Planning With Family Limited Partnerships and Family Limited Liability Companies
The September 2024 Tax Court memorandum decision of Estate of Fields v. Commissioner, T.C. Memo. 2024-90, provides a cautionary tale of a bad-facts family limited partnership (FLP) that caused estate tax inclusion of the property transferred to the FLP under both sections 2036(a)(1) and (2) of the Internal Revenue Code with loss of discounts for lack of control and lack of marketability. In doing so, the court applied the Tax Court’s 2017 holding in Estate of Powell v. Commissioner, 148 T.C. 392 (2017) — the ability of the decedent as a limited partner to join together with other partners to liquidate the FLP constitutes a section 2036(a)(2) estate tax trigger — and raises the specter of accuracy-related penalties that may loom where section 2036 applies.  
Estate of Fields illustrates that, if not carefully structured and administered, planning with family entities can potentially render one worse off than not doing any such planning at all. 
4. The IRS Gets Aggressive in Challenging Valuation Issues 
The past year and a half has seen the IRS become very aggressive in challenging valuation issues for gift tax purposes.
First, in Chief Counsel Advice (CCA) 202352018, the IRS’s National Office, providing advice to an IRS examiner in the context of a gift tax audit, addressed the gift tax consequences of modifying a grantor trust to add tax reimbursement clause, finding there to be a taxable gift. The facts of this CCA involved an affirmative consent by the beneficiaries to a trust modification to allow the trustee to reimburse the grantor for the income taxes attributable to the trust’s grantor trust status. Significantly, the IRS admonished that its principles could also apply in the context of a beneficiary’s failure to object to a trustee’s actions, or in the context of a trust decanting. 
Next, in a pair of 2024 Tax Court decisions — the Anenberg and McDougall cases — the IRS challenged early terminations of qualified terminable interest property (QTIP) marital trusts in favor of the surviving spouse that were then followed by the surviving spouse’s sale of the distributed trust property to irrevocable trusts established for children. While the court in neither case found there to be a gift by the surviving spouse, the Tax Court in McDougall determined that the children made a gift to the surviving spouse by surrendering their remainder interests in the QTIP trust. 
5. The Show Continues: The CTA No Longer Applicable to US Citizens and Domestic Companies
After an on-again-off-again pause of three months beginning in late 2024, the Corporate Transparency Act (CTA) is back in effect, but only for foreign reporting companies. On March 2, the US Department of the Treasury (Treasury) announced it will not enforce reporting requirements for US citizens or domestic companies (or their beneficial owners).
Pursuant to Treasury’s announcement, the CTA will now only apply to foreign entities registered to do business in the United States. These “reporting companies” must provide beneficial ownership information (BOI) and company information to the Financial Crimes Enforcement Network (FinCEN) by specified dates and are subject to ongoing reporting requirements regarding changes to previously reported information. To learn more about the CTA’s specific requirements, please see our prior client alert (note that the CTA no longer applies to domestic companies or US citizens, and the deadlines mentioned in the alert have since been modified, as detailed in the following paragraph).
On February 27, FinCEN announced it would not impose fines or penalties, nor take other enforcement measures against reporting companies that fail to file or update BOI by March 21. FinCEN also stated it will publish an interim final rule with new reporting deadlines but did not indicate when the final rule can be expected. Treasury’s March 2 announcement indicates that the government is expecting to issue a proposed rule to narrow the scope of CTA reporting obligations to foreign reporting companies only. No further details are available at this time, but domestic reporting companies may consider holding off on filing BOI reports until the government provides additional clarity on reporting requirements. Foreign reporting companies should consider assembling required information and being prepared to file by the March 21 deadline, while remaining vigilant about further potential changes to reporting requirements in the meantime.  
On the legislative front, the US House of Representatives passed the Protect Small Businesses from Excessive Paperwork Act of 2025 (H.R. 736) on February 10 in an effort to delay the CTA’s reporting deadline. The bill aims to extend the BOI reporting deadline for companies formed before January 1, 2024, until January 1, 2026. The bill is currently before the US Senate, but it is unclear whether it will pass in light of the latest updates.
6. Ethical and Practical Use of AI in Estate Planning
The wave of innovative and exciting artificial intelligence (AI) tools has taken the legal community by storm. While AI opens possibilities for all lawyers, advisors in the estate planning and family office space should carefully consider whether, and when, to integrate AI into their practice. 
Estate planning is a human-centered field. To effectively serve clients, advisors develop relationships over time, provide secure and discreet services, and make recommendations based on experience, compassion, and intuition.
Increasingly, AI tools have emerged that are marketed towards estate planning and family office professionals. These tools can (1) assist planners with summarizing complex estate planning documents and asset compilations, (2) generate initial drafts of standard estate planning documents, and (3) translate legal jargon into client-friendly language. Though much of the technology is in the initial stages, the possibilities are exciting. 
While estate planning and family office professionals should remain optimistic and open about the emerging AI technology, the following recommendations should be top of mind: 

First, advisors must scrutinize the data privacy policies of all AI tools. Advisors should be cautious when engaging with any AI program that requires the input of sensitive or confidential documents, in order to protect their clients’ privacy.
Next, advisors should stay up to date on the statutory and case law developments, as the legal industry is still developing its stance on AI. 
Finally, advisors should honor and prioritize the personal and human nature of estate planning and family advising. Over-automating one’s practice can come at the expense of building strong client relationships. 


AI in Business: The Risks You Can’t Ignore

Artificial Intelligence (AI) is revolutionizing business operations, offering advancements in efficiency, decision-making, and customer engagement. However, its rapid integration into business processes brings forth a spectrum of legal and financial risks that enterprises must navigate to ensure compliance and maintain trust.
The Broad Legal Definition of AI and Its Implications
In the United States, the legal framework defines AI far more expansively than the average person might expect, potentially encompassing a wide array of software applications. Under 15 U.S.C. § 9401(3), AI is any machine-based system that:

makes predictions, recommendations, or decisions,
uses human-defined objectives, and
influences real or virtual environments.

This broad definition implies that even commonplace tools like Excel macros could be subject to AI regulations. As Neil Peretz of Enumero Law notes, such an expansive definition means that businesses across various sectors must now re-appraise all of their software usage to ensure compliance with new AI laws.
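The three statutory prongs can be read as a simple checklist. The sketch below is purely illustrative (not legal advice), and the class and field names are hypothetical, but it shows why even a forecasting spreadsheet macro arguably satisfies the § 9401(3) definition:

```python
# Illustrative checklist for the three prongs of 15 U.S.C. § 9401(3).
# Class and field names are hypothetical, not drawn from the statute.
from dataclasses import dataclass

@dataclass
class SoftwareSystem:
    makes_predictions_or_decisions: bool  # prong 1
    uses_human_defined_objectives: bool   # prong 2
    influences_environment: bool          # prong 3

def may_fall_under_ai_definition(s: SoftwareSystem) -> bool:
    # All three prongs must be satisfied.
    return (s.makes_predictions_or_decisions
            and s.uses_human_defined_objectives
            and s.influences_environment)

# A sales-forecasting Excel macro: it predicts, it pursues a
# human-defined objective, and its output drives real decisions.
macro = SoftwareSystem(True, True, True)
print(may_fall_under_ai_definition(macro))  # True
```

The point of the exercise is the one made above: the statutory test turns on function, not sophistication, so compliance reviews should inventory all predictive software, not just products marketed as “AI.”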
Navigating the Evolving Regulatory Landscape
The regulatory environment for AI is rapidly evolving. The European Union’s AI Act, for instance, classifies AI systems into risk categories, imposing strict compliance requirements on high-risk applications. In the United States, various states are introducing AI laws, requiring companies to stay abreast of changing regulations.
According to Jonathan Friedland, a partner with Much Shelist, P.C., who represents boards of directors of PE-backed and other privately owned companies, developments in artificial intelligence are happening so quickly that many companies of even modest size are spending significant time developing compliance programs to ensure adherence to applicable laws.
One result, according to Friedland, is that “[a]s one might expect, the sheer number of certificate programs, online courses, and degrees now offered in AI is exploding. Everyone seems to be getting into the game,” Friedland continues, “for example, the International Association of Privacy Professionals, a global organization previously focused on privacy and data protection, recently started offering its ‘Artificial Intelligence Governance Professional’ certification.” The challenge for companies, according to Friedland, is “to invest appropriately without overdoing it.”
Navigating Bias and Discrimination in AI Systems
Legal challenges have arisen around algorithmic bias and accountability. These claims contend that the historical data used to train AI often reflects societal inequalities, which AI systems can then perpetuate.
Sean Griffin, of Longman & Van Grack, highlights cases where AI tools have led to allegations of discrimination, such as a lawsuit against Workday, where an applicant claimed the company’s AI system systematically rejected Black and older candidates. Similarly, Amazon discontinued an AI recruiting tool after discovering it favored male candidates, revealing the potential for AI to reinforce societal biases.
To mitigate these risks, businesses should implement regular audits of their AI systems to identify and address biases. This includes diversifying training data and establishing oversight mechanisms to ensure fairness in AI-driven decisions.
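One common arithmetic screen used in such audits is the EEOC's "four-fifths rule": a group's selection rate should be at least 80% of the highest group's rate. Below is a minimal sketch in Python; the group names and counts are hypothetical illustration only, not data from any case discussed above.

```python
# Minimal sketch of a four-fifths (80%) rule screen for an AI selection tool.
# Group labels and counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the system advanced."""
    return selected / applicants

def four_fifths_check(rates: dict[str, float]) -> dict[str, bool]:
    """Flag any group whose rate falls below 80% of the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best >= 0.8 for group, rate in rates.items()}

outcomes = {  # hypothetical audit data: (selected, applicants)
    "group_a": (48, 100),
    "group_b": (30, 100),
}
rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
passes = four_fifths_check(rates)
# group_b fails the screen: 0.30 / 0.48 = 0.625, which is below 0.8
```

A failed screen is not itself proof of unlawful discrimination, but it is the kind of signal a regular audit should surface for human review.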
Addressing Data Privacy Concerns
AI’s reliance on vast datasets, often containing personal and sensitive information, raises significant data privacy issues. AI-powered tools might be able to infer sensitive information, such as health risks from social media activity, potentially bypassing traditional privacy safeguards.
Because AI systems potentially have access to a wide range of data, compliance with data protection regulations like the GDPR and CCPA is crucial. Businesses must ensure that data used in AI systems is collected and processed lawfully, with explicit consent where necessary. Implementing robust data governance frameworks and anonymizing data can help mitigate privacy risks.
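One concrete anonymization technique is keyed pseudonymization: replacing direct identifiers with stable tokens before records enter an AI pipeline, so the system can still link records without seeing raw PI. A minimal sketch, with hypothetical field names and a placeholder key:

```python
# Sketch: keyed pseudonymization (HMAC-SHA256) of direct identifiers before
# records are fed to an AI system. Field names and the key are hypothetical;
# a real deployment would keep the key in a managed secrets store.
import hashlib
import hmac

SECRET_KEY = b"rotate-and-store-this-in-a-key-vault"  # placeholder only

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable keyed token. Without the key,
    the original value cannot be recovered or re-linked."""
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "purchase_total": 42.50}
safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "purchase_total": record["purchase_total"],  # non-identifying field kept as-is
}
```

Note that pseudonymized data generally remains personal data under the GDPR; true anonymization requires that re-identification is no longer reasonably possible.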
Ensuring Transparency and Explainability
The complexity of AI models, particularly deep learning systems, often results in ‘black box’ scenarios where decision-making processes are opaque. This lack of transparency can lead to challenges in accountability and trust. Businesses should be mindful of the risks associated with engaging third parties to develop or operate their AI solutions. In many areas of decision-making, explainability is required, and a black-box approach will not suffice. For example, when a consumer credit application is denied, specific adverse action reasons must be provided to the applicant.
To address this, businesses should strive to develop AI models that are interpretable and can provide clear explanations for their decisions. This not only aids in regulatory compliance but also enhances stakeholder trust.
Managing Cybersecurity Risks
AI systems are both targets and tools in cybersecurity. Alex Sharpe points out that cybercriminals are leveraging AI to craft sophisticated phishing attacks and automate hacking attempts. Conversely, businesses can employ AI for threat detection and rapid incident response.
The legal risks associated with AI in financial services highlight the importance of managing cybersecurity risks. Implementing robust cybersecurity measures, such as encryption, access controls, and continuous monitoring, is essential to protect AI systems from threats. Regular security assessments and updates can further safeguard against vulnerabilities.
Considering Insurance as a Risk Mitigation Tool
Given the multifaceted risks associated with AI, businesses should evaluate the extent to which certain types of insurance can help them manage and reduce risks. Policies such as commercial general liability, cyber liability, and errors and omissions insurance can offer protection against various AI-related risks.
Businesses can benefit from auditing business-specific AI risks and considering insurance as a risk mitigation tool. Regularly reviewing and updating insurance coverage ensures that it aligns with the evolving risk landscape associated with AI deployment.
Conclusion
While AI offers transformative potential for businesses, it also introduces significant legal and financial risks. By proactively addressing issues related to bias, data privacy, transparency, cybersecurity, and regulatory compliance, enterprises can harness the benefits of AI while minimizing potential liabilities.
AI tends to tell the prompter what they want to hear, whether it’s true or not, underscoring the importance of governance, accountability, and oversight in its adoption. Organizations that establish clear policies and risk management strategies will be best positioned to navigate the AI-driven future successfully.

To learn more about this topic view Corporate Risk Management / Remembering HAL 9000: Thinking about the Risks of Artificial Intelligence to an Enterprise. The quoted remarks referenced in this article were made either during this webinar or shortly thereafter during post-webinar interviews with the panelists. Readers may also be interested to read other articles about risk management and technology.
©2025. DailyDAC™, LLC d/b/a Financial Poise™. This article is subject to the disclaimers found here.

Data Processing Evaluation and Risk Assessment Requirements Under California’s Proposed CCPA Regulations

As we have previously detailed here, the latest generation of regulations under the California Consumer Privacy Act (CCPA), drafted by the California Privacy Protection Agency (CPPA), have advanced beyond public comment and are closer to becoming final. These include regulations on automated decision-making technology (ADMT), data processing evaluation and risk assessment requirements, and cybersecurity audits.
Assessments and Evaluations Overview
The new ADMT notice, opt-out, and access and appeal obligations and rights go into effect immediately upon the regulation’s effective date, which follows California Office of Administrative Law (OAL) approval. That effective date will either be subject to the quarterly regulatory implementation schedule in the Government Code or, as has been the case with prior CCPA regulations, arrive immediately on OAL sign-off. We will not know whether the CPPA will again seek a variance from the schedule until it submits the final rulemaking package.
Moving on to evaluations and risk assessments, the draft regulations do propose a phase-in, but only in part. Evaluations must be undertaken beginning on the regulation’s effective date. Assessment requirements likewise apply to practices commencing on the effective date, but businesses have a 24-month period to complete them, file certifications and abridged versions, and make them available for inspection.
However, since Colorado (which, like California, has very detailed requirements for conducting and documenting assessments), New Hampshire, Oregon, Texas, Montana, Nebraska, and New Jersey already require assessments, Delaware and Minnesota will this summer, and Indiana, Rhode Island, and Kentucky will by the new year, query whether the California phase-in is of much use. Of the 20 state consumer privacy laws, all but Utah and Iowa require assessments.
Further, without at least a cursory assessment, how can you determine if the notice, opt-out and access and appeal rights apply?
So, what is the difference between an evaluation and an assessment?
First, they are required by different provisions. Evaluations are required by Section 7201, and risk assessments by Section 7150.
Next, there is no phase-in of evaluations as with risk assessments.
Risk assessments are much more complex and prescribed, and are at the core of a risk benefit judgment decision, and must be available for inspection and abridged summaries must be filed.
The content of the evaluation, which need not be published or made subject to an inspection demand outside of discovery, need only address whether the process and technology are effective (in other words, materially error free) and whether they discriminate against a protected class (in other words, free of material bias). As such, evaluations have similarities to assessments under the Colorado AI Act, effective next year but likely to be amended before then, and the recently passed Virginia HB 2094 AI bill, which may or may not be signed by Governor Youngkin.
Thus, an evaluation alone won’t help you determine if the ADMT notice, opt-out and access and appeal rights apply, nor meet the risk assessment requirements. While it is a separate analysis, it can be incorporated into assessments assuming a company begins those immediately. 
Also, evaluations are not required for selling and processing of sensitive personal information (PI), as assessments are, and assessments are required for identification processing only to the extent AI is trained to do so, whereas any processing for identification is subject to an evaluation. Since CCBA (cross-context behavioral advertising) is part of behavioral advertising, which in turn is part of extensive profiling, sharing needs to be addressed in both evaluations and assessments.
Finally, under Section 7201, a business must implement policies, procedures, and training to ensure that the physical or biological identification or profiling works as intended for the business’s proposed use and does not discriminate based on protected classes.
So on to assessments, what activities need to be assessed?
First, selling or sharing. All 18 states that require assessments require them for this; though, for the non-California states, the trigger is processing for targeted advertising rather than “sharing.” Targeted advertising is broader than sharing for CCBA, but the California regulations catch up through the new concept of behavioral advertising.
Next, processing of sensitive personal information. The same 18 states require assessments for the processing of sensitive data, with differing definitions. For instance, what is considered children’s personal data differs considerably. Notably, the California draft Regulation amendments would raise the age from 13 to 16, and Florida is under 18. There is also variation in the definition of health data. 
Note, while the Nevada and Washington (and potential New York) consumer health laws do not explicitly require assessments, they are practically needed, and Vermont’s data broker law requires initial risk assessments and a process for evaluating and improving the effectiveness of safeguards.
Other Risk Assessment Triggers
Assessments are mandatory before using ADMT to make or assist in making a significant decision, which is “profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” This is a General Data Protection Regulation (GDPR) and European Data Protection Board (EDPB) inspired provision. The other states that require assessments also have a similar obligation, although the definitions may differ somewhat. In California, “Decisions that produce legal or similarly significant effects concerning a consumer” means decisions that result in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, healthcare services, or access to essential goods or services. Only California gives guidance on what are essential goods or services, by means of a parenthetical “(e.g., groceries, medicine, hygiene products, or fuel).” Critics are concerned that this limits algorithmic custom pricing, sometimes derogatively referred to as surveillance pricing, or even AI analysis of consumer behavior to decide where to open or close stores, though aggregate or de-identified data should suffice for that. There is considerable guidance out of the EU, which can be looked to, though it is clearly not binding. The EU approach is quite broad.
Speaking of looking to the EU, beware that the California and Colorado regulations diverge considerably from what is required under GDPR assessments, and keep in mind the material differences between GDPR with its lawful basis and legitimate interest tests and the US laws with opt-out concepts.
Uniquely amongst the states, California proposes the concept of extensive profiling, which covers any:
1) work or educational profiling;
2) public profiling; or
3) behavioral advertising.
Note however, that whilst behavioral advertising is said to include CCBA, it is broader and is defined as “the targeting of advertising to a consumer based on the consumer’s personal information obtained from the consumer’s activity—both across businesses, distinctly-branded websites, applications, or services, (i.e., CCBA) and within the business’s own distinctly-branded websites, applications, or services.” Significantly, this closes the gap between CCBA and the non-California regulation of targeted advertising, by including entirely 1st party behavioral advertising.
There is a carve out for “nonpersonalized advertising” as defined by CCPA Section .140(t), which means advertising and marketing that is based solely on a consumer’s personal information derived from the consumer’s current interaction with the business, with the exception of the consumer’s precise geolocation. Note also that here the exception is specifically limited to where the PI is not disclosed to third parties (i.e., not a processor or contractor). This has led some to argue that this guts the carve out. However, if personal data were disclosed to a third party, that would likely be a sale, especially given the breadth of the concept of “other valuable consideration” in the eyes of the regulators. So, the approach really is not inconsistent with the current treatment of contextual advertising.
PI to train ADMT or AI
Assessments are also proposed to be required for processing of PI to train ADMT or AI. This is another uniquely California concept, at least under state consumer privacy laws, and the California Chamber of Commerce and others, including some members of the legislature, have argued that, like other aspects of the proposed regulation’s treatment of ADMT, it goes beyond the Agency’s statutory authority. It is interesting to note that one of the topics included in the US House Committee on Energy and Commerce’s request for information to inform federal privacy legislation this week is the role of privacy and consumer protection standards in AI regulation and specifically the impact of state privacy law regulation of ADMT and profiling on US AI leadership. Another topic of focus is “the degree to which US privacy protections are fragmented at the state level and costs associated with fragmentation,” which seems to be inviting a preemption scope debate. So, by the time at least this part of the regulation requires action, it may possibly be curtailed by federal law. That said, evaluations and assessments are practically necessary to guide compliance and information governance, and, to date, repeated attempts at federal consumer privacy legislation have been unsuccessful.
Assessment Details
Most state laws do not have any specifics regarding how to conduct or document risk assessments, with the notable exception of Colorado. When it started assessment rulemaking, the Agency stated that it would look to try to create interoperability with Colorado and would also look to the guidance by the EDPB. While both can be seen to have influenced California’s proposed requirements, California adds to these.
Some of the content requirements are factual, such as purposes of processing and categories of PI. Others are more evaluative, such as the quality of the PI and the expected benefits and potential negative impacts of the processing, and how safeguards may mitigate those risks of harm. Nine examples are included in Section 7152(a)(5) to guide analysis.
Section 7152(a)(3) calls for analysis of specific operational elements for the processing.
Focus on Operational Elements
These operational elements are listed here[1] and can be seen as not only getting under the hood of the processing operations but also informing consumer expectations, and the risks and benefit analysis that is the heart of an assessment. Note, in particular, the inquiries into retention and logic, the latter meaning ‘built-in’ assumptions, limitations, and parameters that inform, power or constrain the processing, particularly as concerns ADMT.
Analysis and Conclusions
The assessment must not only document those processing details and the risk / benefit and risk mitigation analysis, but the conclusions and what was approved and/or disapproved.
The draft regulations call for participation by all relevant stakeholders, and they must be specifically named, as must the identification of the person responsible for the analysis and conclusions.
Filing and Certification
California diverges from the other states with respect to reporting requirements. Annually, a responsible executive must certify to the CPPA that the business assessed all applicable processing activities, and an abridged assessment must be filed for each processing activity actually initiated. This will make it very apparent which businesses are not conducting assessments.
Further, the draft regulations limit what is required in the abridged assessments to largely factual statements:

The triggering processing activity;
The purposes;
The categories of personal information, including any sensitive categories; and
The safeguards undertaken.

Note that the risk / benefit analysis summary is not a part of the filing.
Inspection and Constitutional and Privilege Issues
Contrast that with the detailed risk / benefit analysis required by the full assessment, which, like all of the other states that require or will require assessments, is subject to inspection upon request.
This GDPR-inspired approach to showing how you made decisions calls for publication of value judgments, which, as I have opined in an article that is in your materials (see a synopsis here), is likely unconstitutional compelled speech. While the 9th Circuit in the X Corp and NetChoice cases struck down harm assessment and transparency requirements in the context of children’s online safety, the Court distinguished compelling disclosure of subjective opinions about a company’s products and activities from requiring disclosure of merely product facts. There is no 1st Amendment in GDPR-land, so we will have to wait and see if the value judgment elements of assessments can really be compelled for inspection.
Inspections also raise serious questions about attorney-client and work product privilege. Some states specifically provide that inspection of assessments is not a waiver of privilege, and/or that assessments will be maintained as confidential and/or are not subject to public records access requests. The draft regulations do not; however, the CCPA itself provides that the Act shall not operate to infringe on evidentiary privileges. In any event, consider labeling legal analysis and counsel as such and maintaining them apart from what is maintained for inspection.[2]

[1] Planned method for using personal information; disclosures to the consumer about processing, retention period for each category of personal information, categories of third parties with access to consumers’ personal information, relationship with the consumer, technology to be used in the processing, number of consumers whose personal information will be processed and the logic used.
[2] Note – Obtaining educational materials from Squire Patton Boggs Services Ireland, Limited, or our resellers, does not create an attorney-client relationship with any Squire Patton Boggs entity and should be used under the direction of legal counsel of your choice.

Are Workforce Reductions Coming to the Private Sector? And, if so, How Should Companies Handle Them?

Massive federal workforce reductions (once a rare event) have been featured prominently in the news lately, along with reports of criticism about the way they are occurring. Will private companies follow suit? Some economic signs, such as the continuing low unemployment rate, do not point in that direction. However, layoffs increased 28% in January as compared to the previous month, and WARN filings, as well as increasing company announcements of projected future layoffs, tell a different story.
Moreover, short-term causes, such as the impact of tariffs and cuts in government contracting, as well as slightly longer-term developments such as the effects of artificial intelligence, suggest more workforce reductions may be coming in the near future.
While no one likes a reduction-in-force (RIF), there is a right way — and a wrong way — to conduct one. RIFs require meticulous planning and execution. While each job action must be analyzed according to its unique facts and circumstances, the following steps can promote fairness, minimize disruption, stabilize morale, and reduce legal risks in conducting a RIF:

Continue to Employ Good Performance Management: This may reduce the number of necessary reductions and make the decisions easier. Conversely, undifferentiated ratings make them harder (e.g., if everyone is rated “excellent” on annual performance reviews, the reviews do not function as useful tools in the RIF context).
Consider Other Options: Is a voluntary program an option? Can the company achieve the cost savings by other means, such as eliminating contractors rather than employees?
Once: Try to make the reductions occur at one time, rather than as a staggered series of actions. Among other things, doing so can help reduce uncertainty and anxiety among the remaining workforce.
WARN: If you meet the federal Worker Adjustment and Retraining Notification (WARN) Act layoff thresholds, or the typically lower-threshold state law criteria if applicable, issue a WARN Act notice at least 60 days before the layoffs.
Process: Employ a consistent process with defined and fair reduction criteria that is aligned with the collective bargaining agreement (if unionized) and company policy.
Documentation: Memorialize decisions, using template documents that are easy for managers to complete, and retain those documents.
Avoid Discrimination: Make sure that your process is devoid of unlawful discrimination. Be especially careful about age discrimination, which is increasing statistically, and avoid using criteria that are likely to negatively impact older workers.
Legal Review: Work with legal to conduct a privileged disparate impact statistical analysis (to ensure that the layoffs do not inadvertently impact employees on the basis of protected characteristics) as well as a legal review to reduce risk.
Severance and Compliant Agreements: Pay severance whenever possible and obtain a release. It is worth it to eliminate risk and stabilize morale. Comply with the Older Workers’ Benefit Protection Act’s disclosure, review, and revocation requirements. Make sure to revise your standard agreements to account for recent state law changes (e.g., precluding non-disparagement and confidentiality clauses prohibiting discrimination-related communications).
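The disparate impact analysis mentioned in the steps above is ultimately a statistical comparison of layoff rates across protected and non-protected groups. A minimal sketch of one common approach, a two-proportion z-test, follows; the age bands and counts are invented for illustration, and any real analysis should be conducted under attorney-client privilege at counsel's direction, as noted above.

```python
# Hypothetical sketch of a disparate impact screen for a RIF: compare layoff
# rates for workers age 40+ (the band protected under the ADEA) against
# younger workers using a two-proportion z-test. All counts are invented.
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int) -> float:
    """z-statistic for the null hypothesis that both groups share one layoff rate.
    x = number laid off, n = group size."""
    p_pool = (x1 + x2) / (n1 + n2)              # pooled layoff rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return ((x1 / n1) - (x2 / n2)) / se

# hypothetical selections: 30 of 120 workers age 40+ vs 10 of 130 under 40
z = two_proportion_z(x1=30, n1=120, x2=10, n2=130)  # 25% vs ~7.7% laid off
flagged = abs(z) > 1.96  # roughly the 95% two-sided threshold
```

A large |z| (here well above 1.96) does not establish discrimination by itself, but it flags a selection pattern that warrants revisiting the criteria before the RIF is finalized.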