Colorado’s Artificial Intelligence Act (CAIA) Updates: A Summary of CAIA’s Consumer Protections When Interacting with Artificial Intelligence Systems
During the 2024 legislative session, the Colorado General Assembly passed Senate Bill 24-205, known as the Colorado Artificial Intelligence Act (CAIA). The law takes effect on February 1, 2026, and requires developers and deployers of high-risk AI systems to protect Colorado residents (“consumers”) from risks of algorithmic discrimination. Notably, the Act also requires developers and deployers to disclose to consumers that they are interacting with an AI system. Colorado Gov. Jared Polis, however, voiced concerns when he signed the bill in 2024 and expected that legislators would refine key definitions and update the compliance structure before the February 2026 effective date.
As Colorado moves forward toward implementation, the Colorado AI Impact Task Force issued its recommendations for updates in its February 1, 2025 Report. These updates — along with the description of the Act — are covered below.
Background
A “high-risk” AI system is defined to include any machine-based system that infers outputs from data inputs and has a material legal or similarly significant effect on the provision, denial, cost, or terms of a product or service. The statute identifies various sectors that involve consequential decisions, such as decisions related to healthcare, employment, financial or credit services, housing, insurance, or legal services. Additionally, CAIA has numerous carve-outs for technologies that perform narrow tasks or certain functions, such as cybersecurity, data storage, and chatbots.
Beyond these use-case provisions, CAIA also imposes on developers of AI systems the duty to prevent algorithmic discrimination and to protect consumers from any known or foreseeable risks arising from use of the AI system. A developer is one that develops or modifies an AI system used in the state of Colorado. Among other things, a developer must make documentation available for the intended uses and potential harmful uses of the high-risk AI system.
Similarly, CAIA also regulates a person that is doing business in Colorado and deploys a high-risk AI system for Colorado residents to use (the “deployer”). Deployers face stricter regulations and must inform consumers when AI is involved in a consequential decision. The Act requires deployers to implement a risk management policy and program to govern the use of the AI system. Further, the deployers must report any identified discrimination to the Attorney General’s Office within 90 days and must allow consumers to appeal AI-based decisions or request human review of the decision when possible.
Data Privacy and Consumer Rights
Consumers have the right to opt out of data processing related to AI-based decisions and may appeal any AI-based decisions. This opt-out may affect further automated decision-making related to the Colorado resident and the processing of personal data for profiling of that consumer. The deployer must also disclose to the consumer when a high-risk AI system was used in a decision-making process that resulted in an adverse decision for the consumer.
Exemptions
The CAIA contains various exemptions, including for entities operating under other regulatory regimes (e.g., insurers, banks, and HIPAA-covered entities) and for the use of certain approved technologies (e.g., technology cleared, approved, or certified by a federal agency, such as the FAA or FDA). There are some caveats, however. For example, HIPAA-covered entities are exempt only to the extent they provide healthcare recommendations that are generated by an AI system, that require the HIPAA-covered entity to take action to implement the recommendation, and that are not considered “high risk.” Small businesses are exempt only to the extent that they employ fewer than 50 full-time employees and do not train the system with their own data. Thus, deployers should closely analyze the available exemptions to ensure their activities fall squarely within an exemption.
Updates
The recent Colorado AI Impact Task Force Report encourages additional changes to CAIA before enforcement begins in February 2026. The outstanding issues involve ambiguities, compliance burdens, and various stakeholder concerns. The Governor, for his part, is focused on whether the guardrails would inhibit innovation and AI progress in the state.
The Colorado AI Impact Task Force notes that there is consensus to refine documentation and notification requirements. However, there is less consensus on how to adjust the definition of “consequential decisions.” Reworking the exemptions to the definition of covered systems is also a change desired by both industry and the public.
Other potential changes to the CAIA depend on how interconnected sections are revised in relation to one another. For example, changes to the definition of “algorithmic discrimination” depend on related issues concerning developers’ and deployers’ obligations to prevent algorithmic discrimination and the corresponding enforcement provisions. Similarly, intervals for impact assessments may be greatly affected by changes to the definition of “intentional and substantial modification” to high-risk AI systems. Further, because impact assessments are interrelated with developers’ risk management programs, proposed changes to either will likely implicate the other.
Lastly, there remains firm disagreement on amendments related to several definitions. “Substantial factor” is one debated definition that will require a creative approach, as it determines the scope of AI technologies subject to the CAIA. Similarly, the “duty of care” imposed on developers and deployers is hotly contested, including whether to remove the concept entirely or replace it with more stringent obligations. Other debated topics subject to change include the small business exemption, the opportunity to cure incidents of non-compliance, trade secret exemptions, the consumer right to appeal, and the scope of attorney general rulemaking.
Guidance
Given that most stakeholders recognize that changes are needed, any business affected by the CAIA should continue to watch the legislative process for amendments that could drastically alter the scope and requirements of the Colorado AI Act.
Takeaways
Businesses should assess whether they, or their vendors, use any AI system that could be considered high risk under the CAIA. Some recommendations include:
Assess AI usage and consider whether that use is within the definition of the CAIA, including whether any exemptions are available
Conduct an AI risk assessment consistent with the Colorado AI Act
Develop an AI compliance plan that is consistent with the CAIA consumer protections regarding notification and appeal processes
Continue to monitor the updates to the CAIA
Evaluate contracts with AI vendors to ensure that necessary documentation is provided by the developer or deployer
Colorado has taken the lead as one of the first states in the nation to enact sweeping AI laws. Other states will likely look to the progress of Colorado and enact similar legislation or make improvements where needed. Therefore, watching the CAIA and its implementation is of great importance in the burgeoning field of consumer-focused AI systems that impact consequential decisions in the consumer’s healthcare, financial well-being, education, housing, or employment.
Belgian DPA Finds Broad Tax Information Transfers to IRS Unlawful
The Belgian Data Protection Authority recently ruled that a Belgian government entity, FPS Finance, cannot transfer the personal data of “accidental Americans” to the IRS. According to the decision, the transfers needed to cease for several reasons.
The case was brought by a dual US-Belgian citizen, who, while a US citizen by birth, did not reside in the US or otherwise have any significant connections to the US (i.e., an “accidental American”). He argued that his personal information should not be transferred to the US, even though the US’s Foreign Account Tax Compliance Act requires all US citizens to report their tax information to the US to combat tax evasion. That law is enforced in Belgium through a 2014 bilateral treaty, which was entered into before the GDPR’s effective date. The Belgian tax authority argued that it could make the transfer under a GDPR exception (Article 96), which allows pre-GDPR international agreements, such as this one, to remain in place if they comply with the law in effect at the time. Thus, the Belgian DPA examined not only whether the transfer violated GDPR (as the individual argued) but also whether it violated the laws in existence at the time the treaty was signed.
The Belgian DPA found that the transfers did not comply with pre-GDPR law because the amount of information being transferred exceeded what was necessary to meet the specified purposes. Further, FATCA as implemented was not compliant with current GDPR standards. The Belgian DPA also emphasized that FATCA lacked sufficient safeguards to protect the personal data of EU residents, especially those with tenuous or accidental ties to the US. The Belgian DPA gave FPS Finance a year to modify its transfer process, including by minimizing the amount of data transferred, conducting a data transfer impact assessment, and giving individuals more information about its data processing activities.
Putting it Into Practice: This decision is a reminder that there may be an increase in scrutiny of data transfers to the US. While the facts in this case were narrow, we expect that there may be other, similar decisions in the future.
Affirmative Artificial Intelligence Insurance Coverages Emerge
It was only a matter of time before new insurance coverages targeting the risks posed by artificial intelligence (AI) would hit the market. That time is now.
As the use of AI continues to proliferate, so too does our understanding of the risks presented by this broad and powerful technology. Some risks appear novel in form while others mirror traditional exposures that have long been viewed as insurable causes of loss. AI-related risks are made all the more novel because the meaning of AI itself is not only up for debate, but is constantly evolving as the technology matures. This mixture of old and new has the potential to create coverage gaps in even the most comprehensive insurance programs. Hence the development of specialized, AI-specific insurance solutions. In just the past few weeks, two new affirmative AI coverages have entered the market, signaling an acceleration in this trend.
Armilla’s Affirmative AI Coverage
On April 30, 2025, Armilla Insurance Services launched an AI liability insurance policy underwritten by certain underwriters at Lloyd’s, including Chaucer Group. This product is among the first to offer clear, affirmative coverage for AI-related risks, rather than relying on protections embedded in legacy policies.
The introduction of this new, affirmative coverage should have no impact on the availability of coverage for AI-related losses that meet the terms of coverage under existing insurance policies such as cyber, directors and officers (D&O), or technology errors and omissions (E&O) policies. Rather, the new product should address unique exposures not contemplated under traditional coverages. Risks specifically contemplated under Armilla’s policy include AI hallucinations, deteriorating AI model performance, and mechanical failures or deviations from expected behavior. Armilla’s affirmative coverage may offer greater certainty for policyholders in an increasingly uncertain risk environment.
Google Cloud’s Entry into AI Risk Management
Earlier in 2025, Google took its own significant step into AI-specific risk mitigation by announcing a partnership with insurers Beazley, Chubb, and Munich Re. The collaboration introduces a tailored cyber insurance solution with affirmative AI coverage that Google Cloud customers can purchase from the partner insurers.
Customers that purchase the Google-specific insurance coverage receive a policy endorsement providing a suite of protections that can include business interruption coverage for failures in Google Cloud services, liability coverage for certain bodily injury or property damage, and protection for trade secret losses linked to malfunctioning AI tools. By embedding insurance directly into its cloud offerings, Google has taken a proactive role in delivering technological innovation while also managing the associated risks.
Insuring the AI Future
The emergence of affirmative AI insurance products marks a key shift in the industry’s approach to managing AI-driven risks. With companies like Armilla leading the charge, insurers are beginning to address perceived coverage gaps that traditional policies may overlook. As momentum builds, 2025 is likely to bring a continued rollout of AI-specific coverages tailored to this evolving landscape. Collectively, these developments reflect a growing recognition across the industry of the distinct and complex nature of AI-related risk.
Spring 2025 Kattison Avenue
Against the backdrop of many significant developments in the advertising law space, we are thrilled to release the Spring 2025 issue of Kattison Avenue. In this edition, you will find updates on the Trump administration’s imposition of tariffs on imports and their impact on retailers and consumers, UK efforts to improve online safety for children, recent decisions by the National Advertising Division (NAD) affecting advertisers and influencers, and considerations for businesses using Generative AI (GenAI) in their day-to-day operations.
First, Intellectual Property Partner and Advertising, Marketing and Promotions Co-Chair Christopher Cole writes about businesses that rely on tariffed imports that are considering itemizing “tariff-related” costs separately to explain the price hikes to consumers. Chris notes that, while attributing part of the cost to tariffs is not categorically prohibited, calculating and disclosing the precise amount of tariff surcharges will be subject to truth-in-advertising principles such as the California Honest Pricing Law. Then, London Deputy Managing Partner Terry Green discusses the United Kingdom’s robust efforts to improve online safety for kids and recent guidance that all platforms under the Office of Communications’ (Ofcom) Online Safety Act (OSA) must comply with to mitigate children’s exposure to harmful content.
Up next, Intellectual Property Associate Catherine O’Brien summarizes recent NAD decisions targeting third-party marketing by celebrities and influencers. Katie describes the NAD’s recent evaluations, as part of its routine monitoring program, of social media posts by third parties that found unsubstantiated claims or failure to meet disclosure standards, emphasizing that brands must exercise meaningful control over advertising claims that are made on their behalf. Finally, an article by Intellectual Property Partner Michael Justus explains that GenAI vendors, models and use cases are not all created equal. He advises companies to complete due diligence before selecting model providers, carefully scrutinize use cases, and implement policies and training that reflect enterprise risk tolerance.
In This Issue
Tips For Companies Crafting Tariff Surcharge Disclosures
Byte-Sized Protection: Keeping Kids Safe Online, One Risk Assessment at a Time
Influencers Say the Darndest Things: National Advertising Division Targets Third-Party Marketing in Recent Decisions
Choose Your GenAI Model Providers, Models and Use Cases Wisely
News to Know
State Regulators Poised to Increase Enforcement Efforts as Trump Administration Executes Deregulation Agenda
In the first three months of the second Trump administration, federal regulators have signaled a shift in priorities while enforcing federal securities violations and consumer protection laws. In fact, the administration has effectively shuttered the Consumer Financial Protection Bureau (CFPB) and effected significant changes to the Securities and Exchange Commission’s (SEC or Commission) organizational structure and enforcement procedures. As federal regulators shift their focus, state attorneys general have shown a willingness to ramp up enforcement efforts. States have various tools at their disposal, including enforcing existing federal and state consumer financial protection and securities laws and amending state law to expand their regulatory enforcement authority.
The variances in state law and appetites of the state attorneys general may result in a patchwork style of enforcement across the United States. Moreover, states with a more aggressive enforcement approach, such as New York and Massachusetts, may also spur other states to action.
Trump Administration’s Deregulation Agenda Expected to Impact the SEC Enforcement Program
Developments in Washington strongly suggest that the SEC under the Trump administration will depart from aggressive and novel enforcement strategies that characterized the previous administration. In the first few days of the current administration, President Trump announced a “massive” deregulation initiative,1 which we expect will impact the breadth and volume of SEC enforcement activity. Some changes already taking place at the SEC include:
Refocusing the Commission’s Enforcement Approach. During his confirmation hearing, SEC Chair Paul Atkins said he “will strive to protect investors from fraud, to keep politics out of how our securities laws and regulations are applied, and to advance clear rules of the road that encourage investment in our economy.”2 He further called “for the SEC to return to its core mission” of “investor protection; fair, orderly, and efficient markets; and capital formation.”3 Senate Banking Chairman Tim Scott observed that the Commission under Chair Atkins will “roll back harmful Biden-era policies” and “provide regulatory clarity for digital assets.”4
Focus on Investor Fraud Protection. Going forward, many SEC observers expect the primary enforcement priority will shift to protecting investors from clear cases of fraud, rather than pursuing broader or more innovative regulatory actions.5
Reduction in Enforcement Division Authority. The SEC revoked the Director of the Division of Enforcement’s ability to initiate investigations, formally centralizing decision-making at the Commission level.6
Increased White House Oversight. Executive orders now restrict the SEC’s independent rule-making authority7 and embed a White House liaison in key decision‑making processes.8
Coordination with DOGE. Another executive order directs the SEC to coordinate its rule-making efforts with the DOGE government task force, which could lead to further agency restructuring and efficiency measures.9
Office Closures and Staff Reductions. The SEC reportedly canceled leases for major regional offices and plans to eliminate regional director positions, reducing the agency’s physical presence and staff autonomy.10
Focus on Big Firms. Then-Acting SEC Chair Mark Uyeda suggested in an April 8, 2025, speech that the SEC could prioritize enforcement actions against larger, more complex investment advisers and firms, leaving oversight of smaller firms to state regulators.11
Reduction in Crypto Enforcement. There will be fewer enforcement actions against the crypto industry, with a preference for rule-making and public guidance over enforcement actions to clarify the regulatory status of crypto assets.12
Cryptocurrency Enforcement Taken on by the States
Particularly in the area of cryptocurrency, states are poised to step up enforcement activity. This follows the disbanding of the Department of Justice (DOJ) National Cryptocurrency Enforcement Team and Deputy Attorney General Todd Blanche’s announcement that the DOJ will “no longer pursue litigation or enforcement actions that have the effect of superimposing regulatory frameworks on digital assets[.]”13 The DOJ will instead focus on prosecuting individuals who victimize digital asset investors or use digital assets in furtherance of criminal conduct, including terrorism, human trafficking, and gang financing.14
Several states have already taken steps to ramp up enforcement actions relating to cryptocurrency, with others likely to follow. Recent examples include:
New York
In recent years, the Attorney General has filed multiple lawsuits against crypto platforms for selling or purchasing crypto tokens without registering in the state.
Sued KuCoin in March 2023 for failing to register as a securities and commodities broker-dealer under New York law.
Secured a consent order in December 2023 banning KuCoin from trading securities and commodities in New York, requiring $16.7 million in refunds to investors and $5.3 million in penalties.15
Iowa
Attorney General filed lawsuits against Lux Vending, LLC (Bitcoin Depot) and GDP Holdings LLC (Coin Flip), operators of cryptocurrency ATMs, alleging insufficient policies and procedures to identify and block scams.16
Claims include Iowa Consumer Fraud Act violations, unfair and deceptive practices, and misrepresentation.17
Asserted that companies profit from fees charged to consumers sending cryptocurrency to scammers and fail to warn or protect users adequately.
Pennsylvania
Attorney General issued a public warning to consumers about scams involving cryptocurrency ATMs.18
Provided tips for identifying scams and encouraged scam victims to contact the Attorney General’s office.
Indicated potential for future legal action against crypto companies operating ATMs in the state.
States Prepare for the Uncertain Future of the CFPB
Likewise, as the Trump administration seeks to dismantle the CFPB, states are preparing to fill the gap in regulation and enforcement of consumer protection violations.19 Recent actions states are taking to prepare for the CFPB enforcement gap include the following:20
Amicus Brief Filing. Twenty-three state attorneys general filed an amicus brief supporting the National Treasury Employees Union’s action to block the shutdown of the CFPB, emphasizing the Bureau’s historical partnership with states in consumer protection cases.21
Independent Authority Under CFPA. States leverage their independent authority under the Consumer Financial Protection Act (CFPA) to bring civil actions against covered persons or providers for unfair, deceptive, and abusive acts or practices.22 Michigan’s attorney general, for example, brought a claim under the CFPA against an online lender for offering loans with exorbitant interest rates, resulting in a settlement that stopped the lender from marketing and extending new loans to Michigan consumers.23
New York’s FAIR Act Proposal.
New York Attorney General Letitia James proposed the Fostering Affordability and Integrity through Reasonable Business Practices Act (FAIR), which would expand the state’s consumer protection law to cover “unfair” and “abusive” practices, allowing for broader enforcement authority.24
The FAIR Act would permit the New York attorney general to bring claims for a single instance of unfair, deceptive, or abusive activity, rather than being limited to conduct impacting the public at large.
The FAIR Act would enable the New York attorney general to address a wide range of conduct, including predatory loans, fraudulent landlord-tenant transactions, and other prohibited activities affecting individuals.
New York Banking Fee Regulations. New York’s Department of Financial Services proposed regulations to eliminate exploitative and deceptive banking fees, such as overdraft fees charged on overdrafts of less than $20 and overdraft fees that exceed the overdrawn amount.25
Massachusetts Junk Fee Regulations. Massachusetts Attorney General Andrea Joy Campbell issued new regulations under the state’s consumer protection law to curb “junk fees.”26 The regulations require companies to disclose the total price of a product or service upfront and provide clear information regarding additional charges.
Expanded State Enforcement for Financial Markets
States are also increasingly scrutinizing new financial products and digital platforms, with particular attention to the trading of event contracts and the practices of online investment platforms. Recent actions in Massachusetts highlight how state regulators are responding to perceived risks and potential violations in these emerging areas.
In March, Massachusetts Secretary of the Commonwealth Bill Galvin issued a subpoena to Robinhood over its launch of a prediction markets hub, which allows users to bet on the outcomes of events such as March Madness basketball tournaments.27 Galvin raised concerns about integrating gambling-like features on a platform popular with young investors, suggesting these event contracts are designed to lure users away from sound investing.
Massachusetts previously filed an enforcement action against Robinhood for improper “gamification” features, resulting in a $7.5 million settlement for violations of state securities laws.28
The current investigation may focus on potential violations of Massachusetts’s Fiduciary Rule, which requires broker-dealers and investment advisors to act with utmost care and loyalty to customers and make recommendations solely in the customer’s best interest.29
Conclusion
Unlike past White House transitions, when federal regulators’ priorities remained relatively consistent, the Trump administration’s agenda has and will likely continue to significantly curtail the scope and volume of actions brought by federal regulators. However, we can expect state attorneys general, regulators, and legislators to increase enforcement efforts against financial markets participants. We will continue to monitor state-level initiatives very closely and will alert our financial markets clients to any significant developments.
1 The White House, “Fact Sheet: President Donald J. Trump Launches Massive 10-to-1 Deregulation Initiative” (Jan. 31, 2025), https://www.whitehouse.gov/fact-sheets/2025/01/fact-sheet-president-donald-j-trump-launches-massive-10-to-1-deregulation-initiative/.
2 Paul Atkins, Opening Statement Before the Senate Banking Committee, Nomination Hearing of Paul Atkins (Mar. 27, 2025).
3 Id.
4 Sen. Tim Scott, “Scott Applauds Paul Atkins’ Confirmation as SEC Chairman” (Apr. 9, 2025), Senate Banking Committee, https://www.banking.senate.gov/newsroom/majority/scott-applauds-paul-atkins-confirmation-as-sec-chairman.
5 On April 29, 2025, for example, the SEC filed a complaint against a CEO of an investment advisory firm and business development company, alleging the CEO defrauded investors by making material misrepresentations in offering documents provided to prospective investors and engaged in self-dealing by extending loans to two companies in which the CEO had undisclosed financial interests. See Securities and Exchange Comm’n v. Derek R. Taller, No. 25 Civ. 3537 (S.D.N.Y. Apr. 29, 2025).
6 Delegation of Authority to Director of the Division of Enforcement, 90 Fed. Reg. 12105 (Mar. 10, 2025).
7 Exec. Order 14215, 90 Fed. Reg. 10447 (Feb. 24, 2025).
8 Exec. Order 14215, 90 Fed. Reg. 10447 (Feb. 24, 2025).
9 Jessica Corso & Jon Hill, “Atkins Suggests He May Open SEC’s Doors To DOGE” (Mar. 27, 2025), Law360, https://www.law360.com/banking/articles/2316005.
10 Carl Ayers, “RCW exclusive: Leases on three SEC regional offices to end” (Mar. 7, 2025), Regulatory Compliance Watch, http://regcompliancewatch.com/rcw-exclusive-leases-on-three-sec-regional-offices-to-end/.
11 Mark T. Uyeda, “Remarks to the Annual Conference on Federal and State Securities Cooperation” (Apr. 8, 2025), SEC, http://sec.gov/newsroom/speeches-statements/uyeda-nasaa-040825.
12 Mark T. Uyeda, “Remarks at the Crypto Task Force’s Inaugural Roundtable” (Mar. 21, 2025), SEC, https://www.sec.gov/newsroom/speeches-statements/uyeda-remarks-crypto-roundtable-032125.
13 DOJ, “Memorandum for All Department Employees” (Apr. 7, 2025), https://www.justice.gov/dag/media/1395781/dl?inline.
14 The SEC has similarly reduced its investigations and enforcement in the area of cryptocurrency. In late April, for example, PayPal disclosed in its quarterly Form 10-Q report that the SEC was closing an inquiry, opened in November 2023, regarding PayPal’s PYUSD stablecoin, which pegs its value to the U.S. dollar.
15 New York Stipulation and Consent (Dec. 8, 2023), https://ag.ny.gov/sites/default/files/settlements-agreements/kucoin-stipulation-and-consent.pdf.
16 Iowa Attorney General, “Attorney General Bird Sues Crypto ATM Companies for Costing Iowans More than $20 Million” (Feb. 26, 2025), https://www.iowaattorneygeneral.gov/newsroom/attorney-general-bird-sues-crypto-atm-companies-for-costing-iowans-more-than-20-million.
17 Id.
18 Pennsylvania Attorney General, “AG Sunday Warns Pennsylvanians of Rise in Scams Involving Bitcoin ATMs” (Feb. 25, 2025), https://www.attorneygeneral.gov/taking-action/ag-sunday-warns-pennsylvanians-of-rise-in-scams-involving-bitcoin-atms/.
19 Note that the materials relied upon by Katten for purposes of this advisory do not appear publicly on the CFPB’s website. However, the materials reviewed appear on CFPB letterhead and, as described herein, are consistent with public positions agency leadership has taken with respect to the nature of future agency activities in light of the recent presidential election.
20 For a closer look at what the CFPB’s new leadership proposes, see Katten’s recent advisory, “CFPB Suggests Shift In Supervision and Enforcement Priorities.”
21 National Treasury Employees Union, et al. v. Russell Vought, et al., No. 25-cv-00381, Dkt. No. 24 (D.D.C. Feb. 21, 2025), https://www.marylandattorneygeneral.gov/News%20Documents/022125_DC_DCt_Amicus.pdf.
22 Id. at 4.
23 Dana Nessel, Attorney General of the State of Michigan v. Huggy Lamar Price, et al., No. 19-cv-13078, Dkt. No. 1 (E.D. Mich. Oct. 18, 2019), https://www.michigan.gov/ag/-/media/Project/Websites/AG/releases/2019/october/Complaint_FILED.pdf?rev=ed465f8086f147629de063292258e59c&hash=96ABAB057544A8516DEC0A12D0C4FC88.
24 N.Y. Gen. Bus. Law, FAIR Business Practices Act, § 349.
25 New York Governor, “Protecting Consumers: Governor Hochul Cracks Down on Exploitative Overdraft Fees Targeting Low-Income New Yorkers” (Jan. 22, 2025), https://www.governor.ny.gov/news/protecting-consumers-governor-hochul-cracks-down-exploitative-overdraft-fees-targeting-low.
26 Mass. Attorney General, “AG Campbell Releases ‘Junk Fee’ Regulations to Help Consumers Avoid Unnecessary Costs” (Mar. 3, 2025), https://www.mass.gov/news/ag-campbell-releases-junk-fee-regulations-to-help-consumers-avoid-unnecessary-costs.
27 “Massachusetts regulator subpoenas Robinhood over sports betting” (Mar. 24, 2025), CNN, https://www.cnn.com/2025/03/24/business/regulators-probe-robinhood-prediction-markets-march-madness/index.html.
28 Id.
29 950 Code Mass. Regs. § 12.207(1)(a).
“SECOND AND FINAL NOTICE LETTER”: The Anti-Robocall Multistate Litigation Task Force Warns Lingo Telecom After “Involvement” in Nearly 200 Million Illegal Scam Robocalls
All 51 state AGs (including DC’s) have now banded together to create a litigation taskforce to stop illegal robocalls.
This is the first time in history such a taskforce has been created and it is absolutely incredible the commitment and resources that have been devoted to this national state-level movement.
This taskforce is looking to make examples of illegal robocallers–and the networks and carriers that provide access to them.
Well, in April, several carriers were warned against continuing to carry illegal calls, including Lingo Telecom, LLC. (Per the letter, Lingo also does business as BullsEye, Trinsic Communications, Excel Telecommunications, Clear Choice Communications, VarTec Telecom, Impact Telecom, Startec, Americatel, and Lingo.)
The allegations against Lingo here are just extraordinary.
The warning letters are more or less the same and contain the rather intimidating headline: SECOND AND FINAL NOTICE LETTER from the Anti Robocall Multistate Litigation Task Force Concerning Lingo Telecom, LLC’s Continued Involvement in Suspected Illegal Robocall Traffic
Eesh.
The rest of the letter isn’t much nicer:
The Anti-Robocall Multistate Litigation Task Force’s (“Task Force”)1 investigation of Lingo Telecom, LLC (“Lingo”) has shown that Lingo has transmitted, and continues to transmit, suspected illegal robocall traffic on behalf of one or more of its customers. This Notice is the Task Force’s second and final attempt to informally apprise you of the Task Force’s concerns regarding Lingo’s call traffic, and to caution Lingo that it should scrutinize the call traffic of its current customers, evaluate the efficacy of its existing robocall mitigation policies, and cease transmitting illegal traffic on behalf of its current customers.
The letter goes on to recount Lingo’s perceived sins, including 630 traceback notices since 2019. Yikes.
Plus 282 tracebacks were sent after a taskforce CID.
Man, these guys sound dirty– at least in view of the taskforce.
Plus the taskforce relied on David Frankel’s RAPTOR data to determine there were 120 suspicious calls transmitted by Lingo from 102 unique calling numbers, exhibiting characteristics indicative of violations of federal and state laws. And all of these calls were signed by Lingo with a C-level STIR/SHAKEN attestation, indicating that Lingo received the calls without a signature.
Gees.
But this is all small potatoes. Now we talk about Amazon scams. Check these numbers out.
Per the letter:
89,100 of a sample of Amazon/Apple imposter robocalls are estimated to be attributable to Lingo;
Nationwide, therefore, the taskforce estimates Lingo carried approximately 44.5 million of these scam robocalls;
297,200 of a sample of SSA/IRS government imposter robocalls were attributable to Lingo;
Total: approximately 148.6 million of these scam robocalls are estimated to be attributable to Lingo (the arithmetic behind these estimates is sketched below).
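For those keeping score, the implied math is easy to check. Here is a minimal sketch, assuming (as the letter’s figures suggest, though the letter itself doesn’t spell out its methodology) that the taskforce scaled its sample counts to nationwide totals by a uniform factor:

```python
# Back-of-the-envelope check of the taskforce's nationwide estimates.
# The ~500x scaling factor is inferred from the figures above, not
# something the letter states -- treat it as an assumption.

samples = {
    "Amazon/Apple imposter": (89_100, 44_500_000),
    "SSA/IRS imposter": (297_200, 148_600_000),
}

for scam, (sampled, nationwide) in samples.items():
    factor = nationwide / sampled
    print(f"{scam}: {sampled:,} sampled -> {nationwide:,} estimated "
          f"(implied factor ~{factor:.0f}x)")
```

Both categories land on the same roughly 500x multiplier, which is consistent with a single sample-to-nationwide extrapolation assumption applied across the board.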
My goodness. What a train wreck. Who is representing these guys and are they even TRYING to behave in a legal fashion (if the allegations are true, I mean)?
While the letter here is pretty damning, I do wonder, if things are really this bad over there, why the taskforce hasn’t taken action. I mean, a warning letter is one thing but a multi-billion dollar enforcement action is better.
And this is coming from a defense lawyer.
Five Tips for Navigating Antitrust Risk from Algorithmic Pricing Software
In recent years, both private plaintiffs and the government have increasingly scrutinized businesses’ use of “algorithmic pricing” software, leading to a wave of antitrust lawsuits and enforcement actions. These software tools, used in a wide array of industries, employ algorithms that analyze market conditions to generate pricing recommendations. While these tools can help businesses optimize their pricing strategies, they can also raise antitrust concerns, particularly when they involve confidential competitor data or appear to facilitate price coordination among competitors.
One of the first antitrust challenges to algorithmic pricing came in 2015, when the U.S. Department of Justice charged an executive at an e-commerce site with conspiring with various competitors to adopt the same algorithm to set their respective prices. The trend gained momentum in 2022, when several lawsuits accused a software company and various property managers of using software to coordinate rental rates in multifamily housing, allegedly in violation of Section 1 of the Sherman Antitrust Act. Since then, new lawsuits have continued to emerge, and antitrust enforcers are showing an increased focus on algorithmic pricing. DOJ recently weighed in on an “algorithmic collusion” case, arguing that using a common pricing algorithm can qualify as concerted action under the Sherman Act.
Algorithmic collusion claims are still relatively new, and courts have not yet settled on clear legal standards to govern these claims, but recent cases offer insight on potential risks. Below, we outline five key considerations for companies using or thinking about adopting pricing algorithms.
1. Consult with counsel before using software that may incorporate competitors’ confidential price information. When pricing software relies on competitors’ confidential data to make pricing recommendations, it potentially can raise legal concerns. Some courts have allowed lawsuits to proceed when a software’s algorithm ran on confidential pricing data shared by competitors, with one court likening such algorithms to a “melting pot of confidential competitor information.” By contrast, several algorithmic collusion claims have been dismissed when they involved software that relied only on publicly available data. Reviewing a competitor’s publicly listed prices is generally legal under antitrust laws, whereas using private competitor data has sometimes been construed to suggest illegal collusion. Therefore, before using software that may incorporate competitors’ confidential pricing data, it is advisable to consult with counsel to consider how the software works and assess the potential risks.
2. Do not automatically follow the software’s pricing recommendations. Some of the antitrust lawsuits involving algorithmic pricing have alleged that businesses have given up control over their pricing decisions by blindly following a software’s pricing recommendations. A former FTC commissioner once explained the plaintiffs’ theory this way: “If it isn’t ok for a guy named Bob to do it [tell competitors how to set their prices], then it probably isn’t ok for an algorithm to do it either.” It is therefore significant that, in at least one case, defendants successfully obtained dismissal of the complaint in part by pointing to allegations that they sometimes overrode the software’s recommendations rather than blindly following them. Therefore, to reduce legal risk, businesses should consider maintaining the discretion to make their own independent pricing decisions rather than automatically accepting a software’s recommendation. This might mean implementing policies or trainings that empower employees to deviate from the software’s recommendations at times. At a minimum, it means keeping a “human in the loop” to monitor the software and ensure it does not work in unintended ways. The key is to retain some degree of human control over pricing instead of fully outsourcing pricing decisions to an algorithm — especially if you have reason to believe competitors may be doing the same.
3. Do not discuss algorithmic pricing software with competitors. Many lawsuits alleging algorithmic collusion rely on the concept of a “hub-and-spoke” conspiracy, where the software provider supposedly acts as the central “hub” of the alleged conspiracy and its customers (competing businesses) are the “spokes.” Importantly, to prove a “hub-and-spoke” conspiracy, a plaintiff must show not only that there were agreements between the hub and the spokes but also that the competing spokes all reached an agreement with one another — what courts call the “rim” of the wheel. In past cases, plaintiffs have cited discussions between competitors at industry conferences, webinars, and meetings hosted by the software provider as circumstantial evidence of an agreement. To lessen these sorts of risks, avoid discussing your use of algorithmic pricing software — or your competitors’ use of it — at such events or in any other contexts your competitors may learn about.
4. Avoid talking about algorithmic pricing in terms of “profit maximization” and, instead, emphasize its procompetitive benefits. Some lawsuits have cited statements from software providers and users claiming algorithmic pricing helps raise prices and increase profits. Some courts have viewed such statements as an “invitation” to collude, where the users allegedly agreed to join the asserted conspiracy by adopting the software. To reduce these kinds of risks, avoid making statements — whether in marketing materials, board presentations, or investor reports — that suggest any pricing software is being used to push prices above competitive levels. Instead, emphasize the software’s procompetitive benefits such as helping businesses offer lower prices than their rivals or helping to match supply with demand. Documenting these benefits ahead of time can help strengthen your defense if a legal challenge arises.
5. Consult an antitrust lawyer before using algorithmic pricing software. Antitrust laws surrounding algorithmic pricing are still evolving, and what seems like low-risk behavior today could become actionable conduct in the future. With ongoing court cases and appeals, as well as various legislative efforts at the city, state, and federal levels, the legal landscape is changing quickly.
$4MM TCPA SETTLEMENT: Another Seven Figure TCPA Settlement Haunts BigLaw as Truist Agrees to Pay Big Bucks
Repeat after me:
Hire big law. Expect a big loss.
Indeed you can’t even say “big loss” without “big law.” It’s built into the words.
It’s the same script every time.
Big law gets assigned the case. They litigate it for years, charging god knows how much. Eventually they get to a place where they realize they can’t win. So they recommend settlement on a classwide basis for millions of dollars.
Just terrible.
The latest victim of this ongoing TCPAWorld phenomenon is Truist Bank. They trusted a biglaw firm to help guide them through a TCPA class action in North Carolina against the powerful Keith Keogh and, well, it didn’t end up going very well.
It went so poorly, in fact, that Truist agreed to pay over $4mm to resolve the claims of just 5,998 class members per the Court’s preliminary approval order.
That means Truist agreed to pay about $666.00 per class member –look at that number– a RIDICULOUS overpayment on a per-class-member basis. (Pre-certification TCPA class actions like this should settle for well under $100.00 per head– and the Czar’s class settlements are usually down under $20.00 per person (look it up!!!).)
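If you want to check the per-head math yourself, here’s a minimal sketch; the $4.0MM fund figure is an assumption (“over $4mm” is the only number given besides the 5,998-member class, and $4.0MM is what makes the ~$666 figure pencil out):

```python
# Rough per-class-member arithmetic from the preliminary approval order.
# The fund amount is an assumption: the post says "over $4mm" / "$4.1MM".

settlement_fund = 4_000_000
class_size = 5_998

per_member = settlement_fund / class_size
print(f"~${per_member:,.2f} per class member")  # ~$666.89

# Benchmarks the post cites for pre-certification TCPA settlements:
for label, benchmark in [("$100/head ceiling", 100.0), ("Czar's ~$20/head", 20.0)]:
    print(f"{label}: {per_member / benchmark:.1f}x over")
```

At $4.1MM the per-member number climbs to roughly $683; either way, it is many multiples of a typical pre-certification TCPA payout.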
It’s all about the counsel you choose, folks.
The class definition here is very telling and suggests the parties cooked it up based on class data sets of Truist’s choosing:
The subscribers or regular users of the 5,998 telephone numbers assigned to cellular telephone service in the United States to which Truist placed a prerecorded telephone call concerning an unrelated account between February 10, 2019, and August 31, 2022
Compare this to the original class definitions in the complaint:
All persons subscribing to a telephone number assigned to cellular telephone service in the United States to which Defendant placed a prerecorded telephone call concerning an unrelated account since the date that is four years prior to the filing of this complaint.
All persons subscribing to a telephone number assigned to cellular telephone service in the United States to which Defendant placed prerecorded telephone calls concerning at least two unrelated accounts since the date that is four years prior to the filing of this complaint.
Night and day.
Wonder if anyone will object? Hmmm.
I understand Truist was looking for the lowest number they could get to here–hence the odd class definition– but there is just no value to this settlement at all in my opinion. Just a waste of millions of dollars.
But this is how it is.
Big law just can’t seem to win class certification motions.
And what it is, is reality.
Truist pays $4.1MM to settle with Keogh. #Biglaw strikes again. And the TCPAWorld keeps turning.
Germany: Bureaucracy Out, Digital In? The New Government’s Plans for Labour and Employment
After long negotiations between the Christian Democrats and the Social Democrats, the parties agreed to establish a coalition to form the new government, and Friedrich Merz was eventually elected as the new Chancellor of Germany on 6 May 2025. The coalition agreement published by the parties offers insight into their agenda. While not the primary focus of the agreement, several initiatives aim to address labour and employment issues of relevance to the German market.
Streamlining the future of work
The coalition agreement outlines several key initiatives designed to enhance Germany’s competitiveness as a business hub, particularly by furthering digitalisation and streamlining bureaucracy. This commitment is also reflected in their plans for addressing L&E-related issues:
Promoting qualified immigration, particularly by digitalising processes in an effort to accelerate the recognition of professional qualifications from other countries
Further reducing the written form requirements in employment law, e.g. for contracts under the Part-Time and Limited Term Employment Act (Teilzeit- und Befristungsgesetz). For further details on the previous changes that took effect in January 2025, please refer to our recent blog post on the Bureaucracy Relief Act.
Digitalisation of collective labour rights
Collective labour law is particularly impacted by the effort to digitalise employment processes:
Enabling the use of online works council meetings (Betriebsratssitzung) and works meetings (Betriebsversammlung) as an alternative to in-person meetings
Implementing an optional digital voting process for the works council elections in 2026
Right to digital access, i.e. the right to use existing digital communication channels as an alternative to the notice board for advertising, among other things, collective labour events and opportunities
Improving Flexibility
The new government is also seeking to implement a change to the Working Hours Act (Arbeitszeitgesetz) that would allow for maximum weekly instead of daily working hours. The current position is a daily maximum of eight (or in exceptions, ten) working hours.
To comply with the EU Working Time Directive, a maximum of 48 weekly working hours would generally be permitted. Exceptions would have to be made for certain workers, e.g., those working night shifts. Additionally, a new concept is required to allow for the increased flexibility while still ensuring workers’ health, safety and adequate rest time. The coalition agreement does not provide any specifics as to how this will be achieved.
According to the coalition parties, the adjustment is intended to enhance the compatibility of family and work. However, while the new regulations would not increase total weekly working hours, they are likely to benefit employers by allowing for more flexible schedules under the loosened rules. Examples could include agreeing on a permanent 4-day week with no reduction in pay, or offsetting short-term spikes in workload by ordering work for more than 10 hours a day. Once these changes are implemented, employee handbooks or works agreements referencing maximum working hours may require changes to comply with the new regulations.
The parties also plan to introduce an obligation for employers to digitally record working hours. Following implementation, a transition period will be established during which small and mid-size companies will be exempt from the new requirements. However, the obligation will not extend to trust-based working hours, so the decision to pursue that option remains at the discretion of employers.
A further initiative aimed squarely at increasing productivity is exempting overtime income of full-time employees from income tax. The definition of overtime in this context is any working time that exceeds 34 hours in the case of employees with a CBA, or 40 hours in the case of employees without a CBA.
If employers offer bonuses to part-time employees for increasing their working hours, these bonuses remain tax-free according to the parties’ plans. It remains to be seen how the coalition will deal with attempts to exploit such bonuses.
Allowing for a smooth transition after reaching retirement age
Many employers and employees are interested in maintaining their existing employment relationship after the employee reaches the standard retirement age. However, given the restrictions in the Part-Time and Limited Term Employment Act, most flexible solutions are not viable. In most cases, employers are currently only able to establish long-term employment relationships that do not adequately address the challenges associated with such employment.
The coalition agreement now includes a plan to lift the ban on pre-employment after reaching the standard retirement age in the Part-Time and Limited Term Employment Act. This would allow employees to remain in a familiar work environment while transitioning to a reduced or limited role within their organisation. Lifting the ban would be a welcome change for both parties to an employment relationship as it would provide reliable planning and legal certainty.
The effort to encourage individuals to remain in the workforce after reaching the standard retirement age also includes plans to exempt up to EUR 2,000 of such employees’ income from income taxes.
Strengthening unions
The coalition parties plan to make compliance with collective bargaining agreements a prerequisite for the awarding of federal contracts worth EUR 50,000 or more and for start-ups with “innovative services” in the first four years after their establishment for projects worth EUR 100,000 or more.
The parties also aim to enhance the appeal of trade union memberships by offering tax incentives for their members.
Other initiatives
While these initiatives are also part of the coalition agreement, how or even if they will be implemented is less certain for some than others:
Raising the minimum wage to EUR 15 per hour by 2026, which the agreement explicitly labels as something that may be feasible rather than a firm commitment
Implementing a legal framework for AI at the workplace
Summary
The agreement encompasses a combination of measures that are favourable to employers and measures that are principally intended to strengthen employee rights. However, none of them is legally binding. Thus, the agreement is, in essence, a mere collection of potential initiatives, and realising it in its entirety within the next four years is not feasible. Immediate action is therefore not required. Nevertheless, the agreement provides the most comprehensive insight available into the incoming government’s plans and, as a result, into what employers may expect in upcoming legislative periods.
First-of-Its-Kind: Teen Privacy Law Passes in Arkansas
On April 22, 2025, Arkansas enacted the Arkansas Children and Teens’ Online Privacy Protection Act (HB 1717, Act 952), making it the first state to expand core federal children’s privacy protections to teens. The law, effective July 1, 2026, applies to for-profit websites, online services, apps, and mobile applications that are directed to children (under 13) or teens (ages 13-16), or that have actual knowledge they are collecting personal information from these groups.
The Act establishes a two-tiered framework: parental consent is required to collect personal information from children, while either the teen or their parent may consent in the case of users aged 13 to 16. Operators must also provide clear notice of their data practices, respect deletion and correction requests, and implement reasonable security measures. The statute broadly defines personal information to include not only contact details and identifiers, but also biometric data, geolocation, and any information linked or reasonably linkable to a child, teen, or parent.
The law prohibits targeted advertising to minors using their personal information and limits data collection to what is necessary for the specific service or transaction. Operators are not required to implement age verification, but are expected to comply where they have actual knowledge of a user’s age. Importantly, enforcement authority is vested exclusively in the Arkansas Attorney General; the law does not create a private right of action.
HB 1717 reflects growing state-level momentum to address youth privacy concerns amid the absence of federal privacy reform. Businesses that operate online platforms accessible to Arkansas users, particularly those relying on personalized advertising or handling sensitive data, should evaluate their compliance posture now to prepare for the law’s 2026 effective date.
Two New AI Laws, Two Different Directions (For Now)
Key Takeaways
Colorado legislature rejects amendments to the Colorado AI Act (CAIA).
Proposed amendments sought multiple exemptions, exceptions and clarifications.
Utah legislature enacts amendments that include a safe harbor for mental health chatbots.
Utah’s safe harbor provision includes a written policy and procedures framework.
In Colorado last week, highly anticipated amendments to its AI Act were submitted to the legislature. But, in a surprising turn of events this week, every single one of the proposed amendments was rejected, setting the stage for a sprint to February 1, 2026, the effective date of Colorado’s first-of-its-kind law impacting how AI is to be used with consumers.
Meanwhile, in Utah, which enacted an AI law last year that increases consumer protection but also encourages responsible innovation, amendments to its AI Policy Act (UAIP) took effect this week. The amendments are due in part to guidance found in the Best Practices for the Use of AI by Mental Health Therapists, published in April by Utah’s Office of AI Policy (OAIP).
We recently highlighted how a global arms race may mean U.S. states are best positioned to regulate AI risks, as evidenced by Colorado and Utah’s approaches, and how other states are emphasizing existing laws they say “have roles to play.” While there is still a lot of uncertainty, the outcome of the amendments in Colorado and Utah is instructive.
Colorado’s Rejected Amendments
A lot can be learned by what was rejected in Colorado this week, especially as other states, such as Connecticut, are considering adopting their own versions of an AI law for consumer protection, and as those that have already rejected such laws, such as Virginia, prepare to reconsider newer versions with wider input.
In some ways, it is not surprising that the amendments were rejected. They reflected competing input from the technology sector and consumer advocates.1 This included technical changes such as exempting specified technologies from the definition of “high risk” and creating an exception for developers that disclose system model weights (e.g., parameters, biases).
The amendments also included non-technical changes, such as eliminating the duty of a developer or deployer of a high-risk AI system to use reasonable care to protect consumers. This was always going to be untenable. But others made sense, such as exemptions for systems below certain investment or revenue thresholds ($10 million and $5 million, respectively), which is why it is surprising that all of the amendments were rejected, including one that would have delayed the attorney general’s authority to enforce CAIA violations until 2027. Given the scope of the proposed amendments that have now been considered and rejected, it appears extraordinary circumstances would be needed for CAIA’s effective date to be delayed.
Utah’s AI Amendments
As previously noted, the UAIP endeavors to enable innovation through a regulatory sandbox for responsible AI development, regulatory mitigation agreements, and policy and rulemaking by the OAIP. Recently, the OAIP released findings and guidance for the mental health industry that were adopted by the legislature as amendments to the Act.
The guidance comprises 54 pages, the first 40 of which describe potential benefits and important risks associated with AI. It then examines use cases of AI in mental health therapy, especially in relation to inaccurate AI outputs, and sets forth best practices across these categories:
Informed consent;
Disclosure;
Data privacy and safety;
Competence;
Patient needs; and
Continuous monitoring and reassessment.
Emphasis is placed on competence. For example, the guidance states that therapists must maintain a high level of competence, which “involves continuous education and training to understand these AI technologies’ capabilities, limitations, and proper use.” This is consistent with how the UAIP specifies that businesses cannot blame AI for errors and violations.2
The guidance further states that mental health therapists should know “how frequently and under what circumstances one should expect the AI tool to produce inaccurate or undesirable outputs,” thus seeming to create a duty of care not only for AI system developers and deployers but also users. The guidance refers to these as “digital literacy” requirements.
Also, through its emphasis on continuous monitoring and reassessment, the guidance states that therapists, “to the best of their abilities,” should regularly and critically challenge AI outputs for inaccuracies and biases and intervene promptly if the AI produces incorrect, incomplete or inappropriate content or recommendations.
Based on the guidance, House Bill 452 was enacted and includes provisions relating to the regulation of mental health chatbots that use AI technologies, including the protection of personal information, restrictions on advertising, disclosure requirements and the remedies available for enforcement by the attorney general.
House Bill 452 includes an affirmative defense provision for mental health chatbots. In other words, a safe harbor from litigation initiated due to alleged harm caused by a mental health chatbot. To qualify for safe harbor protection, the supplier must develop a written policy that states the purpose of the chatbot and its abilities and limits.
In addition, the supplier must implement procedures that ensure mental health therapists are involved in the development of a review process, that the chatbot is developed and monitored consistent with clinical best practices, and that the chatbot is tested to ensure it poses no greater risk to a user than treatment by a therapist would, among roughly ten other requirements.
As early best practices, the guidance may become industry standards that establish legal duties that can inform the risk management policies and programs contemplated by new laws and regulations, such as CAIA and UAIP. If so, these can form the basis for enforceable contract provisions.
Final Thoughts
We have previously provided recommendations that individuals and organizations should consider to mitigate risks associated with AI, both holistic and specific, emphasizing data collection practices. But, as shown through the rejected amendments in Colorado and the enacted AI amendments in Utah, AI literacy might be the most essential requirement.
[1] For an insightful description of how the amendments died, see journalist Tamara Chuang’s excellent reporting here https://coloradosun.com/2025/05/05/colorado-artificial-intelligence-law-killed/#
[2] Utah Code Ann. § 13-2-12(2).
Michigan Attorney General Files Lawsuit Against Roku Over Alleged COPPA Violations
On April 29, 2025, the Michigan Attorney General filed a lawsuit against Roku alleging violations of the federal Children’s Online Privacy Protection Act (“COPPA”). The complaint alleges that Roku collects and processes, and allows third parties to collect and process, children’s personal information, including voice recordings, location data, IP addresses and browsing histories, in violation of COPPA. The complaint also alleges that Roku monetizes children’s personal information by enabling third-party channels to collect this personal information, to increase Roku’s advertising revenue and make its platform more attractive to content providers and advertisers.
In addition, the complaint asserts that Roku misleads parents about its collection of their children’s personal information and creates confusion regarding parents’ rights to protect such information. Specifically, Roku allegedly does not (1) provide notice of what types of information it collects from children and how it uses and discloses such information; (2) obtain verifiable parental consent before collecting, using or disclosing such information; or (3) provide a “reasonable means for a parent to review the personal information collected from a child and to refuse to permit its further use or maintenance.”
Separately, the lawsuit alleges violations of the Video Privacy Protection Act, the Michigan Preservation of Personal Privacy Act (i.e., the Video Rental Privacy Act) and the Michigan Consumer Protection Act. The complaint seeks to cease Roku’s alleged illegal data collection and disclosure practices, require Roku to comply with federal and state law, and recover damages, restitution and civil penalties.