MISSOURI V. FLORIDA: Prerecorded Debt Collection Calls Get MOHELA Into Trouble in the Sunshine State

Can a non-profit corporation created by the State of Missouri be sued under the TCPA in the state of Florida?
According to a Florida district court, the answer is yes!
But before we get there, why in the world did the Defendant not seek to move this case to Missouri before moving to dismiss? The world may never know.
In Coffey v. Higher Education Loan Authority of the State of Missouri, 2025 WL 770396 (M.D. Fla. Mar. 11, 2025), the Plaintiff sued, claiming she received prerecorded debt collection calls without prior express written consent.
Pause.
Prior express written consent isn’t required for such calls. So this entire complaint should have been thrown out immediately.
But, the good folks at MOHELA (as the Defendant styles itself) had different plans. Rather than attack the obvious they argued Plaintiff could not sue it at all because it is entitled to something called sovereign immunity.
Sovereign immunity is a state or federal government’s ability to do whatever it wants to folks and not be sued for it. Essentially we all give up our rights to complain about what the government does when we agreed (through our forefathers without our consent) to be bound by the rules of this great country and each of its various commonwealths.
But sovereign immunity does not apply to every entity created by a state. It only applies to actions of the state or federal government taken directly. And in this case MOHELA was set up to act independently of the state of Missouri, with its own pool of money and a source of income separate and apart from the state. So, in essence, a judgment against MOHELA is not a judgment against the Show-Me State.
All that was left was to determine whether MOHELA was a “person” under the TCPA. Recall there is a dizzying analysis required to determine whether government officials qualify as “people” who can be sued under the TCPA. But here the Court had little trouble analyzing the issue because MOHELA is set up as a corporation – and corporations are specifically listed as “persons” governed by the statute.
So there you go, MOHELA can be sued for doing something that it shouldn’t be sued for because it brought the wrong argument. Nice!

NO IMMUNITY: Capital One Sued Over “Refer a Friend” Text – Court Holds Section 230 Does Not Apply

Recently, while shopping around for a new credit card, I was surprised by how many people were eager to “refer” me. It’s a common promotional scheme – someone sends you a referral link or code, and if you use it, they score a bonus. Seems harmless enough, but a recent ruling out of the Western District of Washington has raised an important question—can the company behind these referral programs be held liable for the messages sent? Let’s find out.
Plaintiff Tamie Jensen alleged that she received a text message from a contact, containing content prepared by Defendant Capital One as part of its “Refer a Friend” program. Jensen filed this putative class-action lawsuit on behalf of herself and others who received a “Refer a Friend” text message – claiming that the transmission of this commercial text message violated Washington’s Commercial Electronic Mail Act (“CEMA”) and Consumer Protection Act (“CPA”).
According to the Complaint, users can click the referral button on Capital One’s app or website, prompting Capital One to generate a referral link and compose an editable text message. The user is then allegedly directed to copy and paste the message with the link and send it to their contacts. The Complaint states that on the app (but not the website), a notice underneath the referral button reads: “You confirm you have consent to send text messages to each recipient. You may edit the pre-filled message as desired.” Jensen claims the alleged text message she received had not been edited by her contact before she received it and contained only the pre-filled content composed by Capital One.
In its motion to dismiss the lawsuit, Capital One raised three contentions, including an argument that it is immune under Section 230 of the Communications Decency Act from liability for text messages it did not directly send.
Section 230 isn’t something we talk about here too often, so let me give you a little background – the operative part of Section 230(c)(1) specifies that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The statute essentially protects online platforms such as Google, Facebook, or Amazon, as well as companies that provide broadband internet access or web hosting from being held legally responsible for information posted by an “information content provider”, or the person or entity actually responsible for the creation or development of information. However, Section 230 does not prevent an interactive computer service from being held liable for information that it has developed. Section 230, therefore, distinguishes those who create content from those who provide access to that content, providing immunity to the latter group. An entity may be both an “interactive computer service” provider and an “information content provider,” but the critical inquiry for applying Section 230’s immunity is whether the service provider developed the content that is the basis for liability.
With that out of the way, let’s get into Jensen and Capital One’s specific contentions. Jensen argued that she complains of content provided—either entirely or mostly—by Capital One, not by a third party (the “friend” who sent her the referral text). Capital One, however, argued that because it merely provided suggested language, and its customers retained control over whether and what to text to their friends, Capital One should not be liable for the text messages and language that its customers chose to send.
The Court agreed with Jensen, holding that the offending content for the purposes of the alleged CEMA violation is the referral link—which was composed in its entirety by Capital One with respect to the text Jensen received. Although Capital One emphasized that senders retain the ability to modify the content of the “Refer a Friend” texts, the text Jensen allegedly received was not modified. The Court distinguished the situation here from that in Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1122 (9th Cir. 2003), where the defendant was an online dating site that required users to complete a multiple-choice survey to create a profile. A user created a false and defamatory profile for a celebrity, who then sued the site. The Ninth Circuit held that, although the site required its users to complete the survey, because the site did not play a significant role in creating, developing, or transforming the relevant information—the defamatory information—the dating site was protected by Section 230. Here, however, because Jensen alleged that Capital One is the sole author of the content of the text that she received, the Court held that Capital One is not entitled to Section 230 immunity.
The Court also rejected Capital One’s two other grounds for dismissing the Complaint – that Jensen’s claims seek to interfere with Capital One’s power to advertise and market its credit cards, and are therefore preempted by the National Bank Act (“NBA”), and that Jensen did not state a CEMA claim because she failed to allege that Capital One either initiated the text message or substantially assisted in transmitting the message.
Briefly, CEMA imposes liability on persons conducting business in Washington who “initiate” or “assist” in transmitting a commercial text message to a telephone number assigned to a Washington resident’s cell phone. CEMA defines “assist the transmission” as providing “substantial assistance or support.” WASH. REV. CODE § 19.190.010(1). Interestingly, Capital One essentially conceded that it assisted its customers in transmitting text messages but argued that the assistance it provided was not “substantial.” The Court disagreed, finding that Jensen’s allegations that Capital One generates a referral link and other content of a text message that customers are asked to copy and send to their contacts are sufficient to support a finding that Capital One substantially assisted its customers in formulating, composing, and sending commercial text messages. Although Capital One emphasized the parts of the process that are outside its control (when to send messages, who to send messages to, whether Capital One’s provided language should be edited or sent as is), the Court held that these arguments go to the merits of the CEMA claim, rather than the sufficiency of Jensen’s allegations.
Capital One also argued that it notified its customers to send texts only to people who have consented to receive them and that it did not know the text messages would be sent without consent. However, Capital One’s description of the notice was found to be only partially accurate: the notice on the mobile app indicates that the customer should have received consent to send “text messages” to the recipient, but not that the customer should have received consent to send the particular commercial text message. The Court rejected Capital One’s argument that a “natural reading” of the notice would tell a consumer to only send the specific commercial text with consent and instead concluded that the plain language of the notice suggests that the consent at issue is the consent to send text messages in general.
Lastly, the Court rejected Capital One’s contention that CEMA represents a significant restriction on Capital One’s ability to advertise its credit cards, and is thereby preempted by the NBA, which gives federally chartered banks the power “[t]o exercise … all such incidental powers as shall be necessary to carry on the business of banking.” 12 U.S.C. § 24. The Court held that CEMA’s generally applicable restrictions on the manner of advertising would not restrict all forms of Capital One’s advertising, or even all forms of advertising via text message. Accordingly, the Court found that requiring Capital One to comply with CEMA would not significantly impair its ability to advertise its credit cards and thus found no preemption here.
The Future of Section 230
A particularly interesting part of this decision is when the Court notes that “the purpose of Section 230 immunity—to encourage Internet service providers to voluntarily monitor and edit user-generated speech in internet traffic—would not be served by protecting Capital One from liability in this case.” As acknowledged by the Court, the “two basic policy reasons” for Section 230 immunity are “to promote the free exchange of information and ideas over the Internet and to encourage voluntary monitoring for offensive or obscene material.” Remember, this statute was enacted back in 1996. At the time, the feeling was that the threat of being sued into oblivion by anyone who felt wronged by something someone else posted would stifle the growth of online platforms that were still very much in their nascent stage – not the tech giants we see today. Over the years, there have been numerous attempts to reform Section 230, ranging from outright repeal to reinterpreting the scope of protected activities (for example, limiting or eliminating immunity for child sexual abuse material has been one of the few bipartisan efforts in recent years), placing conditions on platforms that wish to avail themselves of the immunity, and altering the “Good Samaritan” provisions to address what are perceived to be politically motivated content removals.
Of course, this brings us to the question of who actually decides the scope of Section 230 – until not too long ago, the clear answer was the FCC. However, the Supreme Court’s decision in Loper Bright Enterprises v. Raimondo stripped the FCC of its ability to broadly interpret statutes. Nevertheless, FCC Chairman Brendan Carr made his views on Section 230 perfectly clear in his chapter of Project 2025, stating that, “The FCC should issue an order that interprets Section 230 in a way that eliminates the expansive, non-textual immunities that courts have read into the statute.” While the FCC’s authority to do this in a post-Loper world is questionable, Carr also adds, “The FCC should work with Congress on more fundamental Section 230 reforms […] ensuring that Internet companies no longer have carte blanche to censor protected speech while maintaining their Section 230 protections.”
Conclusion
So, to answer the question I started with – yes, a corporation can be held liable for the transmission of a message it developed. Even with a Section 230 shakeup on the horizon, it doesn’t look like Capital One will be offered any respite in this case.
However, it will be interesting to see what stance the FCC does take on the future of Section 230 – and we may find out sooner rather than later in light of the deregulation initiative announced on March 12, 2025.
Meanwhile, you can read the Court’s order here: Jensen v. Capital One Financial Corp., 2025 WL 606194 (W.D. Wash. Feb. 25, 2025).

New Bench Card Promotes Clarity and Consistency in Virtual Court Proceedings

New York’s state court judges will soon have a new resource at their fingertips when holding court remotely. As detailed in a recent article in the New York Law Journal, New York’s Court Modernization Action Committee (“CMAC”) recently developed a bench card for judges to reference while they prepare for and implement virtual proceedings.
The CMAC is composed of various stakeholders, including judges, court staff, and attorneys, who work to modernize New York’s court system by encouraging the adoption of new technologies and the maintenance of pandemic-era improvements to remote court services.
Last year, the CMAC identified a key issue: Although judges have continued to hold virtual proceedings since the pandemic, many have lacked ongoing formal training on the best practices for remote court appearances. As a result, court users and attorneys might encounter judges who manage their virtual courtrooms in drastically different ways or who have varying levels of proficiency with technology.
To address this inconsistency, the CMAC developed a Virtual Proceedings Bench Card. The bench card is a double-sided reference guide for New York State judges to use when planning and conducting remote court appearances. One side of the card lists best practices for judges to implement before and during proceedings, such as instructing participants to mute themselves when joining and reminding them to enable livestreaming where appropriate. It also contains blank spaces for judges to fill in information specific to their needs, such as the phone number for their courthouse’s IT department.
The other side contains a list of recommended formats for various proceedings in different courts. For example, in Family Court, the card recommends that child support proceedings be held virtually by default, while most custody proceedings remain presumptively in-person. These recommendations were developed with input from many judges and other stakeholders as part of the 2022 Report of the court system’s Pandemic Practices Working Group. Since the default formats are only recommendations, the bench card also provides a list of factors that courts should consider when deciding whether to deviate from the guidelines, such as whether a particular format would present hardships for a party or whether a party’s health or disability favors one format over another.
The option to conduct virtual proceedings provides significant benefits to court users and should be widely encouraged. Remote court attendance allows court users to avoid lengthy travel, time off work, arrangement of childcare and other impediments that can make in-person attendance difficult. Greater flexibility allows for greater access to justice. With the publication of the bench card, judges and court users alike will hopefully have a smoother, more predictable experience, empowering them to utilize remote options more often and improve the virtual proceeding experience for all.

AppLovin & Its AI: A Lesson in Accuracy

Last week, we explored a recent data breach class action and the litigation risk of such lawsuits. Companies need to be aware of litigation risk not only arising from data breaches, but also from shareholder class actions related to privacy concerns.
On March 5, 2025, a class action securities lawsuit was filed against AppLovin Corporation and its Chief Executive Officer and Chief Financial Officer (collectively, the defendants). AppLovin is a mobile advertising technology business that operates a software-based platform connecting mobile game developers to new users. AppLovin offers a software platform and an app. In the lawsuit, the plaintiff alleges that the defendants misled investors regarding AppLovin’s artificial intelligence (AI)-powered digital ad platform, AXON.
According to the complaint, the defendants made material representations through press releases and statements on earnings calls about how an upgrade to AppLovin’s AXON AI platform would provide improvements over the platform’s earlier version. The complaint further alleged that the defendants made numerous statements indicating that AppLovin’s financial growth in 2023 and 2024 was driven by improvements to the AXON technology. The defendants reportedly stated that AppLovin’s increase in net revenue per installation of the mobile app and the volume of installations was a result of the improved AXON technology.
The complaint further states that on February 25, 2025, two short seller reports were published that linked AppLovin’s digital ad platform growth not to AXON, but to exploitative app permissions that carried out “backdoor” installations without users noticing. According to the reports, AppLovin used code that purportedly allowed it to bind to consumers’ permissions for AppHub, Android’s centralized Google repository where app developers can upload and distribute their apps. The complaint claims that by attaching to AppHub’s one-click direct installations as its own, AppLovin directly downloaded apps onto consumers’ phones without their knowledge.
The research reports also state that AppLovin was reverse-engineering advertising data from Meta platforms and using manipulative practices, such as having ads click on themselves and forcing shadow downloads, to inflate its installation and profit figures. One of the research reports states that AppLovin was “intentionally vague about how its AI technology actually works,” and that the company used its upgraded AXON technology as a “smokescreen to hide the true drivers of its mobile gaming and e-commerce initiatives, neither of which have much to do with AI.” The reports further assert that the company’s “recent success in mobile gaming stems from the systematic exploitation of app permissions that enable advertisements themselves to force-feed silent, backdoor app installations directly onto users’ phones.” The complaint details the findings from the reports and alleges that AppLovin’s misrepresentations led to artificially inflated stock prices, which materially declined because of the research report findings.
On a company blog post in response to the research reports, the CEO wrote that “every download [of AppLovin] results from an explicit user choice—whether via the App Store or our Direct Download experience.”
As organizations begin integrating AI into their operations, they should be cautious in making representations regarding AI as a profitability driver. Executive leaders responsible for issuing press releases and leading earnings calls relating to a company’s technology practices should also understand how these technologies function and ensure that any statements they make are accurate. Whether such allegations are true or not, litigation over materially false representations can prove costly to an organization, from both a financial and a reputational perspective.

Privacy Tip #435 – Threat Actors Go Retro: Using Snail Mail for Scams

We have educated our readers about phishing, smishing, QRishing, and vishing scams, and now we’re warning you about what we have dubbed “snailing.” Yes, believe it or not, threat actors have gone retro and are using snail mail to try to extort victims. TechRadar is reporting that, according to GuidePoint Security, an organization received several letters in the mail, allegedly from the BianLian cybercriminal gang, stating:
“I regret to inform you that we have gained access to [REDACTED] systems and over the past several weeks have exported thousands of data files, including customer order and contact information, employee information with IDs, SSNs, payroll reports, and other sensitive HR documents, company financial documents, legal documents, investor and shareholder information, invoices, and tax documents.”

The letter alleges that the recipient’s network “is insecure and we were able to gain access and intercept your network traffic, leverage your personal email address, passwords, online accounts and other information to social engineer our way into [REDACTED] systems via your home network with the help of another employee.” The threat actors then demand $250,000-$350,000 in Bitcoin within ten days. They even offer a QR code in the letter that directs the recipient to the Bitcoin wallet.
It’s comical that the letters have a return address of an actual Boston office building.
GuidePoint Security says the letters and attacks mentioned in them are fake and are inconsistent with BianLian’s ransom notes. Apparently, these days, even threat actors get impersonated. Now you know—don’t get scammed by a snailing incident.

MS-ISAC Loses Funding and Cooperative Agreement with CIS

The Cybersecurity and Infrastructure Security Agency (CISA) confirmed on Tuesday, March 11, 2025, that the Multi-State Information Sharing and Analysis Center (MS-ISAC) will lose its federal funding and cooperative agreement with the Center for Internet Security (CIS). MS-ISAC’s mission “is to improve the overall cybersecurity posture of U.S. State, Local, Tribal, and Territorial (SLTT) government organizations through coordination, collaboration, cooperation, and increased communication.”
According to its website, MS-ISAC is a cybersecurity partner for 17,000 State, Local, Tribal, and Territorial (SLTT) government organizations, and offers its “members incident response and remediation support through our team of security experts” and develops “tactical, strategic, and operational intelligence, and advisories that offer actionable information for improving cyber maturity.” The services also include a Security Operations Center, webinars addressing recent threats, evaluations of cybersecurity maturity, advisories and notifications, and weekly top malicious domain reports.
All of these services assist governmental organizations that do not have adequate resources to respond to cybersecurity threats. Information sharing has been essential to prevent government entities from becoming victims. State and local governments have relied on this information sharing for resilience. Dismantling MS-ISAC will make it harder for governmental entities to obtain timely information about cybersecurity threats for preparedness. It is an organized place for governmental entities to share information about cyber threats and attacks and to learn from others’ experiences.
According to CISA, dismantling MS-ISAC will save $10 million. But that savings is minimal compared to what state and local governments – and ultimately taxpayers – will spend responding to the cyberattacks that MS-ISAC’s assistance helped them prevent and prepare for. This shift will leave state and local governments in the dark and at increased risk. It is a short-sighted strategy by the administration.

BUSINESS KILLER: Honda’s $632K CCPA Settlement is Terrible Precedent and Should NEVER Have Been Entered Into

Companies hounded by the California Privacy Protection Agency and the state AG for supposed California Consumer Privacy Act violations are going to have Honda to thank for it in large measure.
I took a look at this resolution and I am just appalled.
Honda just paid $632,500.00 to resolve CCPA claims, but to my eye the regulators were grasping at straws here. The supposed violations were ticky-tack, if not invented. And I cannot figure out why Honda would embolden the regulators – and lose hundreds of thousands of dollars – with this rollover settlement.
It seems like some lawyers just roll over as soon as a regulator comes knocking and that is such a mistake.
Let’s take a look at this.
Honda’s supposed crime here was twofold.
First, Honda supposedly required consumers to provide more information than necessary on non-verifiable consumer requests, which the regulators claim is not allowed under the statute. But it is unclear that Honda’s conduct actually exceeded the “non-verifiable” requirements and, even if it did, it is unclear how these extra efforts to protect consumer privacy and provide a superior user experience actually led to damage. This is especially true in the context of Honda’s third-party agent verification requirement.
In essence the regulator seemed to be claiming that since Honda conceded it did not need more than two pieces of data to execute on a request it should not have obtained more than two pieces of data. Not sure why Honda would concede that point but, regardless, a company’s subjective perception of need does not dictate the objective reach of a statute. So… junk.
From a practical standpoint, the vast majority of businesses out there are using a single process flow for both non-verifiable and verifiable requests. This sudden and jarring regulation by enforcement theoretically means basically every business out there is now in violation of the CCPA.
Eesh.
But it gets even worse.
Honda’s second supposed crime was having an “asymmetrical” cookie management tool. While that sounds fancy, it just means that it took two steps to opt out of cookies but only one step to opt back in.
Big whooping deal. (Or “Holy Santa Fe” to quote a recent Hyundai commercial.)
Opting out of something using two steps to ensure things are done right and the consumer understands the consequences, versus opting back into something – after the consumer obviously has already been informed of the choices – is an asymmetrical experience by definition. In one part of the experience the user must be educated and then given an option. In the other part the user must only be given an option. This is just logic written into a sound user experience.
My mind is blown Honda would settle this.
So everyone out there, please make sure your two-step cookie dance works on the front end and the back end. And please make sure you have a process that seeks less information for some categories of consumer requests than for others.
Thanks.
And thanks Honda.
Sorry, but this was such an important moment for a well-funded company with good in-house counsel to make a forward-looking assessment and take a stand for common sense. Didn’t happen.

Key Findings from Oregon’s Consumer Privacy Act Report

After six months of enforcement of Oregon’s Consumer Privacy Act (OCPA), a new report from Oregon Attorney General Dan Rayfield indicates strong consumer engagement with the law’s privacy rights, notable business compliance efforts, and key areas where businesses are falling short. Since the OCPA took effect in July 2024, the Privacy Unit at the Oregon DOJ has received 110 consumer complaints. The most common complaints involve:

data brokers, particularly background check websites that sell personal information;
social media and technology companies, which collect and share user data; and
denials of consumer rights requests, with the right to delete personal data being the most frequently requested and denied right.

Under the OCPA, businesses that fail to comply receive “cure notices,” which provide 30 days to fix violations. In the last six months, the Privacy Unit has initiated and closed 21 privacy enforcement matters. Common compliance deficiencies flagged in these notices include:

Lack of required disclosures – Many businesses failed to properly inform consumers of their rights under the OCPA.
Confusing or incomplete privacy notices – Some businesses listed privacy rights for other states but omitted Oregon, giving the impression that OCPA protections do not apply.
Difficult or hidden opt-out mechanisms – Some businesses made it unnecessarily burdensome for consumers to exercise their rights by, for example, requiring excessive authentication steps.

The report notes that most businesses have responded positively to enforcement actions, and that businesses receiving cure notices have quickly updated their privacy policies and consumer rights mechanisms in response to DOJ requests.

Upcoming OCPA Compliance Deadlines

July 1, 2025 – Nonprofits will become subject to the OCPA; the Oregon DOJ is preparing guidance materials to assist nonprofits with compliance.
January 1, 2026 – The cure notice period will expire, allowing stricter enforcement measures.
2026 and beyond – The OCPA’s universal opt-out mechanism provisions will take effect, requiring businesses to honor automated consumer privacy requests.

OCR Imposes $200,000 Penalty Against Oregon Health & Science University for HIPAA Right of Access Violations

On March 6, 2025, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) announced a $200,000 civil monetary penalty against Oregon Health & Science University (“OHSU”), a public academic health center and research university, for allegedly violating the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Privacy Rule’s right of access provisions.
The HIPAA Privacy Rule requires covered entities to provide individuals or their personal representatives with access to their protected health information upon request within 30 days, with the possibility of one 30-day extension. In May 2020, OCR received a complaint regarding an individual who did not receive their requested records after their personal representative made an access request to OHSU on the individual’s behalf in April 2019. OCR resolved the complaint after notifying OHSU of its potential noncompliance with the Privacy Rule’s right of access provisions. OCR then initiated an investigation of OHSU based on a second complaint with respect to the same individual filed in January 2021.
Although OHSU provided part of the requested records in April 2019, OCR alleged that the university did not provide all of the requested records until August 2021. OCR’s investigation determined that OHSU failed to take timely action in response to the individual’s right of access requests, and subsequently imposed a $200,000 civil monetary penalty against OHSU.

President Announces Creation of Strategic Bitcoin Reserve

On March 6, 2025, President Trump issued an executive order entitled “Establishment of the Strategic Bitcoin Reserve and United States Digital Asset Stockpile.” It is the latest effort in the President’s sweeping reforms concerning the digital asset industry.
Under the order, the Secretary of the Treasury is required to establish an office to administer and maintain control of custodial accounts collectively known as the “Strategic Bitcoin Reserve.” The Strategic Bitcoin Reserve is capitalized with all Bitcoin held by the Department of the Treasury that was finally forfeited as part of criminal or civil asset forfeiture proceedings or in satisfaction of any civil money penalty imposed by any executive department or agency. Government Bitcoin deposited into the Strategic Bitcoin Reserve may not be sold and must be maintained as reserve assets of the United States utilized to meet governmental objectives in accordance with applicable law. 
Similarly, the order further tasks the Secretary of the Treasury with establishing a “United States Digital Asset Stockpile,” capitalized with all digital assets owned by the Department of the Treasury, other than Bitcoin. The Secretary of the Treasury is required to determine strategies for responsible stewardship of the United States Digital Asset Stockpile in accordance with applicable law.
The order instructs the Secretary of the Treasury and the Secretary of Commerce to develop strategies for acquiring additional Government Bitcoin, provided that such strategies are budget neutral and do not impose incremental costs on United States taxpayers. However, the United States Government may not acquire additional digital assets other than in connection with criminal or civil asset forfeiture proceedings or in satisfaction of any civil money penalty imposed by any agency without further executive or legislative action. Additionally, the head of each executive agency must provide the Secretary of the Treasury and the President’s Working Group on Digital Asset Markets with a full accounting of all digital assets in such agency’s possession in order to facilitate their transfer to the Strategic Bitcoin Reserve or the United States Digital Asset Stockpile, as applicable.

Navigating the AI Frontier: Why Information Governance Matters More Than Ever

Artificial Intelligence (AI) is rapidly transforming the legal landscape, offering unprecedented opportunities for efficiency and innovation. However, this powerful technology also introduces new challenges to established information governance (IG) processes. Ignoring these challenges can lead to significant risks, including data breaches, compliance violations, and reputational damage.
“AI Considerations for Information Governance Processes,” a recent paper published by Iron Mountain, delves into these critical considerations, providing a framework for law firms and legal departments to adapt their IG strategies for the age of AI.
Key Takeaways:

AI Amplifies Existing IG Risks: AI tools, especially machine learning algorithms, often require access to vast amounts of sensitive data to function effectively, making robust data security, privacy measures, and strong IG frameworks paramount. Any vulnerabilities in your current IG framework can be significantly amplified by the introduction of AI, potentially leading to data breaches, privacy violations, and regulatory non-compliance.
Data Lifecycle Management is Crucial: Careful management of the AI system’s entire data lifecycle, from initial ingestion and collection through processing, storage, and analysis to archival or disposal, is essential for maintaining data integrity and ensuring compliance. This includes knowing exactly how data is used for training AI models, for analysis and generating insights, and for any other purpose within the AI system.
Vendor Due Diligence is Non-Negotiable: If you’re considering using third-party AI vendors or cloud-based AI services, conducting rigorous due diligence on these vendors is non-negotiable. This due diligence should focus heavily on evaluating their data security practices, their compliance with relevant industry standards and certifications, and their contractual obligations and guarantees regarding data protection and privacy.
Transparency and Explainability are Key: “Black box” AI systems that make decisions without any transparency or explainability can pose significant risks. It’s crucial to understand how AI algorithms make decisions, especially those that impact individuals, to ensure fairness, accuracy, non-discrimination, and compliance with ethical guidelines and legal requirements. This often requires techniques like model interpretability and explainable AI.
Proactive Policy Development is Essential: Organizations need to proactively develop clear policies, procedures, and guidelines for AI usage within their specific context. These should address critical issues such as data access and authorization controls, data retention and storage policies, data disposal and deletion protocols, as well as model training, validation, and monitoring practices.

The Time to Act is Now:
AI is not a future concern; it’s a present reality. Law firms and legal departments must proactively adapt their information governance processes to mitigate the risks associated with AI and unlock its full potential.

Artists Protest AI Copyright Proposal in the U.K.

British Prime Minister Keir Starmer wants to turn the U.K. into an artificial intelligence (AI) superpower to help grow the British economy by using policies that he describes as “pro-innovation.” One of these policies proposed relaxing copyright protections. Under the proposal, initially unveiled in December 2024, AI companies could freely use copyrighted material to train their models unless the owner of the copyrighted material opted out.
Although some members of Parliament called the proposal an effective compromise between copyright holders and AI companies, over a thousand musicians released a “silent album” to protest the proposed changes to U.K. copyright laws. The album, currently streaming on Spotify, includes 12 tracks of only ambient sound. According to the musicians, the silent tracks evoke empty recording studios and represent the impact they “expect the government’s proposals would have on musicians’ livelihoods.” To further convey their unhappiness with the proposed changes, the titles of the twelve tracks, read in order, combine to form the sentence, “The British government must not legalize music theft to benefit AI companies.”
High-profile artists like Elton John, Paul McCartney, Dua Lipa, and Ed Sheeran have also signed a letter urging the British government to avoid implementing these proposed changes. According to the artists, implementing the new rule would effectively give artists’ rights away to big tech companies. 
The British government launched a consultation that sought comments on the potential changes to the copyright laws. The U.K. Intellectual Property Office received over 13,000 responses before the consultation closed at the end of February 2025, which the government will now review as it seeks to implement a final policy.