President Announces Creation of Strategic Bitcoin Reserve

On March 6, 2025, President Trump issued an executive order entitled “Establishment of the Strategic Bitcoin Reserve and United States Digital Asset Stockpile.” It is the latest effort in the President’s sweeping reforms concerning the digital asset industry.
Under the order, the Secretary of the Treasury is required to establish an office to administer and maintain control of custodial accounts collectively known as the “Strategic Bitcoin Reserve.” The Strategic Bitcoin Reserve is capitalized with all Bitcoin held by the Department of the Treasury that was finally forfeited as part of criminal or civil asset forfeiture proceedings or in satisfaction of any civil money penalty imposed by any executive department or agency. Government Bitcoin deposited into the Strategic Bitcoin Reserve may not be sold and must be maintained as reserve assets of the United States utilized to meet governmental objectives in accordance with applicable law. 
Similarly, the order further tasks the Secretary of the Treasury with establishing a “United States Digital Asset Stockpile,” capitalized with all digital assets owned by the Department of the Treasury, other than Bitcoin. The Secretary of the Treasury is required to determine strategies for responsible stewardship of the United States Digital Asset Stockpile in accordance with applicable law.
The order instructs the Secretary of the Treasury and the Secretary of Commerce to develop strategies for acquiring additional Government Bitcoin provided that such strategies are budget neutral and do not impose incremental costs on United States taxpayers. However, the United States Government may not acquire additional digital assets other than in connection with criminal or civil asset forfeiture proceedings or in satisfaction of any civil money penalty imposed by any agency without further executive or legislative action. Additionally, the head of each executive agency must provide the Secretary of the Treasury and the President’s Working Group on Digital Asset Markets with a full accounting of all digital assets in such agency’s possession in order to facilitate its transfer to the Strategic Bitcoin Reserve and United States Digital Asset Stockpile, as applicable.

Navigating the AI Frontier: Why Information Governance Matters More Than Ever

Artificial Intelligence (AI) is rapidly transforming the legal landscape, offering unprecedented opportunities for efficiency and innovation. However, this powerful technology also introduces new challenges to established information governance (IG) processes. Ignoring these challenges can lead to significant risks, including data breaches, compliance violations, and reputational damage.
“AI Considerations for Information Governance Processes,” a recent paper published by Iron Mountain, delves into these critical considerations, providing a framework for law firms and legal departments to adapt their IG strategies for the age of AI.
Key Takeaways:

AI Amplifies Existing IG Risks: AI tools, especially machine learning algorithms, often require access to and process vast amounts of sensitive data to function effectively. This makes robust data security, privacy measures, and strong information governance (IG) frameworks absolutely paramount. Any existing vulnerabilities or weaknesses in your current IG framework can be significantly amplified by the introduction and use of AI, potentially leading to data breaches, privacy violations, and regulatory non-compliance.
Data Lifecycle Management is Crucial: From the initial data ingestion and collection stage, through data processing, storage, and analysis, all the way to data archival or disposal, a comprehensive understanding and careful management of the AI’s entire data lifecycle is essential for maintaining data integrity and ensuring compliance. This includes knowing exactly how data is used for training AI models, for analysis and generating insights, and for any other purposes within the AI system.
Vendor Due Diligence is Non-Negotiable: If you’re considering using third-party AI vendors or cloud-based AI services, conducting rigorous due diligence on these vendors is non-negotiable. This due diligence should focus heavily on evaluating their data security practices, their compliance with relevant industry standards and certifications, and their contractual obligations and guarantees regarding data protection and privacy.
Transparency and Explainability are Key: “Black box” AI systems that make decisions without any transparency or explainability can pose significant risks. It’s crucial to understand how AI algorithms make decisions, especially those that impact individuals, to ensure fairness, accuracy, non-discrimination, and compliance with ethical guidelines and legal requirements. This often requires techniques like model interpretability and explainable AI.
Proactive Policy Development is Essential: Organizations need to proactively develop clear policies, procedures, and guidelines for AI usage within their specific context. These should address critical issues such as data access and authorization controls, data retention and storage policies, data disposal and deletion protocols, as well as model training, validation, and monitoring practices.

The Time to Act is Now:
AI is not a future concern; it’s a present reality. Law firms and legal departments must proactively adapt their information governance processes to mitigate the risks associated with AI and unlock its full potential.

Artists Protest AI Copyright Proposal in the U.K.

British Prime Minister Keir Starmer wants to turn the U.K. into an artificial intelligence (AI) superpower to help grow the British economy by using policies that he describes as “pro-innovation.” One of these policies proposed relaxing copyright protections. Under the proposal, initially unveiled in December 2024, AI companies could freely use copyrighted material to train their models unless the owner of the copyrighted material opted out.
Although some Parliament members called the proposal an effective compromise between copyright holders and AI companies, over a thousand musicians released a “silent album” to protest the proposed changes to U.K. copyright laws. The album, currently streaming on Spotify, includes twelve tracks of only ambient sound. According to the musicians, the silent tracks illustrate empty recording studios and represent the impact they “expect the government’s proposals would have on musicians’ livelihoods.” To further convey their unhappiness with the proposed changes, the titles of these twelve songs, when combined, read, “The British government must not legalize music theft to benefit AI companies.” 
High-profile artists like Elton John, Paul McCartney, Dua Lipa, and Ed Sheeran have also signed a letter urging the British government to avoid implementing these proposed changes. According to the artists, implementing the new rule would effectively give artists’ rights away to big tech companies. 
The British government launched a consultation that sought comments on the potential changes to the copyright laws. The U.K. Intellectual Property Office received over 13,000 responses before the consultation closed at the end of February 2025, which the government will now review as it seeks to implement a final policy.

California Privacy Protection Agency Begins Investigative Sweep into Location Data Collection under CCPA

The California Privacy Protection Agency (CPPA), the agency responsible for implementing and enforcing the California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA) (collectively, the CCPA), protecting consumer privacy, and ensuring compliance with data privacy regulations, has announced an investigative sweep into companies’ collection of sensitive location data. The CPPA has already sent out inquiries to “advertising networks, mobile app providers, and data brokers that appear to be in violation” of the CCPA.
California Attorney General Rob Bonta said, “Every day, we give off a steady stream of data that broadcasts not only who we are, but where we go. This location data is deeply personal, can let anyone know if you visit a health clinic or hospital, and can identify your everyday habits and movements.” The CPPA is concerned that this sensitive location data will be used to target vulnerable populations. The CPPA urges businesses to take responsibility as stewards of this sensitive data seriously and affirmatively protect location data.
The CPPA’s investigation will focus on how companies are informing consumers about their right to opt out of the sale and sharing of their data (as required under the CCPA), including geolocation data and other types of personal information collected by businesses. Additionally, the CPPA will investigate how companies actually apply this opt-out requirement when a consumer asserts that right.
If your company hasn’t assessed its opt-out processes and procedures lately, now is the time to confirm that consumers are clearly notified of this right and that they can readily opt out of such tracking and collection and the subsequent sale and/or sharing of that data with third parties.
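One concrete piece of that assessment is whether your site honors opt-out preference signals such as the Global Privacy Control (GPC), which CCPA regulations recognize as a valid opt-out request. A GPC-enabled browser sends the `Sec-GPC: 1` request header. The sketch below is illustrative only (not legal advice, and the function name is our own); it shows the kind of server-side check involved:

```python
# Minimal sketch of honoring the Global Privacy Control (GPC) signal,
# one way consumers exercise the CCPA right to opt out of the sale or
# sharing of their personal information. A browser with GPC enabled
# sends the header "Sec-GPC: 1" on each request.

def should_suppress_sale_or_sharing(headers: dict) -> bool:
    """Return True if this request carries an opt-out preference signal."""
    return headers.get("Sec-GPC", "").strip() == "1"

# Example: a request from a GPC-enabled browser.
request_headers = {"Sec-GPC": "1", "User-Agent": "ExampleBrowser/1.0"}

if should_suppress_sale_or_sharing(request_headers):
    # Do not sell or share this user's personal information, including
    # geolocation data, with third parties; treat it as an opt-out.
    pass
```

Honoring the header is only one step, of course; the opt-out must also propagate to any downstream advertising networks and data brokers that receive the data.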

Utilities Petition FCC for Updates to TCPA Guidelines to Allow “Demand Response” Calls and Texts

Edison Electric Institute (EEI), an association that represents all U.S. investor-owned electric companies, petitioned the Federal Communications Commission (FCC) to permit calls and texts under the Telephone Consumer Protection Act (TCPA) without prior express consent for “demand response” communications. A prior FCC ruling clarified the FCC’s policies towards the types of calls and texts from utilities that require prior express consent; EEI now urges the FCC to provide additional guidance on allowable “demand response” calls and texts. “Demand response” refers to non-marketing communications related to “temporary, strategic adjustments to electricity usage during peak demand periods.” EEI has asked the FCC to “recognize how essential demand response programs are to ensuring customer safety and to managing increasing demand for electricity more effectively.” EEI seeks FCC clarification on whether such calls and texts are permissible without prior express consent from customers so that utilities can save customers money and prevent outages.
Violations of the TCPA could result in fines and lawsuits against utilities. Thus, in 2016, the FCC clarified that when a customer provides a telephone number to a utility, such provision constitutes prior express consent for certain communications “closely related” to the utility service. EEI is asking that the FCC’s ruling be expanded to include non-telemarketing, informational demand response calls and texts. EEI’s petition states, “Demand response programs target short-term, intentional modification of electricity usage by end-user customers during peak times or in response to market prices. They help keep the electricity grid stable and efficient and can save customers money.” EEI further states that customer survey data “indicates widespread satisfaction among participants in demand response programs utilizing calls or texts, demonstrating positive impacts on customer experience with low opt-out rates.” EEI hopes that the FCC can clarify the language regarding the applicability of the utility customer presumption of consent and allow utilities to engage customers in these essential demand response programs.

Skin360 App Can’t Escape Scrutiny under Illinois Biometric Law

A federal district court has denied a motion by Johnson & Johnson Consumer Inc. (JJCI) to dismiss a second amended complaint alleging it violated the Illinois Biometric Information Privacy Act (BIPA) by collecting and storing biometric information through its Neutrogena Skin360 beauty app without consumers’ informed consent or knowledge. The plaintiffs also allege that the biometric information collected through the app is then linked to their names, birthdates, and other personal information.
Plaintiffs alleged that the Skin360 app is depicted as “breakthrough technology” that provides personalized at-home skin assessments by scanning faces and analyzing skin to identify concerns like wrinkles, fine lines, and dark spots. The app then uses that data to recommend certain Neutrogena products to address those concerns. JJCI argued that the Skin360 app recommends products designed to improve skin health, which means that the consumers should be considered patients in a healthcare setting, making BIPA inapplicable.
However, the court disagreed, citing Marino v. Gunnar Optiks LLC, 2024 Ill. App. (1st) 231826 (Aug. 30, 2024), which held that a customer trying on non-prescription sunglasses using an online “try-on” tool is not a patient in a healthcare setting. In Marino, the court defined a patient as an individual currently waiting for or receiving treatment or care from a medical professional. By contrast, Skin360 uses artificial intelligence software to compare a consumer’s skin to a database of images and provides an assessment based on that comparison. Notably, JJCI did not dispute that no medical professionals are involved in providing the service through the Skin360 app.
The court stated that “[e]ven assuming Skin360 provides users with this AI assistant and ‘science-backed information’ the court finds it a reach to consider these services ‘medical care’ under BIPA’s health care exemption; [i]ndeed, Skin360 only recommends Neutrogena products to users of the technology, which suggests it is closer to a marketing and sales strategy rather than to the provision of informed medical care or treatment.”

Bye Bye Home Buyers? – Proposed Legislation Might Make Home Buyers’ Jobs Harder

One area that we have seen multiple times in TCPAWorld is complaints against parties offering to buy a consumer’s home.
Well, we have spotted an interesting trend in some state legislatures where bills are being introduced to rein those practices in.
In Tennessee, there is a bill which limits the number of times a developer or someone working on behalf of a developer can contact a homeowner.
In Pennsylvania, a similar bill has been introduced, but the unique factor in that bill is that the Secretary of the Commonwealth must designate a certain geographic region as a “homeowner cease and desist zone”. How long until all of Pennsylvania is a “homeowner cease and desist zone”?
Indiana’s bill is slightly different because it prohibits a telephone solicitor who is NOT a licensed real estate broker from making more than “one unsolicited home purchase inquiry to the same consumer in a single year.”
Typically, when you see multiple states addressing the same or similar issues, there is some model language being used and there are similarities between the states. Here, however, these seem to be different bills with different use cases, which suggests they grew somewhat organically in the states.
The other interesting thing is that some of the most active lobbyists in state politics are realtors and developers.
So, it will be very interesting to watch as the bills progress to see if there is any traction.

Proposed FAR CUI Rulemaking Nears Comment Deadline

Go-To Guide:

The comment period on the proposed FAR Controlled Unclassified Information (CUI) Rule closes Monday, March 17, 2025. 
To date, filed comments demonstrate core concerns, including the difficulty of complying with the eight-hour incident reporting requirement for potential CUI incidents or mismarked CUI. 
The FAR Council may issue the final rule later this year after adjudicating submitted comments and a 90-day Office of Information and Regulatory Affairs review period. 
Once the rule is finalized, government contractors performing work for any government agency that receive CUI must implement the security controls in NIST SP 800-171.

Despite the potentially sweeping impact of the proposed FAR CUI Rule (Proposed Rule), fewer than 30 comments have been filed to date during the comment period, which ends March 17, 2025. The FAR Council will adjudicate each of these comments, and any additional ones submitted by the deadline, before issuing the final rule, which may be expedited given the relatively low number of submissions.
The long-awaited Proposed Rule, published on Jan. 15, 2025, would implement the final piece of the National Archives and Records Administration (NARA)’s Federal Controlled Unclassified Information (CUI) Program, which dates back to 2010.
As we previously covered in a January 2025 GT Alert, the Proposed Rule would standardize cybersecurity requirements for all federal contractors and subcontractors and implement NARA’s policies under 32 CFR part 2002. The Proposed Rule would also introduce new procedures, including reporting and compliance obligations, and define roles and responsibilities for both the government and contractors who handle CUI.
Commenters Express Common Concerns and Themes
Commenters expressed many of the same concerns, and the submitted comments correspond to common themes.

The Eight-Hour Incident Reporting Timeframe Is Unreasonable. A key requirement under the Proposed Rule is to report a suspected or confirmed CUI incident within eight hours of discovery. This obligation also flows down to subcontractors and requires them to notify the prime or next higher tier subcontractor within the same eight-hour timeframe. Many commenters appear concerned about the potential burden and cost impact of this requirement, especially for small businesses. Commenters seek to align the reporting timeframe with other existing federal frameworks, such as the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), which calls for a 72-hour timeframe to report qualifying incidents to the Cybersecurity and Infrastructure Security Agency. Similarly, DFARS 252.204-7012 (Safeguarding Covered Defense Information and Cyber Incident Reporting) also requires defense contractors to “rapidly report” cyber incidents to the Department of Defense within 72 hours of discovery. Notably, the Proposed Rule does not apply this eight-hour reporting requirement to defense contractors, given their existing obligations under DFARS 252.204-7012. Accordingly, maintaining the current expedited timeframe structure could further complicate federal contractor and subcontractor obligations under the Proposed Rule, depending on which agency they are working with. 
The Definition and Scope of “CUI incident” Require Clarification. Under the Proposed Rule, FAR 2.101 would be amended to add “CUI incident,” which shall be defined as “suspected or confirmed improper access, use, disclosure, modification, or destruction of CUI, in any form or medium.” In response, several commenters have noted that this term is poorly defined and overly broad. Core to these commenters’ concern is the related obligation for contractors to expeditiously report a suspected or confirmed CUI incident—a vaguely and broadly defined term would be potentially burdensome and drastically increase the number of reported events to the government. Given the breadth of the definition, the FAR Council’s estimate of 580 incident reports annually might be a significant underestimation. 
Small Business Contractors Would Incur High Compliance Costs. The FAR Council estimates that non-defense contractors and subcontractors would incur labor, hardware, and software costs in order to comply with the Proposed Rule. For small businesses, the total initial year cost estimate is $175,700, with recurring annual costs expected to be $103,800. The Proposed Rule recognizes this impact and has engaged in a Regulatory Impact Analysis (RIA) that considers specific business concerns. In response, one commenter has detailed the potential outsized impact of the Proposed Rule on small businesses, which do not have dedicated compliance teams or the built-in expertise to continuously monitor their systems with in-house resources, structure incident reporting chains, or implement training programs. This commenter suggests that the FAR Council’s estimate for training costs is underestimated. Such comments align with the FAR Council’s express invitation for feedback from small entities on any RIA assumptions or other expected burdens that may help inform the final rule. However, to date, small business concerns and other interested parties have largely been absent from the public comment efforts. Such entities should consider submitting comments to provide additional detail around the anticipated costs and considerations the RIA may have missed.

Other Concerns Raised
Some commenters have requested further guidance on how to handle legacy records and information that might have been previously designated as For Official Use Only (FOUO), a designation that is no longer utilized, and how those records would be marked under the CUI framework. Other comments request more guidance on how CUI would be identified, especially for small business concerns. While these are important considerations, they are likely outside of the current rulemaking’s scope, which arises under Title 48 of the CFR (the acquisition regulation). The Proposed Rule implements NARA’s CUI Program, which is separately described under 32 CFR part 2002, and which codified a standardized approach to designating, handling, and safeguarding CUI.
Additionally, some comments seek an extension of the public comment period. Given that the comment period remained in effect during the new administration’s regulatory freeze pending review, it appears unlikely that a continuance will be granted, and the 60-day comment period may close as scheduled.
Interested contractors should submit their comments on the Proposed Rule by March 17, 2025. Given the relatively few comments received, the adjudication process may be quicker than originally anticipated. The FAR Council may issue the final rule in 2025, with standardized cybersecurity standards for all federal contractors and subcontractors going into effect and the clauses included in contracts by year end or early 2026.

Blockchain+ Bi-Weekly; Highlights of the Last Two Weeks in Web3 Law: March 13, 2025

Among the biggest news that dropped in the past two weeks was the Trump administration’s announcement of a national Bitcoin reserve plan, a move whose mere discussion marks a significant shift from the federal government’s previous stance on digital assets and crypto. The SEC has continued its trend of closing non-fraud-related investigations and enforcement actions, providing some long-awaited relief for the industry. On the litigation front, Uniswap secured a key victory at the Second Circuit against the SEC, marking another win for DeFi. Meanwhile, the SEC continues to make waves with its statement on memecoins, asserting that the tokens themselves are not securities in many contexts unless tied to an investment contract. This statement has sparked widespread debate and heightened expectations for further developments and related classifications in the coming weeks.
These developments and a few other brief notes are discussed below.
National Bitcoin Reserve Plan Announced: March 6, 2025
Background: After a few weeks of teasing it, President Trump has released his Executive Order establishing a Strategic Bitcoin Reserve and Digital Asset Stockpile capitalized with digital assets that were forfeited as part of criminal or civil asset forfeiture. On the day the Executive Order was signed, Crypto/AI Czar David Sacks released a statement on social media that under prior administrations (which includes Trump’s first term) “the federal government sold approximately 195,000 bitcoin for proceeds of $366 million. If the government had held the bitcoin, it would be worth over $17 billion today.”
Analysis: In practice, this order primarily directs federal agencies to account for and retain, rather than sell, seized digital assets—a move that, while noteworthy, is not particularly groundbreaking. There was some interesting text in the Order about it being a “strategic advantage” to be among the first nations to create a bitcoin reserve due to its limited supply. However, beyond this symbolic step, it does little to shift the broader landscape. That said, the absence of federal government sell pressure for the next four years is a welcome development for Bitcoin markets.
More SEC Investigations and Cases Dropped: March 3, 2025
Background: The creators of Bored Ape Yacht Club NFTs and related products, Yuga Labs, have announced the SEC has closed its investigation into the company, stating on X (formerly Twitter), “NFTs are not securities.” At the same time, the SEC appears to have reached an agreement with Kraken to drop its pending case against the second largest digital asset exchange in the U.S. This leaves only the Ripple and PulseChain lawsuits still active, with the Cumberland DRW case dismissed while we were finalizing this update, highlighting just how quickly things are changing. The PulseChain case, meanwhile, is effectively dead if the jurisdiction dismissal holds up.
Analysis: While it remains unclear how Ripple and the SEC can coordinate a dismissal at this stage in the appeal process, with nearly every other non-fraud case either closed or in the process of closing, it is reasonable to assume that this case is also likely to wind down or end in the near future. While fraud cases will continue and new cases may emerge, it is highly unlikely that we will see new non-fraud enforcement actions related to failure-to-register as a security until clearer regulatory rules are established. The substantial costs and uncertainty these cases have imposed on the industry make their resolution a much-needed reprieve.
Uniswap Wins with the SEC and at the Second Circuit: February 25, 2025
Background: The SEC’s Enforcement Division issued a Wells notice to Uniswap in April of last year, signaling its intention to recommend enforcement action against the decentralized exchange. Last week, Uniswap announced that it has been informed that the SEC has officially closed its investigation with no further action. In the same week, the Second Circuit upheld the dismissal of a civil securities class action filed against Uniswap.
Analysis: The closure of the SEC’s investigation into Uniswap follows similar decisions regarding the NFT platform OpenSea and the online exchange Robinhood. The ruling in the Second Circuit, meanwhile, is seen as a broader win for DeFi, holding that social media posts about the security of the platform and transactions executed via its smart contracts do not make its developers statutory sellers or solicitors of securities transactions. Combined with the SEC dropping its case against Consensys over the Metamask wallet swapping and staking functionalities (which facilitate transactions with third-party DeFi providers), DeFi had a strong week—despite market-wide token price declines.
SEC Stays Busy with Flurry of Developments: February 26, 2025
Background: In addition to the Uniswap and Consensys closures noted above, the SEC has also called off its investigation into the Winklevoss-backed platform Gemini. It also acknowledged four crypto ETF filings, released a Staff Statement on Memecoins, held six Crypto Task Force meetings, released Commissioner Peirce’s statement on litigation by enforcement, and published two statements from Commissioner Crenshaw decrying recent agency actions.
Analysis: It is hard to imagine all of this would be happening so quickly without unofficial buy-in from the likely future Chair of the SEC, Paul Atkins. The biggest development by far was the statement on memecoins, which seemingly marks an official shift in the SEC’s interpretation of the Howey test: in certain situations, the tokens themselves are not securities and require a separate investment contract, basically the exact opposite of the position the SEC took in LBRY and Kik. 
Briefly Noted:
OCC Permits Banks to Engage in Cryptocurrency Activities: The Office of the Comptroller of the Currency (OCC) has issued Interpretive Letter 1183, clarifying that national banks and federal savings associations can engage in cryptocurrency-related activities, including custody services and certain stablecoin operations, without needing prior regulatory approval. This marks a significant policy shift, removing previous barriers for banks offering crypto services.
White House Crypto Summit: The White House hosted a summit of leaders in the crypto industry. While not much in terms of developments came from that meeting, it is nice to see this level of interaction between government officials and industry leaders.
Richard Heart Beats SEC: It looks like the court overseeing the Richard Heart/Hex/PulseChain case has agreed that his interactions with the U.S. were not sufficient to create specific jurisdiction or satisfy what is required for application of U.S. securities laws to his (alleged) conduct.
OKX Exchange Settles with DOJ: OKX has agreed to pay over $500 million for serving as an unregistered money transmitter for U.S. customers from 2018 until 2024.
Bybit Hack Developments: There appears to be conflicting information on whether Bybit had its own systems compromised or if the breach was solely due to a hack of its SAFE multi-sig provider. The attack resulted in significant fund losses, though the full extent is still being assessed. Notably, the founder gave a full one-hour interview in the days following the incident, an unusual and pretty radical level of transparency in the aftermath of a major security breach.
Senate Banking Hearing on Digital Asset Legal Framework: The Senate Banking Subcommittee on Digital Assets held a hearing titled Exploring Bipartisan Legislative Frameworks for Digital Assets, demonstrating that lawmakers are following through on their commitment to prioritize the fast-tracking of digital asset regulations in the coming months.
Senate Passes CRA to Overturn IRS Crypto Broker Rule: In a strong bipartisan move, the Senate passed a 70-28 resolution to overturn a controversial tax reporting rule enacted in the final days of the previous administration. This rule would have broadly classified internet service providers as brokers, requiring them to collect tax information, including Social Security numbers, from users. President Trump has already stated that he will sign this resolution into law if and when it passes in the House. If enacted, this legislation will prevent the IRS from reintroducing similar tax reporting requirements in the future without Congressional approval.
SEC Sets First Crypto Roundtable: The SEC is set to host its first roundtable for the Crypto Task Force, conveniently scheduled for the Friday before the D.C. Blockchain Summit. The SEC also named a number of industry veterans as the staff of its Crypto Task Force, a promising sign that those with hands-on experience in the space will have a role in shaping policy.
Bipartisan “Congressional Crypto Caucus” Formed: Republican House Majority Whip Tom Emmer and Democrat Ritchie Torres are creating a “Congressional Crypto Caucus,” which is intended to create a unified and bipartisan coalition to spearhead bills that support the growth of digital assets in America.
Senate Bill to Stop Chokepoint 3.0: The chair of the Senate Finance Committee is proposing a bill that eliminates “reputational risk” as a component of the supervision of depository institutions after it was used to debank unfavored industries in Operation Chokepoint and Chokepoint 2.0.
Houlihan Capital Issues Q4 2024 Crypto Market & VC Industry Report: Houlihan Capital released its latest report analyzing crypto market trends, venture capital deal activity and sector performance. A key takeaway is that while early-stage investments slowed, later-stage crypto deals saw an uptick, reflecting growing investor confidence in established blockchain projects.
Crypto Market Sees Price Declines: Over the past two weeks, Bitcoin (BTC) dropped about 21% to $78,000, while Ethereum (ETH) fell nearly 15% to $1,873. Despite prices still being much higher than six months ago, the decline suggests that crypto remains viewed as a high-risk asset rather than a hedge like gold, reflecting its continued correlation with equities.
Conclusion:
Although the current iteration of the national Bitcoin reserve strategy is quite limited—essentially just preventing the federal government from selling Bitcoin it otherwise would have—it is symbolically significant and has the potential to evolve into something much more impactful. The SEC appears to be following through on its commitment to wind down non-fraud-related litigations and investigations, providing some regulatory relief for the industry. Beyond the SEC, Congress has been increasingly active in exploring and advancing crypto-related legislation and regulatory frameworks, further intensifying focus on the industry.

DELETE, DELETE, DELETE: FCC Looking For Public Comment on “Unnecessary Regulatory Burdens” And Boy Oh Boy Does The Czar Have Some Ideas

So Americans have watched mostly in horror as something called DOGE has dismantled critical government services, seemingly cutting jobs and–at times–entire functions without really even understanding what they were doing.
Deregulation is an incredibly sexy thing when done well. And pretty doggone ugly when done poorly.
The FCC, it would appear, is leaning into the sexy side of deregulation by actually seeking to educate itself as to which regulations are causing unnecessary regulatory burden and then getting rid of them. Hooray! And given the title of the notice–“delete, delete, delete”–I suspect we are going to see some really bold (read: useful) changes to the tome of FCC regulation weighing down American enterprise.
Nowhere are the FCC’s regs more needlessly oppressive in my view than those implementing the TCPA.
The new revocation rule–my goodness, what a disaster–jumps immediately to mind.
But a ton of other ticky-tack and sometimes entirely unworkable regulations also exist out there under the TCPA.
While the bones of the DNC rule ought to stick around, basically all of the Commission’s rules in 47 CFR 64.1200 should be reevaluated to promote desired contact between businesses and consumers. It is a great opportunity to restore the “balanced” approach to regulation the 1992 FCC promised but that the 2008-2024 FCC stole away.
And let’s not forget the most important regulations– those the FCC has handed to the carriers (without Congressional authority) to block, censor, throttle and label our speech without guardrails or redress.  It flies DIRECTLY in the face of the FCC’s core mission to “make available, so far as possible, . . . a rapid, efficient, Nation-wide, and world-wide wire and radio communication service with adequate facilities at reasonable charges.” That has to end entirely.
I expect we will all have fun writing our wish list, like a kid writing to Santa Claus.
R.E.A.C.H. will be discussing this at its next meeting. But for now, send me any suggestions you have for TCPA regulations that ought to be rolled back, as we will certainly be submitting a comment.
DEADLINES: 
Comments Due: Friday, April 11, 2025
Reply Comments Due: Monday, April 28, 2025
Full notice here: DA-25-219A1.pdf
This is a really big deal folks and I expect we will see some really big changes. So don’t be shy in sending in suggestions.

AI in Business: The Risks You Can’t Ignore

Artificial Intelligence (AI) is revolutionizing business operations, offering advancements in efficiency, decision-making, and customer engagement. However, its rapid integration into business processes brings forth a spectrum of legal and financial risks that enterprises must navigate to ensure compliance and maintain trust.
The Broad Legal Definition of AI and Its Implications
In the United States, the legal framework defines AI far more expansively than the average person might expect, potentially encompassing a wide array of software applications. Under 15 U.S.C. § 9401(3), AI is any machine-based system that:

makes predictions, recommendations, or decisions,
uses human-defined objectives, and
influences real or virtual environments.

This broad definition implies that even commonplace tools like Excel macros could be subject to AI regulations. As Neil Peretz of Enumero Law notes, such an expansive definition means that businesses across various sectors must now re-appraise all of their software usage to ensure compliance with new AI laws.
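To see how far the statutory definition sweeps, consider a hypothetical sketch: the trivial inventory script below contains no machine learning at all, yet it arguably satisfies all three prongs of the § 9401(3) definition. The function name and threshold are invented for illustration only.

```python
# Hypothetical illustration of the three-prong 15 U.S.C. § 9401(3) test.
# A plain threshold rule, no machine learning involved, arguably qualifies:

def reorder_recommendation(stock_level: int, reorder_point: int = 100) -> str:
    """A machine-based system that:
      (1) makes a recommendation ("reorder" or "hold"),
      (2) pursues a human-defined objective (keep stock above reorder_point),
      (3) influences a real environment (the business's purchasing decisions).
    """
    return "reorder" if stock_level < reorder_point else "hold"

print(reorder_recommendation(42))   # → reorder
print(reorder_recommendation(150))  # → hold
```

An Excel macro applying the same threshold logic would be functionally identical, which is why commentators suggest that businesses inventory even mundane automation when mapping AI compliance obligations.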
Navigating the Evolving Regulatory Landscape
The regulatory environment for AI is rapidly evolving. The European Union’s AI Act, for instance, classifies AI systems into risk categories, imposing strict compliance requirements on high-risk applications. In the United States, various states are introducing AI laws, requiring companies to stay abreast of changing regulations.
According to Jonathan Friedland, a partner with Much Shelist, P.C., who represents boards of directors of PE-backed and other privately owned companies, developments in artificial intelligence are happening so quickly that many companies of even modest size are spending significant time developing compliance programs to ensure adherence to applicable laws.
One result, according to Friedland, is that “[a]s one might expect, the sheer number of certificate programs, online courses, and degrees now offered in AI is exploding. Everyone seems to be getting into the game,” Friedland continues, “for example, the International Association of Privacy Professionals, a global organization previously focused on privacy and data protection, recently started offering its ‘Artificial Intelligence Governance Professional’ certification.” The challenge for companies, according to Friedland, is “to invest appropriately without overdoing it.”
Navigating Bias and Discrimination in AI Systems
Legal challenges centered on algorithmic bias and accountability claim that the historical data used to train AI often reflects societal inequalities, which AI systems can then perpetuate.
Sean Griffin, of Longman & Van Grack, highlights cases where AI tools have led to allegations of discrimination, such as a lawsuit against Workday, where an applicant claimed the company’s AI system systematically rejected Black and older candidates. Similarly, Amazon discontinued an AI recruiting tool after discovering it favored male candidates, revealing the potential for AI to reinforce societal biases.
To mitigate these risks, businesses should implement regular audits of their AI systems to identify and address biases. This includes diversifying training data and establishing oversight mechanisms to ensure fairness in AI-driven decisions.
Addressing Data Privacy Concerns
AI’s reliance on vast datasets, often containing personal and sensitive information, raises significant data privacy issues. AI-powered tools might be able to infer sensitive information, such as health risks from social media activity, potentially bypassing traditional privacy safeguards.
Because AI systems potentially have access to a wide range of data, compliance with data protection regulations like the GDPR and CCPA is crucial. Businesses must ensure that data used in AI systems is collected and processed lawfully, with explicit consent where necessary. Implementing robust data governance frameworks and anonymizing data can help mitigate privacy risks.
Ensuring Transparency and Explainability
The complexity of AI models, particularly deep learning systems, often results in ‘black box’ scenarios where decision-making processes are opaque. This lack of transparency can lead to challenges in accountability and trust. Businesses should be mindful of the risks associated with engaging third parties to develop or operate their AI solutions. In many areas of decision-making, explainability is required, and a black-box approach will not suffice. For example, when denying someone for consumer credit, specific adverse action reasons need to be provided to the applicant.
To address this, businesses should strive to develop AI models that are interpretable and can provide clear explanations for their decisions. This not only aids in regulatory compliance but also enhances stakeholder trust.
Managing Cybersecurity Risks
AI systems are both targets and tools in cybersecurity. Alex Sharpe points out that cybercriminals are leveraging AI to craft sophisticated phishing attacks and automate hacking attempts. Conversely, businesses can employ AI for threat detection and rapid incident response.
The legal risks associated with AI in financial services highlight the importance of managing cybersecurity risks. Implementing robust cybersecurity measures, such as encryption, access controls, and continuous monitoring, is essential to protect AI systems from threats. Regular security assessments and updates can further safeguard against vulnerabilities.
Considering Insurance as a Risk Mitigation Tool
Given the multifaceted risks associated with AI, businesses should evaluate the extent to which certain types of insurance can help them manage and reduce risks. Policies such as commercial general liability, cyber liability, and errors and omissions insurance can offer protection against various AI-related risks.
Businesses can benefit from auditing business-specific AI risks and considering insurance as a risk mitigation tool. Regularly reviewing and updating insurance coverage ensures that it aligns with the evolving risk landscape associated with AI deployment.
Conclusion
While AI offers transformative potential for businesses, it also introduces significant legal and financial risks. By proactively addressing issues related to bias, data privacy, transparency, cybersecurity, and regulatory compliance, enterprises can harness the benefits of AI while minimizing potential liabilities.
AI tends to tell the prompter what they want to hear, whether it’s true or not, underscoring the importance of governance, accountability, and oversight in its adoption. Organizations that establish clear policies and risk management strategies will be best positioned to navigate the AI-driven future successfully.

To learn more about this topic view Corporate Risk Management / Remembering HAL 9000: Thinking about the Risks of Artificial Intelligence to an Enterprise. The quoted remarks referenced in this article were made either during this webinar or shortly thereafter during post-webinar interviews with the panelists. Readers may also be interested to read other articles about risk management and technology.
©2025. DailyDAC™, LLC d/b/a Financial Poise™. This article is subject to the disclaimers found here.

Data Processing Evaluation and Risk Assessment Requirements Under California’s Proposed CCPA Regulations

As we have previously detailed here, the latest generation of regulations under the California Consumer Privacy Act (CCPA), drafted by the California Privacy Protection Agency (CPPA), have advanced beyond public comments and are closer to becoming final. These include regulations on automated decision-making technology (ADMT), data processing evaluation and risk assessment requirements, and cybersecurity audits.
Assessments and Evaluations Overview
The new ADMT notice, opt-out, and access and appeal obligations and rights go into immediate effect upon the regulation’s effective date, which follows California Office of Administrative Law (OAL) approval and would either be subject to the quarterly regulatory implementation schedule in the Government Code or, as has been the case with prior CCPA regulations, take effect immediately on OAL sign-off. We will not know whether the CPPA will again seek a variance from the schedule until it submits the final rulemaking package.
Moving on to evaluations and risk assessments, the draft regulations do propose a phase-in, but only in part. Evaluations must be undertaken beginning on the regulation’s effective date, whereas assessment requirements apply to practices commencing on the effective date, but there is a 24-month period to complete them, file certifications and abridged versions, and make them available for inspection.
However, since Colorado (which, like California, has very detailed requirements for conducting and documenting assessments), New Hampshire, Oregon, Texas, Montana, Nebraska, and New Jersey already require assessments, Delaware and Minnesota will this summer, and Indiana, Rhode Island, and Kentucky will by the new year, query whether the California phase-in is of much use. Of the 20 state consumer privacy laws, all but Utah and Iowa require assessments.
Further, without at least a cursory assessment, how can you determine if the notice, opt-out and access and appeal rights apply?
So, what is the difference between an evaluation and an assessment?
First, they are required by different provisions. Evaluations are required by Section 7201, and risk assessments by Section 7150.
Next, there is no phase-in of evaluations as with risk assessments.
Risk assessments are much more complex and prescribed, and are at the core of a risk benefit judgment decision, and must be available for inspection and abridged summaries must be filed.
The content of the evaluation, which need not be published or subject to inspection demand outside of discovery, need only address whether the process and technology is effective (in other words, materially error free) and whether it discriminates against a protected class (in other words, free of material bias). As such, evaluations have similarities to assessments under the Colorado AI Act, effective next year but likely to be amended before then, and the recently passed Virginia HB 2094 AI bill that may or may not be signed by Governor Youngkin.
Thus, an evaluation alone won’t help you determine if the ADMT notice, opt-out and access and appeal rights apply, nor meet the risk assessment requirements. While it is a separate analysis, it can be incorporated into assessments assuming a company begins those immediately. 
Also, evaluations are not required for selling and processing of sensitive personal information (PI), as assessments are, and assessments are only required for identification processing to the extent AI is trained to do so, whereas any processing for identification is subject to an evaluation. Since cross-context behavioral advertising (CCBA) is part of behavioral advertising, which is part of extensive profiling, sharing needs to be addressed in both evaluations and assessments.
Finally, under Section 7201, a business must implement policies, procedures, and training to ensure that the physical or biological identification or profiling works as intended for the business’s proposed use and does not discriminate based on protected classes.
So, on to assessments: what activities need to be assessed?
First, selling or sharing. All 18 states that require assessments require them for this, though for the non-California states the trigger is processing for targeted advertising rather than “sharing,” which is broader than sharing for CCBA; the California regulations catch up with the new concept of behavioral advertising.
Next, processing of sensitive personal information. The same 18 states require assessments for the processing of sensitive data, with differing definitions. For instance, what is considered children’s personal data differs considerably. Notably, the California draft Regulation amendments would raise the age from 13 to 16, and Florida is under 18. There is also variation in the definition of health data. 
Note, while the Nevada and Washington (and potential New York) consumer health laws do not explicitly require assessments, they are practically needed, and Vermont’s data broker law requires initial risk assessments and a process for evaluating and improving the effectiveness of safeguards.
Other Risk Assessment Triggers
Assessments are mandatory before using ADMT to make or assist in making a significant decision, which is “profiling in furtherance of decisions that produce legal or similarly significant effects concerning a consumer.” This is a General Data Protection Regulation (GDPR) and European Data Protection Board (EDPB) inspired provision. The other states that require assessments also have a similar obligation, although the definitions may differ somewhat. In California, “decisions that produce legal or similarly significant effects concerning a consumer” means decisions that result in the provision or denial of financial or lending services, housing, insurance, education enrollment or opportunity, criminal justice, employment opportunities, healthcare services, or access to essential goods or services. Only California gives guidance on what are essential goods or services, by means of a parenthetical “(e.g., groceries, medicine, hygiene products, or fuel).” Critics are concerned that this limits algorithmic custom pricing, sometimes derogatively referred to as surveillance pricing, or even AI analysis of consumer behavior to decide where to open or close stores, though aggregate or de-identified data should suffice for that. There is considerable guidance out of the EU, which can be looked to, though it is clearly not binding. The EU approach is quite broad.
Speaking of looking to the EU, beware that the California and Colorado regulations diverge considerably from what is required under GDPR assessments, and keep in mind the material differences between GDPR with its lawful basis and legitimate interest tests and the US laws with opt-out concepts.
Uniquely amongst the states, California proposes the concept of extensive profiling, which covers any:
1) work or educational profiling;
2) public profiling; or
3) behavioral advertising.
Note however, that whilst behavioral advertising is said to include CCBA, it is broader and is defined as “the targeting of advertising to a consumer based on the consumer’s personal information obtained from the consumer’s activity—both across businesses, distinctly-branded websites, applications, or services, (i.e., CCBA) and within the business’s own distinctly-branded websites, applications, or services.” Significantly, this closes the gap between CCBA and the non-California regulation of targeted advertising, by including entirely 1st party behavioral advertising.
There is a carve out for “nonpersonalized advertising” as defined by CCPA Section .140(t), which means advertising and marketing that is based solely on a consumer’s personal information derived from the consumer’s current interaction with the business, with the exception of the consumer’s precise geolocation. But also note that here the exception is specifically limited to where the PI is not disclosed to third parties (i.e., not a processor or contractor). This has led some to argue that this guts the carve out. However, if personal data was disclosed to a third party, that would likely be a sale, especially given the breadth of the concept of “other valuable consideration” in the eyes of the regulators. So, the approach really is not inconsistent with the current treatment of contextual advertising.
PI to train ADMT or AI
Assessments are also proposed to be required for processing of PI to train ADMT or AI. This is another uniquely California concept, at least under state consumer privacy laws, and the California Chamber of Commerce and others, including some members of the legislature, have argued that, like other aspects of the proposed regulation’s treatment of ADMT, it goes beyond the Agency’s statutory authority. It is interesting to note that one of the topics included in the US House Committee on Energy and Commerce’s request for information to inform federal privacy legislation this week is the role of privacy and consumer protection standards in AI regulation and specifically the impact of state privacy law regulation of ADMT and profiling on US AI leadership. Another topic of focus is “the degree to which US privacy protections are fragmented at the state level and costs associated with fragmentation,” which seems to be inviting a preemption scope debate. So by the time at least this part of the regulation requires action, it may possibly be curtailed by federal law. That said, evaluations and assessments are practically necessary to guide compliance and information governance and to date repeated attempts at federal consumer privacy legislation have been unsuccessful.
Assessment Details
Most state laws do not have any specifics regarding how to conduct or document risk assessments, with the notable exception of Colorado. When it started assessment rulemaking, the Agency stated that it would look to try to create interoperability with Colorado and would also look to the guidance by the EDPB. While both can be seen to have influenced California’s proposed requirements, California adds to these.
Some of the content requirements are factual, such as purposes of processing and categories of PI. Others are more evaluative, such as the quality of the PI and the expected benefits and potential negative impacts of the processing, and how safeguards may mitigate those risks of harm. Nine examples are included in Section 7152(a)(5) to guide analysis.
Section 7152(a)(3) calls for analysis of specific operational elements for the processing.
Focus on Operational Elements
These operational elements are listed here[1] and can be seen as not only getting under the hood of the processing operations but also informing consumer expectations, and the risks and benefit analysis that is the heart of an assessment. Note, in particular, the inquiries into retention and logic, the latter meaning ‘built-in’ assumptions, limitations, and parameters that inform, power or constrain the processing, particularly as concerns ADMT.
Analysis and Conclusions
The assessment must not only document those processing details and the risk / benefit and risk mitigation analysis, but the conclusions and what was approved and/or disapproved.
The draft regulations call for participation by all relevant stakeholders, and they must be specifically named, as must the identification of the person responsible for the analysis and conclusions.
Filing and Certification
California diverges from the other states with respect to reporting requirements. Annually, a responsible executive must certify to the CPPA that the business assessed all applicable processing activities, and an abridged assessment must be filed for each processing activity actually initiated. This will make it very apparent which businesses are not conducting assessments.
Further, the draft regulations limit what is required in the abridged assessments to largely factual statements:

The triggering processing activity;
The purposes;
The categories of personal information, including any sensitive categories; and
The safeguards undertaken.

Note that the risk / benefit analysis summary is not a part of the filing.
Inspection and Constitutional and Privilege Issues
Contrast that with the detailed risk / benefit analysis required by the full assessment, which, like all of the other states that require or will require assessments, is subject to inspection upon request.
This GDPR-inspired approach to showing how you made decisions calls for publication of value judgments, which, as I have opined in an article that is in your materials (see a synopsis here), is likely unconstitutional compelled speech. While the 9th Circuit in the X Corp and NetChoice cases struck down harm assessment and transparency requirements in the context of children’s online safety, the Court distinguished compelling disclosure of subjective opinions about a company’s products and activities from requiring disclosure of merely product facts. There is no 1st Amendment in GDPR-land, so we will have to wait and see if the value judgment elements of assessments can really be compelled for inspection.
Inspections also raise serious questions about attorney-client and work product privilege. Some states specifically provide that inspection of assessments is not a waiver of privilege, and/or that assessments will be maintained as confidential and/or are not subject to public records access requests. The draft regulations do not; however, the CCPA itself provides that the Act shall not operate to infringe on evidentiary privileges. In any event, consider labeling legal analysis and counsel as such and maintaining them apart from what is maintained for inspection.[2]

[1] Planned method for using personal information; disclosures to the consumer about processing; retention period for each category of personal information; categories of third parties with access to consumers’ personal information; relationship with the consumer; technology to be used in the processing; number of consumers whose personal information will be processed; and the logic used.
[2] Note – Obtaining educational materials from Squire Patton Boggs Services Ireland, Limited, or our resellers, does not create an attorney-client relationship with any Squire Patton Boggs entity and should be used under the direction of legal counsel of your choice.