U.S. Export Controls On Software License Keys
With the many updates to U.S. export controls in the past few months, it would be easy to miss a recent update concerning software keys. The U.S. Commerce Department's Bureau of Industry and Security (BIS) amended the Export Administration Regulations (EAR) to add a new Sec. 734.19, specifying how and when license requirements apply to:
Software license keys allowing a user the ability to use software or hardware; and
Software keys that renew existing software or hardware use licenses.
Sec. 734.19 specifies that such software keys are classified and controlled under the same Export Control Classification Numbers (ECCNs) as the corresponding software or hardware to which they provide access, imposing the same controls and authorization requirements. For hardware, BIS provided that “the software key would be classified under the corresponding ECCN in the software group (e.g., a software license key that allows the use of hardware classified under ECCN 5A992 would be classified under ECCN 5D992).”
As a result of this clarification, companies that provide software keys to their customers should review their export compliance programs to ensure they have appropriate controls not only around the provision of software, but also around corresponding license keys. For instance, companies should be aware that even if no authorization was required for the release of the initial software license key, renewal use licenses may be subject to authorization requirements to the extent circumstances have changed (e.g., if the end user was added to the Entity List).
This change is particularly noteworthy given the EAR’s license requirement for the export, reexport, or transfer (in-country) to or within Russia or Belarus of many types of software, including certain EAR99 software.
Everything Changes April 11, 2025: You Have Just 45 Days Until the Biggest TCPA Ruling of the Year Takes Effect – Are You Ready? [Video]
The FCC’s critical new TCPA revocation rule is set to go into effect on April 11, 2025 – that’s just 45 days from now.
Yes, a consumer will be able to revoke by “any reasonable means” but that is already the case. And yes, a caller will only have ten business days to honor a revocation.
But those are tiny changes compared to the new scope of revocation rules – which is what EVERYBODY needs to be paying attention to right now.
Really, the core of the new rule is found in paragraphs 30 and 31 of the FCC’s ruling, and it boils down to three key provisions:
“[W]hen a consumer revokes consent with regard to telemarketing robocalls or robotexts, the caller can continue to reach the consumer pursuant to an exempted informational call, which does not require consent, unless and until the consumer separately expresses an intent to opt out of these exempted calls”;
“If the revocation request is made directly in response to an exempted informational call or text, however, this constitutes an opt-out request from the consumer and all further non-emergency robocalls and robotexts must stop”; and
“[W]hen consent is revoked in any reasonable manner, that revocation extends to both robocalls and robotexts regardless of the medium used to communicate the revocation of consent.”
Taken together, the FCC’s new scope-of-consent rules mean that a “stop” or “do not call” request received in response to an informational or exempted call or text requires communications to stop across all channels, for all purposes, across the enterprise.
Insane right?
The only upside is that businesses will have a chance to send a one-time “clarification” message to try to limit the damage caused by the consumer’s revocation effort. But if a consumer does not respond to the clarification message it is lights out – so crafting brilliantly worded clarification messages will now be a massively important part of enterprise contact strategy.
Absolutely massive change. And the biggest headache imaginable for large companies.
We break down the massive consequences of the ruling here:
A ton of folks have been asking me whether this ruling is likely to be stayed or vacated like the one-to-one rule was. The truth is, probably not.
The reason is that no one seemed to be paying attention to this ruling before it came out. I have talked with numerous large companies recently that have asked why the trades didn’t do anything to challenge the rule and as far as I can tell nobody but R.E.A.C.H. and USHealth was really paying attention.
I predict there will be some last-minute scrambling to try to get the rule stayed, but a Hobbs Act appeal (like the one that killed the one-to-one rule) is out of the question, as the time for such a challenge ran out a couple of months back.
R.E.A.C.H. is evaluating seeking a stay in light of Mr. Trump’s recent efforts to seize control over the FCC but we are hoping a different trade will take the lead here as this is less of a lead gen issue and more of an issue for large enterprises with multiple consumer touchpoints.
But with only 45 days to go until the rule becomes effective it is CRITICAL that you folks reach out to us ASAP so we can help before it is too late.
Per usual, however, I will give you some free tips to consider:
As noted, work on drafting brilliant clarification messages;
Consider moving toward non-regulated technology and human-selection systems for your text and call outreach (Safe Select, Drips Initiate, Convoso CallCatalyst, etc.);
Identify and prioritize high value messaging while removing campaigns that tend to draw higher opt out rates from your campaign strategy;
For larger organizations, assigning a point person to oversee contact strategy and evaluate enterprise needs in light of these new rules is critical;
Leverage vendors that offer solutions to identify free-form text responses (merely obeying a handful of keywords will not be sufficient);
Collapse contacts to fewer outbound channels to make it easier to track and honor revocations and critical re-consents; and
Build out robust opportunities for consumers to provide new consents as you interact with and provide services to existing customers.
Charting the Future of AI Regulation: Virginia Poised to Become Second State to Enact Law Governing High-Risk AI Systems
Virginia has taken a step closer to becoming the second state (after Colorado) to enact comprehensive legislation addressing discrimination stemming from the use of artificial intelligence (AI), with the states taking different approaches to this emerging regulatory challenge.
On February 12, 2025, the Virginia state senate passed the High-Risk Artificial Intelligence Developer and Deployer Act (H.B. 2094) which, if signed into law, will regulate the use of AI in various contexts, including when it is used to make decisions regarding “access to employment.” The legislation now heads to Governor Glenn Youngkin’s desk for signature. If signed, the law will come into force on July 1, 2026, and will establish new compliance obligations for businesses that deploy “high-risk” AI systems affecting Virginia “consumers,” including job applicants.
Quick Hits
If signed into law by Governor Youngkin, Virginia’s High-Risk Artificial Intelligence Developer and Deployer Act (H.B. 2094) will go into effect on July 1, 2026, giving affected businesses plenty of time to understand and prepare for its requirements.
The legislation applies to AI systems that autonomously make—or significantly influence—consequential decisions, such as lending, housing, education, and healthcare, and potentially job hiring as well.
Although H.B. 2094 excludes individuals acting in a commercial or employment context from the definition of “consumer,” the term “consequential decision” specifically includes decisions with a material legal or similar effect regarding “access to employment,” such that job applicants are ostensibly covered by the requirements and prohibitions under a strict reading of the text.
Overview
Virginia’s legislation establishes a duty of reasonable care for businesses employing automated decision-making systems in several regulated domains, including employment, financial services, healthcare, and other consequential sectors. The regulatory framework applies specifically to “high-risk artificial intelligence” systems that are “specifically intended to autonomously” render or be a substantial factor in rendering decisions—statutory language that significantly narrows the legislation’s scope compared to Colorado’s approach. A critical distinction in the Virginia legislation is the requirement that AI must constitute the “principal basis” for a decision to trigger the law’s anti-discrimination provisions. This threshold requirement creates a higher bar for establishing coverage than Colorado’s “substantial factor” standard.
Who Is a ‘Consumer’?
A central goal of this law is to safeguard “consumers” from algorithmic discrimination, especially where automated systems are used to make consequential decisions about individuals. The legislation defines a “consumer” as a natural person who is a resident of Virginia and who acts in an individual or household context. And, as with the Virginia Consumer Data Protection Act, H.B. 2094 contains a specific exclusion for individuals acting in a commercial or employment context.
One potential source of confusion is how “access to employment” can be a “consequential decision” under the law—while simultaneously excluding those in an employment context from the definition of “consumers.” The logical reading of these conflicting definitions is that job applicants do not act in an employment capacity on behalf of a business; instead, they are private individuals seeking employment for personal reasons. In other words, if Virginia residents are applying for a job and an AI-driven hiring or screening tool is used to evaluate their candidacy, a purely textual reading of the legislation suggests that they remain consumers under the statute because they are acting in a personal capacity.
Conversely, once an individual becomes an employee, the employee’s interactions with the business (including the business’s AI systems) are generally understood to reflect action undertaken within an employment context. Accordingly, if an employer uses a high-risk AI system for ongoing employee monitoring (e.g., measuring performance metrics, time tracking, or productivity scores), the employee might no longer be considered a “consumer” under H.B. 2094.
High-Risk AI Systems and Consequential Decisions
H.B. 2094 regulates only those artificial intelligence systems deemed “high-risk.” Such systems autonomously make—or are substantial factors in making—consequential decisions that affect core rights or opportunities, such as admissions to educational programs and other educational opportunities, approval for lending services, the provision or denial of housing or insurance, and, as highlighted above, access to employment. The legislature included these provisions to curb “algorithmic discrimination,” which is the illegal disparate treatment or unfair negative effects that occur on the basis of protected characteristics, such as race, sex, religion, or disability, and result from the use of automated decision-making tools. And, as we have seen with other, more narrowly focused laws in other jurisdictions, even if the developer or deployer does not intend to use an AI tool to engage in discriminatory practice, merely using an AI tool that produces such biased outcomes may trigger liability.
H.B. 2094 also includes a list of nineteen types of technologies that are specifically excluded from the definition of a “high-risk artificial intelligence system.” One notable carve-out is “anti-fraud technology that does not use facial recognition technology.” This is particularly relevant as the prevalence of fraudulent remote worker job applicants increases and more companies seek effective mechanisms to address such risks. Cybersecurity tools, anti-malware, and anti-virus technologies are likewise entirely excluded for obvious reasons. Among the other more granular exclusions, the legislation takes care to specify that spreadsheets and calculators are not considered high-risk artificial intelligence. Thus, those who harbor anxieties about the imminent destruction of pivot tables can breathe easy—spreadsheet formulas will not be subject to these heightened regulations.
Obligations for Developers
Developers—entities that create or substantially modify high-risk AI systems—are subject to a “reasonable duty of care” to protect consumers from known or reasonably foreseeable discriminatory harms. Before providing a high-risk AI system to a deployer—an entity that uses high-risk AI systems to make consequential decisions in Virginia—developers must disclose certain information, including the system’s intended uses, known limitations, steps taken to mitigate algorithmic discrimination, and information intended to assist the deployer with performing its own ongoing monitoring of the high-risk AI system for algorithmic discrimination. Developers must update these disclosures within ninety days of making any intentional and substantial modifications that alter the system’s risks. Notably, developers are also required to either provide or, in some instances, make available extensive amounts of documentation relating to the high-risk AI tool they develop, including legally significant documents like impact assessments and risk management policies.
In addition, H.B. 2094 appears to take aim at “deep fakes” by mandating that if a developer uses a “generative AI” model to produce audio, video, or images (“synthetic content”), a detectable marking or other method that ensures consumers can identify the content as AI-generated will generally be required. The rules make room for creative works and artistic expressions so that the labeling requirements do not impair legitimate satire or fiction.
Obligations for Deployers
Like developers, deployers must also meet a “reasonable duty of care” to prevent algorithmic discrimination. H.B. 2094 requires deployers to devise and implement a risk management policy and program specific to the high-risk AI system they are using. Risk management policies and programs that are designed, implemented, and maintained pursuant to H.B. 2094, and which rely upon the standards and guidance articulated in frameworks like the National Institute of Standards and Technology’s AI Risk Management Framework (NIST AI RMF) or ISO/IEC 42001, presumptively demonstrate compliance.
Prior to putting a high-risk AI system into practice, deployers must complete an impact assessment that considers eight separate enumerated issues, including the system’s purpose, potential discriminatory risks, and the steps taken to mitigate bias. As with data protection assessments required by the Virginia Consumer Data Protection Act, a single impact assessment may be used to demonstrate compliance with respect to multiple comparable high-risk AI systems. Likewise, under H.B. 2094, an impact assessment used to demonstrate compliance with another similarly scoped law or regulation with similar effects may be relied upon. In all cases, however, the impact assessment relied upon must be updated when the AI system undergoes a significant update and must be retained for at least three years.
Deployers also must clearly inform consumers when a high-risk AI system will be used to make a consequential decision about them. This notice must include information about:
(i) the purpose of such high-risk artificial intelligence system,
(ii) the nature of such system,
(iii) the nature of the consequential decision,
(iv) the contact information for the deployer, and
(v) a plain-language description of such artificial intelligence system.
Any such disclosures must be updated within thirty days of the deployer’s receipt of notice from the developer of the high-risk AI system that it has made intentional and significant updates to the AI system. Additionally, the deployer must “make available, in a manner that is clear and readily available, a statement summarizing how such deployer manages any reasonably foreseeable risk of algorithmic discrimination that may arise from the use or deployment of the high-risk artificial intelligence system.” In the case of an adverse decision—such as denying a loan or rejecting a job application—the deployer must disclose the principal reasons behind the decision, including whether the AI system was the determining factor, and give the individual an opportunity to correct inaccuracies in the data or appeal the decision.
Exemptions, Cure Periods, and Safe Harbors
Although H.B. 2094 applies broadly, it exempts businesses that operate in certain sectors or engage in regulated activities for which equivalent or more stringent regulations are already in place. For example, federal agencies and regulated financial institutions may be exempted if they adhere to their own AI risk standards. Similarly, H.B. 2094 provides partial exemptions for Health Insurance Portability and Accountability Act (HIPAA)–covered entities or telehealth providers in limited situations, including those where AI-driven systems generate healthcare recommendations but require a licensed healthcare provider to implement those recommendations, or where an AI system is used for administrative, quality measurement, security, or internal cost or performance improvement functions.
H.B. 2094 also contains certain provisions that are broadly protective of businesses. For example, the legislation conspicuously does not require businesses to disclose trade secrets, confidential information, or privileged information. Moreover, entities that discover and cure a violation before the attorney general takes enforcement action may also avoid liability if they promptly remediate the issue and inform the attorney general. And, the legislation contains a limited “safe harbor” in the form of a (rebuttable) presumption that developers and deployers of high-risk AI systems have met their duty of care to consumers if they comply with the applicable operating standards outlined in the legislation.
Enforcement and Penalties
Under H.B. 2094, only the attorney general may enforce the requirements described in the legislation. Nevertheless, the potential enforcement envisioned could be very impactful, as violations can lead to civil investigative demands, injunctive relief, and civil penalties. Generally, non-willful violations of H.B. 2094 may incur up to $1,000 in fines plus attorneys’ fees, expenses, and costs, while willful violations can result in fines of up to $10,000 per instance along with attorneys’ fees, expenses, and costs. Notably, each violation is counted separately, so penalties can accumulate quickly if an AI system impacts many individuals.
Looking Forward
Even though the law would not take effect until July 1, 2026, if signed by the governor, organizations that develop or deploy high-risk AI systems may want to begin compliance planning. By aligning with widely accepted frameworks like the NIST AI RMF and ISO/IEC 42001, businesses may establish a presumption of compliance. And, from a practical perspective, this early adoption can help mitigate legal risks, enhance transparency, and build trust among consumers—which can be particularly beneficial with respect to sensitive issues like hiring decisions.
Final Thoughts
Virginia’s new High-Risk Artificial Intelligence Developer and Deployer Act signals a pivotal moment in the governance of artificial intelligence at the state level and is a likely sign of things to come. The law’s focus on transparent documentation, fairness, and consumer disclosures underscores the rising demand for responsible AI practices. Both developers and deployers must understand the scope of their responsibilities, document their AI processes and make sure consumers receive appropriate information about them, and stay proactive in risk management.
President Trump’s Artificial Intelligence (AI) Action Plan Takes Shape as NSF, OSTP Seek Comments
On January 23, 2025, as one of the first actions of his second term, President Trump signed Executive Order (EO) 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” making good on a campaign promise to rescind Executive Order 14110 (known colloquially as the Biden AI EO).
It is not surprising that AI was at the top of the agenda for President Trump’s second term. In his first term, Trump was the first president to issue an EO on AI. On February 11, 2019, he issued Executive Order 13859, Maintaining American Leadership in Artificial Intelligence. This was a first-of-its-kind EO to specifically address AI, recognizing the importance of AI to the economic and national security of the United States. In it, the Trump Administration laid the foundation for investment in the future of AI by committing federal funds to double investment in AI research, establishing national AI research institutes, and issuing regulatory guidance for AI development in the private sector. The first Trump Administration later established guidance for federal agency adoption of AI within the government.
The current EO gives the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs, in coordination with agency heads they deem relevant, 180 days—until July 22, 2025—to prepare an AI Action Plan to replace the rescinded Biden Administration policies.
OSTP/NSF RFI
To develop an AI Action Plan within that deadline, the National Science Foundation’s Networking and Information Technology Research and Development (NITRD) National Coordination Office (NCO)—on behalf of the Office of Science and Technology Policy (OSTP)—has issued a Request for Information (RFI) on the Development of an Artificial Intelligence (AI) Action Plan. Comments are due by March 15, 2025.
This is a unique opportunity to provide the second Trump Administration with important real-world, on-the-ground feedback. As the RFI states, this administration intends to use these comments to “define the priority policy actions needed to sustain and enhance America’s AI dominance, and to ensure that unnecessarily burdensome requirements do not hamper private sector AI innovation.”
Epstein Becker Green and its Artificial Intelligence practice group, along with its health care, employment, and regulatory teams, are closely monitoring how the administration will address the regulation of health care AI and workplace AI in this plan. During President Trump’s first term, the administration focused its AI policy primarily around national security. Given the great expansion of the types and uses of AI tools since President Trump’s first term, we anticipate the Trump Administration will broaden its regulatory reach during this term—with the aim of “enhancing America’s global AI dominance.”
We have seen an explosion of AI tools adopted by our clients within health care—both clinical and administrative—as well as for employment decision-making. We work closely with clients to manage enterprise risk and drive strategic edge through AI innovation and look forward to helping shape the current administration’s AI policies through this and other opportunities for engagement with federal policymakers.
Submission Guidelines
OSTP seeks input on the highest priority policy actions that should be in the new AI Action Plan. Responses can address any relevant AI policy topic, including but not limited to: hardware and chips, data centers, energy consumption and efficiency, model development, open source development, application and use (either in the private sector or by government), explainability and assurance of AI model outputs, cybersecurity, data privacy and security throughout the lifecycle of AI system development and deployment (to include security against AI model attacks), risks, regulation and governance, technical and safety standards, national security and defense, research and development, education and workforce, innovation and competition, intellectual property, procurement, international collaboration, and export controls.
OSTP encourages respondents to suggest concrete AI policy actions needed to address the topics raised. Comments may be submitted by email to [email protected] or by mail at the address on page 2 of the RFI. Email submissions should be machine-readable, not copy-protected, and include “AI Action Plan” in the subject heading. Additional guidelines, including font and page limits, appear on page 2.
“Claims” Under the FCA, §1983 Claim Denials on Failure-to-Exhaust Grounds, and Limits to FSIA’s Expropriation Exception – SCOTUS Today
The U.S. Supreme Court decided three cases today, with one of particular interest to many readers of this blog. So, let’s start with that one.
Wisconsin Bell v. United States ex rel. Heath is a suit brought by a qui tam relator under the federal False Claims Act (FCA), which imposes civil liability on any person who “knowingly presents, or causes to be presented, a false or fraudulent claim” as statutorily defined. 31 U. S. C. §3729(a)(1)(A). The issue presented is a common one in FCA litigation, namely, what is a claim? More precisely, in the context of the case, the question is what level of participation by the government in the actual payment is required to demonstrate an actionable claim by the United States. The answer, which won’t surprise many FCA practitioners, is “not much.”
The case itself concerned the Schools and Libraries (E-Rate) Program of the Universal Service Fund, established under the Telecommunications Act of 1996, which subsidizes internet and other telecommunication services for schools and libraries throughout the country. The program is financed by payments by telecommunications carriers into a fund that is administered by a private company, which collects and distributes the money pursuant to regulations set forth by the Federal Communications Commission (FCC). Those regulations require that carriers apply a kind of most-favored-nations rule, limiting them to charging the “lowest corresponding price” that would be charged by the carriers to “similarly situated” non-residential customers. Under this regime, a school pays the carrier a discounted price, and the carrier can get reimbursement for the remainder of the base price from the fund. The school could also pay the full, non-discounted price to the carrier itself and be reimbursed by the fund.
The relator, an auditor of telecommunications bills, asserted that Wisconsin Bell defrauded the E-Rate program out of millions of dollars by consistently overcharging schools above the “lowest corresponding price.” He argued that these violations led to reimbursement rates higher than the program should have paid. He contended that a request for E-Rate reimbursement qualified as a “claim,” a classification that requires the government to have provided some portion of the money sought. Wisconsin Bell moved to dismiss, arguing that there could be no “claim” here because the money at issue all came from private carriers and was administered completely by a private corporation.
Affirming the U.S. District Court for the Eastern District of Wisconsin, the U.S. Court of Appeals for the Seventh Circuit rejected Wisconsin Bell’s argument, holding that there was a viable claim because the government provided all the money as part of establishing the fund. Less metaphysically, it also held that the government actually provided some “portion” of E-Rate funding by depositing more than $100 million directly from the U.S. Treasury into the fund.
Justice Kagan delivered the unanimous opinion of the Supreme Court, affirming the Seventh Circuit on the narrower ground that “the E-Rate reimbursement requests at issue are ‘claims’ under the FCA because the Government ‘provided’ (at a minimum) a ‘portion’ of the money applied for by transferring more than $100 million from the Treasury into the Fund.” It is important to recognize that this amount was quite separate from the funds involved in the core program at issue. Instead, it constituted delinquent contributions collected by the FCC and the U.S. Department of the Treasury, as well as civil settlements and criminal restitution payments made to the U.S. Department of Justice in response to wrongdoing in the program. This nonpassive role by the government was enough to satisfy the Court that the money was sought through an actionable “claim.”
Rather blithely, Justice Kagan analogizes these government transfers to “most Government spending: Money usually comes to the Government from private parties, and it then usually goes out to the broader community to fund programs and activities. That conclusion is enough to enable Heath’s FCA suit to proceed.”
This conclusion suggests that quibbling about what constitutes a “claim,” where government participation in payment is peripheral, is unlikely to provide an effective avenue for defending FCA lawsuits. But wait! Before closing the discussion, we must turn to the concurring opinion of Justice Thomas, who was joined by Justice Kavanaugh and, in part, by Justice Alito. They note that the Court has left open the questions of whether the government actually “provides” the money that private carriers are required to contribute to the E-Rate program and whether the program’s administrator is an agent of the United States. Thomas’s suggestion, in attempting to reconcile various Circuit Court opinions as to the fund, is that an FCA claim must be based upon a clear nexus with government involvement. Thomas then goes on to describe a range of cases where, although the arrangements at issue might be prescribed by the government, the absence of government money would be fatal to holding that there was a justiciable FCA claim. In other words, the kind of government payments into the fund that we see in the instant case are the likely minimum that the Court would countenance.
Perhaps a bigger storm warning is the additional concurrence of Justice Kavanaugh, joined by Justice Thomas, noting that today’s opinion is a narrow one but that the FCA’s qui tam provisions raise substantial questions under Article II of the Constitution. The Court has never ruled squarely on the Article II question, though it has upheld qui tam cases as assignments to private parties of claims owned by the government, something like commercial relationships. That two Justices flagged these unresolved constitutional challenges to the FCA’s qui tam regime all but guarantees that competent counsel will raise the point in any future FCA case not brought by the government alone. But note that Justice Alito did not join Kavanaugh’s opinion, though he joined the Thomas concurrence in part. Nor did any other conservative Justice. It still takes four to grant cert. But the future is a bit hazier, thanks to Justice Kavanaugh.
Justice Kavanaugh finds himself on the opposite side of Justice Thomas in the case of Williams v. Reed. Writing for himself, the Chief Justice, and Justices Sotomayor, Kagan, and Jackson, Justice Kavanaugh ruled in favor of a group of unemployed workers who contended that the Alabama Department of Labor unlawfully delayed processing their state unemployment benefits claims. They had sued in state court under 42 U. S. C. §1983, raising due process and federal statutory arguments, attempting to get their claims processed more quickly. The Alabama Secretary of Labor argued that these claims should be dismissed for lack of jurisdiction because the claimants had not satisfied the state exhaustion of remedies requirements.
Holding against the Secretary, the Court’s majority opined that where a state court’s application of a state exhaustion requirement effectively immunizes state officials from §1983 claims challenging delays in the administrative process, state courts may not deny those §1983 claims on failure-to-exhaust grounds. Citing several analogous precedents, the majority decided what I submit looks like a garden-variety supremacy case. After all, as Kavanaugh notes, the “Court has long held that ‘a state law that immunizes government conduct otherwise subject to suit under §1983 is preempted, even where the federal civil rights litigation takes place in state court.’” See Felder v. Casey, 487 U. S. 131 (1988).
Justice Thomas and his conservative allies didn’t see it that way at all. Quoting himself in dissent in another case, Justice Thomas asserts that “[o]ur federal system gives States ‘plenary authority to decide whether their local courts will have subject-matter jurisdiction over federal causes of action.’ Haywood v. Drown, 556 U. S. 729, 743 (2009) (THOMAS, J., dissenting).” Well, he didn’t persuade a majority then, and he didn’t do so now in this §1983 case.
Finally, in Republic of Hungary v. Simon, a unanimous Court, per Justice Sotomayor, considered the provision of the Foreign Sovereign Immunities Act of 1976 (FSIA) that provides foreign states with presumptive immunity from suit in the United States. 28 U. S. C. §1604. That provision has an expropriation exception that permits claims when “rights in property taken in violation of international law are in issue” and either the property itself or any property “exchanged for” the expropriated property has a commercial nexus to the United States. 28 U. S. C. §1605(a)(3).
The Simon case involved a suit by Jewish survivors of the Hungarian Holocaust and their heirs against Hungary and its national railway, MÁV-csoport, in federal court, seeking damages for property allegedly seized during World War II. They alleged that the expropriated property was liquidated and the proceeds commingled with other government funds that were used in connection with commercial activities in the United States. The lower courts determined that the “commingling theory” satisfied the commercial nexus requirement in §1605(a)(3) and that requiring the plaintiffs to trace the particular funds from the sale of their specific expropriated property to the United States would make the exception a “nullity.”
The Supreme Court didn’t quite agree, holding that alleging the commingling of funds alone cannot satisfy the commercial nexus requirement of the FSIA’s expropriation exception. “Instead, the exception requires plaintiffs to trace either the specific expropriated property itself or ‘any property exchanged for such property’ to the United States (or to the possession of a foreign state instrumentality engaged in United States commercial activity).”
The three cases decided today bring the total decisions of the term to eight. Stay tuned because a torrent might be on the horizon.
Trump Executive Order Requires Independent Agencies to Submit Regulations for Presidential Review
On February 19, 2025, President Donald Trump signed an executive order (the “Order”) mandating that independent agencies, including the SEC, the FCC, and the FTC, submit proposed regulations for presidential review before finalization. The Order marks a significant shift in the regulatory process, altering the long-standing autonomy of these agencies by subjecting them to executive oversight.
The Order asserts that independent agencies should not be exempt from executive supervision, citing Article II of the U.S. Constitution. According to the related White House fact sheet, the Order seeks to align the regulatory activities of independent agencies with the Trump administration’s priorities, and ensure consistency across the executive branch. While the Order applies broadly to independent agencies, it explicitly exempts the Federal Reserve’s monetary policy functions.
Some key provisions of the Order include:
Presidential Review Requirement. Independent agencies must submit proposed regulations for review by the White House before adoption.
Interpretation of Law. The President and Attorney General will interpret the law for the executive branch to prevent agencies from issuing conflicting legal positions.
Coordination with the White House. All agencies must consult with the White House to align their strategic plans and regulatory priorities with the administration’s policy goals.
Budgetary Oversight. The Office of Management and Budget will have oversight authority over the budgets of independent agencies to ensure “tax dollars are spent wisely.”
Putting It Into Practice: The Order is part of a broader effort by the Trump administration to increase control over independent federal agencies. The Order argues that “Article II of the U.S. Constitution vests all executive power in the president, meaning that all executive branch officials and employees are subject to his supervision.” The Order is likely to be challenged as it marks a dramatic shift from the current administrative regulatory framework.
No Business Transaction, No Chapter 93A Claim: Mass. Courts Clarify Requirements
To pursue a Chapter 93A claim, there must be some business, commercial, or transactional relationship between the plaintiff(s) and the defendant(s). An indirect commercial link—such as upstream purchasers—may be sufficient to state a valid claim, but there must ultimately be some commercial connection between the plaintiff and defendant. The District of Massachusetts and the Appeals Court of Massachusetts recently affirmed this requirement in two separate cases.
First, the District of Massachusetts affirmed this principle when it denied plaintiffs’ motion for leave to conduct limited discovery, as the allegations in the complaint only highlighted the commercial relationship among the various defendants and not with the plaintiffs. In Courtemanche v. Motorola Sols., Inc., plaintiffs brought a putative class action against a group of commercial defendants and the superintendent of the Massachusetts State Police, alleging that the State Police unlawfully recorded the content of conversations between officers and plaintiffs, and then later used those recordings to pursue criminal charges against plaintiffs. The commercial defendants allegedly willfully assisted the State Police by providing them with intercepting devices and storing the recordings on their servers. The commercial defendants moved to dismiss based on plaintiffs’ failure to allege a business, commercial, or transactional relationship between plaintiffs and the commercial defendants. Plaintiffs then sought to conduct limited discovery in order to establish such a relationship. The court concluded that allowing even limited discovery on the issue would only amount to an inappropriate fishing expedition and denied the motion.
Shortly thereafter, the Massachusetts Appeals Court reversed portions of a consolidated judgment against defendants for Chapter 93A § 11 violations in Flightlevel Norwood, LLC v. Boston Executive Helicopters, LLC. On appeal, the defendants argued, and the Appeals Court agreed, that the trial judge erred in denying their motion for judgment notwithstanding the verdict. The parties both operated businesses at the Norwood Memorial Airport and subleased adjoining parcels of land with a taxiway running along their common border. At trial, plaintiff argued that defendants engaged in unfair acts to exercise dominion and control over plaintiff’s leasehold to advance defendants’ commercial interests and deliberately interfere with plaintiff’s commercial operations. The Appeals Court reiterated that to maintain a Section 11 claim, a business needs to show more than just being harmed by another business’s unfair practices. Instead, plaintiff must prove that it had a significant business deal with the other company, and that the unfair practices occurred as part of the deal. The Appeals Court thus concluded that Chapter 93A § 11 was inapplicable, as there was no business transaction between the parties.
Commercial Agents Regulations: Here to Stay
In October 2024 we reported on the case of Kompakwerk GmbH v Liveperson Netherlands B.V. [CL-2018-000802] which concerned the question of whether an agent selling access to end users in Great Britain to a third-party software as a service (SaaS) product should be considered an agent for the purposes of the Commercial Agents (Council Directive) Regulations 1993 (the Regulations). For the reasons set out in our post on that case, the Court decided that it should not.
As further detailed in that earlier report, in Great Britain the Regulations protect both individual self-employed agents and companies who act as agents and sell goods (but not services) on the behalf of another (their Principal). The Regulations are generally favourable towards agents providing many protections, most of which cannot be contracted out of (even by agreement) whilst an agency arrangement remains in place. A key protection for agents is the right to claim a potentially significant compensation payment from their Principal in most termination scenarios.
As noted in that earlier report, at the time of this case a government consultation, begun under the previous Conservative government, was ongoing to consider whether to bring forward new legislation to stop the Regulations from applying to new agency contracts in Great Britain.
The outcome of that consultation was published on 13 February 2025 and perhaps unsurprisingly the new Labour government has decided not to proceed with this proposal meaning that the Regulations will be retained in their current form without amendment and will continue to apply to new agency contracts in Great Britain which meet the existing criteria under the Regulations.
An interesting footnote to this response is that of the 86 respondents to this consultation only seven were Principals, with the vast majority of respondents being agents understandably keen to retain the Regulations in their current form. Whilst a few Principals did comment that the Regulations did not allow for contracts to be freely negotiated between an agent and principal, there was not considered to be a sufficiently large body of evidence to suggest that this was a major issue or that there was a strong case for change.
This shows the importance for interested parties of taking the time to respond to consultations such as this in order to influence future change and regulation – a timely reminder for both AI developers and copyright holders that the government’s current consultation on potential changes to UK copyright law for AI training purposes closes next week.
HHS’s Proposed Security Rule Updates Will Require Adjustments to Accommodate Modern Vulnerability and Incident Response Issues
In this week’s installment of our blog series on the U.S. Department of Health and Human Services’ (HHS) HIPAA Security Rule updates in its January 6 Notice of Proposed Rulemaking (NPRM), we discuss HHS’s proposed rules for vulnerability management, incident response, and contingency plans (45 C.F.R. §§ 164.308, 164.312). Last week’s post on the updated administrative safeguards is available here.
Existing Requirements
HIPAA currently requires regulated entities to implement policies and procedures to (1) plan for contingencies and (2) respond to security incidents. A contingency plan applies to responses to emergencies and other major occurrences, such as system failures and natural disasters. When needed, the plan must include a data backup plan, disaster recovery plan, and an emergency mode operation plan to account for the continuation of critical business processes. A security incident plan must be implemented to ensure the regulated entity can identify and respond to known or suspected incidents, as well as mitigate and resolve such incidents.
Regulated entities — especially those that have unfortunately experienced a security incident — are familiar with the above requirements and their implementation specifications, some of which are “required” and others only “addressable.” As discussed throughout this series, HHS is proposing to remove the “addressability” distinction, making all implementation specifications that support the security standards mandatory.
What Are the New Technical Safeguard Requirements?
The NPRM substantially modifies how a regulated entity should implement a contingency plan and respond to security incidents. HHS proposes a new “vulnerability management” standard that would require regulated entities to establish technical controls to identify and address certain vulnerabilities in their respective relevant electronic information systems. We summarize these new standards and protocols below:
Contingency Plan – The NPRM would add additional implementation standards for contingency plans. HHS is proposing a new “criticality analysis” implementation specification, requiring regulated entities to analyze their relevant electronic information systems and technology assets to determine priority for restoration. The NPRM also adds new or more specific language to the existing implementation standards, such as requiring entities to (1) ensure that procedures are in place to create and maintain “exact” backup copies of electronic protected health information (ePHI) during an applicable event; (2) restore critical relevant electronic information systems and data within 72 hours of an event; and (3) require business associates to notify covered entities within 24 hours of activating their contingency plans.
Incident Response Procedures – The NPRM would require written security incident response plans and procedures documenting how workforce members are to report suspected or known security incidents, as well as how the regulated entity should identify, mitigate, remediate, and eradicate any suspected or known security incidents.
Vulnerability Management – HHS explained in the NPRM that its proposal to add a new “vulnerability management” standard was meant to address the potential for bad actors to exploit publicly known vulnerabilities. With that in mind, this standard would require a regulated entity to deploy technical controls to identify and address technical vulnerabilities in its relevant electronic information systems, which includes (1) conducting automated vulnerability scanning at least every six months; (2) monitoring “authoritative sources” (e.g., CISA’s Known Exploited Vulnerabilities Catalog) for known vulnerabilities on an ongoing basis and remediating where applicable; (3) conducting penetration testing every 12 months; and (4) ensuring timely installation of reasonable software patches and critical updates.
Stay Tuned
Next week, we will continue Bradley’s weekly NPRM series by analyzing justifications for HHS’s proposed Security Rule updates, how the proposals may change, and areas where HHS offers its perspective on new technologies. The NPRM public comment period ends on March 7, 2025.
Please visit HIPAA Security Rule NPRM and the HHS Fact Sheet for additional resources.
Listen to this post
HIPAA VIOLATIONS?: Health Insurance Company Allegedly Tracks and Shares Private Health Information
Hey, CIPAWorld! The Baroness here. Happy Friday, everyone!
Believe it or not, even health insurance companies are facing litigation for allegedly tracking and sharing consumer information. Just yesterday, Blue Cross Blue Shield of Massachusetts (BCBS) and its subsidiary removed such a case to the District of Massachusetts. Vita v. Blue Cross & Blue Shield of Mass., Inc., No. 1:25-cv-10420 (D. Mass. Feb. 20, 2025).
In the Amended Complaint, Plaintiff Vita claims that she lives in Massachusetts and obtains health insurance from BCBS. She claims that BCBS’s Website, https://www.bluecrossma.org/, offers consumers general information about insurance plan offerings by BCBS and individualized information about consumers’ insurance plans. Notably, Plaintiff states that the website includes a “Find a Doctor” function that enables users to search by condition, specialty, gender, language, and location; a “24/7 Nurse Line” through which consumers can communicate with nurses employed by BCBS; tools that allow consumers to access their insurance information, including services and medications obtained, amounts paid, and benefits available; and the MyBlue patient portal, through which consumers can access their private medical information.
Vita argues that BCBS Website users have legitimate expectations of privacy and that BCBS will not share with third parties their communications with BCBS without consent. She alleges that these expectations are supported by Massachusetts state law and HIPAA, which prohibit healthcare companies from using or disclosing individuals’ protected health information without valid authorization from the individual.
Additionally, Vita references multiple statements in BCBS’s online policies in which it explicitly states that BCBS’s cookies, clear gifs, and other web monitoring technologies do not collect any personally identifiable information. Because of this, Vita claims that healthcare consumers would not anticipate that their communications with BCBS would be intercepted and shared with third parties, like Google, Facebook, Twitter, and LinkedIn for marketing purposes, and that BCBS did not inform consumers of this via a pop-up notification or otherwise.
Despite this expectation, Vita alleges that BCBS’s Website is designed with tracking technology that permits third parties such as Google and Facebook to intercept consumers’ interactions with BCBS, and that the information intercepted includes private health information. Vita claims that BCBS uses or has used tracking technologies such as Google Analytics, Google DoubleClick, Meta Pixel, and others, and that such tracking is injected into the code of almost all of the pages on BCBS’s Website, including the MyBlue patient portal. The Amended Complaint is detailed, going so far as to include screenshots of the code of the Website with portions highlighted to show tracking.
Based on these facts, Vita seeks to represent the following class:
All Massachusetts residents who, while in the Commonwealth of Massachusetts, accessed any portion of the website at bluecrossma.org between three years prior to the date of the filing of the initial complaint in this action and September 29, 2023.
Based on these facts, Vita first alleges that BCBS violated the ECPA, 18 USC § 2511, which prohibits the intentional interception of the content of any electronic communications, as well as HIPAA, which imposes a criminal penalty for knowingly disclosing individually identifying health information to a third party. 42 USC § 1320d-6(a)(3). Second, Vita claims that BCBS violated M.G.L. c. 93A §§ 2, 9, which proscribes unfair competition and unfair or deceptive acts in trade or commerce, by falsely stating that its website does not capture personally identifiable information. Third, Vita brings a cause of action for violating the Massachusetts Right to Privacy Act, M.G.L. c. 214 § 1B, which confers a private right of action to Massachusetts citizens for privacy violations. Vita also brings claims for negligence, breach of confidence, breach of contract, and unjust enrichment.
Because this case was just removed, it is still in its nascent stage. We will be sure to keep you folks updated as the case progresses.
FROM THE VAULT: Society’s Acceptance of the TCPA Reveals its Waning Appreciation for the Freedom of Speech
Wrote this article back in 2015. More pertinent now than ever.
Last Friday marked the 73rd anniversary of the oral argument in the U.S. Supreme Court’s landmark free speech decision in Martin v. City of Struthers. In that case our highest court held that an ordinance preventing people from knocking on one another’s doors to distribute unsolicited pamphlets and circulars was unconstitutional—an impermissible prior restraint on free speech that threatened free society itself.
A government—it was held—cannot substitute its judgment for that of its citizenry and issue a wholesale bar on the delivery of constitutionally protected messages. Yes, some folks might be annoyed by having to come to the door on a Sunday morning to greet a neighbor sharing an unwelcome message of faith, or an unsympathetic political position, but that nuisance must be borne—hopefully as a badge of honor—by all those who wish to live in freedom. As the great Justice Black wrote at the time, “[f]reedom to distribute information to every citizen wherever he desires to receive it” is “vital to the preservation of a free society.” Martin v. City of Struthers, 319 U.S. 141 (1943). Indeed, the “stringent prohibition” against distributing unsolicited pamphlets and circulars to one’s neighbors was held to “serve no purpose but that forbidden by the Constitution, the naked restriction of the dissemination of ideas.”
Flash forward to today. Freedom of speech no longer concerns us, at least not as compared to the freedom not to be bothered—even if ever-so-slightly.
Indeed, we wish to be free from anything we do not admire, or agree with. Free to think only what we want to think and to be free of any who would disagree with us or share an idea we do not immediately relate to. Even the advertisements we view and the news articles we peruse must be tailored to our preferences, as gleaned by the learning computer programs that we happily allow to monitor our every page click, to assure that we are never bothered with something we might not want to see.
Those of us who exalt freedom of self over freedom of expression have the Telephone Consumer Protection Act (“TCPA”) to protect and preserve our most cherished freedom: the freedom to be left alone. Indeed, just yesterday the FCC issued an Enforcement Advisory barring political activists from contacting constituents before their words have ever been formed, and most observers—if there are any—likely think it’s a good thing. Fewer “robocalls” to bother us.
We know—if we ever paused to think about it—that the TCPA, as applied by the FCC, is the single most expansive restriction on Constitutionally-protected speech that has ever been passed in this country’s history.
Indeed, it is the death of free speech in the modern age. For decades America’s Supreme Court has prudently guarded our freedom of expression, holding it sacred against all forms of restrictive legislation. Even an otherwise righteous law might be struck down if it even risked “chilling” protected speech. But all of that is out the window now, it seems. For here we have the TCPA—itself a millennial born in 1991—wielded by an FCC that relishes openly restricting and regulating protected speech.
The FCC is not only chilling speech, it is freezing it solid and then smashing it with a sledgehammer. It is applying the TCPA to restrict all speech—from core political speech to innocuous social banter—making use of a person’s cell phone without their express prior permission. It assumes a cell phone user will not want to receive the caller’s message before the words have ever been spoken. It silences the speaker before his message has ever been conveyed. And failing to comply with the statute’s morass of dense and oftentimes conflicting regulations subjects a speaker to a minimum penalty of $500.00 per call.
Yet, as noted above, the TCPA’s reach is terrifying. The FCC’s enforcement advisory yesterday reminded political candidates that they may not make use of their constituents’ cell phones without complying with the TCPA. Yes, even this sort of key, compelling, core, essential political speech is subject to a prior restraint and restriction as to the manner in which it may be made. The FCC—the agency entrusted to assure timely access to wireless carrier services—is shutting down access to the phone lines even for those delivering the most important forms of protected speech unless its regulations are adhered to and obeyed.
Thus, it is the FCC that now tells you what messages you can receive, and which you cannot. How you can speak, and when, and to whom. And if you fail to comply, they can crush you or your institution with massive penalties. Yet this price does not seem so high if it means that political activists won’t eat up your cell phone minutes, does it?
And so it is that the battle over the constitutionality of the TCPA—now being waged before the DC Circuit Court of Appeals—is nothing less than an inter-generational struggle to define (or re-define) what it means to live in a free society.
Expression vs. Privacy. The freedom to speak vs. the freedom not to listen. Pick your side. There is no middle ground here.
Then again, the chances are good that I already know what side you’re on. You would never have seen this article—much less made it through it—if Google’s preference-mining computer applications hadn’t decided that you’d likely agree with me.
10 years ago…
Much love.
Location Data as Health Data? Precedent-Setting Lawsuit Brought Against Retailer Under Washington My Health My Data Act
An online retailer was recently hit with the first class action under Washington’s consumer health data privacy law alleging that it used advertising software attached to certain third-party mobile phone apps to unlawfully harvest the locations and online marketing identifiers of tens of millions of users. This case highlights how seemingly innocuous location data can become sensitive health information through inference and aggregation, potentially setting the stage for a flood of similar copycat lawsuits.
Quick Hits
An online retailer was hit with the first class action under Washington State’s My Health My Data Act (MHMDA), claiming that the retailer unlawfully harvested sensitive location data from users through advertising software integrated into third-party mobile apps.
The lawsuit alleges that the retailer did not obtain proper consent or provide adequate disclosure regarding the collection and sharing of consumer health data, a term that is defined incredibly broadly as personal information that is or could be linked to a specific individual and that can reveal details about an individual’s past, present, or future health status.
This case marks the first significant test of the MHMDA and could provide a roadmap for litigants in Washington and other states.
On February 10, 2025, Washington resident Cassaundra Maxwell filed a class action lawsuit in the U.S. District Court for the Western District of Washington alleging violations of Washington’s MHMDA. The suit alleged that the retailer’s advertising software, known as a “software development kit,” or SDK, is licensed to and “runs in the background of thousands of mobile apps” and “covertly withdraws sensitive location data” that cannot be completely anonymized.
“Mobile users may agree to share their location while using certain apps, such as a weather app, where location data provides the user with the prompt and accurate information they’re seeking,” the suit alleges. “But that user has no idea that [the online retailer] will have equal access to sensitive geolocation data that it can then exfiltrate and monetize.”
The suit brings claims under federal wiretap laws, federal and state consumer protection laws, and violations of the MHMDA, making it a likely test case for consumer privacy claims under the MHMDA. This case evokes parallels to the surge over the past several years of claims under the California Invasion of Privacy Act (CIPA), a criminal wiretap statute. Both involve allegations of unauthorized data collection and sharing facilitated by digital tracking technologies. These technologies, including cookies, pixels, and beacons, are often embedded in websites, apps, or marketing emails, operating in ways that consumers may not fully understand or consent to.
As we previously covered, hundreds if not thousands of lawsuits relating to similar technologies were brought pursuant to CIPA after a California district court denied a motion to dismiss such claims in Greenley v. Kochava, Inc. Given the parallels and the onslaught of litigation that CIPA entailed, the MHMDA case may set important precedents for how consumer health data privacy is interpreted and enforced in the digital age, similar to the impact CIPA litigation has had on broader privacy practices. Like CIPA, the MHMDA also allows for the recovery of attorneys’ fees, but unlike CIPA (which provides for statutory damages even without proof of actual harm), a plaintiff must prove an “injury” to his or her business or property to establish an MHMDA claim.
Consumer Health Data
As many companies working in the retail space likely know, the MHMDA imposes a host of new requirements for companies doing business in Washington or targeting Washington consumers with respect to the collection of “consumer health data.” The law broadly defines “consumer health data” as any personal information that can be linked or reasonably associated with an individual’s past, present, or future physical or mental health status. The MHMDA enumerates an entire list of data points that could constitute “health status,” including information that would not traditionally be thought of as indicative of health, such as:
biometric data;
precise location information that could suggest health-related activities (such as an attempt to obtain health services or supplies);
information about bodily functions, vital signs, and symptoms; and
mere measurements related to any one of the thirteen enumerated data points.
Critically, even inferences can become health status information in the eyes of the MHMDA, including inferences derived from nonhealth data if they can be associated with or used to identify a consumer’s health data.
For instance, Maxwell’s suit alleges the retailer collected her biometric data and precise location information that could reasonably indicate an attempt to acquire or receive health services or supplies. However, the complaint is light on factual support, alleging only that the data harvesting conducted via the retailer’s SDK could reveal (presumably via inference in most cases) “intimate aspects of an individual’s health,” including:
visits to cancer clinics;
“health behaviors” like visiting the gym or fast food habits;
“social determinants of health,” such as where an individual lives or works; and
“social networks that may influence health, such as close contact during the COVID-19 pandemic.”
Notice and Consent
The suit further alleges that the retailer failed to provide appropriate notice of the collection and use of the putative class members’ consumer health data and did not obtain consent before collecting and sharing the data. These allegations serve as a timely reminder of the breadth and depth of the MHMDA’s notice and consent requirements.
Unlike most other state-level privacy laws, which allow different state-mandated disclosures to be combined in a single notice, the Washington attorney general has indicated in (nonbinding) guidance that the MHMDA “Consumer Health Privacy Policy must be a separate and distinct link on the regulated entity’s homepage and may not contain additional information not required under the My Health My Data Act.” Said differently, businesses in Washington cannot rely upon their standard privacy policies, or even their typical geolocation consent pop-up flows with respect to consumer health data.
Additionally, at a high level, the MHMDA contains unusually stringent consent requirements, demanding that the business obtain “freely given, specific, informed, opt-in, voluntary, and unambiguous” consent before consumer health data is collected or shared for any purpose other than the provision of the specific product or service the consumer has requested, or for any purpose not identified in the business’s Consumer Health Privacy Policy.
Next Steps
The Maxwell lawsuit is significant as it is the first to be filed under Washington’s MHMDA, a law that has already spawned a copycat law in Nevada, a lookalike amendment to the Connecticut Data Privacy Act, and a whole host of similar bills in state legislatures across the country—most recently in New York, which has its own version of the MHMDA awaiting presentation to the governor for signature. The suit appears to take an expansive interpretation that could treat nearly all location data as consumer health data, insofar as conclusions about an individual’s health can be drawn from the data. And, while the MHMDA does use expansive language, the suit appears likely to answer still-lingering questions about the extent of what should be considered “consumer health data” subject to the rigorous requirements of the MHMDA.
As this suit progresses, companies targeting Washington consumers or otherwise doing any business in Washington may want to review their use of SDKs or similar technologies, geolocation collection, and any other collection or usage of consumer data with an eye toward the possibility that the data could be treated as consumer health data. Also, their processors may wish to do the same (remember, the Washington attorney general has made it clear that out-of-state entities acting as processors for entities subject to MHMDA must also comply). Depending on what they find, those companies may wish to reevaluate the notice-and-consent processes applicable to the location data they collect, as well as their handling of consumer rights applicable to the same.