By Any Other Name: Rules Limiting Alternative Pleading in Professional Liability Actions

Most states allow former clients to assert claims against a licensed professional in either tort or contract. The stereotypical tort claim alleges that the professional failed to act in accordance with the standards expected by members of the profession, resulting in damages to the client. The stereotypical contract claim alleges that the professional was given a specific instruction and their failure to act in accordance with that instruction resulted in damages to the client. In a professional liability claim, it is not unusual to see multiple causes of action pleading professional failures, but in many circumstances case law has concluded that there is only one “real” appropriate claim.
In the majority of U.S. jurisdictions, the statute of limitations for a contract claim is longer than the period for a tort claim. Former clients who fail to file a timely malpractice case, or those wishing to supplement a claim of delay in discovering an allegedly negligent act, will often allege a contract claim in the alternative. Often the allegation is that a purported failure to conform to a professional standard of care constitutes a breach of the contract for professional services. In this way, a former client-plaintiff will try to avail themselves of a longer statute of limitations.
The ability of a plaintiff to allege a professional negligence claim as a contract (or vice-versa) may be limited by state-specific doctrines that seek to preserve the separation between tort and contract theories. These doctrines vary in both name and application but share a common goal of differentiating between tort and contract by examining the nature of the claim and the source of the duties giving rise to the claim. This distinction can be the difference between a malpractice claim going to trial or being dismissed as a matter of law. It is therefore important to be aware of whether and how your particular jurisdiction draws the line between claims of professional negligence, claims for breach of a contract, and various alternative legal bases to recover damages from a professional for services rendered.
Montana’s Gravamen Test
In Montana, the statute of limitations for breach of a written contract is eight (8) years, while an action for breach of an oral contract must be commenced within five (5) years. MCA §27-2-202(1); MCA §27-2-202(2). By contrast, the statute of limitations for a negligence claim in Montana is three (3) years. MCA §27-2-204(1). Montana recognizes that a case may involve both breach of contract and negligence claims, but plaintiffs are prohibited from recasting a tort claim as a contract claim solely to take advantage of the longer statute of limitations. Instead, Montana has adopted the Gravamen Test to determine the nature of the claim.
In Montana, the label given to the claim by the plaintiff does not control which statute of limitations applies. Northern Montana Hosp. v. Knight, 248 Mont. 310, 315, 811 P.2d 1276, 1278-79 (1991); Billings Clinic v. Peat Marwick Main & Co., 244 Mont. 324, 341, 797 P.2d 899, 910 (1990). The statute of limitations for contract claims applies only if the alleged breach of a specific provision in a contract provides the basis of the plaintiff’s claims. Collection Professionals, Inc. v. Halpin, 2009 Mont. Dist. LEXIS 688, at *3-5. If the plaintiff claims breach of a legal duty imposed by law that arises during the performance of the contract, the claim is governed by the three-year statute of limitations applicable to negligence actions. Northern Montana, 248 Mont. at 315, 811 P.2d at 1278-79. If doubt exists as to the gravamen of the action, the longer statute of limitations will apply. Billings Clinic, 244 Mont. at 341, 797 P.2d at 910.
Pennsylvania’s Gist of the Action Doctrine
In Pennsylvania, the statute of limitations for a breach of contract action is four (4) years. 42 Pa.C.S.A. § 5525(a)(1). The statute of limitations for a claim sounding in professional negligence is two (2) years. 42 Pa.C.S.A. § 5524(7). Under Pennsylvania’s “gist of the action” doctrine, a party is precluded from recasting breach of contract claims as actions sounding in tort. Bruno v. Erie Ins. Co., 106 A.3d 48, 60 (Pa. 2014); eToll, Inc. v. Elias/Savion Advertising, Inc., 811 A.2d 10, 14 (Pa. Super. Ct. 2002).
The Pennsylvania Supreme Court has clarified this doctrine, holding that the label placed on the claim does not control – the operative question is whether the duty breached is one created by the parties in contract (as in a specific instruction from the client to the professional) or a broader social duty imposed on all practitioners (a standard of care). Bruno, 106 A.3d at 68; Norfolk So. Ry. Co. v. Pittsburgh & West Va. R.R., 101 F. Supp. 3d 497, 534 (W.D. Pa. 2015); see also Phico Ins. Co. v. Presbyterian Med. Servs. Corp., 444 Pa. Super. 221, 663 A.2d 753, 757 (1995) (citing Bash v. Bell Telephone Co., 411 Pa. Super. 347, 601 A.2d 830 (1992)) (“the important difference between contract and tort actions is that the latter lies from the breach of duties imposed as a matter of social policy while the former lie for the breach of duties imposed by mutual consensus.”); accord Simons v. Royer Cooper Cohen Braunfeld, LLC, 587 F. Supp. 3d 209 (E.D. Pa. 2022) (gist of the action doctrine prohibits contract claim based upon alleged failure to comport with professional standard of care).
The Texas Anti-Fracturing Rule
In Texas, the statute of limitations for a claim of professional negligence is two (2) years, and the statute of limitations for claims of breach of contract, breach of fiduciary duty, and fraud is four (4) years. Tex. Civ. Prac. & Rem. Code § 16.003 (two years); § 16.004 (four years).
In a recent suit against an accounting firm, the Texas Supreme Court agreed with the court of appeals that the Anti-Fracturing Rule applied. The Anti-Fracturing Rule limits plaintiffs’ attempts “to artfully recast a professional negligence allegation as something more – such as fraud or breach of fiduciary duty – to avoid a litigation hurdle such as the statute of limitations.” Pitts v. Rivas, 2025 Tex. LEXIS 131, at *1 (Tex. Feb. 21, 2025). The Court cautioned that it is the “gravamen of the facts alleged” that must be examined closely rather than the “labels chosen by the plaintiff.” Id. at *7. If the essence, “crux or gravamen of the plaintiff’s claim is a complaint about the quality of professional services provided by a defendant, then the claim will be treated as one for professional negligence even if the petition also attempts to repackage the allegations under the banner of additional claims.” Id. To survive application of the rule, a plaintiff must plead facts that extend beyond the scope of what has traditionally been considered a professional negligence claim. Id.
Conclusion
The question of whether a case involves an allegation of a failure to observe a general professional duty or a failure to follow a specific instruction can control whether a case is time-barred. Many courts apply the same analysis to determine the “gravamen” of a claim and the applicable statute of limitations, though they refer to the test by different names. It is important to look beyond the text of an opposing party’s pleadings and examine the nature of the claims asserted. Failing to do so may leave you litigating claims that are otherwise time-barred and that could have been eliminated through a proper motion to dismiss.

Imagining Lawyer Malpractice in the Age of Artificial Intelligence

My dear Miss Glory, the Robots are not people. Mechanically they are more perfect than we are; they have an enormously developed intelligence, but they have no soul. 1
This quote appears at the outset of Bunce v. Visual Tech. Innovations, Inc.2 – a recent case involving a lawyer who used ChatGPT to write a legal brief containing hallucinated, or fake, legal cases to support the argument presented to the court. As the court noted, while sanctioning the offending lawyer who presented the false cases to the court, “[t]o be a lawyer is to be human, a tacit prerequisite to comply with Federal Rule of Civil Procedure Rule 11(b)(2).”3
The future is here. Generative Artificial Intelligence (GAI) is fully capable of generating legal briefs, pleadings, demand letters, and other legal correspondence.4 Just open ChatGPT and prompt it to write a demand letter based on some set of facts involving a casualty, and ask it to demand one million dollars. Open AI models like ChatGPT can handle such tasks, and legally trained GAI models can write genuinely strong legal correspondence and briefs.
What is a legally trained GAI model and how does it work?
GAI works by using neural networks to learn patterns and structures within existing data. This allows GAI to generate new and original content based on the prompts or inputs it receives from users. In doing so, GAI effectively mimics the process of human creativity by creating something entirely new from learned information.
GAI models are trained on large sets of data using existing content. This helps the model understand patterns and relationships within that data. This training data typically covers a wide range of variations and examples within a specific domain. And so, for text data, like a legal brief, the model needs a lot of examples of legal briefs, and perhaps pleadings and legal correspondence, too.
Once a model is trained, it can generate new content by sampling from the learned patterns and creating outputs that are similar to the data used in its training but with variations. The model is then “fine-tuned” by being given “feedback” on the outputs it presents in response to the inputs it receives. It then uses this “feedback” to improve its performance by providing more accurate and relevant outputs in the future. In other words, the more training data and feedback the model receives, the better the outputs.
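The training-and-sampling loop described above can be illustrated with a toy model. The sketch below is a deliberately tiny bigram model, not a neural network, and its corpus, function names, and output are invented for illustration only; still, the principle is the one GAI applies at vastly greater scale: learn which tokens tend to follow which, then sample new sequences from those learned patterns.

```python
import random
from collections import defaultdict

# Toy "training data": a tiny corpus of legal-flavored text.
corpus = (
    "the client retained the lawyer and the lawyer drafted the brief "
    "and the lawyer filed the brief"
).split()

# "Training": record which word follows which in the corpus.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def generate(start, length, seed=0):
    """Sample new text from the learned patterns."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = model.get(words[-1])
        if not options:
            break  # no learned continuation for this word
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

A real GAI model replaces these simple frequency counts with billions of learned neural-network parameters, which is also why its errors read as statistically plausible prose rather than obvious gibberish.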
A legally trained model that has been trained on large data sets involving the law, such as legal briefs and legal correspondence, can generate accurate and relevant briefs and letters based on the prompts it receives. Thus, a legally trained GAI model can do things like:

Summarize complex legal cases.
Analyze contracts.
Identify key legal arguments.
Predict outcomes based on similar cases.
Generate efficient first drafts of legal documents.

In a world where lawyers are overworked and have too much going on in their professional lives, legally trained GAI models can make lawyers more efficient by creating first drafts of letters and briefs.
The limitations of using GAI models
It is important to understand that a GAI model is only as good as the training data and feedback it receives. If the training data contains errors, the model is effectively trained to recreate those errors when generating new content. Similarly, if the training data has biases, the outputs of the GAI model are likely to mimic those biases. If biases and errors are not weeded out through the feedback process, they can seriously compromise the outputs generated by the model. Similarly, if the user prompts are not precisely written and/or if the user is not well-trained on how to prompt the model, the results will likely not be what the user is seeking. This can undermine confidence in the GAI model. Therefore, training on how to use the GAI model effectively is critical to its usefulness.
The primary problem lawyers have encountered with non-legally trained GAI models is hallucination: the model generates facts or legal cases that are effectively “false” content in the outputs, even though they might appear true to the user.5 This generally occurs due to limitations in the training data provided to the model. In hallucinating, the model effectively makes assumptions based on the patterns it has learned from the data, even though those patterns do not apply to the context of the situation. The false outputs are simply inaccurate assumptions, which the model does not recognize as inaccurate. The hallucination may statistically fit the prompt, but it lacks the “real world” grounding or “common sense” that a person would use to reject the response. Stated differently, GAI models hallucinate because they lack the “soul” necessary to differentiate truth from fiction.
So, beyond understanding that GAI models hallucinate, where do the pitfalls lie for lawyers who use them? They lie in three main areas: (1) a failure to understand GAI’s limitations; (2) a failure to supervise the use of GAI; and (3) data security and confidentiality concerns.
The failure to understand GAI’s limitations
A lack of understanding regarding GAI’s limitations is the primary theme in many of the reported cases involving lawyers receiving sanctions when using GAI to create legal briefs that contain false facts or cases. The use of GAI requires oversight by the attorney employing it. Even legally trained GAI models require human oversight to confirm that the generated content is accurate. At best, the output must be considered a first draft, to be reviewed and corrected before it is shared with a client, opposing counsel, or a court.
Another limitation of GAI is that the output will generally only be as strong as the prompts that generate it. To that end, it is critical that attorneys using GAI be trained in how to properly prompt the model to get the results they seek. For untrained lawyers using open AI models, there is a high risk that the generated content will not be what the lawyer seeks. More importantly, an untrained lawyer may not even realize that the output contains false content. While GAI can make the lawyer more efficient, most lawyers using it are not interested in accepting poor work product as the cost of that efficiency. And certainly, false content or poor work product may lead to future legal malpractice cases for attorneys who use GAI but fail to account for or understand its limitations.
The failure to supervise
Lawyers have a professional responsibility to supervise those who work for them. If lawyers permit the use of GAI within a law firm, there must be supervision of that usage. For the same reasons that lawyers must be vigilant in their own use of GAI, they must similarly train and supervise those in their employ on the use of GAI as well.
Further, there are many legally trained GAI models available in the marketplace. If a lawyer and their firm decide to use a particular model, it is incumbent upon the lawyer to ask questions and learn about the limitations of the selected model, and train and supervise the use of the model based on those limitations.
Legal malpractice claims frequently arise when lawyers or their staff are not properly trained, resulting in errors that impact the lawyer’s work product. Like any new technology, supervision over the use of GAI is therefore critical to avoiding liability for lawyers and law firms.
The failure to protect client information
Lawyers have a professional responsibility to protect the information and secrets of their clients. When using an open AI model like ChatGPT, any information shared with the model is used by the model as training data. As such, if client information is shared with an open AI model, it is effectively being shared with the public. Even if the user attempts to be vague about the data being shared, if enough information is provided, the model may fill the informational gaps and correctly presume that the information concerns your client, even if you have not told this to the model. In this way, it may appear that the lawyer has revealed information about a client, even if that is not actually the case.
In addition, cybersecurity is critical when using AI models, as the same security issues that affect any internet-connected material are present with these models. Lawyers must account for these cybersecurity risks as the standard of care adjusts to the realities of AI use and requires lawyers to protect client information in the face of those risks.
The use of closed, legally trained GAI models is an attempt to address these risks. But, at the end of the day, the lawyers who use them must take steps to ensure that vendors are following through on their promise to protect client data.
Protection of client information and secrets remains fundamental to the services offered by lawyers to their clients. And certainly, if a client’s information does become public, this presents a potentially significant risk for lawyer liability.

1 Capek, Karel, R.U.R. (Rossum’s Universal Robots): A Fantastic Melodrama in Three Acts and an Epilogue 17 (Paul Selver and Nigel Playfair trans., Samuel French, Inc. 1923).
2 2025 U.S. Dist. LEXIS 36454, at *1 (E.D. Pa. Mar. 13, 2025).
3 Id.
4 But not necessarily legal research, which has led to the problematic usage of ChatGPT by attorneys.
5 The first reported case was Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023). Since then, there have been more than a dozen additional reported cases of lawyers relying on hallucinated case cites and being sanctioned under Rule 11 of the Federal Rules of Civil Procedure.

When Satire Meets Statute: The Onion’s VPPA Class Action

Video Privacy Protection Act (VPPA) class action lawsuits have been on the rise, and the owner of The Onion, a popular satire site, finds itself the subject of a recent one. On May 16, 2025, a plaintiff initiated litigation against Global Tetrahedron, LLC, the owner of The Onion, alleging that the defendant installed the Meta Pixel on its website, which hosts videos for streaming, without user knowledge.
The plaintiff alleges that, unbeknownst to consumers, the Meta Pixel tracks users’ video consumption habits “to build profiles on consumers and deliver targeted advertisements to them.” According to the complaint, the Meta Pixel is configured to collect HTTP headers, which contain IP addresses, information about the user’s web browser, page location, and document referrer (the URL of the previous document or page that loaded the current one). Since the Meta Pixel is reportedly attached to a user’s browser, “if the user accesses Facebook.com through their Safari browser, then moves to theonion.com after leaving Facebook, the Meta Pixel will continue to track that user’s activity on that browser.” The complaint also alleges that the Meta Pixel collects a Meta-specific value called the c_user cookie, which is a unique user ID for users logged into Facebook. By combining these points of data collection, the complaint asserts, The Onion transmits personally identifiable information to Meta.
In a novel approach, the complaint uses screenshots of the plaintiff’s ChatGPT conversation to demonstrate how ChatGPT can help an ordinary user decipher what information is allegedly being disclosed to Meta through the Onion website. According to the screenshots, when the plaintiff asked ChatGPT how to check if a website was disclosing their browsing activity to Meta, the plaintiff was directed to use developer tools to inspect the page’s network traffic. Each internet browser has an integrated developer tool, which allows developers to analyze network traffic, measure performance, and make temporary changes to a page. Any website user can open the developer tool, as ChatGPT directed the plaintiff to do.
Following ChatGPT’s instructions, the plaintiff reportedly opened the developer tool page for the Onion website. Then, the plaintiff uploaded a screenshot of the Onion’s developer tool onto ChatGPT. ChatGPT analyzed the request in the screenshot and broke down the parameters contained within, including Pixel ID, Page Views, URL, and Facebook cookie ID. Many VPPA complaints in recent months have described the technical processes behind tracking technologies, but by using ChatGPT in this complaint, the plaintiff underscores how such large language model tools can help an average website user decipher seemingly complex technical concepts and better understand the data flows from tracking technologies.
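For readers curious what a breakdown like the one ChatGPT reportedly produced might look like, the sketch below parses a hypothetical Meta Pixel request URL. The URL and every value in it are invented for illustration, and the parameter names (`id`, `ev`, `dl`) are assumptions based on commonly observed Pixel traffic, not drawn from the complaint itself.

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical example of a Meta Pixel network request, of the kind a
# user might see in the browser's developer tools; all values invented.
request_url = (
    "https://www.facebook.com/tr/?id=123456789012345"
    "&ev=PageView"
    "&dl=https%3A%2F%2Ftheonion.com%2Fsome-video"
)

# Split off the query string and decode each percent-encoded parameter.
parsed = urlparse(request_url)
params = {k: v[0] for k, v in parse_qs(parsed.query).items()}

print("Pixel ID:", params["id"])  # identifies the site's Pixel account
print("Event:   ", params["ev"])  # the tracked event, e.g. a page view
print("Page URL:", params["dl"])  # the page the user was viewing
```

Parsed this way, even a non-technical user can see the Pixel ID, the event type, and the full URL of the page being viewed, which is the kind of plain-language breakdown the complaint attributes to ChatGPT.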
The case reflects a broader trend in VPPA litigation, in which plaintiffs are challenging the use of third-party tracking technologies on sites that offer any form of video content. As VPPA litigation evolves, this case could peel back another layer of risk for publishers across industries providing video streaming content.

Bidders Beware! Design-Builders Are at Risk Not Only for Defective Design Documents, But Possibly for Defective Bidding Documents, Too

Historically, the Boards of Contract Appeals and courts have reviewed design-builders’ reliance on government-provided conceptual drawings or bridging documents in support of constructive change claims under a reasonableness standard (see M.A. Mortenson Company, ASBCA No. 39978, 93-3 BCA ¶ 26,189). However, in two recent cases, the Spearin doctrine – under which the government warrants that government-provided “design specifications,” if followed, will produce a satisfactory result (see United States v. Spearin, 248 U.S. 132, 136 (1918)) – has been applied by the boards and courts to analyze constructive change claims. Specifically, the conceptual drawings or bridging documents were reviewed to determine whether they constituted design specifications that the government warranted as adequate under Spearin. As set forth below, this alternative approach has yielded mixed results and may inadvertently make recovery from the government more difficult.
Sheffield Korte Joint Venture
In Sheffield Korte Joint Venture, ASBCA No. 62972, 23-1 BCA ¶ 38,417, aff’d, 2025 WL 1466934 (Fed. Cir. May 22, 2025), Sheffield (the design-builder) was awarded a contract with the United States Army Corps of Engineers to design and construct a new Army Reserve Center located near Waldorf, Charles County, Maryland. As part of the design, Sheffield was required to design and construct a stormwater management system to support the new center. The bid documents included conceptual drawings that depicted a centralized stormwater management system (versus a decentralized system). A centralized system is defined as one that collects stormwater in a single feature like a pond, whereas a decentralized system uses multiple, small-scale features to control stormwater and is intended to replicate natural hydrology.
The bid documents also indicated that the depicted stormwater management system was only an approximation, and that the contractor was ultimately responsible for determining the actual size and location of the system. Sheffield based its bid price for this scope of work on the conceptual drawings, which depicted a centralized stormwater management system.
Once performance of the design commenced, it became apparent to Sheffield that a centralized stormwater management system was not feasible under applicable state and local permitting requirements. Instead, Sheffield was required to design and construct a substantially more expensive decentralized stormwater management system. Thereafter, Sheffield submitted a certified claim to the government for its increased costs, which was subsequently denied by the government on the grounds that the Permits and Responsibility Clause, FAR 52.236-7, precluded entitlement.
Rather than argue its reliance on the conceptual drawings depicting a centralized stormwater management system was reasonable for bidding purposes, Sheffield based its claim for recovery on the Spearin doctrine (i.e., the government was responsible for the additional costs of construction since the conceptual drawings depicted a system that would not work for the project). Ultimately, the Armed Services Board of Contract Appeals (ASBCA) (and the Federal Circuit on appeal) denied Sheffield’s claim on the basis that the conceptual drawings were not “design specifications” to which the warranty of constructability applies under Spearin. In denying recovery, both forums relied on Sheffield’s significant discretion to design and build the stormwater management system in accordance with local regulations pursuant to FAR 52.236-7 and the fact that the bidding documents did not mandate a centralized stormwater management system. Importantly, there was no discussion of the implied warranty of the adequacy of the conceptual drawings for providing bidding information, as determined in the Mortenson case.
Balfour Beatty Construction LLC
In Balfour Beatty Construction LLC, CBCA 6750, 2023 WL 428747 (Mar. 31, 2023), aff’d, 2025 WL 798865 (Fed. Cir. Mar. 13, 2025), the Civilian Board of Contract Appeals (CBCA) similarly considered to what extent a 30% bridging document provided to bidders should be considered design or performance specifications under a Spearin analysis for a number of claims submitted by the design-builder. In this case, the CBCA expressly found that Mortenson was not controlling and was distinguishable because, in the board’s view, the bridging documents did not contain any warranty of accuracy for bidding purposes.
One particularly noteworthy claim addressed by the CBCA related to the thickness required for a mat slab. The CBCA found that the bridging documents at issue did not constitute design specifications as to the thickness of the mat slab because the bridging documents merely provided a minimum thickness, and the CBCA felt that Balfour Beatty should have validated the actual thickness that would be required. That ruling by the CBCA was appealed and overturned by the Federal Circuit. According to the Federal Circuit, a statement in the bridging documents indicating that the contractor should “match existing building foundations” was sufficiently definite to constitute a design specification, and, therefore, an implied warranty with respect to the mat thickness applied, which entitled the contractor to recover for the deviation from the specified thickness. So, while the CBCA refused to find any warranty by the bridging documents, the Federal Circuit concluded an implied warranty existed under Spearin.
Key Takeaways
The Sheffield Korte and Balfour Beatty cases demonstrate the challenges to design-builders presented by the application of the Spearin doctrine to adjudicate constructive changes based on faulty conceptual drawings or bridging documents. More importantly, these cases indicate a potential shift in Board and Federal Circuit jurisprudence away from the reasonableness standard articulated in Mortenson (see also Metcalf Construction Company, Inc. v. United States, 742 F.3d 984, 996 (Fed. Cir. 2014)).
The Sheffield Korte and Balfour Beatty cases place a burden on bidders of design-build projects to analyze conceptual drawings or bridging documents provided by the government for accuracy, especially if those documents are relied upon for bidding. Indeed, design-builders may not be able to recover additional costs if those documents are found to be defective absent an additional finding that the documents constitute “design specifications.” Determining whether a document constitutes “design specifications” can be highly technical, time-consuming, and unreasonably expensive for bidders at bid time.
These recent decisions may also inadvertently increase the costs of design-build projects to the government. Wary design-builders may include higher cost contingencies in their bid price to account for the possibility of constructive change claims being denied because conceptual drawings or bridging documents do not constitute “design specifications.” As a corollary, this recent shift to a Spearin analysis on conceptual drawings and bridging documents may increase the burden on the government to respond to Requests for Information during the bidding stage as bidders seek certainty on mandatory versus discretionary design requirements.
Going forward, design-builders pursuing claims under similar circumstances should consider focusing the government’s attention on the fact that there is a material difference between design-build and design-bid-build contracting regarding the assumption of design risk and the application of the Spearin doctrine. In a design-bid-build delivery system, the Spearin doctrine applies where the government warrants that the fully designed plans and specifications are adequate to meet the government’s needs. In the design-build context, the Spearin doctrine should only apply where the government provides a fully developed design specification that the design-builder must follow for the construction of the project. The Spearin doctrine should not apply to conceptual drawings or bridging documents where the primary purpose of those documents is to inform the bidders of the scope of the project and assist them in assembling the price of completing it.
In summary, for a design-build project, there is an implied warranty that the conceptual drawings and bridging documents are adequate for the purposes of submitting a proposal, as the ASBCA concluded in Mortenson. Design-builders should focus on this warranty when making claims for constructive changes. Design-builders should not rely upon the application of the Spearin doctrine for constructive change claims stemming from defective conceptual drawings or bridging documents. Rather, consistent with the ruling in Mortenson, the focus of the analysis should be on fundamental fairness and whether the design-builder reasonably relied upon the conceptual drawings or bridging documents in order for the government to obtain the most competitive price. In that circumstance, the government should assume the risk of providing inaccurate bidding information to the design-builder.
It is not clear whether the decisions in Sheffield Korte or Balfour Beatty signal a shift away from Mortenson, but, as described above, such a shift could prove problematic for a number of reasons. Regardless, in the future, design-builders pursuing the government for defective conceptual drawings and bridging documents would be wise to consider reverting to the Mortenson analysis to support their claims and avoiding reliance on the Spearin doctrine.

HHS Rescinds 2022 EMTALA Guidance on State Law Preemption in Emergency Reproductive Healthcare

On May 29, 2025, the Department of Health and Human Services (“HHS”) rescinded its July 11, 2022 guidance (Ref. QSO-22-22-Hospitals) (the “2022 Guidance”) clarifying how the Emergency Medical Treatment and Labor Act of 1986 (“EMTALA”) should be interpreted in the wake of state policy and legislative responses to the landmark Supreme Court decision Dobbs v. Jackson Women’s Health Organization (2022), which overturned Roe v. Wade (1973), the decision that had legalized abortion in the United States.
Under EMTALA, all Medicare participating hospitals and their dedicated emergency departments are required to provide patients, regardless of their ability to pay or insurance status, with an appropriate medical screening, stabilizing treatment, and transfer, if necessary. The 2022 Guidance directed, among other things, that this law applies irrespective of any state laws or mandates that apply to specific procedures (including conditions for which stabilization may require abortion, such as ectopic pregnancy, severe preeclampsia, and complications arising from probable pregnancy loss). The 2022 Guidance further stated that “[a] physician’s professional and legal duty to provide stabilizing medical treatment to a patient who presents to the emergency department and is found to have an emergency medical condition preempts any directly conflicting state law or mandate that might otherwise prohibit such treatment.”
Today, the 2022 Guidance is no longer in effect; however, the Centers for Medicare and Medicaid Services (“CMS”) affirmed in its June 3, 2025 press release that it “will continue to enforce EMTALA, which protects all individuals who present to a hospital emergency department seeking examination or treatment, including for identified emergency medical conditions that place the health of a pregnant woman or her unborn child in serious jeopardy.” CMS has been explicit that it is working to “rectify any perceived legal confusion and instability created by the former administration’s actions.” Nevertheless, Medicare-participating hospitals and their dedicated emergency departments are left wondering: what happens now when state law, federal law, and physicians’ ethical duties to their patients are in direct conflict?

Burn, Grooming Policy, Burn? Third Circuit Reignites Bearded Firefighter’s Religious Accommodation and Free Exercise Claims

If you have a grooming policy based on safety factors (like no beards for firefighters), does that trump an employee’s request for a religious accommodation? Maybe not. A recent Third Circuit decision, Smith v. City of Atlantic City, et al., addressed this issue and partially reversed a district court’s grant of summary judgment in favor of Atlantic City. The court revived a firefighter’s claims under the Free Exercise Clause and Title VII. The decision offers important guidance on how courts evaluate workplace grooming policies and employers’ obligations to accommodate religious beliefs.
Burning Through the Facts
Alexander Smith, a long-serving firefighter, served as the city’s only assigned air mask technician — a role that required him to maintain SCBA (self-contained breathing apparatus) units but not to enter hazardous environments. As a Christian, Smith requested that the city allow him to grow a beard as an accommodation to his sincerely held beliefs.
The city’s grooming policy mandated that firefighters be clean-shaven while on duty, citing safety concerns related to SCBA seal integrity. However, the policy contained exceptions: (1) captains could allow deviations at their discretion, assuming responsibility for any unfavorable outcome, and (2) administrative personnel, like Smith, were not required to undergo annual SCBA fit testing, despite being subject to the same policy.
Although Smith did not frame his request as all-or-nothing, the city denied it without discussing whether alternative accommodations might satisfy his religious beliefs. Smith was then informed he would be suspended if he refused to shave.
Smith filed a lawsuit claiming that the city’s denial of his requested accommodation was religious discrimination in violation of Title VII, as well as a violation of his right to freely exercise his religion guaranteed by the First Amendment. The district court granted the city’s motion for summary judgment, and Smith appealed.
The Third Circuit Turns Up the Heat
The Third Circuit vacated summary judgment on Smith’s Free Exercise Clause and Title VII accommodation claims, finding that:

The city’s grooming policy was not generally applicable because it allowed formal and informal exceptions that undermined the city’s stated safety interest. This triggered strict scrutiny under the Free Exercise Clause. (The strict scrutiny standard required the city to prove that it had a compelling state interest and that its actions were narrowly tailored or the least restrictive means to achieve that interest.)
The city’s actions were not narrowly tailored to achieve its compelling interest. While safety is a compelling interest, the city did not explore less restrictive alternatives such as fit testing Smith with a beard or continuing to exclude him from suppression duties, as it had done for years.
The city did not demonstrate that granting the accommodation would pose an undue hardship under Title VII. Applying the Supreme Court’s standard from Groff v. DeJoy, the appeals court held that speculative risks do not qualify. The record showed no evidence that Smith’s beard posed an actual operational risk, especially given his role and the infrequency of emergency calls that might require him to wear an SCBA.

Lessons from the Ashes: Takeaways for Employers
Considering this decision, employers should understand:

Religious accommodations must be taken seriously. A blanket policy — even one rooted in safety — may not be sufficient if it allows discretion or contains exemptions.
Interactive process matters. Employers should engage with employees to assess alternative accommodations before denying requests.
The Groff standard is here to stay. “Undue hardship” under Title VII requires a showing of substantial, excessive, or unjustifiable burden — not mere inconvenience or administrative hassle.

If your workplace grooming policies contain any exceptions — or if you receive an accommodation request related to those policies — consult your employment counsel early. Proactive legal guidance is critical to ensure compliance and mitigate risk.

When AI Acts Independently: Legal Considerations for Agentic AI Systems

The emergence of agentic artificial intelligence (AI) systems capable of autonomous planning, execution, and interaction creates unprecedented regulatory challenges. Unlike traditional AI applications that respond to specific prompts, agentic AI systems operate independently: making decisions, achieving goals, and executing complex tasks without the need for constant human guidance or intervention. For organizations leveraging or developing these advanced AI systems, understanding the evolving legal and regulatory landscape is critical for mitigating significant operational, financial, and reputational risks.
Key Technological Developments
Agentic AI systems possess critical capabilities that are distinct from conventional AI applications, including:

Autonomous Planning: Ability to define actions needed to achieve specified goals.
Tool Integration: Direct interaction with external systems, tools and application programming interfaces.
Independent Execution: Multi-step task completion without continuous human intervention.

These capabilities represent a qualitative (not merely quantitative) shift in AI functionality. Real-world applications include autonomous financial trading systems that can adjust strategies based on market conditions, supply chain management platforms that independently negotiate with vendors and optimize logistics, and sophisticated customer service agents that can resolve complex issues across multiple systems without human intervention. Each of these applications creates distinct liability profiles that existing legal frameworks can struggle to address.
Enhanced Opacity Challenges
While “traditional AI explainability” (i.e., the “black box” problem) already presents difficulties, agentic systems can significantly amplify these concerns. The NIST AI Risk Management Framework distinguishes between explainability (understanding how an AI system works) and interpretability (understanding what an AI system’s output means in context), both tools for oversight of AI, and explains how their absence can directly contribute to negative risk perceptions.
Agentic systems present particular opacity challenges:

Complex, multi-step reasoning processes can obscure decision pathways.
External system interactions introduce variables that can go beyond original design parameters.
Autonomous planning capabilities can produce outcomes deviating from those initial parameters.

Liability Framework Implications
In July 2024, the US District Court for the Northern District of California held that Workday, a provider of AI-driven applicant screening tools, could be considered an “agent” of its clients (the ultimate employers of successful applicants). The decision underscores the evolving legal landscape surrounding AI and the responsibilities of AI service providers whose tools directly influence hiring decisions. It also directly relates to agentic AI in several ways: (1) employers delegated traditional hiring functions to Workday’s AI tools; (2) the AI tools played an active role in hiring decisions rather than merely implementing employer-defined criteria; and (3) by deeming Workday an “agent,” the court created the potential for direct liability for AI vendors.
The Workday decision, while specific to employment screening, serves as a crucial precedent highlighting how existing legal principles like agency can be applied to AI systems. It underscores the additional liability concerns associated with AI systems, starting with the potential for direct liability for AI vendors. When considering the even broader capabilities of agentic AI, the liability considerations become more complex and multi-faceted, presenting challenges in areas such as product liability, vicarious liability and proximate causation.
Cross-jurisdictional deployment of agentic AI systems further complicates liability determination. When an autonomous system operating from servers in one jurisdiction makes decisions affecting parties in multiple other jurisdictions, questions of which legal framework applies become particularly relevant. This is especially problematic for agentic financial trading systems or global supply chain management platforms that operate across multiple regulatory regimes simultaneously.
Current Regulatory Environment
While the United States lacks comprehensive federal legislation specifically addressing AI (not to mention agentic AI), several frameworks are relevant:

State-Level Initiatives: Colorado’s AI Act, enacted in May 2024, applies to developers and deployers of “high-risk AI systems,” focusing on automated decision-making in employment, housing, healthcare, and other critical areas. The current political environment, however, creates additional regulatory uncertainty. The House has passed a 10-year moratorium on state-level AI regulations, which could eliminate state-level innovation in AI governance during the most critical period of agentic AI development. This regulatory uncertainty underscores the urgency for organizations to implement proactive governance frameworks rather than waiting for clear regulatory guidance.
International Frameworks: The EU AI Act does not specifically address AI agents, but system architecture and task breadth may increase risk profiles. Key provisions, including prohibitions on certain AI practices deemed unacceptable (due to their potential for harm and infringement of fundamental rights) and AI literacy requirements, became applicable in February 2025.
Federal Guidance: NIST released its “Generative AI Profile” in July 2024 and has identified explainability and interpretability guidance as priorities for connecting AI transparency to risk management.

Human Oversight Considerations
The requirement for human oversight may be inherently incompatible with agentic AI systems, which by definition are designed to act on their own to achieve specific goals. For agentic systems, meaningful human control might require pre-defined boundaries and kill switches rather than real-time oversight, but this approach may fundamentally limit the autonomous capabilities that make these systems valuable. This creates tension between regulatory requirements for meaningful human control and autonomous system operational value.
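The bounded-autonomy approach described above can be made concrete with a small sketch. This is a hypothetical illustration only, not any vendor’s actual framework: every action the agent wants to take is gated against pre-defined boundaries (a delegated tool list and a spending ceiling), and an operator-controlled kill switch provides a defined intervention point.

```python
# Hypothetical sketch of "pre-defined boundaries and kill switches" for an
# agentic system. All names and parameters are illustrative assumptions,
# not a real agent-framework API.
from dataclasses import dataclass


@dataclass
class Guardrails:
    allowed_tools: set   # tools the agent may invoke autonomously
    max_spend: float     # hard budget ceiling for this session
    killed: bool = False # operator-controlled kill switch
    spent: float = 0.0   # running total of authorized spending

    def kill(self) -> None:
        """Defined intervention point: halt all further autonomous actions."""
        self.killed = True

    def authorize(self, tool: str, cost: float) -> bool:
        """Gate a proposed autonomous action against pre-defined boundaries."""
        if self.killed:
            return False  # operator has halted the agent
        if tool not in self.allowed_tools:
            return False  # action is outside the agent's delegated scope
        if self.spent + cost > self.max_spend:
            return False  # action would exceed the budget boundary
        self.spent += cost
        return True


g = Guardrails(allowed_tools={"search", "draft_email"}, max_spend=10.0)
print(g.authorize("search", 2.0))        # True: in scope and under budget
print(g.authorize("wire_transfer", 1.0)) # False: tool was never delegated
g.kill()
print(g.authorize("search", 1.0))        # False: kill switch engaged
```

Even a sketch this small surfaces the tension the text describes: every boundary the operator adds is an autonomous capability the system loses.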
Strategic Implementation Recommendations
Organizations considering agentic AI deployment should address several key areas:

Contractual Risk Management: Implement clear provisions addressing AI vendor indemnification for autonomous decisions, particularly those causing harm or violating laws and regulations. 
Insurance Considerations: Explore specialized cyber and technology insurance products given the nascent state of insurance markets for agentic AI risks. Coverage gaps are likely to persist, however, until the market matures (e.g., traditional cyber policies may not cover autonomous decisions that cause financial harm to third parties).
Governance Infrastructure: Establish oversight mechanisms balancing system autonomy with accountability, including real-time monitoring, defined intervention points, and documented decision authorities.
Compliance Preparation: Consider California’s proposed Automated Decision-Making Technology (ADMT) regulations requiring cybersecurity audits and risk assessments, which suggest similar requirements may emerge for agentic systems.
Cross-Border Risk Assessment: Develop frameworks for managing liability and compliance when agentic systems operate across multiple jurisdictions, including clear protocols for determining applicable law and regulatory authority.

Looking Ahead
The intersection of autonomous decision-making and system opacity represents uncharted regulatory territory. Organizations that proactively implement robust governance frameworks, appropriate risk allocation, and careful system design will be better positioned as regulatory frameworks evolve.
The unique challenges posed by agentic AI systems represent a fundamental shift that will likely expose critical limitations in existing governance frameworks. Unlike previous AI developments that could be managed through incremental regulatory adjustments, agentic AI’s autonomous capabilities may require entirely new legal and regulatory paradigms. Organizations should engage legal counsel early in agentic AI planning to navigate these emerging risks effectively while maintaining compliance with evolving regulatory requirements.

Supreme Court Rejects Heightened Standard for Discrimination Claims from Majority Groups

On June 5, 2025, the Supreme Court of the United States ruled that employees who are part of a majority group do not have a higher evidentiary standard to prove workplace discrimination. The ruling revived a heterosexual woman’s lawsuit alleging she was discriminated against in favor of employees who identify as gay, and potentially opens the door for more discrimination lawsuits from people in majority groups.

Quick Hits

In Justice Ketanji Brown Jackson’s unanimous opinion, the Supreme Court held that plaintiffs who are part of a majority group cannot be held to a higher standard of proof in employment discrimination cases.
In this case, called Ames v. Ohio Department of Youth Services, a heterosexual woman alleged sex and sexual orientation discrimination.
The court’s decision potentially opens the door to more lawsuits from plaintiffs who belong to majority groups.

The Supreme Court unanimously rejected the Sixth Circuit’s “background circumstances” rule, which had required majority-group employees to provide extra evidence of employer discrimination to succeed in Title VII discrimination claims. The ruling means that, going forward, majority-group discrimination claims (or so-called “reverse discrimination” claims) will be analyzed using the same framework as minority-group discrimination claims, where a plaintiff can rely on their own circumstances to prove a prima facie case.
The background circumstances rule “cannot be squared with the text of Title VII or the Court’s precedents,” the Supreme Court stated.
This decision comes against the backdrop of President Donald Trump’s recent executive orders to stop “illegal” workplace diversity, equity, and inclusion (DEI) programs and reshape how federal policy defines sex discrimination and gender.
Background
The plaintiff in this case is a heterosexual woman who was employed by the Ohio Department of Youth Services. She applied and interviewed for a promotion, but the department instead offered her another job that amounted to a demotion with less pay, which she took. The department later hired a gay man to serve in her prior role and promoted a lesbian woman to the position she had sought. The plaintiff alleged employment discrimination based on sex and sexual orientation under Title VII.
The district court granted summary judgment for the department, finding that the plaintiff failed to establish “background circumstances,” which could include statistical evidence or evidence that gay managers made the employment decisions at issue. The plaintiff appealed, and the U.S. Court of Appeals for the Sixth Circuit affirmed the lower court’s ruling under the background circumstances test. The Supreme Court overturned the Sixth Circuit’s ruling.
No Background Circumstances Required Under Title VII
The Supreme Court held that Title VII does not require majority group members to show additional background circumstances. Writing for the court, Justice Jackson noted that, “Title VII’s disparate-treatment provision draws no distinctions between majority-group plaintiffs and minority-group plaintiffs.”
As such, Title VII applies whenever an individual was treated differently because of their race, color, religion, sex, or national origin. Justice Jackson stated that, by establishing “the same protections for every ‘individual’—without regard to that individual’s membership in a minority or majority group—Congress left no room for courts to impose special requirements on majority-group plaintiffs alone.”
Justice Jackson further rejected the Sixth Circuit’s application of the background circumstances rule, as it was contrary to Title VII’s statutory text and the court’s precedent. Such an application “misses the mark by a mile,” Justice Jackson wrote.
In addition, Justice Clarence Thomas wrote a separate concurring opinion in which he criticized judges for creating “atextual legal rules and frameworks.” He argued that “judge-made doctrines,” such as the background circumstances rule, “can distort the underlying statutory text.” Justice Thomas further questioned whether the McDonnell Douglas burden-shifting test for Title VII claims remains a useful tool, potentially inviting future reconsideration.
Next Steps
The Supreme Court’s decision in this case is consistent with guidance from the U.S. Equal Employment Opportunity Commission (EEOC), which released two technical assistance documents to explain what constitutes illegal diversity, equity, and inclusion (DEI) programs in the workplace. In those documents, the EEOC rejected the background circumstances rule. It takes the position that majority-group plaintiffs bear the same evidentiary standard as minority-group plaintiffs.
Going forward, this case may invite additional discrimination claims by members of majority groups. Therefore, employers may wish to review their policies and practices to ensure protection of all employees against discrimination, regardless of whether the employee falls within a minority or majority group.

New York Choice-of-Law Clause Does Not Bar Chapter 93A Claims in Reinsurance Dispute

In Clear Blue Specialty Ins. Co. v. R-SVP II, L.L.C., the Massachusetts Superior Court applied Chapter 93A to the parties’ dispute despite the existence of a New York choice-of-law provision in the parties’ contract. The case centers on a multi-tiered reinsurance arrangement in which the insureds had purchased collateral protection insurance to protect against loan defaults. After borrowers defaulted, resulting in over $125 million in losses, the insurer refused to pay, citing a “pay-as-paid” clause, and brought a declaratory judgment action against the insureds. The insureds responded with counterclaims for breach of contract, breach of the implied covenant of good faith and fair dealing, and Chapter 93A violations.
The Superior Court denied the insurer’s motion to dismiss the Chapter 93A counterclaim for the following reasons:
1. New York Choice-of-Law Clause Does Not Bar 93A Claims
The insurer argued that because New York law governed the contract, Massachusetts law—and by extension, Chapter 93A—could not apply. Rejecting that argument, the Superior Court explained that, while contract interpretation is subject to New York law, the clause did not bar application of Chapter 93A to the parties’ conduct. The Superior Court also noted that, although a Chapter 93A claim based solely on breach of contract generally is barred when the contract includes a foreign choice-of-law provision, the claim here was not based merely on the insurer’s alleged failure to pay—it was based on broader allegations of unfair conduct, including systemic mishandling of claims and reliance on allegedly fraudulent financial instruments. Those claims sounded in tort, not contract, and therefore fell outside the scope of mere contractual interpretation.
2. Alleged Misconduct Falls Under ‘Trade or Commerce’
The Superior Court reiterated that insurance claims handling—including reinsurance claims—falls squarely within “trade or commerce” under Chapter 93A, §1(b). Allegations that the insurer engaged in unfair and bad faith settlement practices, such as unjustified denial of claims and failure to investigate or pay, are sufficient to support a Chapter 93A claim at the pleading stage.
3. Jurisdictional Reach of Chapter 93A
The insurer also argued that its conduct lacked sufficient connection to Massachusetts; however, the Superior Court concluded that whether the alleged unfair acts occurred “primarily and substantially” in Massachusetts (as required for a Section 11 business-to-business claim) is a factual question that could not be resolved on a motion to dismiss. The insureds, headquartered in Massachusetts, alleged that the insurer specifically directed its conduct at them in Massachusetts and that their loss occurred in Massachusetts, which made the application of Chapter 93A plausible.
Clear Blue Specialty Ins. Co. v. R-SVP II, L.L.C. Takeaways
This decision provides clarity on the reach and resilience of Massachusetts’ Chapter 93A in complex commercial insurance disputes.

Reinsurance is not immune from Chapter 93A scrutiny, especially when claims handling affects Massachusetts-based insureds.
Choice-of-law provisions do not automatically shield out-of-state insurers from Chapter 93A liability when their actions target Massachusetts businesses.
Courts may closely scrutinize insurers’ conduct, especially where systemic mismanagement or bad faith is alleged.

Bottom Line for Policyholders and Insurers
For insured parties in Massachusetts, this ruling affirms the protections Chapter 93A affords—even in sophisticated, cross-border reinsurance arrangements. For insurers and reinsurers, the message is clear: unfair claims practices may carry serious consequences under Massachusetts law, regardless of what the contract says about governing law.

Riding on Empty: ‘Stang’ With No Anthropomorphic Characteristics Isn’t Copyrightable Character

The US Court of Appeals for the Ninth Circuit affirmed a district court’s denial of copyright protection for a car that had a name but no anthropomorphic or protectable characteristics. Carroll Shelby Licensing, Inc. v. Denice Shakarian Halicki et al., Case No. 23-3731 (9th Cir. May 27, 2025) (Nguyen, Mendoza, JJ.; Kernodle Dist. J., sitting by designation).
In 2009, Denice Shakarian Halicki and Carroll Shelby Licensing entered into a settlement agreement resolving a lawsuit concerning Shelby’s alleged infringement of Halicki’s asserted copyright interest in a Ford Mustang known as “Eleanor,” which appeared in a series of films dating back to the 1970s. Under the agreement, Shelby, a custom car shop, was prohibited from producing GT-500E Ford Mustangs incorporating Eleanor’s distinctive hood or headlight design. Shortly thereafter, Shelby licensed Classic Recreations to manufacture “GT-500CR” Mustangs, a move Halicki viewed as a breach of the settlement agreement. Halicki contacted Classic Recreations and demanded that it cease and desist from producing the GT-500CRs.
Shelby filed a lawsuit alleging breach of the settlement agreement and seeking declaratory relief. Halicki counterclaimed for copyright infringement and breach of the agreement. Following a bench trial, the district court ruled in Shelby’s favor on both the breach and infringement claims but declined to grant declaratory relief. Shelby appealed.
The Ninth Circuit began by addressing whether “Eleanor” qualified for copyright protection as a character under the Copyright Act. Although the act does not explicitly list characters among the types of works it protects, the Ninth Circuit has recognized that certain characters may be entitled to such protection. The applicable standard, articulated in 2015 by the Ninth Circuit in DC Comics v. Towle, sets forth a three-pronged test, under which the character must:

Have “physical as well as conceptual qualities”
Be “sufficiently delineated to be recognizable as the same character whenever it appears” with “consistent, identifiable character traits and attributes”
Be “especially distinctive” and have “some unique elements of expression.”

The Ninth Circuit concluded that Eleanor failed to satisfy any of the three prongs of the Towle test. As to the first prong, the Court found that Eleanor functioned merely as a prop and lacked the anthropomorphized qualities or independent agency associated with protectable characters. Regarding the second prong, the Court noted that Eleanor’s appearance varied significantly across the films in terms of model, colors, and condition. Under the third prong, the Court found that Eleanor lacked the distinctiveness necessary to elevate it beyond the level of a generic sports car commonly featured in similar films. Thus, the Court concluded that Eleanor did not qualify as a character, let alone a copyrightable one.
The Ninth Circuit next turned to the parties’ settlement agreement. While California law permits the use of extrinsic evidence to aid in contract interpretation, the Court found the language sufficiently unambiguous to render such evidence unnecessary. Notably, the parties did not include “Eleanor” as a defined term in the agreement, and the term was used in varying contexts throughout the document, conveying different meanings depending on the provision. Ultimately, the Court found a clause that specifically restricted Shelby only from producing vehicles incorporating two design elements associated with Eleanor – its hood and headlight design – to be dispositive. Halicki’s broader interpretation, the Court found, would require an unreasonable reading that pieced together unrelated provisions in a manner unsupported by the agreement’s text.
Finally, the Ninth Circuit addressed the denial of Shelby’s request for declaratory relief. The Court found that the district court erred by conflating Shelby’s breach of contract claim with its separate request for a declaration regarding Halicki’s potential claims. Because Shelby sought clarity as to whether its production of a newer, similar vehicle would infringe Halicki’s asserted rights, the Court held that declaratory relief was warranted. Thus, it remanded the case with instructions for the district court to issue a declaration confirming that Shelby’s conduct did not infringe Halicki’s rights.

FLIP OUT: New Complaint Against Humans, Inc. Demonstrates the Risk to App Publishers that Push SMS To Contact Lists

We are always on the lookout for the latest trends in TCPA litigation here at TCPAWorld.com.
Obviously the biggest trend right now is a massive increase in TCPA class actions– up over 100% year over year.
Also seeing a BIG increase in revocation suits.
But one type of case that has ALWAYS been with us, albeit without much fanfare, are suits arising out of app publishers that push SMS messages to contacts in consumer phones.
To highlight just how not new this phenomenon is, it was a central issue in the FCC’s omnibus ruling way back in 2015.
There the Commission held that companies like GroupMe (who?) that automatically sync to a user’s contact list at the push of a button and send messages inviting those contacts to join the app were potentially liable for TCPA violations caused by their users. In those cases GroupMe was viewed as the “initiator” of the message because it was so intertwined with the message’s sending that it could fairly be said to have made the message itself.
On the other hand, companies like YouMail that merely invited users to find contacts of their own to send messages to were deemed to not be the initiator of the texts– even though the texts promoted the YouMail product–because the consumer had selected the contacts to send messages to.
Fast forward to today and we see Humans, Inc. has developed a new app called Flip.
I know nothing about Flip, or Humans, or apps generally, or humans generally– but I DO know TCPA. And it looks to me like Flip may have been violating the TCPA– at least as alleged in a new complaint.
A person named YAHNI PEDIGREE (not sure if that’s a boy’s name or a girl’s name) sued Humans alleging receipt of unwanted messages.
The first message read:
93 people in your community joined Flip today. Refer friends now while invite rewards are up 48%. https://flip.shop/invite-coupons Stop2Quit
Uh oh.
See, that “refer friends” note suggests Humans is inviting trouble.
Now perhaps this app is set up like YouMail where the user has to select who to invite to join the little Flip community. Ok fine. But wait, there’s more.
Plaintiff allegedly responded “stop” to end the messages but another message came hot on its heels:
Flip: Want $35? Share Flip with friends to claim! Tap to invite https://l.flip.shop/a/J6xYi
Not good.
Couple of things could be going on here.

Plaintiff could just be lying (always an option);
Flip could just be really bad at honoring stop notifications; or
(And I think this is the one) Flip has multiple users hitting consumers with invites at the same time.

Now if it is option 3, that is a serious problem. It is akin to a real estate brokerage allowing individual agents to message consumers and then failing to honor a stop request by preventing all agents from sending further messages to that consumer.
Because Flip is identified as the sender here, it has the responsibility to assure that all “friends” doing the referring are prevented from messaging the same consumer after they say stop. How that is done is not my problem– but it is their problem, and they need to solve it because if these allegations are true it is bad news bears town.
Anyhoo, very interesting one here.
Plaintiff has pleaded this ONLY as an internal DNC class case– which is very weak– as apparently he/she did not have a number on the national DNC list. Given the weakness of the claim, Humans might skirt the issue in this one– but it should definitely wise up to revocation needs (again, if the claims are true).

Climate Lawsuit Against German Energy Company Dismissed by Court

Last week, an appellate court in Germany dismissed a long-running lawsuit brought against a major German energy company by a Peruvian farmer for alleged damages stemming from climate change. The court’s reasoning focused on the plaintiff’s inability to prove damages, highlighting how difficult this element of the various climate tort litigations is for plaintiffs–and, indeed, this legal point has featured prominently in a number of defenses to these lawsuits (especially in the United States).
Nonetheless, the environmental groups backing the lawsuit claimed victory–at least to a degree–because the case could be read as establishing the principle that emitters of greenhouse gases could ultimately be held liable for damages attributable to climate change, even if this particular action failed to satisfy the applicable legal standard. While that general principle could be invoked in future litigation, plaintiffs will still need to prove specific damages caused by climate change, which–as this case demonstrates–is a stringent standard to satisfy. Any future plaintiff would have to be selected delicately and deliberately with this standard in mind, and it is not at all clear that such a suitable candidate for a legal action would be found. Simply stated, the court’s finding that the plaintiff in this action–a homeowner whose building could be washed away if a dam formed by a glacier collapses due to warming temperatures–offered insufficient proof to prosecute the claim will likely discourage (to some degree) similar lawsuits in the future.

RWE AG, one of Europe’s top carbon polluters, won dismissal of a case brought by a Peruvian farmer who tried to hold the German energy giant liable for its impact on climate change. An appeals court in Hamm on Wednesday said that while national law allows a single company to be targeted for its share of climate-related damage, not all the necessary requirements were met in this suit against RWE.
www.bloomberglaw.com/…