Litigation Minute: Emerging Contaminants: Minimizing and Insuring Litigation Risk

WHAT YOU NEED TO KNOW IN A MINUTE OR LESS
As the scientific and regulatory landscape surrounding emerging contaminants shifts, so too do the steps that companies can take to minimize and insure against the risk of emerging-contaminant litigation.
The second edition in this three-part series explores ways companies can minimize that risk and discusses potential insurance coverage for claims arising from alleged exposure to emerging contaminants.
In a minute or less, here is what you need to know about minimizing and insuring emerging-contaminant litigation risk.
Minimizing Litigation Risk
As we discussed in the first edition of this series, regulation of emerging contaminants often drives emerging-contaminant litigation. For example, in emerging-contaminant litigation that alleges an airborne exposure pathway, plaintiffs’ complaints often prominently feature information from the US Environmental Protection Agency’s (EPA’s) Air Toxics Screening Assessment (AirToxScreen) and its predecessor, the National Air Toxics Assessment (NATA). AirToxScreen, and NATA before it, is a public mapping tool that can be queried by location, specific air emissions, and specific facilities to identify census tracts with potentially elevated cancer risks associated with various air emissions. Despite these tools’ many limitations, their simplicity and the information they provide have served as a foundation for many civil tort claims.
The takeaway: Because NATA and AirToxScreen use the EPA’s National Emission Inventory (NEI) as a starting point, companies with facilities whose emissions feed into the NEI should carefully consider the implications of their reported emissions. For some companies, it may be appropriate to examine reported emissions and control technologies to determine whether adjustments can be made so that reported emissions better reflect reality on a going-forward basis. In addition, regulatory agencies’ requests for emerging-contaminant sampling and reporting may be made publicly available.
Regulatory compliance is not always an absolute defense in tort litigation, but in most situations, compliance with existing regulations will be relevant to whether a company facing emerging-contaminant litigation met the applicable standard of care. Companies should examine applicable regulations against established compliance efforts and, as appropriate for any given company, consider whether to examine compliance programs more closely for continued improvement or to audit established protocols to substantiate safety.
Insurance Coverage Considerations
Policyholders facing potential liability for claims arising out of alleged exposure to emerging contaminants should consider whether they have insurance coverage for such claims.
Commercial general liability insurance policies typically provide defense and indemnity coverage for claims alleging “bodily injury” or “property damage” arising out of an accident or occurrence during the policy period. While some insurers are now introducing exclusions for certain emerging contaminants (and most policies today have pollution exclusions), the underlying claim(s) may trigger coverage under occurrence-based policies issued years or decades earlier, depending on the alleged date of first exposure to the contaminant and the alleged injury process.
These older insurance policies are less likely to have exclusions relevant to emerging contaminants, and policies issued before 1986 are more likely to have a pollution exclusion with an important exception for “sudden and accidental” injuries, or no exclusions at all. In addition, some courts have ruled that pollution exclusions do not apply to product-related exposures or permitted releases of certain emerging contaminants.
In deciding whether there is potential insurance coverage for claims alleging exposure to emerging contaminants, policyholders should also consider whether they have potential coverage for such claims under insurance policies issued to predecessor companies. If insurance records are lost or incomplete, counsel can often coordinate an investigation, potentially with the assistance of an insurance archaeologist, and may be able to locate and potentially reconstruct historical insurance policies or programs.
The takeaway: Do not overlook the possibility of insurance coverage for claims arising out of alleged emerging-contaminant exposure. To maximize access to potential coverage, policyholders should act promptly to provide notice under all potentially responsive policies in the event of emerging-contaminant claims. Our experienced Insurance Recovery and Counseling lawyers can help guide policyholders through this process.
Our final edition will touch on considerations for companies defending litigation involving emerging contaminants. For more insight, visit our Emerging Contaminants webpage.

New York AG Settles with School App

The New York Attorney General recently entered into an assurance of discontinuance with Saturn Technologies, operator of an app used by high school and college students. The app was designed as a social media platform that helps students track their calendars and events. It also included connection and social networking features and displayed students’ information, including their location and club participation, to other users. According to the NYAG, the company engaged in a series of acts that violated the state’s unfair and deceptive trade practice laws.
In particular, according to the attorney general, although the app said that it verified users before allowing them into these school communities, in fact anyone could join them. Based on the AG’s investigation, the majority of users appeared not to have been verified or screened to block fraudulent accounts, that is, accounts that did not belong to students at the school. This was a concern, the AG stressed, because the unverified users had access to students’ personal information. The AG argued that these actions constituted unfair and deceptive trade practices.
Finally, the AG alleged that the company did not make it clear that “student ambassadors” (who promoted the program) received rewards for marketing the program. As part of the settlement, the app maker has agreed to create and train employees and ambassadors on how to comply with the FTC’s Endorsements Guides by, among other things, disclosing their connection to the app maker when discussing their use of the app.
Putting It Into Practice: This case is a reminder to review apps directed to older minors not only from a COPPA perspective (which applies to those under 13), but also under state consumer protection laws. Here, the NYAG alleged violations stemming from representations the company made about the steps it would take to verify users. The case also signals New York’s expectations for protecting minors when offering a social media platform intended only for that market.

BIG LAW LOSS: TCPA Defendant Loses Bifurcation Effort After Terrible Discovery Objections – Is #BigLaw Inexperience to Blame?

Looks like #biglaw inexperience has cost another TCPA defendant big time.
But let’s try to stay positive.
First, I’m fairly certain I invented the concept of seeking bifurcated discovery in TCPA class litigation.
I know I invented seeking “trifurcated” discovery in TCPA class litigation.
Been doing it since 2011. 
For a long time no other defense counsel even attempted the maneuver. Recently we have seen quite a bit of it. But like so much else in litigation, it’s one thing to make the right move; it’s another thing to win the move. Especially when #biglaw is involved. These guys can’t seem to win anything in TCPAWorld.
So what does bi/trifurcated discovery even mean and why does it matter?
The primary vehicle plaintiffs’ lawyers have to extract large-dollar TCPA settlements is class discovery. They serve massively overbroad demands (stuff like: produce records of every call you’ve ever made, every consent record supporting the right to make those calls, and every account record for every customer who signed up as a result of those calls) in an effort to turn a company inside out and drive it to the settlement table.
For smaller companies these sorts of demands are irritating and invasive, but perhaps not crippling. But for large enterprises the idea of extracting millions of confidential/private client files to hand over to a plaintiff’s lawyer is downright insane.
Now, the rules typically do not allow for this type of discovery, but if defense counsel isn’t VERY careful with objections, they may end up waiving critical protections, and the court may end up issuing an order compelling production of these materials.
But one way to cut off this entire issue is by asking the court to prevent invasive “merits” discovery into class claims until after class issues are decided. (Type 1 bifurcation.) Or to stay all class discovery pending the outcome of a dispositive motion challenge to the named plaintiff’s claim. (Type 2 bifurcation.) Either one of these is a form of “bifurcation” of discovery.
In Bond v. Folsom Insurance Agency, 2025 WL 863469 (N.D. Tex. March 19, 2025), the Defendant (represented by a #biglaw firm that did NOT make my list of top best TCPA lawyers) attempted Type 2 bifurcation, i.e., it sought to stay class discovery until the Plaintiff’s individual claims were resolved. Unfortunately, the defendant had already lost a discovery battle earlier in the case, and the court was not going to allow the belated bifurcation effort to bail the defendant out. So it denied the motion.
Get it?
The defense failed to seek bifurcation at the right time. Then it failed to assert proper objections and arguments to prevent the production of classwide information. Instead it asserted “boilerplate objections” that were rejected by the court.
What a disaster. Shouldn’t have happened.

FTC’s Consumer Protection Agenda Thus Far Under President Trump

As contemplated by FTC defense lawyers in December 2024, the Federal Trade Commission’s operations during the first two months of the second Trump Administration have been chaotic. Unsurprisingly, the policy focus appears to be deregulation, with enforcement focused on bread-and-butter fraud and deception (for example and without limitation, bogus business opportunity offers, unsubstantiated earnings claims, and unlawful debt collection), privacy, telemarketing, big technology moderation, and the protection of competition in labor markets.
Last week, President Trump fired the remaining two Democratic commissioners. Both have stated that they believe their termination is unlawful and may challenge the dismissals judicially. Two Republican commissioners remain to make regulatory, investigation and enforcement-related decisions.
The Federal Trade Commission has traditionally been considered an independent agency. However, President Trump recently issued an Executive Order seeking to bring various federal agencies and financial regulators, including the FTC, under his control. In doing so, the Trump administration seemingly seeks to exert some degree of control over the strategic priorities of those agencies and regulators.
Historically, an FTC commissioner may only be removed by the President for “inefficiency, neglect of duty or malfeasance in office.” In fact, in Humphrey’s Executor v. United States (1935), the Supreme Court ruled that FTC commissioners cannot be removed over policy differences.
Importantly, however, in Seila Law v. CFPB (2020), the Supreme Court held that restricting removal of the Consumer Financial Protection Bureau director to “for cause” only is unconstitutional. Justices Thomas and Gorsuch concurred and criticized the Humphrey’s Executor decision. It is anticipated that, if challenged, the Trump Administration will rely upon the Seila Law decision to support its position that the removal of the FTC commissioners is constitutional.
Many have noticed a considerable shift in consumer protection investigation and enforcement-related activities. A few new enforcement matters have been initiated in 2025, while one or more investigations and lawsuits have been paused. Whether the current slowdown is temporary while the agency aligns its priorities with the new administration’s policies, or an indication of a more significant long-term shift, remains to be seen.
Also noteworthy is a brief recently filed in the Eighth Circuit by the FTC defending the “Click to Cancel” Negative Option Rule. Numerous business groups have filed challenges to the rule in federal court. Many have speculated the “Click to Cancel” Rule would face significant challenges by the Trump administration. The “Click to Cancel” Rule’s misrepresentation restrictions are already effective and the remainder of the rule is supposed to become effective in May 2025.

OCC Eliminates “Reputational Risk” Category from Bank Supervision Criteria

On March 20, the OCC announced that it will no longer treat reputation risk as a standalone category in its supervision of national banks and federal savings associations. The decision marks a dramatic shift in the agency’s risk-based examination framework.
Under the updated policy, OCC examiners are instructed to discontinue separate assessments of reputation risk and instead evaluate any such concerns through other established risk areas—such as operational, compliance, or credit risk—when they present a tangible impact to bank safety, soundness, or fair treatment of customers. OCC staff have been directed to revise examination manuals and related documentation to eliminate references to reputation risk. This change follows the Senate’s introduction of proposed legislation that would prohibit all federal banking agencies from considering reputation risk in supervisory exams.
The concept of reputational risk has been around for decades and involves the risk to a bank’s current or projected financial condition and resilience arising from negative public opinion. The OCC’s exam manual states that “departure from effective corporate and risk governance principles and practices cast doubt on the integrity of the bank’s board and management. History shows that such departures can affect the entire financial services sector and the broader economy.”
Now, according to the OCC, the revised framework is intended to improve clarity and public confidence in the examination process. The OCC emphasized that removing the term does not reduce expectations for sound risk management but rather ensures that supervisory actions are grounded in objective and material risk considerations.
Putting It Into Practice: The OCC’s removal of reputation risk as a standalone category echoes recent comments from Acting Comptroller Hood, who emphasized that the agency will not push banks to debank entire categories of customers without assessing individualized risks (discussed here). We expect further actions from federal regulators as part of a broader shift in supervisory policy and priorities (previously discussed here, here, and here).

CFPB Pushes Forward in Debt Relief Action

On March 13, the CFPB filed a brief in an Illinois federal court, reinforcing its arguments for a $43 million judgment against the founder of a now-defunct debt relief company. The CFPB contends that the company’s founder controlled its deceptive telemarketing operations and should be held personally liable under the Telemarketing Sales Rule (TSR) and the Consumer Financial Protection Act (CFPA).
The lawsuit, originally filed in 2020, alleges that the company engaged in unlawful advance fees and deceptive practices targeting student-loan borrowers. According to the CFPB, the company: 

Misrepresented its services. The company allegedly promised lower student loan payments, full debt forgiveness, and improved credit scores, but often failed to deliver these results. 
Charged illegal upfront fees. Consumers were required to pay fees before receiving any debt relief services, in violation of federal law. 
Failed to provide promised relief. Many consumers paid significant amounts for services that did not produce the advertised benefits. 

In its brief, the CFPB reiterated its request for the full $43 million judgment, which includes $2 million in consumer redress, arguing that the award should be based on total consumer harm rather than net profits. The Bureau also seeks a $41 million civil penalty and rejected claims that its penalty request infringes on the Seventh Amendment right to a jury trial.
Putting It Into Practice: Despite the CFPB’s recent withdrawal of several lawsuits (previously discussed here and here), its decision to proceed with this enforcement action indicates that certain regulatory priorities, including debt relief and Military Lending Act violations (previously discussed here and here), remain intact.

FTC Signals Strong Stance on Civil Investigative Demands

In a March 10 blog post, the new Director of the FTC’s Bureau of Consumer Protection (BCP) reaffirmed the agency’s commitment to enforcing consumer protection laws through Civil Investigative Demands (CIDs).
A CID is a legally enforceable demand requiring recipients to provide requested documents, testimony, reports, or other information. The FTC issues CIDs to entities and individuals it believes may have violated the law, as well as to third parties who may possess relevant information.
The FTC expects full and timely compliance with CIDs, and failure to respond can lead to legal action, including judicial enforcement. While BCP may work with recipients to tailor requests or adjust response deadlines, recipients must initiate such discussions well in advance. Additionally, recipients are generally required to meet with FTC staff soon after receiving a CID. Although this requirement can be waived, the meeting provides a crucial opportunity to raise and address any compliance challenges.
Putting It Into Practice: The new BCP Director’s first blog post since his appointment highlights the FTC’s continued focus on financial institutions and fintech companies that engage with consumers. Businesses and individuals that receive a CID should:

Act Promptly: Track all deadlines and contact the FTC staff identified in the CID to discuss compliance.
Seek Legal Counsel: Consult with experienced legal counsel to ensure appropriate and timely responses.
Engage Cooperatively: Proactively communicate with the FTC, as the agency may consider adjustments to requests or deadlines.


New York Attorney General Proposes Bill to Expand Consumer Protection Law

On March 13, New York Attorney General Letitia James announced the introduction of the Fostering Affordability and Integrity through Reasonable Business Practices Act (FAIR Business Practices Act). The proposed legislation seeks to extend the state’s existing ban on deceptive business practices to also prohibit unfair and abusive practices, aligning New York with 42 other states. 
The bill, introduced in both state Senate and Assembly, would enhance enforcement capabilities for the Office of the Attorney General (OAG) and private consumers, including the ability to seek civil penalties and restitution for UDAAP violations. According to Attorney General James, the legislation is needed to tackle a host of consumer harms, including: 

Subscription cancellations. Preventing companies from making it unreasonably difficult for consumers to cancel recurring payments.
Debt collection abuses. Prohibiting debt collectors from improperly seizing Social Security benefits or nursing homes from suing relatives of deceased residents for unpaid bills.
Auto dealer practices. Prohibiting car dealerships from withholding a customer’s photo identification until a sale is finalized. 
Student loan servicing misconduct. Restricting student loan servicers from steering borrowers into costlier repayment plans. 
Exploitation of limited English proficiency consumers. Addressing deceptive practices targeting non-English-speaking consumers. 
Junk fees and hidden costs. Reducing unnecessary and deceptive charges in various industries, including healthcare and lending. 
Artificial intelligence (AI) scams and online fraud. Strengthening enforcement against AI-driven scams, phishing schemes, and deceptive digital marketing practices. 

The proposal has garnered support from former CFPB director Rohit Chopra and former FTC Chair Lina Khan, both of whom have emphasized the need for stronger state-level enforcement against deceptive and abusive business practices. 
Putting It Into Practice: New York’s proposed legislation is the latest example of a growing trend among states taking a more active role in consumer protection enforcement (previously discussed here and here). This also highlights how some states are proactively responding to the CFPB’s state-level consumer protection recommendations from January, which encourage the adoption of the “abusive” standard (previously discussed here). With ongoing uncertainty surrounding the future of the CFPB, more states are likely to step in to fill the regulatory void by expanding their own consumer protection laws. 

OCC Signals Shift on Crypto and Debanking Under Acting Comptroller Hood

On March 18, Acting Comptroller of the Currency Rodney Hood reiterated the OCC’s commitment to ensuring fair access to banking services, including for cryptocurrency firms. Speaking at a retail banking industry conference, Hood stated that the OCC would not tolerate so-called “debanking” without individualized risk assessments. He emphasized that banks must evaluate businesses—including those in the crypto sector—based on objective criteria rather than categorical exclusions. 
Hood’s remarks signaled several key potential policy shifts: 

Leveling the Playing Field for Crypto Activities. Banks engaging with digital asset companies should be evaluated under the same supervisory frameworks as traditional financial services. 
Firm Risk Management Expectations. While easing entry for crypto-related banking services, banks must still meet core regulatory requirements, including capital, cybersecurity, and BSA/AML compliance. 
No Mandates on Account Closures. Hood reaffirmed that the OCC does not direct banks to open or close specific accounts and that such decisions should reflect each customer’s unique risk profile. 
Fintech Expansion and Regulatory Innovation. The OCC plans to launch a fintech regulatory sandbox and recently granted a new fintech bank charter, the first in five years, as part of broader efforts to encourage responsible fintech innovation. 

Putting It Into Practice: The OCC recently clarified that banks are authorized to provide crypto custody services, hold stablecoin reserves for issuers, and participate in blockchain networks to process and validate payments, including stablecoin transactions. These developments, along with Hood’s comments, reflect a broader policy shift under the second Trump Administration favoring cryptocurrency adoption and challenging allegedly politically motivated banking restrictions (previously discussed here and here). In addition, Hood’s comments on debanking follow efforts by states such as Florida and Tennessee to tackle perceived “debanking” of consumers with conservative ideologies (previously discussed here and here).

Virginia Moves to Regulate High-Risk AI with New Compliance Mandates

On February 20, the Virginia General Assembly passed the High-Risk Artificial Intelligence Developer and Deployer Act. If signed into law, Virginia would become the second state, after Colorado, to enact comprehensive regulation of “high-risk” artificial intelligence systems used in critical consumer-facing contexts, such as employment, lending, housing, and insurance.
The bill aims to mitigate algorithmic discrimination and establishes obligations for both developers and deployers of high-risk AI systems. 

Scope of Coverage. The Act applies to entities that develop or deploy high-risk AI systems used to make, or that are a “substantial factor” in making, consequential decisions affecting consumers. Covered contexts include education enrollment or opportunity, employment, healthcare services, housing, insurance, legal services, financial or lending services, and decisions involving parole, probation, or pretrial release. 
Risk Management Requirements. AI deployers must implement risk mitigation programs, conduct impact assessments, and provide consumers with clear disclosures and explanation rights. 
Developer Obligations. Developers must exercise “reasonable care” to protect against known or foreseeable risks of algorithmic discrimination and provide deployers with key system usage and limitation details. 
Transparency and Accountability. Both developers and deployers must maintain records sufficient to demonstrate compliance. Developers must also publish a summary of the types of high-risk AI systems they have developed and the safeguards in place to manage risks of algorithmic discrimination. 
Enforcement. The Act authorizes the Attorney General to enforce its provisions and seek civil penalties of up to $7,500 per violation. 
Safe Harbor. The Act includes a safe harbor from enforcement for entities that adopt and implement a nationally or internationally recognized risk management framework that reasonably addresses the law’s requirements. 

So how does this compare to Colorado’s law? Virginia defines “high-risk” more narrowly, limiting coverage to systems that are a “substantial factor” in making a consequential decision, whereas the Colorado law applies to systems that serve as a “substantial” or “sole” factor. Colorado’s law also includes more prescriptive requirements around bias testing and impact assessment content, and provides broader exemptions for small businesses. 
Putting It Into Practice: If enacted, the Virginia AI law will add to the growing patchwork of state-level AI regulations. In 2024, at least 45 states introduced AI-related bills, with 31 states enacting legislation or adopting resolutions. States such as California, Connecticut, and Texas have already enacted AI-related statutes. Given this trend, it is anticipated that additional states will introduce and enact comprehensive AI regulations in the near future. 

New Ohio Transparency Pricing Rules for Hospitals Come with Limits on Targeted Advertising

Starting April 3, Ohio hospitals will have to navigate new requirements under House Bill 173. The law mandates greater transparency in healthcare pricing. It also includes rules on selling, or engaging in targeted advertising with, personal information hospitals collect from price estimator tools (discussed in more detail below). The law applies to hospitals in Ohio, defined as any facility providing inpatient medical services for periods longer than twenty-four hours.
Transparent pricing for services
HB 173 requires hospitals to provide consumers with public pricing information for all hospital items and services. Hospitals need to create a digital list of all standard charges for their services. This list must be easy to access, free of charge, and cannot require any personal information from the user. These provisions are designed to help patients understand how much they will have to pay for medical services. Hospitals also have to offer information about “shoppable services,” i.e., services that can be scheduled in advance.
To meet this transparency requirement, hospitals either must provide a list of shoppable services, or provide an internet-based price estimator tool that helps patients estimate costs for these types of procedures.
Targeted advertising
For hospitals that decide to use a price estimator tool, there are restrictions on how personal information the tool collects can be used. Specifically, the law prohibits hospitals from using personal information collected through the tool for targeted advertising. The law defines targeted advertising as displaying an ad that is selected based on personal data obtained from the use of a hospital’s internet-based price estimator tool by a person in Ohio. This means that hospitals cannot show consumers specific ads based on the information a person provides to estimate healthcare costs. Hospitals are also not allowed to sell personal information collected from price estimator tools. While “sell” is not defined under the law, it is most likely to be interpreted closer to the HIPAA definition than to those in state consumer privacy laws: “sell” under HIPAA means direct or indirect remuneration in exchange for PHI.
The law provides specific exclusions for what is considered targeted advertising. Hospitals can still advertise based on a user’s direct request for information or their activities on the hospital’s own websites. Ads that are shown based on the context of a user’s search or visit are also excluded. Additionally, using data to measure how effective ads are is not considered targeted advertising. However, covered entities must continue to be mindful of OCR’s guidance with respect to the use of tracking technologies as well.
Putting it into Practice: Hospitals in Ohio may need to adopt new practices to remain compliant with the law. This includes making sure their websites provide easy-to-find pricing information for patients. Additionally, hospitals should confirm personal information from price estimator tools isn’t used for targeted advertising. 

Québec’s Restrictive Approach to Biometric Data Poses Challenges for Businesses Working on Security Projects

The recent decision by the Commission d’accès à l’information du Québec (CAI) regarding a popular grocer’s biometric data project in Quebec has far-reaching implications for other businesses considering or currently using biometric technologies. This pivotal decision not only highlights the CAI’s stringent approach to privacy protection but also sets a significant precedent for any company utilizing or considering utilizing biometric technologies in Quebec. Businesses will want to closely monitor developments to ensure compliance with Quebec’s privacy laws and adapt their practices accordingly.

Quick Hits

The CAI emphasized the broad interpretation of “identity” and “verification” under Quebec’s privacy laws.
The decision highlights the quasi-constitutional nature of privacy protection in Quebec.
The CAI emphasized that if consent involving the capture and comparison of biometric data for identification purposes cannot be obtained, a project—even one focused on security—may not be approved in Quebec.

A prominent grocery store in Quebec proposed implementing a biometric data bank for facial recognition to combat theft and fraud in its stores. The system aimed to identify individuals involved in shoplifting or fraud by comparing surveillance footage with a database of biometric data. However, the CAI’s investigation focused on the project’s compliance with Quebec’s privacy laws, specifically the Act respecting the protection of personal information in the private sector and the Act to establish a legal framework for information technology.
Distinction Between Verification and Identity
A critical aspect of the CAI’s decision was the broad interpretation of “identity” and “verification” under Quebec’s privacy laws. The CAI determined that the grocer’s facial recognition system constituted a form of identity verification, as it involved capturing and comparing biometric data to identify individuals. This interpretation means that any process involving the capture and comparison of biometric data for identification purposes requires explicit consent from the individuals concerned, as mandated by Article 44 of the Act to establish a legal framework for information technology.
The CAI rejected the grocer’s argument that their system did not constitute identity verification because it did not confirm the exact identity of every individual entering the store but rather identified those who matched the biometric profiles of known offenders. The CAI clarified that the act of identifying individuals based on biometric data, even if it is to determine if they belong to a specific group (e.g., known shoplifters), still falls under the category of identity verification.
Explicit Consent Requirement
The CAI highlighted that under Article 44 of the Act to establish a legal framework for information technology, any process that involves the verification of identity through the capture and comparison of biometric data requires the explicit consent of the individuals concerned. The CAI noted that the grocer’s project did not plan to obtain such explicit consent, thereby violating the legal requirements. This requirement for explicit consent is a critical point for other businesses to consider. Any business using biometric technologies may want to confirm that it has obtained explicit consent from individuals before collecting and using their biometric data. Failure to do so could result in significant legal repercussions and potential prohibitions on the use of such technologies.
Quasi-Constitutional Nature of Privacy Protection
The CAI’s decision highlights the quasi-constitutional nature of privacy protection in Quebec. Privacy laws in Quebec are designed to offer robust protection to individuals, and the CAI has broad powers to enforce these laws. This means that businesses may want to be particularly diligent in their compliance efforts, as the CAI is likely to take a restrictive approach to the use of biometric data and other sensitive personal information.
Next Steps
The CAI’s decision on the grocer’s biometric data project has significant implications for other businesses using biometric technologies. It highlights the necessity of strict adherence to privacy laws, especially when handling sensitive biometric data. Specifically, the broad interpretation of “identity” and “verification,” the explicit consent requirement, and the quasi-constitutional nature of privacy protection in Quebec all give businesses cause to be diligent in their compliance efforts, including by obtaining explicit consent from individuals before collecting and using biometric data. The decision serves as a critical reminder that privacy protection is taken very seriously in Quebec, and given the CAI’s broad powers, businesses can expect more restrictive decisions in the future.