In TCPA Case, SCOTUS Rules District Courts Are Not Bound by Final FCC Orders

Key Takeaways:

The U.S. Supreme Court has ruled that the Hobbs Act does not require district courts in civil enforcement proceedings to follow federal administrative agencies’ legal interpretations of federal statutes.
This ruling is a logical extension of Loper Bright, which, in overturning Chevron deference, expanded the judiciary’s power to review and reject agency interpretations of federal statutes.
Justice Kavanaugh, writing for the 6-3 majority, determined it would be unfair to preclude judicial review by district courts and announced a “default rule” that applies where the statute neither expressly allows nor prohibits judicial review in the context of enforcement proceedings.
Justice Kagan, joined by Justices Jackson and Sotomayor, vigorously dissented, asserting that the ruling effectively rewards parties that “intentionally or negligently forgo[] Hobbs Act review.”

In June 2024, the U.S. Supreme Court decided Loper Bright Enterprises v. Raimondo, which, in overturning Chevron deference, expanded the judiciary’s power to review and reject interpretations of statutes adopted by federal administrative agencies. Last week, the Court’s majority articulated a logical extension of Loper Bright, holding, over a vigorous dissent, that the Administrative Orders Review Act — usually referred to as the Hobbs Act, 64 Stat. 1129 — does not require district courts in civil enforcement proceedings to follow an agency’s legal interpretation of a federal statute. Instead, this ruling in McLaughlin Chiropractic Associates, Inc. v. McKesson Corp., et al. held that district courts “must determine the meaning of the law under ordinary principles of statutory interpretation, affording appropriate respect to the agency’s interpretation.”[1]
McLaughlin involved a Federal Communications Commission (FCC) order interpreting the Telephone Consumer Protection Act (TCPA), 47 U.S.C.A. § 227, specifically the TCPA’s prohibition on unsolicited advertisements sent by fax to a “telephone facsimile machine.” McLaughlin claimed, on behalf of a class, that McKesson had violated the TCPA by faxing unsolicited advertisements that did not contain an opt-out notice required by the statute. Most class members received those faxes via online fax services; only a few received them via traditional fax machines. The district court did not differentiate and certified a class of all fax recipients.
The Amerifactors Order
But, while the McLaughlin litigation was pending, a third party, wholly uninvolved in the litigation, petitioned the FCC for a declaratory ruling as to whether the TCPA applied to faxes received through online fax services. The FCC answered “no” in In re Amerifactors Financial Group, LLC, interpreting the term “telephone facsimile machine” to exclude online fax services.[2] The McLaughlin district court found the FCC’s Amerifactors Order to be binding, in that the court had no authority to “question the validity of FCC final orders,” which are, under the Hobbs Act, “subject to the exclusive review of the court of appeals in pre-enforcement suits.” McLaughlin, 2025 WL 1716136 at *3 (internal quotations omitted). The district court, therefore, granted summary judgment in favor of McKesson on claims involving online faxes and, because only twelve class members remained, decertified the class. Id.
The Ninth Circuit agreed that the district court was “bound by” the Amerifactors order and affirmed. The Supreme Court, however, granted certiorari to “decide whether the Hobbs Act required the District Court to follow the FCC’s legal interpretation of the TCPA.” Id. at *4. Justice Brett M. Kavanaugh, writing for a 6-3 majority, held that the answer is “no.”
A “Default Rule” Allowing Judicial Review When Not Expressly Prohibited
The Hobbs Act provides for pre-enforcement judicial review of FCC orders (and orders from other listed agencies). A party that disagrees with an agency order may file a petition in a federal court of appeals within 60 days of the order. The Hobbs Act provides that “[t]he court of appeals…has exclusive jurisdiction to enjoin, set aside, or suspend (in whole or in part), or to determine the validity of … all final orders of the [FCC].” 28 U.S.C. § 2342(1). The problem, the Supreme Court wrote, is what happens if nobody does this and the 60 days have passed. Are all federal courts then required to accept the agency’s interpretation, even if they find it incorrect or even irrational? Unsurprisingly, in the wake of Loper Bright, the Court answered in the negative.
The Court found it unfair that judicial review should be precluded and announced a “default rule” that applies where the statute neither expressly allows nor expressly prohibits judicial review in the context of enforcement proceedings: “In an enforcement proceeding, the district court must independently determine for itself whether an agency’s interpretation of a statute is correct,” including reviewing “whether the rule or order was arbitrary and capricious under the APA or otherwise was unlawful.” Id. at *6 n.2. The Court remanded McLaughlin to the District Court for its independent determination as to whether the FCC’s decision excluding online faxes from the TCPA is correct.
A Cautionary Dissent
But Justice Kagan’s dissent — with Justices Jackson and Sotomayor joining — asked, what about the Hobbs Act’s language to the effect that the courts of appeals have “exclusive jurisdiction” to “determine the validity of” FCC or other agency orders? Id. at *17. Doesn’t the plain language of the statute require that aggrieved parties seek review in a court of appeals within 60 days or forever hold their peace? Isn’t the “default rule” just made up? Why should we reward parties that “intentionally or negligently forgo[] Hobbs Act review”? Id. at *14 n.1. Do we really want, for example, plutonium shippers to scoff at Atomic Energy Commission regulations until (or unless) an enforcement action is brought, and only then challenge the regulation? Justice Kavanaugh spent at least ten pages rebutting arguments made by McKesson, the Government, and the dissent. The bottom line is that the majority found that none of these arguments outweighed the unfairness of precluding a party, who may well have been unaware of the agency’s ruling, from challenging its validity when that ruling is actually being applied to it.
After Loper Bright, the result in McLaughlin should have come as no surprise. Parties should consider the doors for challenging many administrative rules and regulations to be wide open.
[1] McLaughlin Chiropractic Associates, Inc. v. McKesson Corp., et al., No. 23-1226, 2025 WL 1716136 at *6 (June 20, 2025); https://www.supremecourt.gov/opinions/24pdf/23-1226_1a72.pdf
[2] In re Amerifactors Financial Group, LLC, 34 FCC Rcd. 11950 (2019); https://docs.fcc.gov/public/attachments/DA-19-1247A1_Rcd.pdf

United States: Senators Unveil Crypto Market Structure Principles in Lead-up to the Senate’s Version of the CLARITY Act

On the heels of the House Financial Services Committee’s introduction of the CLARITY Act, Republican senators who serve on the Senate Banking Committee introduced their “Crypto Market Structure Principles” (the Principles) to establish a “baseline” for negotiating the Senate’s version of the market structure bill. Shortly after releasing the Principles, the Digital Assets Subcommittee of the Senate Banking Committee held a hearing on market structure.
Despite this forward progress, it is unclear when the Senate will put pen to paper on a market structure bill. House Financial Services Chair French Hill has indicated that he would like to pass the CLARITY Act alongside stablecoin legislation, with the support of House Majority Whip Tom Emmer, who stated that “market structure is essential to any congressional action on digital assets. I expect the GENIUS Act has a path in the House, so long as it’s accompanied by the CLARITY Act.”
Despite the House’s approach, the Senate and President Trump are calling on the House to pass a clean version of the GENIUS Act as soon as possible so the stablecoin legislation can move forward irrespective of the status of the market structure bill. We expect to see whose vision for timing wins out in the coming weeks.
In the meantime, the Senate is exploring market structure legislation of its own. According to Senator Cynthia Lummis, the Principles will “ensure the US remains at the helm of global financial advancement.” The Principles state that legislation should distinguish between digital asset securities and digital asset commodities and clearly allocate regulatory authority to avoid creating an “all-encompassing regulator.” Themes of the Principles include fostering innovation, establishing clear guidance to enable financial institutions to operate in the digital asset space with regulatory certainty, and protecting customers.

UK Data Use and Access Bill Becomes Law

The UK’s major post-Brexit reform of the UK General Data Protection Regulation (UK GDPR), the Data Use and Access Act (DUAA), became law on 19 June 2025.
The DUAA had a long gestation in the form of two previous draft laws, the Data Protection and Digital Information Bills No. 1 and 2, the second of which failed when the 2024 UK general election was called. The new government resurrected most elements of the previous proposals as the DUAA.
The UK Information Commissioner’s Office has published guidance on the effects of the DUAA for businesses, but some of the more eye-catching changes include:

Of particular interest to businesses considering AI solutions – the removal of restrictions on use of personal data for automated decision-making, as long as there are some safeguards in place.
Consent no longer required to set statistical and functionality cookies.

The wider impact of the DUAA on UK–EU data transfers remains to be seen, with the EU due to review its UK adequacy decision by the end of 2025. However, as the EU has announced its own intention to reform the GDPR, more change could be on the way.

E-Verify Begins Notifying U.S. Employers of Terminations Under the CHNV Parole Program

The U.S. Department of Homeland Security (DHS) may exercise its authority to terminate parole or other humanitarian programs and revoke Employment Authorization Documents (EADs) at any time. 
Revoked EADs may still appear facially valid.
E-Verify will no longer issue Case Alerts to notify employers of revoked EADs.
E-Verify employers must use Form I-9, Supplement B to reverify affected employees and complete the reverification process within a reasonable period of time.

DHS recently terminated humanitarian parole and work authorization for nationals of Cuba, Haiti, Nicaragua and Venezuela (CHNV). As a result, affected individuals have received direct correspondence from DHS informing them of the termination of their parole and revocation of their EADs. Importantly, some revoked EADs may still appear facially valid for a period of time following revocation. 
On June 20, 2025, E-Verify issued updated guidance intended to put employers on notice of their obligations to immediately identify any current employees whose work authorization may have been revoked and to complete reverification of those employees’ work authorization within a reasonable time.
Previously, E-Verify issued “Case Alerts” to notify employers of EADs that had been revoked by DHS. Under the new guidance, Case Alerts will no longer be used. Instead, E-Verify employers are now responsible for regularly generating Status Change Reports to identify cases involving revoked EADs. Data concerning employees whose work authorization was revoked between April 9 and June 13, 2025, became available in the E-Verify system on June 20. 
What E-Verify Employers Need to Know

E-Verify employers must promptly use Form I-9, Supplement B to reverify employees identified in the Status Change Report as having a revoked EAD, completing the process within a reasonable amount of time.
Employers must follow up on all entries in the Status Change Report and reverify affected employees using Form I-9, even if the EAD appears active.
Employees may still be authorized to work based on an alternative status and may provide other acceptable Form I-9 documentation to demonstrate employment authorization.

Form I-9 Reverification for E-Verify Employers

Affected employees must provide unexpired documentation from List A or List C of the Lists of Acceptable Documents. 
Employers should not reverify List B identity documents. 
Employers may not accept revoked EADs based on the Status Change Report, even if that EAD appears to be unexpired. 
Employers must allow employees to choose which acceptable documentation to present for reverification.
Employers should not create a new E-Verify case when completing the reverification process for affected employees.

Compliance Reminder
E-Verify employers should be aware that a failure to reverify potentially affected employees within a reasonable period of time may lead to liability. For more information, please contact the Barnes & Thornburg attorney with whom you work.

United Kingdom: UK Crypto Regulation: Regulated Activities

The UK is quickening the pace on the new crypto regulatory regime. The Financial Conduct Authority (FCA) published three papers in quick succession in May 2025: a discussion on key policy positions (DP25/1) and two consultations on detailed rules (CP25/14 and CP25/15). This blog focuses on DP25/1. Please see our upcoming separate blogs on the other proposals.
The FCA intends to regulate not only UK cryptoasset trading platforms but also certain overseas platforms. Any overseas cryptoasset trading platform that services retail customers in the UK on a cross-border basis will need to be authorised by the FCA and will need to set up a UK physical presence in order to obtain authorisation. It is currently not clear in what circumstances an overseas crypto exchange would be considered to have retail customers in the UK – e.g. whether there would be look-through to the ultimate customers if the exchange itself services only institutional intermediaries which have underlying retail customers. 
Cryptoasset intermediaries that buy/sell cryptoassets will also need to obtain authorisation. Further, if they wish to service retail customers, the cryptoasset in question must be admitted onto at least one UK authorised cryptoasset trading platform. This means the intermediary’s business model would depend on factors outside its control, i.e. whether there is any authorised cryptoasset trading platform that happens to have the relevant cryptoasset listed on its platform. This could present significant challenges, particularly at the start of the regime, when trading platforms themselves are still applying for authorisation.
While the FCA prefers to ban cryptoasset lending and borrowing for retail customers, it leaves the door somewhat open by also exploring an alternative – allowing retail access but with enhanced conduct rules on intermediaries (e.g. requiring assessment of customer creditworthiness). Given the importance of lending/borrowing in the current crypto ecosystem, an absolute ban on retail access would likely have significant consequences. It remains to be seen where the final determination will land.
For cryptoasset staking, one key proposal is to make the staking firm liable for failures of its third-party service providers (e.g. those providing technology to the firm). This could have a significant impact on staking firms (e.g. they may need to reconsider their arrangements with third-party service providers).

UK Data Act 2025: Key Changes Seek to Streamline Privacy Compliance

The UK’s Data (Use and Access) Act 2025 (the Act) officially came into law on June 19.
The Act seeks to modernize the UK’s data protection and e-privacy regimes. It aims to help support the economy, improve public services, and make everyday life and business compliance easier by encouraging secure data sharing between consumers and third parties.
Updates to Current Legislation
The Act introduces amendments to the UK General Data Protection Regulation (GDPR), the Data Protection Act 2018, and the Privacy and Electronic Communications Regulations 2003, impacting areas such as legitimate interests, direct marketing, data subject access requests (DSARs), and automated decision-making, notably:

A new lawful basis for data processing in the form of “recognized legitimate interests.” These are specific types of processing activities that are automatically considered lawful, for example, fraud detection and prevention, information security, crime prevention, and public health and safety. 
Relaxed rules around automated decision-making and cookie consent. Notably, explicit consent will no longer be required for certain types of cookies, including analytics, site optimization, and website functionality (see the illustrative sketch after this list). With respect to automated decision-making, prior rules regarding individual rights not to be subject to decisions based solely on automated processing have now been relaxed to apply only when the decision involves special category data such as health, race, religion, or biometric data. 
Broader flexibility in connection with data subject access requests. In practice, these changes largely reflect the existing guidance of the Information Commissioner’s Office (ICO), which many controllers have followed in recent years. This includes codifying the requirement for the controller’s search for personal data concerning the data subject to be (no more than) a “reasonable and proportionate search.”
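
For illustration only, the relaxed cookie rules can be sketched as a simple consent-policy table. This is a minimal sketch under our own assumptions: the category names, config shape, and helper function are hypothetical constructs, not terms from the Act (which also still expects users to receive clear information and an opt-out mechanism, not modeled here):

```python
# Hypothetical cookie-consent policy reflecting the Act's relaxed opt-in rules.
# All names here are illustrative assumptions, not statutory terms.
COOKIE_POLICY = {
    "strictly_necessary": {"consent_required": False},  # exempt before and after the Act
    "functionality":      {"consent_required": False},  # newly exempt under the Act
    "analytics":          {"consent_required": False},  # newly exempt (statistics/site optimization)
    "advertising":        {"consent_required": True},   # still requires prior opt-in consent
}

def may_set_cookie(category: str, user_consented: bool) -> bool:
    """Return True if a cookie in this category may be set for this user."""
    policy = COOKIE_POLICY.get(category)
    if policy is None:
        return False  # unknown categories: fail closed pending legal review
    return not policy["consent_required"] or user_consented

assert may_set_cookie("analytics", user_consented=False)        # permitted post-Act
assert not may_set_cookie("advertising", user_consented=False)  # still needs opt-in
```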

Impact on Organizations
For financial services organizations, the Act may streamline their ability to process data without always needing a legitimate interests assessment (LIA), for example in connection with fraud prevention, IT security, intra-group administration, and direct marketing. 
The Act may reduce several administrative burdens that prior UK privacy laws placed on all organizations by removing opt-in consent requirements for functional and analytics cookies used on websites, potentially offering greater flexibility for data subject access requests, and reducing the requirement for legitimate interest assessments in certain cases. 
The Act also lays the foundation for data initiatives that would enable data portability in certain key sectors, including transport, finance (outside of retail banking), healthcare, and energy. The purpose of these initiatives is to encourage greater innovation in these sectors, similar to Open Banking, which already exists for retail banking. Linked to this, there are also provisions for digital IDs, which might simplify know your customer (KYC) processes and remote ID verification. These changes may, in part, enable customers to switch more easily between suppliers, the aim of which is to drive more innovation through increased competition.
Although these changes may benefit UK organizations, they do not change requirements under the EU GDPR. UK organizations should carefully assess their compliance programs to ensure that any changes made to UK operations do not result in compliance gaps under the EU GDPR and other EU member state laws.
Considerations for Companies
UK organizations should assess their compliance programs and, more generally, their data strategy to determine whether or not these remain “fit for purpose” in light of the changes the Act introduces. For example, companies should consider:

Reviewing data processing activities to identify where the new “recognized legitimate interests” basis for processing may be relied upon; 
Updating DSAR processes; 
Reassessing cookie and marketing compliance to take advantage of the relaxed consent rules for low-risk cookies; 
Preparing for smart data schemes where relevant; and 
Preparing for digital ID and verification frameworks.

Beyond Fingerprints: Navigating the Biometric Amendment to the Colorado Privacy Act

On July 1, 2025, the Biometric Data Privacy Amendment to the Colorado Privacy Act will take effect, creating a new, stand-alone set of obligations for any entity—whether or not it is otherwise subject to the consumer-facing portions of the Colorado Privacy Act—that collects, captures, purchases, receives, or otherwise obtains “biometric identifiers” or “biometric data” from individuals in Colorado. See C.R.S. § 6-1-1314.
Quick Hits

Starting July 1, 2025, the Biometric Data Privacy Amendment to the Colorado Privacy Act will impose new obligations on entities collecting biometric data from individuals in Colorado, including employees and job applicants.
The amendment introduces a consent paradigm limiting when employers can require biometric data, allowing mandatory consent only for specific purposes like secure access and workplace safety.
Employers must comply with a strict data-deletion schedule and maintain a written incident-response protocol for biometric data, with enforcement by the Colorado attorney general and district attorneys.

Although the underlying Colorado Privacy Act expressly excludes employees and job applicants from the definition of “consumer,” the amendment overrides that exclusion in part by imposing employer-facing duties any time an employer collects or uses employees’ or applicants’ biometric identifiers. As a result, companies that have historically viewed Colorado’s privacy law as a purely business-to-consumer (B2C) concern must now evaluate, document, and potentially redesign workplace practices that rely on fingerprints, facial geometry, iris scans, voiceprints, or any other unique biological characteristic used to identify a specific individual.
The centerpiece of the amendment is a new consent paradigm that sharply limits the circumstances in which an employer may condition employment—or continued employment—on an employee’s agreement to provide a biometric identifier. Mandatory consent is permissible only when the biometric identifier is collected and used for one of four narrowly defined workplace purposes: (1) granting access to secure physical areas or secure electronic hardware, software, or systems; (2) recording the start and end of the workday, including meal and rest breaks that exceed thirty minutes; (3) improving or monitoring workplace safety or security, or protecting the safety or security of employees; and (4) improving or monitoring public safety or security during an emergency or crisis.
If an employer’s use case falls outside these four categories—for example, tracking an employee’s physical location throughout the day, measuring productivity through keystroke dynamics, or gauging time spent inside a specific software application—the employer must offer a genuine choice. The employee may not be denied employment, disciplined, or otherwise retaliated against for withholding consent.
Two statutory carve-outs eliminate the consent requirement altogether, yet they present substantial compliance risk because they overlap and arguably conflict with other state and federal laws. The amendment waives consent when the employee “reasonably should expect” biometric collection based on the employee’s job description—for example, a security guard whose duties inherently involve biometric gate controls. It waives consent for job applicants when collection is “based on reasonable background check, application, or identification requirements,” such as fingerprints for a criminal background screen.
Employers may want to approach both exceptions with caution. In the applicant context, the federal Fair Credit Reporting Act (FCRA) already mandates written authorization before initiating any background check, including fingerprint-based checks, so reliance on the amendment’s consent waiver would invite a direct conflict with FCRA disclosure and authorization requirements. Similarly, other state biometric or privacy statutes—including the Illinois Biometric Information Privacy Act (BIPA), the California Privacy Rights Act (CPRA) as applied to employee data, the Texas Capture or Use of Biometric Identifier law (CUBI), and Washington’s biometric statute—either provide no comparable waiver or impose more stringent notice and consent mandates.
(The CPRA is an amendment to the California Consumer Privacy Act (CCPA). While structured more like the European Union’s General Data Protection Regulation than BIPA, the CPRA does require employers to provide notice and obtain the consent (or “opt-in”) of employees before collecting or using their biometric templates, if they intend to sell that information. The CPRA also requires employers to provide employees notice of their rights to “opt out” of their collection practices and give employees two means of opting out: generally, by email, cell phone, or website contact.)
Accordingly, Colorado employers with multistate operations may not want to treat the amendment’s two consent waivers as safe harbors. Instead, employers may want to adopt a uniform, nationwide approach that honors the highest common denominator across jurisdictions.
Beyond consent, the amendment imposes a strict data-deletion schedule that requires covered entities to permanently destroy biometric data at the earliest of three possible trigger points: (a) once the original purpose for collection has been fulfilled; (b) twenty-four months after the employee’s or applicant’s last interaction with the employer; or (c) within forty-five days after the employer determines that continued retention is no longer necessary, adequate, or relevant to the collection purpose. Although subsections (a) and (c) appear to overlap—both hinge on satisfaction of the collection purpose—employers may want to treat each prong as an independent obligation and document retention decisions accordingly.
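To make the interplay of the three trigger points concrete, the following is a minimal illustrative sketch (not legal advice) of how a retention system might compute the earliest mandatory destruction date. The record fields, function names, and the 730-day approximation of twenty-four months are all our own hypothetical assumptions:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

@dataclass
class BiometricRecord:
    """Hypothetical retention metadata for one individual's biometric data."""
    purpose_fulfilled_on: Optional[date]      # trigger (a): original purpose satisfied
    last_interaction_on: date                 # trigger (b) runs from this date
    retention_unnecessary_on: Optional[date]  # trigger (c) runs from this determination

def deletion_deadline(record: BiometricRecord) -> date:
    """Earliest of the three trigger points (illustrative reading of C.R.S. 6-1-1314)."""
    candidates = [record.last_interaction_on + timedelta(days=730)]  # (b) ~24 months
    if record.purpose_fulfilled_on:
        candidates.append(record.purpose_fulfilled_on)               # (a)
    if record.retention_unnecessary_on:
        candidates.append(record.retention_unnecessary_on + timedelta(days=45))  # (c)
    return min(candidates)

# Example: purpose fulfilled at separation on 2025-07-01 -> trigger (a) controls.
record = BiometricRecord(
    purpose_fulfilled_on=date(2025, 7, 1),
    last_interaction_on=date(2025, 7, 1),
    retention_unnecessary_on=None,
)
print(deletion_deadline(record))  # 2025-07-01
```

Logging which prong produced each deadline also creates the documentation trail suggested above for treating prongs (a) and (c) as independent obligations.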
The amendment also requires covered entities to maintain and implement a written incident-response protocol tailored to biometric data. At a minimum, the protocol should incorporate Colorado’s existing breach-notification statute—which, unlike many states, already applies to biometric data. Prompt notification to affected individuals and, when thresholds are met, to the Colorado attorney general, must occur in accordance with statutory timelines whenever there is “reasonable belief” that a security incident has compromised biometric identifiers. See C.R.S. § 6-1-716.
Finally, while the amendment does not provide a private right of action, exclusive enforcement by the Colorado attorney general and district attorneys should not lull employers into complacency. Key operational steps between now and July 1, 2025, include:

inventorying systems and devices that capture biometric identifiers of Colorado employees or applicants;
verifying that each use fits within the four categories that allow mandatory consent and providing for voluntary consent for all other uses;
drafting or revising a biometric privacy policy and consent form that describes collection purposes, retention schedules, destruction methods, notice provisions, and incident-response obligations;
evaluating whether vendor agreements need provisions requiring downstream compliance with the amendment’s retention, deletion, and incident-response obligations; and
training human resources, security, and IT personnel on the new statutory framework.

OCC Enters Consent Order Against New York-Based Bank

On May 14, the OCC entered into a formal agreement with a New York-based bank after determining that the institution is in “troubled condition.” In its findings, the OCC cited alleged unsafe or unsound practices tied to the bank’s strategic planning and earnings performance.
The agreement does not cite specific statutory violations and imposes no monetary penalties. Instead, it places the bank under heightened supervisory scrutiny and requires extensive corrective action. Specifically, the bank must develop and implement two core remediation plans:

Three-year strategic plan. By September 30, the bank must submit a plan that sets measurable goals for risk management, earnings, growth, capital, and product strategy. A board-level compliance committee will oversee implementation and submit quarterly progress reports to the OCC.
Earnings improvement program. Also due by September 30, the bank must identify expense-reduction and revenue-generation opportunities, including branch and technology optimization, compensation review, and strategies to grow non-interest income while remaining compliant with consumer-protection laws.
Operations and succession. The bank’s plan must include an evaluation of its internal operations, staffing levels, management-information systems, policies, and procedures, and incorporate a management employment and succession plan to ensure adequate staffing and leadership continuity.

The board must create a compliance committee of independent directors within 15 days to oversee implementation and provide quarterly progress reports to the OCC. The agreement will remain in force until the OCC verifies that all corrective actions are fully and sustainably completed.
Putting It Into Practice: While the CFPB continues to scale back its regulatory and enforcement efforts, other federal and state agencies are continuing their oversight. What is notable here, however, is that the OCC entered into a consent order without requiring the bank to pay a civil money penalty. This approach raises questions about the OCC’s enforcement posture—particularly when compared to the often-penalty-driven actions of the CFPB in recent years. The absence of a monetary penalty may signal a more collaborative or rehabilitative stance by federal regulators. Only time will tell.

New Hires More Likely to Fall for Phishing + Social Engineering Attacks

When assessing cybersecurity risk in your organization, it is important to understand your users and their behavior. A new study by Keepnet sheds light on new hire behavior concerning phishing susceptibility. According to its recent survey, the 2025 New Hires Phishing Susceptibility Report, a whopping “71% of new hires click on phishing emails within 3 months” of starting their position. New hires are 44% more likely to fall for phishing and social engineering attacks than seasoned employees.
The survey is based on responses from 237 companies in various industries. The report’s findings reveal that new employees are at a significantly higher risk of becoming phishing and social engineering victims because they do not get enough security training during their onboarding process, and they are less experienced than veteran staff. The survey shows that new hires are unfamiliar with the organization’s protocols and are eager to respond to requests to make a good impression. Attacks that come from the CEO or HR are particularly effective against new hires. The research found that new employees were “45% more likely than experienced staff to click on phishing emails that impersonated the CEO, showing how vulnerable they are in their first few months.”
Another interesting statistic: at companies that provide “adaptive phishing simulations and behavior-based training” to employees, phishing risk fell 30% after onboarding.
The key lesson to take away here is to train new employees on cybersecurity protocols early and often. Understand that they are trying to impress their superiors and that they are more vulnerable to attacks. Give them the tools to feel comfortable identifying and reporting suspicious messages, and instill in them the confidence and understanding that they are important team members for the security of the organization.

Mastering Information Governance with the ARMA IGIM 2.1 Framework, Part 1: Introduction to the ARMA IGIM Framework

Today, organizations face unprecedented data challenges. The sheer volume of information, evolving regulations, and the rising momentum of artificial intelligence (AI) revolutionizing industries make it clear that information governance (IG) is not optional. The ARMA IGIM 2.1 framework provides organizations with a practical, structured approach to manage data effectively, enabling them to meet these challenges head-on.
The IGIM Framework and Its Importance
At its core, the IGIM framework breaks down IG into eight domains:

Steering Committee
Authorities
Support Functions
Procedural Framework
Capabilities
Information Lifecycle
Architecture
Infrastructure

These eight domains work to ensure that every piece of information within your organization is secure and usable throughout its lifecycle. Adopting IGIM benefits organizations by streamlining workflows, reducing compliance risks, and increasing operational efficiency. But the advantages don’t end there.
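
To see how the domains can be put to work, here is a minimal illustrative sketch of a domain-by-domain maturity inventory. The enum values mirror the list above, but the 0-to-3 maturity scale and the helper function are our own hypothetical constructs, not part of the IGIM specification:

```python
from enum import Enum

class IGIMDomain(Enum):
    """The eight IGIM 2.1 domains, as listed above."""
    STEERING_COMMITTEE = "Steering Committee"
    AUTHORITIES = "Authorities"
    SUPPORT_FUNCTIONS = "Support Functions"
    PROCEDURAL_FRAMEWORK = "Procedural Framework"
    CAPABILITIES = "Capabilities"
    INFORMATION_LIFECYCLE = "Information Lifecycle"
    ARCHITECTURE = "Architecture"
    INFRASTRUCTURE = "Infrastructure"

# Hypothetical self-assessment: score each domain 0 (absent) to 3 (mature).
maturity = {domain: 0 for domain in IGIMDomain}
maturity[IGIMDomain.INFORMATION_LIFECYCLE] = 2
maturity[IGIMDomain.INFRASTRUCTURE] = 3

def weakest_domains(scores: dict, threshold: int = 2) -> list:
    """Flag domains scoring below the threshold as candidates for investment."""
    return [d.value for d, score in scores.items() if score < threshold]

print(weakest_domains(maturity))  # six domains still need attention in this example
```

An inventory like this gives an IG lead a concrete starting point for the self-assessment suggested at the end of this post.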
Why IG is Indispensable for AI Adoption
AI thrives on high-quality, well-governed data. AI tools rely on accurate, accessible, and structured information to generate actionable insights. Without a proper IG framework, organizations often struggle with:

Data Silos: Making it difficult to consolidate or analyze data.
Dirty Data: Leading to inaccurate AI outputs.
Compliance Risks: Exposing organizations to penalties from data misuse.

By establishing effective governance practices, organizations create the foundation needed for AI to optimally perform. For example, banks using the IGIM framework to organize customer data see faster AI-driven credit risk assessments because the information is clean, structured, and easily retrievable.
Through this series, you’ll discover how the IGIM framework enables not only effective governance but also maximizes the value of AI investments. Next week, we’ll discuss laying the foundation for your IG program.
What can you do now? Assess your current data governance practices and consider how well-structured data could drive your AI initiative forward.

Where the Rubber Meets Regulation – FTC Clarifies Data Security Requirements for Auto Dealers Under Safeguards Rule

On June 16, 2025, the Federal Trade Commission (FTC) issued FAQs that directly affect many automobile dealers, clarifying how its Safeguards Rule (the Rule), part of the FTC’s implementation of the Gramm-Leach-Bliley Act (GLBA), applies to the automotive sector. The Rule requires non-banking financial institutions to implement measures to protect customer information—and the FTC is making it clear that many car dealerships fall within that definition.
While “financial institution” might traditionally bring to mind banks or lenders, the Rule defines the term much more broadly. It includes businesses significantly engaged in financial activities or closely related services. That means mortgage brokers, finance companies, financial advisors, credit counselors—and yes, car dealers who either finance or lease vehicles to consumers.
According to the FAQs, if a car dealership helps customers secure auto loans or directly provides financing, it qualifies as a financial institution under the Rule. The same goes for dealerships that lease vehicles for over 90 days, since leasing is also considered a financial activity.
The FAQs also clarify what counts as “customer information” protected by the Rule. Customer information includes dealer records such as approved financing or leasing applications, spreadsheets containing customer names and financial data, and other information that could be linked to a customer’s financial profile. However, general sales reports that don’t relate to a consumer’s financing or leasing aren’t covered.
The Rule requires covered financial institutions to maintain an information security program that outlines all of the ways dealers collect and store customer information, how this information is shared with other companies, and how dealers delete such information when it is no longer needed. Though there is no one-size-fits-all approach regarding what constitutes a sufficient information security program, the FAQs advise that these programs should contain administrative, technical, and physical safeguards appropriate for a dealer’s size, complexity, type of activities, and sensitivity of the customer information involved.
The FAQs list ten key requirements for an information security program, which include a written risk assessment of reasonably foreseeable risks, oversight of service providers, a written incident response plan, and notifying the FTC of certain security breaches.
The FAQs further address various other issues and scenarios specific to automobile dealers. But the key takeaway? If your dealership is involved in financing or long-term leasing, the FTC Safeguards Rule applies—and if you are a car dealer, now is the time to evaluate whether your current data security practices meet the FTC’s expectations. With the agency signaling that it’s watching this sector, it’s best not to steer off course.

Texas AI Governance Law Signed by Governor

On June 22, 2025, Texas Governor Greg Abbott signed the Texas Responsible AI Governance Act (TRAIGA) into law. Despite the ongoing debate in the U.S. Senate over the provision in the reconciliation bill that would impose a moratorium on states’ ability to legislate artificial intelligence (AI), the signing of HB 149 is a declaration that states will continue to legislate consumer protection and AI use unless preempted by a final reconciliation bill. That bill is pending in the Senate as of this writing.
According to Abbott’s office:
“By enacting the Texas Responsible AI Governance Act, Gov. Abbott is showing Texas-style leadership in governing artificial intelligence. During a time when others are asserting that AI is an exceptional technology that should have no guardrails, Texas shows that it is critically important to ensure both innovation and citizen safety. Gov. Abbott’s support also highlights the importance of the states as bipartisan national laboratories for nimbly developing AI policy.”
The bill seeks to: “facilitate and advance the responsible development and use of artificial intelligence systems; protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems; provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.”
TRAIGA applies to developers and deployers of AI systems, including government entities. A developer or deployer of AI is broadly defined as one who “develops or deploys an artificial intelligence system in Texas.” It requires government entities to provide clear and conspicuous notice to consumers, before or at the time of interaction, that the consumer is interacting with AI, which can be done through a hyperlink. It prohibits government entities from using AI to assign a social score, including evaluating individuals based on personal characteristics or social behavior, or to uniquely identify a consumer using biometric data without the individual’s consent.
TRAIGA further prohibits any person from developing or deploying an artificial intelligence system that “intentionally aims to incite or encourage a person to: (1) commit physical self-harm, including suicide; (2) harm another person; or (3) engage in criminal activity.” It further prohibits developing or deploying an AI system with the “sole intent” to “infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution,” to “unlawfully discriminate against a protected class,” or for “producing, assisting or aiding in producing, or distributing” sexually explicit content and child pornography, including deep fakes.
The Texas Attorney General has exclusive jurisdiction over enforcement of TRAIGA and, after a court determination, can levy civil penalties of between $10,000 and $200,000, depending on intent and on any failure to cure violations, with a continued violation subject to penalties of not less than $2,000 and not more than $40,000 “for each day the violation continues.”
The law goes into effect January 1, 2026, so now is the time to determine whether it applies to you, and what measures to take to comply.