USTR Removes WeChat From List of Notorious Markets for Counterfeiting and Piracy, Adds Douyin
On January 8, 2025, the Office of the United States Trade Representative (USTR) released the 2024 Review of Notorious Markets for Counterfeiting and Piracy. Of note, the USTR removed Weixin (WeChat), a social media ‘super-app,’ from the Review. Nonetheless, “China continues to be the number one source of counterfeit products in the world. Counterfeit and pirated goods from China, together with transshipped goods from China to Hong Kong, China, accounted for 84% of the value (measured by manufacturer’s suggested retail price) and 90% of the total quantity of counterfeit and pirated goods seized by U.S. Customs and Border Protection (CBP) in 2023.” Five China-based online markets remain on the list, with Douyin Shangcheng (Douyin Mall) replacing WeChat.
China-related excerpts from the Review regarding online markets follow. The full text is available here.
BAIDU WANGPAN
This cloud storage service is operated by Baidu, the largest search-engine provider in China. Users of this service are able to share links to files stored on their accounts with other users, and infringing content is reportedly disseminated widely through social media and other piracy linking sites. Baidu has been the subject of several copyright infringement cases in China brought by other content distributors, but right holders report little change in the site’s enforcement measures. Although Baidu has several tools to take down unauthorized content, according to right holders, procedures for filing complaints are applied unevenly and lack transparency. Additionally, takedown times are reportedly lengthy, and right holders often have to repeatedly follow up with Baidu to ensure that pirated content does not reappear on the platform. Right holders report little progress in Baidu’s actions to suspend or terminate repeat infringers.
DHGATE
Headquartered in China, DHgate is one of the largest business-to-business cross-border e-commerce platforms in China, although it primarily serves purchasers outside of the country. This year, stakeholders have welcomed the introduction of a pilot IP enforcement program that includes a new portal for complaints, new procedures for screening of products and of prospective sellers, and enhanced penalties for repeat and high-volume infringers. Stakeholders have also expressed appreciation for DHgate’s efforts to increase engagement and collaboration with right holders. In its submission for this year’s List, DHgate described its significant investment in AI-based screening tools to detect and remove counterfeit goods, its vendor verification process that screens and blacklists repeat infringers, its pilot program and other efforts to resolve right holder complaints, and its efforts to cooperate with law enforcement authorities, including publishing a law enforcement guide and assisting with several investigations involving health and safety matters. DHgate’s reported successes also include proactively removing twice as many listings for infringing goods in 2023 as compared to 2022. However, some stakeholders continue to report that the platform contains a high volume of counterfeits and the repeat infringer policy is ineffective. They also note the platform appears to connect Chinese sellers and manufacturers specializing in counterfeits with wholesale buyers outside of China. Although some brand owners have successfully reduced counterfeits through collaboration with DHgate, others have reported mixed results. Sellers of counterfeit goods reportedly continue to evade detection by using code words and digitally blurred logos. DHgate has implemented policies to regulate influencers promoting products listed on its platform through posting on third-party websites, and right holders indicate that they need more time to determine the impact of these policies.
Given that many stakeholders welcomed DHgate’s recent initiatives but continue to raise concerns, DHgate should further work to improve its proactive detection procedures, seller vetting process, and screening for repeat infringers.
DOUYIN SHANGCHENG (DOUYIN MALL)
Douyin Shangcheng (Douyin Mall) is a shopping platform under Douyin, the Chinese online platform offering short-form video, live stream, and e-commerce functionalities owned by ByteDance, also the parent company of TikTok. Douyin has upgraded its e-commerce functions to include Douyin Mall as both a standalone application and an integrated feature accessible from the Douyin application. Douyin Mall allows users to scroll through suggested products or search for products and click through the Douyin Mall interface to view short videos or livestream videos about the products. From such videos or livestreams, users can use the shopping cart function to make purchases. Douyin contends that it has notice and takedown mechanisms, with multiple reporting portals for right holders to submit complaints, as well as a one-stop “IPPRO” platform for right holders to submit and manage IP infringement reports. Douyin also described its efforts to screen proactively for specific terms, to train proactive identification models to target counterfeit products or sellers, and to cooperate with right holders and enforcement authorities, including on the pursuit of criminal cases offline. However, stakeholders have described a “rocketing” increase in the amount of counterfeit goods on the platform, an ineffective notice and takedown system, and lengthy delays in response to takedown requests, with little to no feedback on right holders’ complaints. Douyin should address concerns about the prevalence of counterfeits on its platform, including questions about the effectiveness of its proactive screening mechanisms and its system for managing IP infringement complaints.
PINDUODUO
Headquartered in China, Pinduoduo, a social commerce app, is one of the largest e-commerce platforms in China. Right holders report that Pinduoduo continues to offer a high volume of counterfeit goods on its platform. As in previous years, stakeholders continue to highlight concerns about Pinduoduo’s unwillingness to engage with brand owners to resolve issues or develop improved processes. Although the platform claims to have implemented anti-counterfeiting initiatives to assist with accurate product descriptions and combat misinformation from merchants, right holders convey that excessive delays in takedowns remain a problem and can take up to two weeks or more. Other longstanding issues remain unresolved, including onerous evidentiary requirements and lack of proactive measures to screen sellers and listings, as well as lack of transparency with enforcement processes, such as penalty mechanisms and decisions rejecting takedown requests. This year, right holders again noted Pinduoduo’s ineffective seller vetting and raised concerns about the platform’s reported practice of labeling sponsored listings as “authorized sellers,” giving the appearance of legitimacy to counterfeit products and misleading consumers into believing that they are purchasing from the legitimate manufacturer or a licensed distributor. Right holders also continue to report difficulties in receiving information and support from Pinduoduo in pursuing follow-on investigations to uncover the manufacturing and distribution channels of the counterfeit goods.
TAOBAO
Taobao, one of the largest e-commerce platforms in the world, is Alibaba’s platform for Chinese consumers. Alibaba has proactively engaged with right holders and the U.S. Government to improve its anti-counterfeiting processes and tools across its platforms, including Taobao. Although Alibaba emphasizes its ongoing engagement with enforcement authorities to combat the sale of counterfeits, right holders continue to express concern that a recent structural reorganization by Alibaba has left the platform with fewer anti-counterfeiting resources to conduct investigations. Right holders recognized Alibaba’s investment in anti-counterfeiting measures and industry engagement efforts in recent years, but they also continued to report high volumes of counterfeit products and pirated goods, such as PDF copies of books. Right holders highlighted the need for improvements to address the site’s infringement reporting process and stringent criteria required for takedown notices, such as the requirement to identify specific piracy indicators within listings that infringers have kept deliberately vague. Furthermore, stakeholders convey that despite their ability to report obvious counterfeits that they identify on the platform for fast processing, high-quality counterfeits that are sold at prices similar to their authentic counterparts are not easily identified. Alibaba contends that its automated reporting platform is user friendly and only requires right holders to upload registration certificates to prove their rights or document their unregistered copyrights by filling out a specific form. USTR will continue to monitor the transparency and effectiveness of Taobao’s anti-counterfeiting efforts, including the evidentiary requirements for takedown requests.
AI Versus MFA
Ask any chief information security officer (CISO), cyber underwriter, risk manager, or cybersecurity attorney which controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.
According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues with recruiting employees, when they find, after making the hiring decision, that the person doing the work may not be the person interviewed for the position.
Threat actors have leveraged a new AI deepfake tool that can be acquired on the dark web to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:
“1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.
2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…
3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.
4. Threat actors then initiate a new account fraud attack where they connect a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks to perform facial recognition where the tool enables attackers to connect the video to the camera’s input.
5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.”
Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:
Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.
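The app-based authenticator codes that CISA prefers over SMS are typically time-based one-time passwords (TOTP) under RFC 6238: a code derived from a shared secret and the current time, so nothing travels over the carrier network at all. As a minimal illustration (the secret below is the published RFC test value, not anything used in practice), the algorithm fits in a few lines of standard-library Python:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over a moving time counter, truncated to N digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 seconds
SECRET = base64.b32encode(b"12345678901234567890").decode()
print(totp(SECRET, at=59, digits=8))  # RFC 6238 expects "94287082"
```

Because the code is computed locally on the device, there is no SMS message for a threat actor on a carrier network to intercept, though TOTP is still phishable and is not the equal of phishing-resistant hardware keys.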
In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is another threat actor technique involving the “use of unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”
In December, Infosecurity Magazine reported on another vulnerability in MFA. In fact, there are many reports about various vulnerabilities with MFA.
Are we recommending against the use of MFA? Certainly not. Our point is simply to offer a reminder that there are no silver bullets for securing information systems and that AI is not only used by the good guys. An information security program, preferably a written one (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older technologies.
“9999” SCAM OR LEAD FUNNEL RUN AMUCK?: Zillow Hit With New TCPA Class Action Over Text Messages and It Could Be a Serious Problem or A Serious Scam
With the new FCC TCPA one-to-one consent rules about to take effect in just 18 days, everyone at (and in) Lead Generation World was (and is) focused on finalizing their go-to-market strategies with their new solutions.
One company I am constantly asked about is Zillow.
The real estate monster seems to be adopting a multi-pronged approach in response to the new rules and many of its strategies are raising eyebrows as they don’t seem to be completely consistent with one-to-one requirements (not throwing shade, just an observation.)
But if the allegations in a new class action are true, Zillow may have a very serious problem with its lead gen funnel that is even more basic than anything having to do with one-to-one. (Or it could just be the latest version of one of the oldest TCPA scams in the book.)
In CHET MICHAEL WILSON v. ZILLOW, INC. (W.D. Wash. Case No. 2:25-cv-00048) a Plaintiff sues Zillow over the receipt of multiple text messages–including, apparently, both mortgage and real estate offerings–concerning multiple properties.
Per the complaint the plaintiff did not request the messages and the messages continued after Plaintiff texted “stop.”
Most problematically the text messages seem to have all been sent to a single number but are related to different properties and are directed to different recipient names. This suggests the messages are related to different form fills by different consumers, or that Zillow has a big problem with its lead gen engine.
Then again, the last four digits of the Plaintiff’s alleged phone number are allegedly “9999” so this could be another one of those “designed number” lawsuit scams where a Plaintiff buys a specialty number–like (310) 999-9999–just to collect TCPA dollars from companies errantly calling fake numbers. (I helped fight off a series of these sorts of cases many years ago and the experience made me realize how terrible frivolous lawsuits are.)
Still, for a company as large as Zillow, preventing multiple leads from looping to the same number for different people and properties should be viewed as a priority–again, Zillow is a massive lead gen engine relied on by so many–so I would be shocked if this is as simple as Zillow not picking up on a simple 9999 scam (but maybe it is.)
I should note I have no idea if the claims are even true and the Plaintiff could be lying. But the complaint does contain multiple screenshots of the text messages.
In addition to the text messages Zillow also apparently used prerecorded calls to contact the Plaintiff–eesh– so the TCPA’s regulated technology provisions are also at play here.
The Complaint seeks to represent three classes:
Robocall Class: All persons in the United States (1) to whom Zillow, Inc. placed, or caused to be placed, a call, (2) directed to a number assigned to a cellular telephone service, but not assigned to a person with an account with Zillow, Inc., (3) in connection with which Zillow, Inc. used an artificial or prerecorded voice, (4) from four years prior to the filing of this complaint through the date of class certification.
IDNC Class: All persons in the United States who, within the four years prior to the filing of this lawsuit through the date of class certification, received two or more telemarketing calls within any 12-month period, from or on behalf of Zillow, Inc., regarding Zillow, Inc.’s goods or services, to said person’s residential telephone number, including at least one call after communicating to Zillow, Inc. that they did not wish to receive such calls.
DNC Class: All persons in the United States who, within the four years prior to the filing of this action through the date of class certification, (1) were sent more than one telemarketing call within any 12-month period; (2) where the person’s telephone number had been listed on the National Do Not Call Registry for at least thirty days but not assigned to a person with an account with Zillow, Inc.; (3) regarding Zillow, Inc.’s property, goods, and/or services; (4) to said person’s residential telephone number.
Very interesting stuff and we will keep an eye on it for you.
Full complaint here: Zillow Complaint
FDA Proposes New Rule on Testing Talc-Containing Cosmetic Products
Key Takeaways
What Happened: The Food and Drug Administration proposed a rule to require manufacturers of talc-containing cosmetic products to test their products for asbestos using specific testing methods.
Who’s Impacted: Manufacturers of talc-containing cosmetic products.
What Should You Do: Consider submitting comments on this proposed rule by March 27, 2025.
Mandatory testing of talc-containing cosmetic products is coming. At the end of December, the Food and Drug Administration (FDA) proposed a cosmetics rule and test method for asbestos in talc that was required under Section 3505 of the Modernization of Cosmetics Regulation Act of 2022 (MoCRA). 89 Fed. Reg. 105492 (Dec. 27, 2024). MoCRA added substantial new cosmetics provisions to the Federal Food, Drug, and Cosmetic Act (FFDCA), including a requirement for the FDA to establish and require the use of standardized testing methods for detecting and identifying asbestos in talc-containing cosmetic products. For more details on MoCRA, see here.
Talc is mined as naturally occurring hydrous magnesium silicate and is used in many cosmetic products to absorb moisture, prevent caking, render makeup opaque, or improve product texture. However, the rock types that host talc deposits often also contain asbestos, which is a known carcinogen when inhaled. As noted in the Environmental Protection Agency’s latest risk evaluation for asbestos (November 2024), “If vermiculite or talc are mined from ore that also contains asbestos fibers, it is possible that the resulting vermiculite or talc minerals are contaminated with asbestos fibers.” (EPA’s risk evaluation did not consider asbestos in talc for use in cosmetics.) FDA has repeatedly monitored for asbestos in talc-containing cosmetic products, and though it found none in the products sampled in 2010 and 2023, the agency detected asbestos in 9 of 52 products it tested in 2019.
As laboratories lacked standardized testing methods that can be followed without modification to test for asbestos in talc-containing cosmetic products, MoCRA required that FDA develop such methods. This proposed rule aims to standardize testing in the industry so that the public can rely on talc-containing cosmetic products without concerns of asbestos contamination. This standardization would apply to all manufacturers of talc-containing cosmetic products, including cosmetic products that are subject to the requirements of Chapter V of the FFDCA, such as cosmetic products that are also drugs.
Proposed Testing Requirements
The proposed rule would require manufacturers to test a representative sample of each batch or lot of a talc-containing cosmetic product for asbestos using both Polarized Light Microscopy (PLM) (with dispersion staining) and Transmission Electron Microscopy (TEM)/Energy Dispersive Spectroscopy (EDS)/Selected Area Electron Diffraction (SAED). FDA proposes defining “representative sample” as “a sample that consists of a number of units drawn based on rational criteria, such as random sampling, and intended to ensure that the sample accurately portrays the material being sampled.” This definition is intended to provide flexibility and to align with the definition of “representative sample” in other FDA-covered product areas.
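To make the “random sampling” criterion in that definition concrete, the sketch below draws a fixed number of units at random from a lot; the lot size and sample size are hypothetical, since the proposed rule does not fix a sample size:

```python
import random

def draw_representative_sample(lot_units, n, seed=None):
    """Draw n units at random from a batch/lot of talc-containing product.
    Random selection is one 'rational criterion' the proposed definition
    mentions; n and the lot here are illustrative, not regulatory values."""
    if n > len(lot_units):
        raise ValueError("sample cannot be larger than the lot")
    rng = random.Random(seed)  # seeded here only so the example is repeatable
    return rng.sample(lot_units, n)

lot = [f"unit-{i:03}" for i in range(500)]          # one manufacturing lot
sample = draw_representative_sample(lot, 10, seed=42)  # units sent to PLM/TEM testing
```

Whatever the selection method, the rule's aim is that the units actually tested portray the whole lot, not a convenient corner of it.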
Under the proposed rule, a sample would be deemed to contain asbestos if asbestos is detected at or above the applicable detection limit using either method. FDA has specifically requested comment on this issue.
Additionally, the proposed rule would require manufacturers to either test each batch or lot of the talc cosmetic ingredient or rely on a certificate of analysis for each batch or lot from a qualified talc supplier. If a manufacturer chooses to rely on a talc certificate of analysis, the manufacturer must annually qualify the supplier by verifying the reported asbestos test results based on their own testing or that of a third-party laboratory to establish the certificate’s reliability.
Recordkeeping
The proposed rule would require manufacturers to keep certain records to demonstrate compliance. Manufacturers would need to keep records of testing for asbestos that show test data, including raw data, and to describe in detail how samples were tested. If a manufacturer relies on a certificate of analysis from its talc supplier, records must include any certificate of analysis received from the supplier for testing of the talc used to make the finished product, and documentation of how the manufacturer qualified the supplier as described above.
The proposed rule would also require that these records be made available to the FDA within one business day of a request from the agency. The records would need to either be in English or have an English translation available and would need to be retained for three years after the date that the record was created. FDA is specifically soliciting comment on the timeframe for retention. The agency wants to ensure the timeframe is sufficient given the timing from testing to when the product containing the tested talc makes it to consumers and the average length of time consumers keep or use the product.
Enforcement
FDA proposes to enforce the new rule’s requirements under the FFDCA’s prohibition on the sale or distribution of adulterated cosmetics products, 21 U.S.C. § 331(a). As there is no established safe level below which asbestos could not cause adverse health effects, FDA will consider asbestos at any level in talc-containing cosmetic products to be injurious to users. Therefore, the proposed rule would codify in regulations that if asbestos is present in a talc-containing cosmetic product, or if a manufacturer fails to test its talc-containing cosmetic or its talc ingredient for asbestos or to maintain records of such testing, that cosmetic is adulterated under the FFDCA and illegal to sell or distribute.
FDA proposes that this rule become effective 30 days after the date of publication of the final rule in the Federal Register. For those interested in commenting on the proposed rule, parties should submit comments to the docket by March 27, 2025.
As noted, MoCRA requires FDA to publish a final rule on testing talc-containing cosmetic products. With a new administration soon to take office, it is unclear whether the Trump FDA will proceed to final rulemaking once the comment period ends or take other action.
What to Know About the HHS HIPAA Security Standards Proposal
At the close of 2024, the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services (HHS) issued a Notice of Proposed Rulemaking (the Proposed Rule) to amend the Security Rule regulations established for protecting electronic health information under the Health Insurance Portability and Accountability Act of 1996 (HIPAA). The updated regulations would increase cybersecurity protection requirements for electronic protected health information (ePHI) maintained by covered entities and their business associates to combat rising cyber threats in the health care industry.
The Proposed Rule seeks to strengthen the HIPAA Security Rule requirements in various ways, including:
Removing the “addressable” standard for security safeguard implementation specifications and making all implementation specifications “required.”
This, in turn, will require written documentation of all Security Rule policies and encryption of all ePHI, except in narrow circumstances.
Requiring the development or revision of technology asset inventories and network maps to illustrate the movement of ePHI throughout electronic information system(s) on an ongoing basis, to be addressed not less than annually and in response to updates to an entity’s environment or operations potentially affecting ePHI.
Setting forth specific requirements for conducting a risk analysis, including identifying all reasonably anticipated threats to the confidentiality, integrity, and availability of ePHI, identifying potential vulnerabilities, and assigning a risk level for each threat and vulnerability identified.
Requiring prompt notification (within 24 hours) to other healthcare providers or business associates with access to an entity’s systems of a change or termination of a workforce member’s access to ePHI; in other words, entities will now be obligated to immediately communicate changes if an employee’s or contractor’s access to patient data is altered or revoked to mitigate the risk of unauthorized access to ePHI.
Establishing written procedures on how the entity will restore the loss of relevant electronic information systems and data within 72 hours.
Testing and revising written security incident response plans.
Requiring encryption of ePHI at rest and in transit.
Requiring specific security safeguards on workstations with access to ePHI and/or storage of ePHI, including anti-malware software, removal of extraneous software from ePHI systems, and disabling network ports pursuant to the entity’s risk analysis.
Requiring the use of multi-factor authentication (with limited exceptions).
Requiring vulnerability scanning at least every six (6) months and penetration testing at least once every year.
Requiring network segmentation.
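The risk-analysis requirement in the list above (identify reasonably anticipated threats and vulnerabilities, then assign each a risk level) is essentially a structured register. The sketch below models one; the three-point scale, scoring matrix, and sample entries are hypothetical illustrations, not anything prescribed by the Proposed Rule:

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    threat: str          # reasonably anticipated threat to ePHI
    vulnerability: str   # weakness the threat could exploit
    likelihood: int      # 1 (rare) .. 3 (likely) -- illustrative scale
    impact: int          # 1 (minor) .. 3 (severe) -- illustrative scale

    @property
    def risk_level(self):
        # Simple likelihood-x-impact matrix; real methodologies vary widely.
        score = self.likelihood * self.impact
        if score >= 6:
            return "High"
        if score >= 3:
            return "Moderate"
        return "Low"

register = [
    RiskEntry("Phishing of workforce credentials", "No phishing-resistant MFA", 3, 3),
    RiskEntry("Ransomware on file servers", "Unpatched operating systems", 2, 3),
    RiskEntry("Lost laptop", "Unencrypted local storage", 2, 2),
]

for entry in register:
    print(f"{entry.risk_level:8} {entry.threat}")
```

A register like this, kept current alongside the asset inventory and network map, is the kind of documented, repeatable analysis the Proposed Rule contemplates.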
The Proposed Rule notably includes some requirements specific to business associates only. These include a proposed new requirement for business associates to notify covered entities (and subcontractors to notify business associates) within 24 hours of activating their contingency plans. Business associates would also be required to verify, at least once a year, to their covered entity customers that the business associate has deployed the required technical safeguards to protect ePHI. This must be conducted by a subject matter expert who provides a written analysis of the business associate’s relevant electronic information systems and a written certification that the analysis has been performed and is accurate.
The Proposed Rule even includes a specific requirement for group health plans: such plans must include in their plan documents requirements for their group health plan sponsors to comply with the administrative, physical, and technical safeguards of the Security Rule; require any agent to whom they provide ePHI to implement those safeguards; and notify their group health plans no more than 24 hours after activation of their contingency plans.
Ultimately, the Proposed Rule seeks to implement a comprehensive update of mandated security protections and protocols for covered entities and business associates, reflecting the significant changes in health care technology and cybersecurity in recent years. The Proposed Rule’s changes are also a tacit acknowledgment that current Security Rule standards have not kept up with threats or operational changes.
The government is soliciting comments on the Proposed Rule, and all public comments are due by March 7, 2025. Given the scope of the proposed changes and the heightened obligations for all individuals and entities subject to HIPAA, there will likely be many comments from various stakeholders. We will continue to follow the Proposed Rule and reactions thereto. The Proposed Rule is available here.
5 Trends to Watch: 2025 U.S. Data Privacy & Cybersecurity
Even More States Join the Party — By the end of 2024, almost half of all U.S. states had enacted modern data privacy legislation. That trend will likely continue, particularly since a national data privacy statute may not be a top priority for the incoming administration.
It’s Time for State Enforcement — Several states have begun “staffing up” with the goal of bringing more data privacy enforcement in 2025, and some no longer have mandatory cure periods. Putting aside California, early indications are that Texas and Connecticut may take the lead among the states in enforcement activity.
It’s All About the Servers — Advertising technology’s transition from browser-side tracking technologies (cookies) to server-side tracking technologies (application programming interfaces or APIs) slowed in 2024. Nonetheless, the transition to server-side technologies continues; we may see it become the dominant medium for tracking in 2025 as organizations continue to work on aligning their digital advertising practices with applicable privacy laws.
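The practical difference from cookies is that with server-side tracking the event is assembled and consent-checked on the organization's own server before anything is forwarded downstream. The sketch below shows that shape; the field names and consent model are illustrative, not any vendor's actual API:

```python
import json

def build_server_event(event_name, user, consent):
    """Assemble a server-side tracking event, honoring the user's recorded
    consent before anything leaves the first-party server. Payload shape
    and field names are hypothetical, for illustration only."""
    if not consent.get("analytics", False):
        return None  # no consent on record: drop the event server-side
    payload = {
        "event": event_name,
        # pseudonymized identifier, rather than a raw email or device ID
        "user_id": user["pseudonymous_id"],
        "ts": user["ts"],
    }
    return json.dumps(payload)  # body for a server-to-server API call

user = {"pseudonymous_id": "u-123", "ts": 1735689600}
granted = build_server_event("page_view", user, {"analytics": True})
denied = build_server_event("page_view", user, {"analytics": False})  # None
```

Centralizing the consent gate server-side is part of why organizations see this transition as easier to align with state privacy laws than scattered browser-side tags.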
Sensitive Data Is Everywhere — Regulators and plaintiffs continue to focus their attention on the collection, use, and sharing of sensitive data types. That trend is expected to continue in 2025 with continued focus on companies that use or share geolocation or health information.
Focus on Data Protection Impact Assessments (DPIAs) — The first states started requiring DPIAs two years ago, but regulators were reluctant to demand that companies produce them. That has changed—state regulators have started requesting them and will continue requesting that companies produce DPIAs for data-processing activities that mandate them, like targeted advertising.
ESG and Supply Chains in 2024: Key Trends, Challenges, and Future Outlook
In 2024, supply chains remained a critical focal point for companies committed to environmental, social, and governance (ESG) principles. Given their significant contribution to a company’s environmental footprint and social impact, supply chains have become an essential area for implementing sustainable and ethical practices.
Advancements in technology, evolving regulatory frameworks, and innovative corporate strategies defined the landscape of ESG in supply chains this year. However, challenges such as data reliability, cost pressures, and geopolitical risks persisted in 2024. Here are seven observations highlighting progress, challenges, and potential future directions in ESG and supply chains.
1. Regulatory and Market Drivers
Governments and international organizations introduced stringent regulations in 2024, compelling companies to prioritize ESG considerations in their supply chains. These policies aimed to address environmental degradation, human rights abuses, and climate-related risks while fostering greater transparency and accountability.
EU’s Corporate Sustainability Due Diligence Directive (CSDDD): The European Union’s CSDDD came into force, mandating companies operating in the EU to identify, prevent, and mitigate adverse human rights and environmental impacts throughout their supply chains. This regulation required businesses to map their suppliers, assess risks, and implement corrective actions, driving improvements in traceability and supplier accountability.
U.S. Uyghur Forced Labor Prevention Act (UFLPA): In the United States, the Department of Homeland Security’s enforcement of the UFLPA intensified. This act targeted goods produced with forced labor, particularly in China’s Xinjiang region, and placed the burden of proof on companies to demonstrate compliance. Businesses were required to adopt rigorous traceability systems to ensure their products were free from forced labor.
Carbon Border Adjustment Mechanisms (CBAMs): Carbon tariffs, implemented by the EU and other regions, incentivized companies to measure and reduce the carbon intensity of imported goods. These mechanisms encouraged businesses to collaborate with suppliers to lower emissions and adopt cleaner technologies.
2. Advances in Supply Chain Traceability and Transparency
Technological innovations were central to advancing supply chain traceability and transparency, enabling companies to identify risks, ensure compliance, and improve sustainability performance.
Blockchain Technology: Blockchain emerged as a cornerstone of supply chain transparency. By creating immutable records of transactions and product origins, blockchain technology provided stakeholders with verifiable proof of ethical sourcing and environmental compliance. Companies used blockchain to authenticate claims about sustainability, such as the origin of raw materials and the environmental credentials of finished goods.
Artificial Intelligence (AI): AI played a transformative role in supply chain management, helping companies analyze supplier risks, predict disruptions, and optimize logistics for lower emissions. AI-powered tools also enabled real-time monitoring of supply chain activities, such as emissions tracking, labor compliance, and waste reduction.
Internet of Things (IoT): IoT sensors provided granular, real-time data on supply chain metrics, such as energy consumption, shipping efficiency, and waste generation. This technology enabled companies to address inefficiencies and enhance the sustainability of their operations.
3. Responsible Sourcing Practices
Responsible sourcing became a cornerstone of supply chain ESG efforts, with companies adopting ethical and sustainable procurement practices to address environmental and social risks.
Raw Material Sourcing: Businesses focused on sourcing raw materials like cobalt, palm oil, and timber from certified suppliers to ensure compliance with environmental and labor standards. Industry-specific certifications, such as the Forest Stewardship Council and the Roundtable on Sustainable Palm Oil, gained prominence.
Fair Trade and Ethical Labor: Companies partnered with organizations promoting fair wages, equitable treatment, and safe working conditions. Certifications like Fair Trade and Sedex Responsible Business Practices helped businesses verify their commitment to ethical labor practices throughout their supply chains.
Local Sourcing: To reduce carbon footprints and enhance supply chain resilience, some companies prioritized local sourcing of raw materials and components. This shift minimized emissions from transportation and provided economic support to local communities.
4. Decarbonizing Supply Chains
As companies pursued net-zero commitments, decarbonizing supply chains became a top priority in 2024. Key strategies included:
Supplier Engagement: Companies collaborated with suppliers to reduce emissions through energy efficiency measures, renewable energy adoption, and low-carbon manufacturing techniques.
Sustainable Logistics: Businesses invested in cleaner transportation methods, such as electric vehicles, hydrogen-powered trucks, and optimized shipping routes. The rise of “green corridors” for shipping exemplified collaborative efforts to decarbonize freight transport.
Circular Economy Integration: Companies embraced circular economy principles, focusing on reusing materials, designing for recyclability, and minimizing waste. Circular supply chains not only reduced environmental impact, but also created cost-saving opportunities and new revenue streams.
5. Challenges in ESG Supply Chain Management
Despite progress, companies faced significant challenges in implementing ESG principles across their supply chains.
Data Gaps and Inconsistencies: Collecting reliable ESG data from multitiered supply chains remains a critical hurdle. Smaller suppliers often lack the tools or expertise to comply with reporting requirements, leading to incomplete transparency and inconsistent metrics.
Cost Pressures: Implementing sustainable practices, such as adopting renewable energy or traceability technologies, requires significant upfront investment. These costs are particularly burdensome for small and medium-sized enterprises (SMEs) and create financial tension for larger companies balancing competitive pricing.
Geopolitical Risks: Trade restrictions, regional conflicts, and sanctions disrupt global supply chains, complicating compliance with ESG regulations like forced labor bans or carbon tariffs. Navigating these challenges requires constant adaptation to volatile geopolitical landscapes.
Greenwashing Risks: Increasing regulatory and public scrutiny amplifies the consequences of unverified sustainability claims. Missteps in ESG disclosures expose companies to legal risks, reputational damage, and loss of stakeholder trust.
Supply Chain Complexity: Global supply chains are vast and intricate, often spanning multiple tiers and regions. Mapping these networks to monitor ESG compliance and identify risks such as labor violations or environmental harm is a resource-intensive challenge.
Technological Gaps Among Suppliers: While advanced technologies like blockchain improve traceability, many smaller suppliers lack access to these tools, creating disparities in ESG data collection and compliance across the supply chain.
Resistance to Change: Suppliers in regions with weaker regulatory frameworks often resist adopting ESG principles due to limited awareness, operational costs, or lack of incentives, requiring significant corporate investment in education and capacity-building.
Market Demand for Low-Cost Goods: Consumer demand for affordable products often conflicts with the higher costs of implementing sustainable practices, especially in competitive industries such as fast fashion and consumer electronics.
Resource Scarcity and Climate Impacts: Extreme weather events, rising energy costs, and material shortages – exacerbated by climate change – disrupt supply chains and increase the difficulty of maintaining ESG commitments.
Measurement and Reporting Challenges: A lack of universally accepted metrics for critical ESG indicators, such as Scope 3 emissions or biodiversity impact, complicates efforts to measure progress and report transparently across supply chains.
6. Leading Examples of ESG-Driven Supply Chains
In 2024, several organizations across various industries demonstrated innovative approaches to integrating ESG principles into their supply chains. These efforts highlighted best practices in sustainability, transparency, and ethical procurement, including a number of the recent advances noted above.
Outdoor Apparel Brand: A leading outdoor apparel company prioritized fair labor practices and the reduction of environmental impacts in its supply chain. The brand collaborated with suppliers and other brands to develop and use tools to measure and communicate environmental impacts, enabling industry-wide benchmarking and large-scale improvement.
Global Food and Beverage Producer: A major food and beverage producer expanded its regenerative agriculture program by collaborating with farmers to enhance soil health, reduce greenhouse gas emissions, and promote biodiversity. Additionally, the company leveraged blockchain technology to ensure traceability in its supply chains for commodities such as coffee and cocoa, strengthening its commitment to sustainability.
Global Furniture Retailer: A prominent furniture retailer invested heavily in renewable energy and circular design principles to decarbonize its supply chain by reducing, replacing, and rethinking. A formal due diligence system employs dozens of wood supply and forestry specialists to ensure that wood is sourced from responsibly managed forests.
Multinational Technology Company: A technology giant implemented energy-efficient practices across its supply chain, including transitioning to renewable energy sources for manufacturing facilities and using AI-powered tools to optimize logistics, with a goal of becoming carbon neutral across its entire supply chain by 2030.
Consumer Goods Manufacturer: A global consumer goods manufacturer introduced water-saving technologies into its supply chain, particularly in regions facing water scarcity. The company also prioritized reducing plastic waste by incorporating recycled materials into its packaging and partnering with local recycling initiatives.
Global Shipping Firm: A logistics and shipping company adopted low-carbon transportation technologies, such as green fuel for its vessels, decarbonized container terminals, electric vehicles for landside transport, and optimized routes to minimize emissions. The firm also collaborated with industry partners to develop “green corridors” that support cleaner and more sustainable freight transport.
7. Future Directions in ESG and Supply Chains
Integrating ESG principles into supply chain management is expected to continue evolving, with the following trends among those shaping the future:
AI-Powered Supply Chains: Artificial intelligence will transform supply chain management by predicting risks, optimizing logistics, and enhancing sustainability. Advanced analytics will enable businesses to identify inefficiencies and implement targeted improvements, reducing emissions and ensuring ethical practices. There will, however, be challenges accounting for the growing number of laws and regulations worldwide governing AI’s use and development.
Circular Economy Models: Supply chains will embrace circular economy principles, focusing on waste reduction, material reuse, and extended product life cycles. Closed-loop systems and upcycling initiatives will mitigate environmental impacts while creating new revenue streams.
Blockchain-Enabled Certification Programs: Blockchain technology will enhance transparency and accountability by providing real-time verification of ESG metrics, such as emissions reductions and ethical sourcing. This will foster trust among consumers, investors and regulators.
Supply Chain Readiness Level (SCRL) Analysis: ESG benefits will continue to flow from the steps taken by the Biden Administration to strengthen America’s supply chains over the past four years. Additionally, the Department of Energy’s Office of Manufacturing and Energy Supply Chains recently rolled out its SCRL tool to evaluate global energy supply chain needs and gaps, quantify and eliminate risks and vulnerabilities, and strengthen U.S. energy supply chains; the tool is expected to facilitate the decarbonization of supply chains.
Decentralized Energy Solutions: Decentralized energy systems, including on-site renewable energy installations and energy-sharing networks, will reduce dependence on traditional power grids. These solutions will decarbonize supply chains while promoting sustainability.
Nature-Based Solutions: Supply chains will integrate nature-based approaches, such as agroforestry partnerships and wetland restoration, to enhance biodiversity and provide environmental services like carbon sequestration and water filtration.
Advanced Water Stewardship: Companies will adopt innovative water management practices, including water recycling technologies and watershed restoration projects, to address water scarcity and ensure sustainable supplies for all stakeholders.
Scope 3 Emissions Reduction: Businesses will prioritize reducing emissions across their value chains by collaborating with suppliers, setting science-based targets, and implementing robust carbon accounting tools.
Industry-Wide Collaboration Platforms: Collaborative platforms will enable companies to share sustainability data and best practices and develop sector-specific solutions. This approach will help address systemic challenges, such as decarbonizing aviation or achieving sustainable fashion production.
Developments in ESG and supply chains in 2024 reflect a growing recognition of their critical role in achieving sustainability goals. From enhanced regulatory frameworks and technological innovations to responsible sourcing and decarbonization efforts, companies are making strides toward more sustainable and ethical supply chains.
However, challenges such as data gaps, cost pressures, and geopolitical risks highlight the complexities of this transformation. By addressing these issues and embracing future opportunities, businesses can create resilient, transparent, and sustainable supply chains that drive business success as well as environmental and social progress.
DOJ Finalizes Rule Implementing EO 14117, Establishing New National Security Cross-Border Data Regulatory Regime
On December 27, 2024, the U.S. Department of Justice (“DOJ”) issued a final rule (“Final Rule”) implementing Executive Order 14117 (Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern) (“EO 14117”), which was published in the Federal Register on January 8, 2025. The Rule will go into effect on April 8, 2025, with the exception of certain due diligence, audit and reporting obligations that will become effective on October 5, 2025. The program is intended to address the threat of foreign powers and state-sponsored threat actors using Americans’ sensitive personal data for malicious purposes, including intelligence collection, cyber attacks, repression and intimidation, and economic espionage.
The substance of the Final Rule is largely similar to the Notice of Proposed Rulemaking, which we covered in our previous post. As discussed in that post, the Final Rule establishes a new regulatory regime that either prohibits or restricts “covered data transactions,” which are certain transactions―namely, data brokerage, employment agreements, investment agreements and vendor agreements―that could result in access to bulk U.S. sensitive personal data or government-related data (1) by a “country of concern” (i.e., China, Cuba, Iran, North Korea, Russia and Venezuela) or (2) by a “covered person.” The term “covered persons” is defined broadly to include, for example, entities with 50% or more ownership by a country of concern, entities that are organized or chartered under the laws of, or have their principal place of business in, a country of concern, and foreign persons who are employees or contractors of such entities or primary residents of a country of concern.
The two general categories of data regulated by the Final Rule are defined as follows:
“U.S. sensitive personal data” means precise geolocation data, biometric identifiers, human ‘omic data, personal health data, personal financial data, certain “covered personal identifiers” (i.e., certain combinations of “listed identifiers,” such as government-issued identification numbers, device-based or hardware-based identifiers, demographic or contact data, and advertising identifiers), or any combination thereof.
The Rule applies only to certain “bulk” thresholds of U.S. sensitive personal data, and those thresholds differ depending on the type of U.S. sensitive personal data at issue. For example, for precise geolocation data, the Rule applies if a covered data transaction results in access to such information of over 1,000 U.S. persons or devices by a country of concern or covered person. In contrast, for personal financial data or personal health data, the threshold is higher (i.e., more than 10,000 U.S. persons). The table below provides the relevant “bulk” threshold for each category of U.S. sensitive personal data.
“Government-related data” means any precise geolocation data, regardless of volume, for any location within any area enumerated on the “Government-Related Location Data List” or any sensitive personal data, regardless of volume, that a transacting party markets as linked or linkable to current or recent former U.S. government employees or contractors, or former U.S. government senior officials.
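As a rough illustration of the volume-based test described above, the thresholds can be modeled as a simple lookup. This is an illustrative sketch only: the category names and helper function are hypothetical, and it covers just the two thresholds quoted in the text (the Final Rule's table defines thresholds for every category).

```python
# Illustrative sketch only. Covers just the two bulk thresholds quoted
# in the text above; the Final Rule defines thresholds for every category
# of U.S. sensitive personal data. Category names here are hypothetical.
BULK_THRESHOLDS = {
    "precise_geolocation": 1_000,   # U.S. persons or devices
    "personal_financial": 10_000,   # U.S. persons
    "personal_health": 10_000,      # U.S. persons
}

def meets_bulk_threshold(category: str, count: int) -> bool:
    """True if a transaction's volume exceeds the category's bulk threshold."""
    threshold = BULK_THRESHOLDS.get(category)
    if threshold is None:
        raise ValueError(f"threshold for {category!r} not modeled in this sketch")
    return count > threshold
```

For instance, under this sketch a transaction giving a covered person access to precise geolocation data of 1,500 U.S. devices would exceed the threshold, while one involving 1,000 would not (the Rule's trigger is "over" the threshold). Note also that government-related data, per the definition above, is regulated regardless of volume and so would not pass through a threshold check like this at all.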
The Rule prohibits U.S. persons from engaging in certain types of covered data transactions, most importantly, covered data transactions involving (1) data brokerage or (2) bulk human ’omic data. All other covered data transactions are “restricted,” meaning that U.S. persons must comply with certain compliance requirements before engaging in such transactions, including cybersecurity requirements published on January 8, 2025, by the Cybersecurity and Infrastructure Security Agency, data compliance program requirements, annual audits and recordkeeping requirements.
As noted above, the DOJ largely declined to make significant revisions to the preliminary version of the Rule in response to input received during the recent notice and comment period. That said, the Final Rule does include certain clarifying changes and provides additional commentary. For example, the DOJ made adjustments to certain key definitions, clarified that the Final Rule applies prospectively to transactions engaged in on or after the effective date, even if the underlying agreements existed prior to the rule, and added three types of human ‘omic data to the definition of U.S. sensitive personal data (the preliminary version of the Rule already covered genomic data).
The DOJ plans to release further guidance on the Final Rule, engage with industry and other stakeholders as the program goes into effect, and publish information related to voluntary self-disclosure, advisory opinions and approval processes for otherwise prohibited or restricted transactions. In the meantime, companies should assess their readiness for the rapidly approaching enforcement date in April.
Legislative Update: 119th Congress Outlook on AI Policy
House Looks To Rep. Obernolte to Take Lead on AI
Representative Jay Obernolte (R-Calif.) has emerged as a pivotal figure in shaping the United States’ legislative response to artificial intelligence (AI). With a rare combination of technical expertise and political acumen, Obernolte’s leadership is poised to influence how Congress navigates both the opportunities and risks associated with AI technologies.
AI Expertise and Early Influence
Obernolte’s extensive background in AI distinguishes him among his congressional peers. Holding a graduate degree in AI and decades of experience as a technology entrepreneur, he brings firsthand knowledge to the legislative arena.
As the chair of a bipartisan House AI task force, Obernolte spearheaded the creation of a comprehensive roadmap addressing AI’s societal, economic, and national security implications. The collaborative environment he fostered, eschewing traditional seniority-based hierarchies, encouraged open dialogue and thoughtful debate among members. Co-chair Rep. Ted Lieu (D-Calif.) and other task force participants praised Obernolte’s inclusive approach to consensus building.
Legislative Priorities and Policy Recommendations
Obernolte’s leadership produced a robust policy framework encompassing:
Expanding AI Resource Accessibility: Advocating for broader access to AI tools for academic researchers and entrepreneurs to prevent monopolization of research by private companies.
Combatting Deepfake Harms: Supporting legislative efforts to address non-consensual explicit deepfakes, a growing issue affecting young people nationwide. Notably, he backed H.R. 5077 and H.R. 7569, which are expected to resurface in the 119th Congress.
Balancing Regulation and Innovation: Striving to create a regulatory environment that protects the public while fostering AI innovation.
National Data Privacy Standards: Advocating for comprehensive data privacy legislation to safeguard consumer information.
Advancing Quantum Computing: Supporting initiatives to enhance quantum technology development.
Maintaining Bipartisanship
Obernolte emphasizes the importance of bipartisan collaboration, a principle he upholds through relationship-building initiatives, including informal gatherings with task force members. His bipartisan approach is vital in developing durable AI regulations that endure beyond shifting political majorities. Speaker Mike Johnson (R-La.) recognized Obernolte’s ability to bridge divides, entrusting him with the leadership role.
Obernolte acknowledges the difficulty of balancing immediate GOP priorities, such as confirming Cabinet appointments and advancing tax reform, with the urgent need for AI governance. His strategy involves convincing leadership that AI policy proposals are well-reasoned and broadly supported.
Senate Republicans 119th Roadmap on AI
As the 119th Congress convenes under Republican leadership, Senate Republicans are expected to approach artificial intelligence (AI) legislation with a focus on fostering innovation while exercising caution regarding regulatory measures. This perspective aligns with the broader GOP emphasis on minimal government intervention in technology sectors.
Legislative Landscape and Priorities
During the 118th Congress, the Senate Bipartisan AI Working Group, which included Republican Senators Mike Rounds (R-S.D.) and Todd Young (R-Ind.), released a policy roadmap titled “Driving U.S. Innovation in Artificial Intelligence.” This document outlined strategies to promote AI development, address national security concerns, and consider ethical implications.
In the 119th Congress, Senate Republicans are anticipated to prioritize:
Promoting Innovation: Advocating for policies that support AI research and development to maintain the United States’ competitive edge in technology.
National Security: Focusing on the implications of AI in defense and security, ensuring that advancements do not compromise national safety.
Economic Growth: Encouraging the integration of AI in various industries to stimulate economic development and job creation.
Regulatory Approach
Senate Republicans generally favor a cautious approach to AI regulation, aiming to avoid stifling innovation. There is a preference for industry self-regulation and the development of ethical guidelines without extensive government mandates. This stance reflects concerns that premature or overly restrictive regulations could hinder technological progress and economic benefits associated with AI.
Bipartisan Considerations
While Republicans hold the majority, bipartisan collaboration remains essential for passing comprehensive AI legislation. Areas such as national security and economic competitiveness may serve as common ground for bipartisan efforts. However, topics like AI’s role in misinformation and election integrity could present challenges due to differing party perspectives on regulation and free speech.
Conclusion
In both the House and Senate, Republicans are approaching AI legislation with a focus on fostering innovation, enhancing national security, and promoting economic growth. Their preference leans toward industry self-regulation and minimal government intervention to avoid stifling innovation. Areas like national security offer potential bipartisan common ground, though debates around misinformation and election integrity may highlight partisan divides.
With House and Senate Republicans already working on a likely massive reconciliation package focused on top Republican priorities including tax, border security, and energy, AI advocates will be hard pressed to ensure their legislative goals find space in the final text.
The BR Privacy & Security Download: January 2025
Must Read! The U.S. Department of Health and Human Services Office for Civil Rights recently proposed an amendment to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information. Read the full alert to learn more about the first significant update to HIPAA’s Security Rule in over a decade.
STATE & LOCAL LAWS & REGULATIONS
Five New State Comprehensive Privacy Laws Effective in January with Three More to Follow in 2025: With the start of the new year, five new state comprehensive privacy laws have become effective. The comprehensive privacy laws of Delaware, Iowa, Nebraska, and New Hampshire became effective on January 1, 2025, and New Jersey’s law will come into effect on January 15, 2025. Tennessee, Minnesota, and Maryland will follow suit and take effect on July 1, 2025, July 31, 2025, and October 1, 2025, respectively. Companies should review their privacy compliance programs to identify potential compliance gaps with differences in the increasing patchwork of state laws.
Colorado Adopts Amendments to CPA Rules: The Colorado Attorney General announced the adoption of amendments to the Colorado Privacy Act (“CPA”) rules. The rules will become effective on January 30, 2025. The rules provide enhanced protections for the processing of biometric data as well as the processing of the online activities of minors. Specifically, companies must develop and implement a written biometric data policy, implement appropriate security measures for biometric data, provide notice of the collection and processing of biometric data, obtain employee consent for the processing of biometric data, and provide a right of access to such data. For minors, the amendments require entities to obtain consent prior to using any system design feature designed to significantly increase a known minor’s use of an online service, and to update their Data Protection Assessments to address processing that presents heightened risks to minors. Entities already subject to the CPA should carefully review whether they may have heightened obligations for the processing of employee biometric data, a category of data previously exempt from the scope of the CPA.
CPPA Announces Increased Fines and Penalties Under CCPA: The California Privacy Protection Agency (“CPPA”), the enforcement authority of the California Consumer Privacy Act (“CCPA”), has adjusted the fines and monetary thresholds of the CCPA. Under the CCPA, in January of every odd-numbered year, the CPPA must make this adjustment to account for changes in the Consumer Price Index. The CPPA increased the CCPA’s monetary threshold from $25,000,000 to $26,625,000. It also increased the range of statutory damages from $100 to $750 per consumer per incident (or actual damages, whichever is greater) to $107 to $799. Civil penalty and administrative fine amounts increased from $2,500 to $2,663 for each violation of the CCPA, and from $7,500 to $7,988 for each intentional violation or violation involving the personal information of children under 16. The new amounts went into effect on January 1, 2025.
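The published figures are all consistent with a uniform adjustment of 6.5% rounded half-up to the nearest dollar. That factor is inferred here from the numbers themselves, not quoted from the CPPA, but a quick sketch reproduces every amount:

```python
def adjust(amount: int) -> int:
    """Apply the ~6.5% CPI adjustment implied by the published figures.

    Uses exact integer arithmetic (multiply by 1065/1000, round half up)
    to avoid floating-point surprises on the half-dollar boundary cases
    (e.g., $2,500 * 1.065 = $2,662.50, which rounds up to $2,663).
    The 6.5% factor is inferred from the figures, not quoted by the CPPA.
    """
    return (amount * 1065 + 500) // 1000
```

With this function, `adjust(25_000_000)` yields the new $26,625,000 revenue threshold, and `adjust(2_500)` yields the new $2,663 per-violation penalty, matching the announcement.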
Connecticut State Senator Previews Proposed Legislation to Update State’s Comprehensive Privacy Law: Connecticut State Senator James Maroney (D) has announced that he is drafting a proposed update to the Connecticut Privacy Act that would expand its scope, provide enhanced data subject rights, include artificial intelligence (“AI”) provisions, and potentially eliminate certain exemptions currently available under the Act. Senator Maroney expects that the proposed bill could receive a hearing by late January or early February. Although Maroney has not published a draft, he indicated that the draft would likely (1) reduce the compliance threshold from the processing of the personal data of 100,000 consumers to 35,000 consumers; (2) include AI anti-discrimination measures, potentially in line with recent anti-discrimination requirements in California and Colorado; (3) expand the definition of sensitive data to include religious beliefs and ethnic origin, in line with other state laws; (4) expand the right to access personal data under the law to include a right to access a list of third parties to whom personal data was disclosed, mirroring similar rights in Delaware, Maryland, and Oregon; and (5) potentially eliminate or curtail categorical exemptions under the law, such as that for financial institutions subject to the Gramm-Leach-Bliley Act.
CPPA Endorses Browser Opt-Out Law: The CPPA’s board voted to sponsor a legislative proposal that would make it easier for California residents to exercise their right to opt out of the sale of personal information and sharing of personal information for cross-context behavioral advertising purposes. Last year, Governor Newsom vetoed legislation with the same requirements. Like last year’s vetoed legislation, the legislative proposal sponsored by the CPPA requires browser vendors to include a feature that allows users to exercise their opt-out right through opt-out preference signals. Under the CCPA, businesses are required to honor opt-out preference signals as valid opt-out requests. Opt-out preference signals allow a consumer to exercise their opt-out right with all businesses they interact with online without having to make individualized requests with each business. If the proposal is adopted, California would be the first state to require browser vendors to offer consumers the option to enable these signals. Six other states (Colorado, Connecticut, Delaware, Montana, Oregon, and Texas) require businesses to honor browser privacy signals as an opt-out request.
FEDERAL LAWS & REGULATIONS
HHS Proposes Updates to HIPAA Security Rule: The U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) issued a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information (“ePHI”). The NPRM proposes the first significant updates to HIPAA’s Security Rule in over a decade. The NPRM makes a number of updates to the administrative, physical, and technical safeguards specified by the Security Rule, removes the distinction between “required” and “addressable” implementation specifications, and makes all implementation specifications “required” with specific, limited exceptions.
Trump Selects Andrew Ferguson as New FTC Chair: President-elect Donald Trump has selected current Federal Trade Commission (“FTC”) Commissioner Andrew Ferguson to replace Lina Khan as the new FTC Chair. Ferguson is one of two Republicans among the five FTC Commissioners and has been a Commissioner since April of 2024. Prior to becoming an FTC Commissioner, Ferguson served as Virginia’s solicitor general. During his time as an FTC Commissioner, Ferguson dissented from several of Khan’s rulemaking efforts, including a ban on non-compete clauses in employment contracts. Separately, Trump also selected Mark Meador to be an FTC Commissioner. Once Meador is confirmed, giving the FTC a Republican majority, a Republican-led FTC under Ferguson may deprioritize rulemaking and enforcement efforts relating to privacy and AI. In a leaked memo first reported by Punchbowl News, Ferguson wrote to Trump that, under his leadership, the FTC would “stop abusing FTC enforcement authorities as a substitute for comprehensive privacy legislation” and “end the FTC’s attempt to become an AI regulator.”
FERC Updates and Consolidates Cybersecurity Requirements for Gas Pipelines: The U.S. Federal Energy Regulatory Commission (“FERC”) has issued a final rule to update and consolidate cybersecurity requirements for interstate natural gas pipelines. Effective February 7, 2025, the rule adopts Version 4.0 of the Standards for Business Practices of Interstate Natural Gas Pipelines, as approved by the North American Energy Standards Board (“NAESB”). This update aims to enhance the efficiency, reliability, and cybersecurity of the natural gas industry. The new standards consolidate existing cybersecurity protocols into a single manual, streamlining processes and strengthening protections against cyber threats. This consolidation is expected to make it easier and faster to revise cybersecurity standards in response to evolving threats. The rule also aligns with broader U.S. government efforts to prioritize cybersecurity across critical infrastructure sectors. Compliance filings are required by February 3, 2025, and the standards must be fully adhered to by August 1, 2025.
House Taskforce on AI Delivers Report to Address AI Advancements: The House Bipartisan Task Force on Artificial Intelligence (the “Task Force”) submitted its comprehensive report to Speaker Mike Johnson and Democratic Leader Hakeem Jeffries. The Task Force was created to ensure America’s continued global leadership in AI innovation with appropriate safeguards. The report advocates for a sectoral regulatory structure and an incremental approach to AI policy, ensuring that humans remain central to decision-making processes. The report provides a blueprint for future Congressional action to address advancements in AI and articulates guiding principles for AI adoption, innovation, and governance in the United States. Key areas covered in the report include government use of AI, federal preemption of state AI law, data privacy, national security, research and development, civil rights and liberties, education and workforce development, intellectual property, and content authenticity. The report aims to serve as a roadmap for Congressional action, addressing the potential of AI while mitigating its risks.
CFPB Proposes Rule to Restrict Sale of Sensitive Data: The Consumer Financial Protection Bureau (“CFPB”) proposed a rule that would require data brokers to comply with the Fair Credit Reporting Act (“FCRA”) when selling income and certain other consumer financial data. CFPB Director Rohit Chopra stated the new proposed rule seeks to limit “widespread evasion” of the FCRA by data brokers when selling sensitive personal and financial information of consumers. Under the proposed rule, data brokers could sell financial data only for permissible purposes under the FCRA, including checking on loan applications and fraud prevention. The proposed rule would also limit the sale of personally identifying information known as credit header data, which can include basic demographic details, including names, ages, addresses, and phone contacts.
FTC Issues Technology Blog on Mitigating Security Risks through Data Management, Software Development and Product Design: The Federal Trade Commission (“FTC”) published a blog post identifying measures that companies can take to limit the risks of data breaches. These measures relate to security in data management, security in software development, and security in product design for humans. The FTC emphasizes comprehensive governance measures for data management, including (1) enforcing mandated data retention schedules; (2) mandating data deletion in accordance with these schedules; (3) controlling third-party data sharing; and (4) encrypting sensitive data both in transit and at rest. In the context of security in software development, the FTC identified as key security measures (1) building products using memory-safe programming languages; (2) rigorous testing, including penetration and vulnerability testing; and (3) securing external product access to prevent unauthorized remote intrusions. Finally, in the context of security in product design for humans, the FTC identified (1) enforcing least privilege access controls; (2) requiring phishing-resistant multifactor authentication; and (3) designing products and services without the use of dark patterns to reduce the over-collection of data. The blog post links to recent FTC enforcement actions addressing each of these issues, providing readers with insight into how the FTC has treated them in the past. Companies reviewing their security and privacy governance programs should ensure that they consider these key issues.
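To make the first of these governance measures concrete, the core mechanics of a mandated data retention schedule can be sketched in a few lines. The following illustrative Python snippet is a minimal sketch only; the record structure, data categories, and retention periods are hypothetical and are not drawn from the FTC’s blog post:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention schedule: how long each data category may be kept.
RETENTION = {
    "transaction": timedelta(days=365 * 7),  # e.g., a seven-year retention period
    "marketing": timedelta(days=90),         # e.g., a 90-day retention period
}

def purge_expired(records, now=None):
    """Return only the records still within their category's retention window."""
    now = now or datetime.now(timezone.utc)
    return [
        r for r in records
        if now - r["created"] <= RETENTION[r["category"]]
    ]

records = [
    {"id": 1, "category": "marketing",
     "created": datetime.now(timezone.utc) - timedelta(days=200)},
    {"id": 2, "category": "marketing",
     "created": datetime.now(timezone.utc) - timedelta(days=10)},
]
# The 200-day-old marketing record exceeds the 90-day window and is purged;
# only the 10-day-old record is kept.
kept = purge_expired(records)
```

In practice, a program of the kind the FTC describes would pair scheduled deletion like this with audit logging and documented policies, but the schedule-driven purge is the technical heart of it.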
U.S. LITIGATION
Texas District Court Prevents HHS from Enforcing Reproductive Health Privacy Rule Against Doctor: The U.S. District Court for the Northern District of Texas ruled that a Texas doctor is likely to prevail on her claim that HHS exceeded its statutory authority when it adopted an amendment to the Health Insurance Portability and Accountability Act (“HIPAA”) Privacy Rule that protects reproductive health care information, and enjoined HHS from enforcing the rule against her. The 2024 amendment to the HIPAA Privacy Rule prohibits covered entities from disclosing information that could lead to an investigation or criminal, civil, or administrative liability for seeking, obtaining, providing, or facilitating reproductive health care. The Court stated that the rule likely unlawfully interfered with the plaintiff’s state-law duty to report suspected child abuse, in violation of Congress’s delegation to the agency to enact rules interpreting HIPAA without limiting any law providing for such reporting. The plaintiff argued that, under Texas law, she is obligated to report instances of child abuse within 48 hours, and that relevant requests from Texas regulatory authorities demand the full, unredacted patient chart, which for female patients includes information about menstrual periods, number of pregnancies, and other reproductive health information.
Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement: A proposed settlement in the Clearview AI Illinois Biometric Information Privacy Act (“BIPA”) litigation is facing opposition from 22 states and the District of Columbia. The Attorneys General of those states argue that the settlement, which received preliminary approval in June 2024, lacks meaningful injunctive relief and offers plaintiffs an unusual financial stake in Clearview AI. The settlement would grant the class of consumers a 23 percent stake in Clearview AI, potentially worth $52 million based on a September 2023 valuation. Alternatively, the class could opt for 17 percent of the company’s revenue through September 2027. The Attorneys General contend that the settlement does not adequately address consumer privacy concerns and that the proposed 39 percent attorney fee award is excessive. Clearview AI has filed a motion to dismiss the states’ opposition, arguing it was submitted after the deadline for objections. A judge will consider granting final approval for the settlement at a hearing scheduled for January 30, 2025.
Federal Court Upholds New Jersey’s Daniel’s Law, Dismissing Free Speech Challenges: A federal judge affirmed the constitutionality of New Jersey’s Daniel’s Law, dismissing First Amendment objections raised by data brokers. Enacted following the murder of Daniel Anderl, son of U.S. District Judge Esther Salas, the law permits covered individuals, including active, retired, and former judges, prosecutors, law enforcement officers, and their families, to request the removal of personal details, such as home addresses and unpublished phone numbers, from online platforms. Data brokerage firms that receive such requests must comply within ten (10) business days, with penalties for non-compliance including actual damages or a $1,000 fine for each violation, as well as potential punitive damages for instances of willful disregard. Notably, in 2023, Daniel’s Law was amended to allow claim assignments to third parties, resulting in over 140 lawsuits filed by a single consumer data protection company: Atlas Data Privacy Corporation. Atlas Data, a New Jersey firm specializing in data deletion, has emerged as a significant force in this litigation, utilizing Daniel’s Law to challenge data brokers on behalf of around 19,000 individuals. The court, in upholding Daniel’s Law, emphasized its critical role in safeguarding public officials while concurrently ensuring public oversight remains strong. Although data brokers contended that the law infringed on free speech and unfairly targeted their operations, the court dismissed these claims as lacking merit, instead placing significant emphasis on the statute’s relatively focused scope and the substantial state interest at play. Although the ruling is unquestionably a significant victory for advocates of privacy rights, the judge permitted the data brokers to take an immediate appeal.
GoodRx Settles Class Action Suit Over Alleged Data Sharing Violations: GoodRx has agreed to a $25 million settlement in a class-action lawsuit alleging the company violated privacy laws by sharing users’ sensitive health data with advertisers like Meta Platforms, Google, and Criteo Corp. The settlement, if approved, would resolve a lawsuit filed in February 2023. The lawsuit followed an FTC action alleging that GoodRx shared information about users’ prescriptions and health conditions with advertising companies. GoodRx settled the FTC matter for $1.5 million. The proposed class in the class-action lawsuit is estimated to be in the tens of millions and would give each class member an average recovery ranging from $3.31 to $11.03. The settlement also allows the plaintiffs to use information from GoodRx to pursue their claims against the other defendants, including Meta, Google, and Criteo.
23andMe Data Breach Suit Settlement Approved: A federal judge approved a settlement to resolve claims alleging that 23andMe Inc. failed to secure sensitive personal data, resulting in a 2023 data breach. According to 23andMe, a threat actor was able to access roughly 14,000 user accounts through credential stuffing, which further enabled access to the personal information that approximately 6.9 million users made available through 23andMe’s DNA Relative and Family Tree profile features. Under the terms of the $30 million settlement, class members will receive cash compensation and three years of data monitoring services, including genetic services.
U.S. ENFORCEMENT
FTC Takes Action Against Company for Deceptive Claims Regarding Facial Recognition Software: The Federal Trade Commission (“FTC”) announced that it has entered into a settlement with IntelliVision Technologies Corp. (“IntelliVision”), which provides facial recognition software used in home security systems and smart home touch panels. The FTC alleged that IntelliVision falsely or misleadingly claimed that it had one of the highest accuracy rates on the market, that its software was free of gender or racial bias, and that its software was trained on millions of faces. The FTC further alleged that IntelliVision did not have adequate evidence to support its claim that its anti-spoofing technology ensures the system cannot be tricked by a photo or video image. The proposed order specifically prohibits IntelliVision from misrepresenting the effectiveness, accuracy, or lack of bias of its facial recognition technology and its technology to detect spoofing, as well as the comparative performance of the technology with respect to individuals of different genders, ethnicities, and skin tones.
FTC Settles Enforcement Actions with Data Brokers for Selling Sensitive Location Data: The FTC announced settlements with data brokers Gravy Analytics Inc. (“Gravy Analytics”) and Mobilewalla, Inc. (“Mobilewalla”) related to the tracking and sale of sensitive consumer location data. According to the FTC, Gravy Analytics violated the FTC Act by unfairly selling sensitive consumer location data, by collecting and using consumers’ location data without obtaining verifiable user consent for commercial and government uses, and by selling data regarding sensitive characteristics, such as health or medical decisions, political activities, and religious views, derived from location data. Under the proposed settlement, Gravy Analytics will be prohibited from selling, disclosing, or using sensitive location data in any product or service, must delete all historic location data and data products using such data, and must establish a sensitive location data compliance program. Separately, the FTC settled allegations that Mobilewalla collected location data from real-time bidding exchanges and third-party aggregators, including data related to health clinic visits and visits to places of worship, without the knowledge of consumers, and subsequently sold such data. According to the FTC, when Mobilewalla bid to place an ad for its clients on a real-time advertising bidding exchange, it unfairly collected and retained the information in the bid request even when it did not have a winning bid. Under the proposed settlement, Mobilewalla will be prohibited from selling sensitive location data and from collecting consumer data from online advertising auctions for purposes other than participating in those auctions.
Texas Attorney General Issues New Warnings Under State’s Comprehensive Privacy Law: The Texas Attorney General issued warnings to satellite radio broadcaster Sirius XM and three mobile app providers stating that they appear to be sharing sensitive consumer data, including location data, without proper notification or consent. The letter warnings did not come with a press release or other public statement and were reported by Recorded Future News, which obtained the notices through a public records request. The letter to Sirius XM stated that the Attorney General’s office found a number of violations of the Texas Data Privacy and Security Act in the Sirius XM privacy notice, including failing to provide reasonably clear notice of the categories of sensitive data being processed and processing sensitive data without appropriate consent. Similar letters sent to mobile app providers stated that the providers failed to obtain consumer consent for data sharing or to include information on how consumers could exercise their rights under Texas law.
Texas Attorney General Launches Investigations Into 15 Companies for Children’s Privacy Practices: The Texas Attorney General’s office announced it had launched investigations into Character.AI and 14 other companies including Reddit, Instagram, and Discord. The Attorney General’s press release stated that the investigations related to the companies’ privacy and safety practices for minors pursuant to the Securing Children Online through Parental Empowerment (“SCOPE”) Act and the Texas Data Privacy and Security Act (“TDPSA”). Details of the Attorney General’s allegations were not provided in the announcement. The TDPSA requires companies to provide notice and obtain consent to collect and use minors’ personal data. The SCOPE Act prohibits digital service providers from sharing, disclosing, or selling a minor’s personal identifying information without permission from the child’s parent or legal guardian and provides parents with tools to manage privacy settings on their child’s account.
HHS Imposes Penalty Against Medical Provider for Impermissible Access to PHI and Security Rule Violations: The U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) announced that it imposed a $1.19 million civil penalty against Gulf Coast Pain Consultants, LLC d/b/a Clearway Pain Solutions Institute (“GCPC”) for violations of the HIPAA Security Rule arising from a data breach. GCPC’s former contractor had impermissibly accessed GCPC’s electronic medical record system to retrieve protected health information (“PHI”) for use in potential fraudulent Medicare claims. OCR’s investigation determined that the impermissible access occurred on three occasions, affecting approximately 34,310 individuals. The compromised PHI included patient names, addresses, phone numbers, email addresses, dates of birth, Social Security numbers, chart numbers, insurance information, and primary care information. OCR’s investigation revealed multiple potential violations of the HIPAA Security Rule, including failures to conduct a compliant risk analysis, to implement procedures to regularly review records of activity in information systems, and to terminate former workforce members’ access to electronic PHI.
HHS Settles with Health Care Clearinghouse for HIPAA Security Rule Violations: OCR announced a settlement with Inmediata Health Group, LLC (“Inmediata”), a healthcare clearinghouse, for potential violations of the HIPAA Security Rule, following OCR’s receipt of a 2018 complaint that PHI was accessible on the Internet to search engines such as Google. OCR’s investigation determined that from May 2016 through January 2019, the PHI of 1,565,338 individuals was made publicly available online. The PHI disclosed included patient names, dates of birth, home addresses, Social Security numbers, claims information, diagnoses/conditions, and other treatment information. OCR’s investigation also identified multiple potential HIPAA Security Rule violations, including failures to conduct a compliant risk analysis and to monitor and review the activity of Inmediata’s health information systems. Under the settlement, Inmediata paid OCR $250,000. OCR determined that a corrective action plan was not necessary because Inmediata had previously agreed to a settlement with 33 states that included corrective actions addressing OCR’s findings.
New York State Healthcare Provider Settles with Attorney General Regarding Allegations of Cybersecurity Failures: HealthAlliance, a division of Westchester Medical Center Health Network (“WMCHealth”), has agreed to pay a $1.4 million fine, with $850,000 suspended, due to a 2023 data breach affecting over 240,000 patients and employees in New York State. The breach at issue, which occurred between September and October 2023, was reportedly caused by a security flaw in Citrix NetScaler—a tool used by many organizations to optimize web application performance and availability by reducing server load—that went unpatched. Although HealthAlliance was made aware of the vulnerability, they were unsuccessful in patching it due to technical difficulties, ultimately resulting in the exposure of 196 gigabytes of data, including particularly sensitive information like Social Security numbers and medication records. As part of its agreement with New York State, HealthAlliance must enhance its cybersecurity practices by implementing a comprehensive information security program, developing a data inventory, and enforcing a patch management policy to address critical vulnerabilities within 72 hours. For more details, view the press release from the New York Attorney General’s office.
HHS Settles with Children’s Hospital for HIPAA Privacy and Security Violations: OCR announced a $548,265 civil monetary penalty against Children’s Hospital Colorado (“CHC”) for violations of the HIPAA Privacy and Security Rules arising from data breaches in 2017 and 2020. The 2017 data breach involved a phishing attack that compromised an email account containing 3,370 individuals’ PHI, and the 2020 data breach compromised three email accounts containing 10,840 individuals’ PHI. OCR’s investigation determined that the 2017 data breach occurred because multi-factor authentication was disabled on the affected email account. The 2020 data breach occurred, in part, when workforce members gave permission to unknown third parties to access their email accounts. OCR found violations of the HIPAA Privacy Rule, for failure to train workforce members on the Privacy Rule, and of the HIPAA Security Rule requirement to conduct a compliant risk analysis to determine the potential risks and vulnerabilities to ePHI in CHC’s systems.
INTERNATIONAL LAWS & REGULATIONS
Italy Imposes Landmark GDPR Fine on AI Provider for Data Violations: In the first reported EU penalty under the GDPR relating to generative AI, Italy’s data protection authority, the Garante, fined OpenAI 15 million euros for breaching the European Union’s General Data Protection Regulation (“GDPR”). The penalty was linked to three specific incidents involving OpenAI: (1) unauthorized use of personal data for ChatGPT training without user consent, (2) inadequate age verification risking exposure of minors to inappropriate content, and (3) failure to report a March 2023 data breach that exposed users’ contact and payment information. The investigation into OpenAI, which began after the Garante was made aware of the March 2023 breach, initially resulted in Italy temporarily blocking access to ChatGPT; access was reinstated after OpenAI made concrete improvements to its age verification and privacy policies. Alongside the monetary penalty, OpenAI is additionally mandated to conduct a six-month public awareness campaign in Italy to educate the Italian public on data collection and individual user rights under the GDPR. OpenAI has said that it plans to appeal the Garante’s decision, arguing that the fine exceeds its revenue in Italy.
Australian Parliament Approves Privacy Act Reforms and Bans Social Media Use by Minors: The Australian Parliament passed a number of privacy bills in December. The bills include reforms to the Australian Privacy Act, a law requiring age verification by social media platforms, and a law banning social media use by minors under the age of 16. The Privacy Act reforms include new enforcement powers for the Office of the Australian Information Commissioner (“OAIC”) that clarify when “serious” breaches of the Privacy Act occur and allow the OAIC to bring civil penalty proceedings for lesser breaches. Other reforms require entities that use personal data for automated decision-making to include in their privacy notices information about what data is used for automated decision-making and what types of decisions are made using automated decision-making technology.
EDPB Releases Opinion on Personal Data Use in AI Models: In response to a formal request from Ireland’s Data Protection Commission asking for clarity about how the EU General Data Protection Regulation (“GDPR”) applies to the training of large language models with personal data, the European Data Protection Board (“EDPB”) released its opinion regarding the lawful use of personal data for the development and deployment of artificial intelligence models (the “Opinion”). The Irish Data Protection Commission specifically requested EDPB to opine on: (1) when and how an AI model can be considered anonymous, (2) how legitimate interests can be used as the legal basis in the development and deployment phases of an AI model, and (3) the consequences of unlawful processing in the development phase of an AI model on its subsequent operation. With respect to anonymity, the EDPB stated this should be analyzed on a case-by-case basis taking into account the likelihood of obtaining personal data of individuals whose data was used to build the model and the likelihood of extracting personal data from queries. The Opinion describes certain methods that controllers can use to demonstrate anonymity. With respect to the use of legitimate interest as a legal basis for processing, the EDPB restated a three-part test to assess legitimate interest from its earlier guidance. Finally, the EDPB reviewed several scenarios in which personal data may be unlawfully processed to develop an AI model.
Second Draft of General-Purpose AI Code of Practice Published: The European Commission announced that independent experts published the Second Draft of the General Purpose AI Code of Practice. The AI Code of Practice is designed to be a guiding document for providers of general-purpose AI models, allowing them to demonstrate compliance with the AI Act. Under the EU AI Act, providers are persons or entities that develop an AI system and place that system on the market. This second draft is based on the responses and comments received on the first draft and is designed to provide a “future-proof” code. The first part of the Code details transparency and copyright obligations for all providers of general-purpose AI models. The second part of the Code applies to providers of advanced general-purpose AI models that could pose systemic risks. This section outlines measures for systemic risk assessment and mitigation, including model evaluations, incident reporting, and cybersecurity obligations. The Second Draft will be open for comments until January 15, 2025.
NOYB Approved to Bring Collective Redress Claims: The Austrian-based non-profit organization None of Your Business (“NOYB”) has been approved as a Qualified Entity in Austria and Ireland, enabling it to pursue collective redress actions across the European Union (“EU”). Famous for challenging the EU-US data transfer framework through its Schrems I and II actions, NOYB intends to use the EU’s collective action redress system to challenge what it describes as unlawful processing without consent, use of deceptive dark patterns, data sales, international data transfers, and use of “absurd” language in privacy policies. Unlike US class actions, these EU actions are strictly non-profit. However, they do provide for both injunctive and monetary redress measures. NOYB intends to bring its first actions in 2025. Click here to learn more and read NOYB’s announcement.
EDPB Issues Guidelines on Third Country Authority Data Requests: The EDPB published draft guidelines on Article 48 of the GDPR relating to the transfer or disclosure of personal data to a governmental authority in a third country (the “Guidelines”). The Guidelines state that, as a general rule, requests from governmental authorities are recognizable and enforceable under applicable international agreements. The Guidelines further state that any such transfer must also comply with Article 6 with respect to legal basis for processing and Article 46 regarding legal mechanism for international data transfer. The Guidelines will be available for public consultation until January 27, 2025.
Irish DPC Fines Meta €251 Million for Violations of the GDPR: The Irish Data Protection Commission (DPC) fined Meta €251 million following a 2018 data breach that affected 29 million Facebook accounts globally, including 3 million in the European Union. The breach exposed personal data such as names, contact information, locations, birthdates, religious and political beliefs, and children’s data. The DPC found that Meta Ireland violated General Data Protection Regulation (GDPR) Articles 33(3) and 33(5) by failing to provide complete information in their breach notification and to properly document the breach. Furthermore, Meta Ireland infringed GDPR Articles 25(1) and 25(2) by neglecting to incorporate data protection principles into the design of their processing systems and by processing more data than necessary by default.
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin
Complying With the New “Open Banking” Regime: Primer and Fact Sheet
The Consumer Financial Protection Bureau (CFPB) finalized its “open banking” rule in late 2024. As required by Section 1033 of the Consumer Financial Protection Act, the CFPB promulgated the rule to require certain financial services entities to provide for the limited sharing of consumer data and to standardize the way in which that data is shared. The CFPB has stated that the open banking rule will “boost competition” by facilitating consumers’ ability to switch between banks and other financial service providers.
In general, the open banking rule:
Provides consumers with control over their data in bank accounts, credit card accounts, and other financial products, including mobile wallets and payment apps;
Allows consumers to authorize third-party access to consumers’ data including transaction information, account balance information, and information needed to initiate payments; and
Requires financial providers to make this information available in an accurate, machine-readable format and at no charge to consumers.
For more background on the history and policy of open banking, please review our prior alert.
Compliance Deadlines
Numerous comments to the proposed rule urged the CFPB to lengthen the period of time for businesses to comply with the rule. The CFPB responded to those comments by extending the original six-month compliance date for the largest affected institutions to provide a 1.5-year implementation period. The table below summarizes the compliance schedule by which different-sized entities must operate in compliance with the rule:
Compliance Timeline
1 April 2026 (~1.5 years): Depository institutions with at least US$250b in total assets; nondepository institutions with at least US$10b in total receipts as of either 2023 or 2024.
1 April 2027 (~2.5 years): Depository institutions with at least US$10b but less than US$250b in total assets; nondepository institutions with less than US$10b in total receipts in both 2023 and 2024 (this is the final compliance date for nondepository institutions).
1 April 2028 (~3.5 years): Depository institutions with at least US$3b but less than US$10b in total assets.
1 April 2029 (~4.5 years): Depository institutions with at least US$1.5b but less than US$3b in total assets.
1 April 2030 (~5.5 years): Depository institutions with less than US$1.5b but more than US$850m in total assets (depository institutions holding less than US$850m are exempted from compliance).
Making Consumer Financial Data Available
Under the final rule, a “data provider” must provide, at the request of a consumer or a third party authorized by the consumer, “covered data” concerning a consumer financial product or service that the consumer obtained from the data provider. The rule defines data provider to include depository institutions, electronic payment providers, credit card issuers, and other financial services providers. The rule defines covered data to include transaction information, account balances, and other information to enable payments.
A data provider’s obligations regarding covered data arise only when it holds data concerning a consumer financial product or service that the consumer actually obtained from that data provider. Notwithstanding third-party obligations, merely possessing data received from another data provider does not implicate the rule. The CFPB revised the definition of covered data in a manner that offers some clarity for consumer reporting agencies (CRAs), which typically gather data from other entities for consumer credit reports.
Electronic Payments: The data provider is a financial institution, as defined in Regulation E; the covered consumer financial product or service is a Regulation E account.
Credit Cards: The data provider is a card issuer, as defined in Regulation Z; the covered consumer financial product or service is a Regulation Z credit card.
Other Products and Services: The data provider is any other person that controls or possesses information concerning a covered consumer financial product or service that the consumer obtained from that person; the covered consumer financial product or service is the facilitation of payments from a Regulation E account or Regulation Z credit card.
Data Provider Interfaces
As part of the provision of data, data providers must create both consumer and developer interfaces to enable the efficient provision and exchange of consumer data. In addition to various technical requirements, data providers must also establish and maintain written policies and procedures to ensure the efficient, secure, and accurate sharing of consumer data. Data providers are prohibited from charging fees for providing this service.
Interface Requirements
When to Provide Data: Through the consumer interface, when the data provider receives information sufficient to (1) authenticate the consumer’s identity and (2) identify the scope of the data requested. Through the developer interface, when the data provider receives information sufficient to (1) authenticate the consumer’s identity; (2) authenticate the third party’s identity; (3) document that the third party is properly authorized; and (4) identify the scope of the data requested.
Data Format: A machine-readable file through the consumer interface; a standardized, machine-readable file through the developer interface.
Interface Performance: The consumer interface is subject to a strict requirement to provide data; the developer interface must meet a minimum 95% success rate.
Data Request Denials: Through either interface, unlawful, insecure, or otherwise unreasonable requests may be denied.
Authorizing Third Parties
To lawfully access covered data, a third party must generally do three things: (1) provide the consumer with an authorization disclosure; (2) certify that the third party complies with various restrictions on the use of the data; and (3) obtain the consumer’s express approval to access the covered data.
The rule prohibits three uses of data: (1) targeted advertising; (2) cross-selling of other products or services; and (3) selling covered data. While commenting on the proposed rule, several CRAs requested that the CFPB allow for use of covered data for internal purposes such as research and development of products. The CFPB found this reasonable and permitted “uses that are reasonably necessary to improve the product or service the consumer requested.”
Conclusion
The open banking rule establishes a robust framework for the exchange and transmission of certain types of consumer data by certain entities, and for the safeguarding of that data. Although the final rule extends the implementation deadlines beyond those originally proposed, implementation will require careful coordination among various functions of affected data providers’ businesses and by entities authorized to receive covered data.
NEW YEAR, NEW FINES & FEES: FCC Adopts Report and Order Introducing New Fees Associated with the Robocall Mitigation Database
As I am sure you all know, the Robocall Mitigation Database (RMD) was born out of the TRACED Act and implemented to further the FCC’s efforts to protect America’s networks from illegal robocalls. The RMD was put in place to monitor the traffic on our phone networks and to assist in compliance with the rules. While the FCC has expanded both the types of service providers who need to file and the filing requirements, it still found deficiencies in the accuracy and currency of the information on file. The newly adopted Report and Order is set to help fine-tune the RMD.
On December 30th, the Commission adopted a Report and Order to further strengthen its efforts by introducing fines and fees associated with the RMD. Companies that submit false or inaccurate information may face fines of up to $10,000 for each filing, while failing to keep your company information current might land you a $1,000 fine. There will now be a $100 filing fee associated with your RMD application, along with a $100 annual recertification filing fee.
Aside from the fines and fees, there are a few additional developments with the RMD; see the complete list below.
Requiring prompt updates when a provider’s information changes (updates must be made within 10 business days or face a $1,000 fine);
Establishing a higher base forfeiture amount for providers submitting false or inaccurate information ($10,000 fine);
Creating a dedicated reporting portal for deficient filings;
Issuing substantive guidance and filer education;
Developing a two-factor authentication log-in solution;
Requiring providers to recertify their Robocall Mitigation Database filings annually ($100 fee); and
Requiring providers to remit a filing fee for initial and subsequent annual submissions ($100 fee).
Chairwoman Rosenworcel is quoted as saying “Companies using America’s phone networks must be actively involved in protecting consumers from scammers, we are tightening our rules to ensure voice service providers know their responsibilities and help stop junk robocalls. I thank my colleagues for their bipartisan support of this effort.”
The new fines and fees will become effective 30 days after publication in the Federal Register, while the remaining items are still under additional review. We will keep an eye on this and let you know once the Report and Order is published. Read the Report and Order here.