5 Trends to Watch: 2025 EU Data Privacy & Cybersecurity

Full Steam Ahead: The European Union’s (EU) Artificial Intelligence (AI) Act in Action — As the EU’s landmark AI Act officially takes effect, 2025 will be a year of implementation challenges and enforcement. Companies deploying AI across the EU will likely navigate strict rules on data usage, transparency, and risk management, especially for high-risk AI systems. Privacy regulators are expected to play a key role in monitoring how personal data is used in AI model training, with potential penalties for noncompliance. The interplay between the AI Act and the General Data Protection Regulation (GDPR) may add complexity, particularly for multinational organizations.
Network and Information Security Directive (NIS2) Matures: A New Era of Cybersecurity Regulation — The EU’s NIS2 Directive will enter its enforcement phase, expanding cybersecurity obligations for critical infrastructure and key sectors. Companies must adapt to stricter breach notification rules, risk management requirements, and supply-chain security mandates. Regulators are expected to focus on cross-border coordination in response to major incidents, with early cases likely setting important precedents. Organizations will likely face increasing scrutiny of their cybersecurity disclosures and incident response protocols.
The Evolution of Data Transfers: Toward a Unified Framework — After years of turbulence, 2025 may mark a turning point for transatlantic and global data flows. The EU-U.S. Data Privacy Framework will face ongoing reviews by the European Data Protection Board (EDPB) and potential legal challenges, but it offers a clearer path forward. Meanwhile, the EU may continue striking adequacy agreements with key trading partners, setting the stage for a harmonized approach to cross-border data transfers. Companies will need robust mechanisms, such as Standard Contractual Clauses and emerging Transfer Impact Assessments (TIAs), to maintain compliance.
Consumer Rights Expand Under the GDPR’s Influence — The GDPR continues to set the global benchmark for privacy laws, and 2025 will see the ripple effect of its influence as EU member states refine their own data protection frameworks. Enhanced consumer rights, such as the right to explanation in algorithmic decision-making and stricter opt-in requirements for data use, are anticipated. Regulators are also likely to target dark patterns and deceptive consent mechanisms, driving companies toward greater transparency in their user interfaces and data practices.
Digital Markets Act Meets GDPR: Privacy in the Platform Economy — The Digital Markets Act (DMA), fully enforceable in 2025, will bring sweeping changes to large online platforms, or “gatekeepers.” Interoperability mandates, restrictions on data combination across services, and limits on targeted advertising will intersect with GDPR compliance. The overlap between DMA and GDPR enforcement will challenge platforms to adapt their practices while balancing privacy obligations. This regulatory synergy may reshape data monetization strategies and set a precedent for digital market governance worldwide.

AI Versus MFA

Ask any chief information security officer (CISO), cyber underwriter or risk manager, or cybersecurity attorney which controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.
According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues when recruiting: after making a hiring decision, they may find that the person doing the work is not the person who interviewed for the position.
Threat actors have leveraged a new AI deepfake tool, available on the dark web, to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:
“1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.
2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…
3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.
4. Threat actors then initiate a new account fraud attack where they connect a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks to perform facial recognition where the tool enables attackers to connect the video to the camera’s input.
5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.”
Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:
Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.
In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is another threat actor technique, involving the “use of unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”
In December, Infosecurity Magazine reported on yet another vulnerability in MFA, one of many such reports.
Are we recommending against the use of MFA? Certainly not. Our point is simply to offer a reminder that there are no silver bullets for securing information systems, and that AI is not used only by the good guys. An information security program, preferably a written one (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older ones.

ESG and Supply Chains in 2024: Key Trends, Challenges, and Future Outlook

In 2024, supply chains remained a critical focal point for companies committed to environmental, social, and governance (ESG) principles. Given their significant contribution to a company’s environmental footprint and social impact, supply chains have become an essential area for implementing sustainable and ethical practices.
Advancements in technology, evolving regulatory frameworks, and innovative corporate strategies defined the landscape of ESG in supply chains this year. However, challenges such as data reliability, cost pressures, and geopolitical risks persisted in 2024. Here are seven observations highlighting progress, challenges, and potential future directions in ESG and supply chains.
1. Regulatory and Market Drivers
Governments and international organizations introduced stringent regulations in 2024, compelling companies to prioritize ESG considerations in their supply chains. These policies aimed to address environmental degradation, human rights abuses, and climate-related risks while fostering greater transparency and accountability.

EU’s Corporate Sustainability Due Diligence Directive (CSDDD): The European Union’s CSDDD came into force, requiring companies operating in the EU to identify, prevent, and mitigate adverse human rights and environmental impacts throughout their supply chains. This regulation required businesses to map their suppliers, assess risks, and implement corrective actions, driving improvements in traceability and supplier accountability.
U.S. Uyghur Forced Labor Prevention Act (UFLPA): In the United States, the Department of Homeland Security’s enforcement of the UFLPA intensified. This act targeted goods produced with forced labor, particularly in China’s Xinjiang region, and placed the burden of proof on companies to demonstrate compliance. Businesses were required to adopt rigorous traceability systems to ensure their products were free from forced labor.
Carbon Border Adjustment Mechanisms (CBAMs): Carbon tariffs, implemented by the EU and other regions, incentivized companies to measure and reduce the carbon intensity of imported goods. These mechanisms encouraged businesses to collaborate with suppliers to lower emissions and adopt cleaner technologies.

2. Advances in Supply Chain Traceability and Transparency
Technological innovations were central to advancing supply chain traceability and transparency, enabling companies to identify risks, ensure compliance, and improve sustainability performance.

Blockchain Technology: Blockchain emerged as a cornerstone of supply chain transparency. By creating immutable records of transactions and product origins, blockchain technology provided stakeholders with verifiable proof of ethical sourcing and environmental compliance. Companies used blockchain to authenticate claims about sustainability, such as the origin of raw materials and the environmental credentials of finished goods.
Artificial Intelligence (AI): AI played a transformative role in supply chain management, helping companies analyze supplier risks, predict disruptions, and optimize logistics for lower emissions. AI-powered tools also enabled real-time monitoring of supply chain activities, such as emissions tracking, labor compliance, and waste reduction.
Internet of Things (IoT): IoT sensors provided granular, real-time data on supply chain metrics, such as energy consumption, shipping efficiency, and waste generation. This technology enabled companies to address inefficiencies and enhance the sustainability of their operations.

3. Responsible Sourcing Practices
Responsible sourcing became a cornerstone of supply chain ESG efforts, with companies adopting ethical and sustainable procurement practices to address environmental and social risks.

Raw Material Sourcing: Businesses focused on sourcing raw materials like cobalt, palm oil, and timber from certified suppliers to ensure compliance with environmental and labor standards. Industry-specific certifications, such as those from the Forest Stewardship Council and the Roundtable on Sustainable Palm Oil, gained prominence.
Fair Trade and Ethical Labor: Companies partnered with organizations promoting fair wages, equitable treatment, and safe working conditions. Certifications like Fair Trade and Sedex Responsible Business Practices helped businesses verify their commitment to ethical labor practices throughout their supply chains.
Local Sourcing: To reduce carbon footprints and enhance supply chain resilience, some companies prioritized local sourcing of raw materials and components. This shift minimized emissions from transportation and provided economic support to local communities.

4. Decarbonizing Supply Chains
As companies pursued net-zero commitments, decarbonizing supply chains became a top priority in 2024. Key strategies included:

Supplier Engagement: Companies collaborated with suppliers to reduce emissions through energy efficiency measures, renewable energy adoption, and low-carbon manufacturing techniques.
Sustainable Logistics: Businesses invested in cleaner transportation methods, such as electric vehicles, hydrogen-powered trucks, and optimized shipping routes. The rise of “green corridors” for shipping exemplified collaborative efforts to decarbonize freight transport.
Circular Economy Integration: Companies embraced circular economy principles, focusing on reusing materials, designing for recyclability, and minimizing waste. Circular supply chains not only reduced environmental impact, but also created cost-saving opportunities and new revenue streams.

5. Challenges in ESG Supply Chain Management
Despite progress, companies faced significant challenges in implementing ESG principles across their supply chains.

Data Gaps and Inconsistencies: Collecting reliable ESG data from multitiered supply chains remains a critical hurdle. Smaller suppliers often lack the tools or expertise to comply with reporting requirements, leading to incomplete transparency and inconsistent metrics.
Cost Pressures: Implementing sustainable practices, such as adopting renewable energy or traceability technologies, requires significant upfront investment. These costs are particularly burdensome for small and medium-sized enterprises (SMEs) and create financial tension for larger companies balancing competitive pricing.
Geopolitical Risks: Trade restrictions, regional conflicts, and sanctions disrupt global supply chains, complicating compliance with ESG regulations like forced labor bans or carbon tariffs. Navigating these challenges requires constant adaptation to volatile geopolitical landscapes.
Greenwashing Risks: Increasing regulatory and public scrutiny amplifies the consequences of unverified sustainability claims. Missteps in ESG disclosures expose companies to legal risks, reputational damage, and loss of stakeholder trust.
Supply Chain Complexity: Global supply chains are vast and intricate, often spanning multiple tiers and regions. Mapping these networks to monitor ESG compliance and identify risks such as labor violations or environmental harm is a resource-intensive challenge.
Technological Gaps Among Suppliers: While advanced technologies like blockchain improve traceability, many smaller suppliers lack access to these tools, creating disparities in ESG data collection and compliance across the supply chain.
Resistance to Change: Suppliers in regions with weaker regulatory frameworks often resist adopting ESG principles due to limited awareness, operational costs, or lack of incentives, requiring significant corporate investment in education and capacity-building.
Market Demand for Low-Cost Goods: Consumer demand for affordable products often conflicts with the higher costs of implementing sustainable practices, especially in competitive industries such as fast fashion and consumer electronics.
Resource Scarcity and Climate Impacts: Extreme weather events, rising energy costs, and material shortages – exacerbated by climate change – disrupt supply chains and increase the difficulty of maintaining ESG commitments.
Measurement and Reporting Challenges: A lack of universally accepted metrics for critical ESG indicators, such as Scope 3 emissions or biodiversity impact, complicates efforts to measure progress and report transparently across supply chains.

6. Leading Examples of ESG-Driven Supply Chains
In 2024, several organizations across various industries demonstrated innovative approaches to integrating ESG principles into their supply chains. These efforts highlighted best practices in sustainability, transparency, and ethical procurement, including a number of the recent advances noted above.

Outdoor Apparel Brand: A leading outdoor apparel company prioritized fair labor practices and the reduction of environmental impacts in its supply chain. The brand collaborates with suppliers and other brands to develop and use tools that measure and communicate environmental impacts, allowing for industry-wide benchmarking and large-scale improvement.
Global Food and Beverage Producer: A major food and beverage producer expanded its regenerative agriculture program by collaborating with farmers to enhance soil health, reduce greenhouse gas emissions, and promote biodiversity. Additionally, the company leveraged blockchain technology to ensure traceability in its supply chains for commodities such as coffee and cocoa, strengthening its commitment to sustainability.
Global Furniture Retailer: A prominent furniture retailer invested heavily in renewable energy and circular design principles to decarbonize its supply chain by reducing, replacing, and rethinking. A formal due diligence system employs dozens of wood supply and forestry specialists to ensure that wood is sourced from responsibly managed forests.
Multinational Technology Company: A technology giant implemented energy-efficient practices across its supply chain, including transitioning to renewable energy sources for manufacturing facilities and using AI-powered tools to optimize logistics, with a goal of becoming carbon neutral across its entire supply chain by 2030.
Consumer Goods Manufacturer: A global consumer goods manufacturer introduced water-saving technologies into its supply chain, particularly in regions facing water scarcity. The company also prioritized reducing plastic waste by incorporating recycled materials into its packaging and partnering with local recycling initiatives.
Global Shipping Firm: A logistics and shipping company adopted low-carbon transportation technologies, such as green fuels for its vessels, decarbonized container terminals, electric vehicles for landside transport, and optimized routes to minimize emissions. The firm also collaborated with industry partners to develop “green corridors” that support cleaner and more sustainable freight transport.

7. Future Directions in ESG and Supply Chains
Integrating ESG principles into supply chain management is expected to continue evolving, with the following trends among those shaping the future:

AI-Powered Supply Chains: Artificial intelligence will transform supply chain management by predicting risks, optimizing logistics, and enhancing sustainability. Advanced analytics will enable businesses to identify inefficiencies and implement targeted improvements, reducing emissions and ensuring ethical practices. There will, however, be challenges accounting for the growing number of laws and regulations worldwide governing AI’s use and development.
Circular Economy Models: Supply chains will embrace circular economy principles, focusing on waste reduction, material reuse, and extended product life cycles. Closed-loop systems and upcycling initiatives will mitigate environmental impacts while creating new revenue streams.
Blockchain-Enabled Certification Programs: Blockchain technology will enhance transparency and accountability by providing real-time verification of ESG metrics, such as emissions reductions and ethical sourcing. This will foster trust among consumers, investors and regulators.
Supply Chain Readiness Level (SCRL) Analysis: ESG benefits will continue to flow from the steps taken by the Biden Administration over the past four years to strengthen America’s supply chains. Additionally, the Department of Energy’s Office of Manufacturing and Energy Supply Chains recently rolled out its SCRL tool to evaluate global energy supply chain needs and gaps, quantify and eliminate risks and vulnerabilities, and strengthen U.S. energy supply chains; the tool is expected to facilitate the decarbonization of supply chains.
Decentralized Energy Solutions: Decentralized energy systems, including on-site renewable energy installations and energy-sharing networks, will reduce dependence on traditional power grids. These solutions will decarbonize supply chains while promoting sustainability.
Nature-Based Solutions: Supply chains will integrate nature-based approaches, such as agroforestry partnerships and wetland restoration, to enhance biodiversity and provide environmental services like carbon sequestration and water filtration.
Advanced Water Stewardship: Companies will adopt innovative water management practices, including water recycling technologies and watershed restoration projects, to address water scarcity and ensure sustainable supplies for all stakeholders.
Scope 3 Emissions Reduction: Businesses will prioritize reducing emissions across their value chains by collaborating with suppliers, setting science-based targets, and implementing robust carbon accounting tools.
Industry-Wide Collaboration Platforms: Collaborative platforms will enable companies to share sustainability data and best practices and develop sector-specific solutions. This approach will help address systemic challenges, such as decarbonizing aviation or achieving sustainable fashion production.

Developments in ESG and supply chains in 2024 reflect a growing recognition of their critical role in achieving sustainability goals. From enhanced regulatory frameworks and technological innovations to responsible sourcing and decarbonization efforts, companies are making strides toward more sustainable and ethical supply chains.
However, challenges such as data gaps, cost pressures, and geopolitical risks highlight the complexities of this transformation. By addressing these issues and embracing future opportunities, businesses can create resilient, transparent, and sustainable supply chains that drive both business success and environmental and social progress.

Legislative Update: 119th Congress Outlook on AI Policy

House Looks To Rep. Obernolte to Take Lead on AI
Representative Jay Obernolte (R-Calif.) has emerged as a pivotal figure in shaping the United States’ legislative response to artificial intelligence (AI). With a rare combination of technical expertise and political acumen, Obernolte’s leadership is poised to influence how Congress navigates both the opportunities and risks associated with AI technologies.
AI Expertise and Early Influence
Obernolte’s extensive background in AI distinguishes him among his congressional peers. With a graduate degree in AI and decades of experience as a technology entrepreneur, he brings firsthand knowledge to the legislative arena.
As the chair of a bipartisan House AI task force, Obernolte spearheaded the creation of a comprehensive roadmap addressing AI’s societal, economic, and national security implications. The collaborative environment he fostered, eschewing traditional seniority-based hierarchies, encouraged open dialogue and thoughtful debate among members. Co-chair Rep. Ted Lieu (D-Calif.) and other task force participants praised Obernolte’s inclusive approach to consensus building.
Legislative Priorities and Policy Recommendations
Obernolte’s leadership produced a robust policy framework encompassing:

Expanding AI Resource Accessibility: Advocating for broader access to AI tools for academic researchers and entrepreneurs to prevent monopolization of research by private companies.
Combatting Deepfake Harms: Supporting legislative efforts to address non-consensual explicit deepfakes, a growing issue affecting young people nationwide. Notably, he backed H.R. 5077 and H.R. 7569, which are expected to resurface in the 119th Congress.
Balancing Regulation and Innovation: Striving to create a regulatory environment that protects the public while fostering AI innovation.
National Data Privacy Standards: Advocating for comprehensive data privacy legislation to safeguard consumer information.
Advancing Quantum Computing: Supporting initiatives to enhance quantum technology development.

Maintaining Bipartisanship
Obernolte emphasizes the importance of bipartisan collaboration, a principle he upholds through relationship-building initiatives, including informal gatherings with task force members. His bipartisan approach is vital in developing durable AI regulations that endure beyond shifting political majorities. Speaker Mike Johnson (R-La.) recognized Obernolte’s ability to bridge divides, entrusting him with the leadership role.
Obernolte acknowledges the difficulty of balancing immediate GOP priorities, such as confirming Cabinet appointments and advancing tax reform, with the urgent need for AI governance. His strategy involves convincing leadership that AI policy proposals are well-reasoned and broadly supported.
Senate Republicans 119th Roadmap on AI
As the 119th Congress convenes under Republican leadership, Senate Republicans are expected to approach artificial intelligence (AI) legislation with a focus on fostering innovation while exercising caution regarding regulatory measures. This perspective aligns with the broader GOP emphasis on minimal government intervention in technology sectors.
Legislative Landscape and Priorities
During the 118th Congress, the Senate Bipartisan AI Working Group, which included Republican Senators Mike Rounds (R-S.D.) and Todd Young (R-Ind.), released a policy roadmap titled “Driving U.S. Innovation in Artificial Intelligence.” This document outlined strategies to promote AI development, address national security concerns, and consider ethical implications.
In the 119th Congress, Senate Republicans are anticipated to prioritize:

Promoting Innovation: Advocating for policies that support AI research and development to maintain the United States’ competitive edge in technology.
National Security: Focusing on the implications of AI in defense and security, ensuring that advancements do not compromise national safety.
Economic Growth: Encouraging the integration of AI in various industries to stimulate economic development and job creation.

Regulatory Approach
Senate Republicans generally favor a cautious approach to AI regulation, aiming to avoid stifling innovation. There is a preference for industry self-regulation and the development of ethical guidelines without extensive government mandates. This stance reflects concerns that premature or overly restrictive regulations could hinder technological progress and economic benefits associated with AI.
Bipartisan Considerations
While Republicans hold the majority, bipartisan collaboration remains essential for passing comprehensive AI legislation. Areas such as national security and economic competitiveness may serve as common ground for bipartisan efforts. However, topics like AI’s role in misinformation and election integrity could present challenges due to differing party perspectives on regulation and free speech.
Conclusion
In both the House and Senate, Republicans are approaching AI legislation with a focus on fostering innovation, enhancing national security, and promoting economic growth. Their preference leans toward industry self-regulation and minimal government intervention to avoid stifling innovation. Areas like national security offer potential bipartisan common ground, though debates around misinformation and election integrity may highlight partisan divides.
With House and Senate Republicans already working on a likely massive reconciliation package focused on top Republican priorities including tax, border security, and energy, AI advocates will be hard pressed to ensure their legislative goals find space in the final text. 

Change Management: How to Finesse Law Firm Adoption of Generative AI

Law firms today face a turning point. Clients demand more efficient, cost-effective services; younger associates are eager to leverage the latest technologies for legal tasks; and partners try to reconcile tradition with agility in a highly competitive marketplace. Generative artificial intelligence (AI), known for its capacity to produce novel content and insights, has emerged as a solution that promises better efficiency, improved work quality, and a real opportunity to differentiate the firm in the marketplace. Still, the question remains:
How can a law firm help its attorneys and staff to embrace AI while safeguarding the trust, ethical integrity, and traditional practices that lie at the heart of legal work?
Andrew Ng’s AI Transformation Playbook offers a valuable framework for introducing AI in ways that minimize risk and maximize organizational acceptance. Adopting these principles in a law-firm setting involves balancing the profession’s deep-seated practices with the potential of AI. From addressing cultural resistance to crafting a solid technical foundation, a thoughtful change-management plan is necessary for a sustainable and successful transition.

Overcoming Skepticism Through Pilot Projects

Law firms, governed by partnership models and a respect for precedent, tend to approach innovation cautiously. Partners who built their careers through meticulous research may worry that machine-generated insights compromise rigor and reliability. Associates might fear an AI-driven erosion of the apprenticeship model, wondering if their role will shrink as technology automates certain tasks. Concerns also loom regarding the firm’s reputation if clients suspect crucial responsibilities are being delegated to a mysterious black box.
The most direct method of quelling these doubts is to show proof of concept. Andrew Ng’s approach suggests starting with small, well-defined projects before scaling firm-wide. This tactic acknowledges that, with each successful pilot, more people become comfortable with technology that once felt like a threat. By methodically testing AI in narrower use cases, the firm ensures data security and strict confidentiality protocols remain intact. Early wins become the foundation for broader adoption.
Pilot projects help transform abstract AI potential into tangible benefits. For example, a firm might use AI to produce first drafts of nondisclosure agreements; attorneys then refine these drafts, focusing on subtle nuances rather than repetitive details. Another natural entry point is e-discovery, where AI can sift through thousands of documents to categorize and surface relevant information more efficiently than human-only reviews. Each of these use cases is a manageable experiment. If AI truly delivers faster turnaround times and maintains accuracy, it provides evidence that can persuade skeptical stakeholders. Pilots also offer an opportunity to identify challenges, such as user training gaps or hiccups in data management, on a small scale before the technology is rolled out more broadly.

Creating a Dedicated AI Team

One of the first steps is assembling a cross-functional leadership group that aligns AI initiatives with overarching business objectives. This team typically includes partners who can advocate for AI at leadership levels, associates immersed in daily work processes, IT professionals responsible for infrastructure and cybersecurity, and compliance officers ensuring adherence to ethical mandates.
In large firms, a Chief AI Officer or Director of Legal Innovation may coordinate these efforts. In smaller firms, a few technology-minded attorneys might share multiple roles. The key is that this group does more than evaluate software. It crafts data governance policies, designs training programs, secures necessary budgets, and proactively tackles any ethical, reputational, or practical concerns that arise when introducing a technology as potentially disruptive as AI.

Training as the Core of Transformation

AI has limited value if the firm’s workforce does not know how to wield it effectively. Training must go beyond simple “tech demos,” offering interactive sessions in which legal professionals can apply AI tools to realistic tasks. For example, attorneys may practice using the system to draft a client memo or summarize case law. These hands-on experiences remove the mystique surrounding AI, giving participants a concrete understanding of its capabilities and boundaries.
Lawyers also need guidelines for verifying the AI’s output. Legally binding documents or briefs cannot be signed off without sufficient human oversight. For that reason, law firms often designate a “review attorney” role in the AI workflow, ensuring that each AI-generated product passes through a person who confirms it meets the firm’s rigorous standards. Partners benefit from shorter, strategically focused sessions that highlight how AI can influence client satisfaction, create new revenue streams, or boost efficiency in critical operations.

Developing a Coherent AI Strategy

Once the firm achieves early successes with pilot programs and begins to see a measurable return on smaller AI projects, it is time to formulate a broader vision. This strategic blueprint should identify the highest-value areas for further application of AI, whether it involves automating client intake, deploying predictive analytics for litigation, or streamlining contract drafting at scale. The key is to match AI initiatives with the firm’s core goals—boosting client satisfaction, refining operational efficiency, and ultimately reinforcing its reputation for accurate, ethical service.
But the firm’s AI strategy should never become a static directive. It must grow with the firm’s internal expertise, adjusting to real-world results, regulatory changes, and emerging AI capabilities. By regularly re-evaluating milestones and expected outcomes, the firm ensures its AI investments remain both relevant and impactful in serving clients’ evolving needs.

Communicating to Foster Trust and Transparency 

Change management thrives on dialogue. Andrew Ng’s playbook underscores the importance of transparent communication, especially in fields sensitive to reputational risk. Law firms can apply this principle by hosting informal gatherings where early adopters share their experiences—both positive and negative. These stories have a dual effect: they highlight successes that validate the technology, and they candidly address difficulties to keep expectations realistic.
Newsletters, lunch-and-learns, and internal portals all help disseminate updates and insights across different practice areas. Firms that operate multiple offices often hold virtual town halls, ensuring that attorneys and support staff everywhere can stay informed. Externally, clarity matters too. Clients who understand that a firm is leveraging AI to improve speed and accuracy (while retaining key ethical safeguards) are more likely to view the decision as innovative rather than risky.
Closing Thoughts
AI holds remarkable promise for law firms, but its full value emerges only through conscientious change management, which hinges on a delicate balance of diverse personalities. Nothing succeeds like success. By implementing small pilot projects, assembling an AI leadership team, focusing on thorough training, crafting a compelling business strategy, and clearly communicating its vision, a law firm can mitigate risks and harness AI’s transformative power.
The best outcomes result not from viewing AI as a magical shortcut, but from recognizing it as a partner that handles repetitive tasks and surfaces insights more swiftly than humans alone. This frees lawyers to direct their intellect and creativity toward high-level endeavors that deepen client relationships, identify new opportunities, and advance compelling arguments. When fused with a commitment to the highest professional and ethical standards, AI can become a catalyst for a dynamic and fruitful future—one where law firms deliver better service, operate more efficiently, and remain steadfastly true to their professional roots.

The BR Privacy & Security Download: January 2025

Must Read! The U.S. Department of Health and Human Services Office for Civil Rights recently proposed an amendment to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information. Read the full alert to learn more about the first significant update to HIPAA’s Security Rule in over a decade.

STATE & LOCAL LAWS & REGULATIONS
Five New State Comprehensive Privacy Laws Effective in January with Three More to Follow in 2025: With the start of the new year, five new state comprehensive privacy laws have become effective. The comprehensive privacy laws of Delaware, Iowa, Nebraska, and New Hampshire became effective on January 1, 2025, and New Jersey’s law will come into effect on January 15, 2025. Tennessee, Minnesota, and Maryland will follow suit and take effect on July 1, 2025, July 31, 2025, and October 1, 2025, respectively. Companies should review their privacy compliance programs to identify potential compliance gaps with differences in the increasing patchwork of state laws.
Colorado Adopts Amendments to CPA Rules: The Colorado Attorney General announced the adoption of amendments to the Colorado Privacy Act (“CPA”) rules, which will become effective on January 30, 2025. The rules provide enhanced protections for the processing of biometric data and of the online activities of minors. Specifically, companies must develop and implement a written biometric data policy, implement appropriate security measures for biometric data, provide notice of the collection and processing of biometric data, obtain employee consent for the processing of biometric data, and provide a right of access to such data. With respect to minors, the amendments require entities to obtain consent before using any system design feature designed to significantly increase a known minor’s use of an online service, and to update their Data Protection Assessments to address processing that presents heightened risks to minors. Entities already subject to the CPA should carefully review whether they may have heightened obligations for the processing of employee biometric data, a category of data previously exempt from the scope of the CPA.
CPPA Announces Increased Fines and Penalties Under CCPA: The California Privacy Protection Agency (“CPPA”), the enforcement authority of the California Consumer Privacy Act (“CCPA”), has adjusted the fines and monetary thresholds of the CCPA. Under the CCPA, in January of every odd-numbered year, the CPPA must make this adjustment to account for changes in the Consumer Price Index. The CPPA increased the CCPA’s monetary threshold from $25,000,000 to $26,625,000. The CPPA also increased the range of monetary damages from between $100 and $750 per consumer per incident or actual damages (whichever is greater) to between $107 and $799. The civil penalty and administrative fine amounts increased from $2,500 for each violation of the CCPA and $7,500 for each intentional violation or violation involving the personal information of children under 16 to $2,663 and $7,988, respectively. The new amounts went into effect on January 1, 2025.
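As a back-of-the-envelope check (our inference from the published figures, not a factor stated by the CPPA), each new amount is consistent with a uniform Consumer Price Index adjustment of roughly 6.5 percent, rounded to the nearest dollar:

$$\$25{,}000{,}000 \times 1.065 = \$26{,}625{,}000,\qquad \$100 \times 1.065 = \$106.50 \approx \$107,\qquad \$7{,}500 \times 1.065 = \$7{,}987.50 \approx \$7{,}988.$$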
Connecticut State Senator Previews Proposed Legislation to Update State’s Comprehensive Privacy Law: Connecticut State Senator James Maroney (D) has announced that he is drafting a proposed update to the Connecticut Privacy Act that would expand its scope, provide enhanced data subject rights, include artificial intelligence (“AI”) provisions, and potentially eliminate certain exemptions currently available under the Act. Senator Maroney expects that the proposed bill could receive a hearing by late January or early February. Although Maroney has not published a draft, he indicated that the draft would likely (1) reduce the compliance threshold from the processing of the personal data of 100,000 consumers to 35,000 consumers; (2) include AI anti-discrimination measures, potentially in line with recent anti-discrimination requirements in California and Colorado; (3) expand the definition of sensitive data to include religious beliefs and ethnic origin, in line with other state laws; (4) expand the right to access personal data under the law to include a right to access a list of third parties to whom personal data was disclosed, mirroring similar rights in Delaware, Maryland, and Oregon; and (5) potentially eliminate or curtail categorical exemptions under the law, such as that for financial institutions subject to the Gramm-Leach-Bliley Act. 
CPPA Endorses Browser Opt-Out Law: The CPPA’s board voted to sponsor a legislative proposal that would make it easier for California residents to exercise their right to opt out of the sale of personal information and the sharing of personal information for cross-context behavioral advertising purposes. Last year, Governor Newsom vetoed legislation with the same requirements. Like last year’s vetoed legislation, the proposal sponsored by the CPPA would require browser vendors to include a feature that allows users to exercise their opt-out right through opt-out preference signals. Under the CCPA, businesses are required to honor opt-out preference signals as valid opt-out requests. Opt-out preference signals allow consumers to exercise their opt-out right with all businesses they interact with online without having to make individualized requests to each business. If the proposal is adopted, California would be the first state to require browser vendors to offer consumers the option to enable these signals. Six other states (Colorado, Connecticut, Delaware, Montana, Oregon, and Texas) require businesses to honor browser privacy signals as opt-out requests.

FEDERAL LAWS & REGULATIONS
HHS Proposes Updates to HIPAA Security Rule: The U.S. Department of Health and Human Services (“HHS”) Office for Civil Rights (“OCR”) issued a Notice of Proposed Rulemaking (“NPRM”) to amend the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) Security Rule to strengthen cybersecurity protections for electronic protected health information (“ePHI”). The NPRM proposes the first significant updates to HIPAA’s Security Rule in over a decade. The NPRM makes a number of updates to the administrative, physical, and technical safeguards specified by the Security Rule, removes the distinction between “required” and “addressable” implementation specifications, and makes all implementation specifications “required” with specific, limited exceptions. 
Trump Selects Andrew Ferguson as New FTC Chair: President-elect Donald Trump has selected current Federal Trade Commission (“FTC”) Commissioner Andrew Ferguson to replace Lina Khan as the new FTC Chair. Ferguson is one of two Republicans among the five FTC Commissioners and has served as a Commissioner since April 2024. Prior to becoming an FTC Commissioner, Ferguson served as Virginia’s solicitor general. During his time as an FTC Commissioner, Ferguson dissented from several of Khan’s rulemaking efforts, including a ban on non-compete clauses in employment contracts. Separately, Trump also selected Mark Meador to be an FTC Commissioner. Once Meador is confirmed, giving the FTC a Republican majority, a Republican-led FTC under Ferguson may deprioritize rulemaking and enforcement efforts relating to privacy and AI. In a leaked memo first reported by Punchbowl News, Ferguson wrote to Trump that, under his leadership, the FTC would “stop abusing FTC enforcement authorities as a substitute for comprehensive privacy legislation” and “end the FTC’s attempt to become an AI regulator.”
FERC Updates and Consolidates Cybersecurity Requirements for Gas Pipelines: The U.S. Federal Energy Regulatory Commission (“FERC”) has issued a final rule to update and consolidate cybersecurity requirements for interstate natural gas pipelines. Effective February 7, 2025, the rule adopts Version 4.0 of the Standards for Business Practices of Interstate Natural Gas Pipelines, as approved by the North American Energy Standards Board (“NAESB”). This update aims to enhance the efficiency, reliability, and cybersecurity of the natural gas industry. The new standards consolidate existing cybersecurity protocols into a single manual, streamlining processes and strengthening protections against cyber threats. This consolidation is expected to make it easier and faster to revise cybersecurity standards in response to evolving threats. The rule also aligns with broader U.S. government efforts to prioritize cybersecurity across critical infrastructure sectors. Compliance filings are required by February 3, 2025, and the standards must be fully adhered to by August 1, 2025.
House Taskforce on AI Delivers Report to Address AI Advancements: The House Bipartisan Task Force on Artificial Intelligence (the “Task Force”) submitted its comprehensive report to Speaker Mike Johnson and Democratic Leader Hakeem Jeffries. The Task Force was created to ensure America’s continued global leadership in AI innovation with appropriate safeguards. The report advocates for a sectoral regulatory structure and an incremental approach to AI policy, ensuring that humans remain central to decision-making processes. The report provides a blueprint for future Congressional action to address advancements in AI and articulates guiding principles for AI adoption, innovation, and governance in the United States. Key areas covered in the report include government use of AI, federal preemption of state AI law, data privacy, national security, research and development, civil rights and liberties, education and workforce development, intellectual property, and content authenticity. The report aims to serve as a roadmap for Congressional action, addressing the potential of AI while mitigating its risks.
CFPB Proposes Rule to Restrict Sale of Sensitive Data: The Consumer Financial Protection Bureau (“CFPB”) proposed a rule that would require data brokers to comply with the Fair Credit Reporting Act (“FCRA”) when selling income and certain other consumer financial data. CFPB Director Rohit Chopra stated the new proposed rule seeks to limit “widespread evasion” of the FCRA by data brokers when selling sensitive personal and financial information of consumers. Under the proposed rule, data brokers could sell financial data only for permissible purposes under the FCRA, including checking on loan applications and fraud prevention. The proposed rule would also limit the sale of personally identifying information known as credit header data, which can include basic demographic details, including names, ages, addresses, and phone contacts. 
FTC Issues Technology Blog on Mitigating Security Risks through Data Management, Software Development and Product Design: The Federal Trade Commission (“FTC”) published a blog post identifying measures that companies can take to limit the risks of data breaches. These measures relate to security in data management, security in software development, and security in product design for humans. The FTC emphasizes comprehensive governance measures for data management, including (1) enforcing mandated data retention schedules; (2) mandating data deletion in accordance with these schedules; (3) controlling third-party data sharing; and (4) encrypting sensitive data both in transit and at rest. In the context of security in software development, the FTC identified (1) building products using memory-safe programming languages; (2) rigorous testing, including penetration and vulnerability testing; and (3) securing external product access to prevent unauthorized remote intrusions as key security measures. Finally, in the context of security in product design for humans, the FTC identified (1) enforcing least privilege access controls; (2) requiring phishing-resistant multifactor authentication; and (3) designing products and services without the use of dark patterns to reduce the over-collection of data. The blog post contains specific links to recent FTC enforcement actions specifically addressing each of these issues, providing users with insight into how the FTC has addressed these issues in the past. Companies reviewing their security and privacy governance programs should ensure that they consider these key issues.

U.S. LITIGATION
Texas District Court Prevents HHS from Enforcing Reproductive Health Privacy Rule Against Doctor: The U.S. District Court for the Northern District of Texas ruled that a Texas doctor is likely to prevail on her claim that HHS exceeded its statutory authority when it adopted an amendment to the Health Insurance Portability and Accountability Act (“HIPAA”) Privacy Rule that protects reproductive health care information, and enjoined HHS from enforcing the rule against her. The 2024 amendment to the HIPAA Privacy Rule prohibits covered entities from disclosing information that could lead to an investigation or criminal, civil, or administrative liability for seeking, obtaining, providing, or facilitating reproductive health care. The Court stated that the rule likely unlawfully interfered with the plaintiff’s state-law duty to report suspected child abuse, in violation of Congress’s delegation to the agency to enact rules interpreting HIPAA without limiting any law providing for such reporting. The plaintiff argued that, under Texas law, she is obligated to report instances of child abuse within 48 hours, and that relevant requests from Texas regulatory authorities demand the full, unredacted patient chart, which for female patients includes information about menstrual periods, number of pregnancies, and other reproductive health information.
Attorneys General Oppose Clearview AI Biometric Data Privacy Settlement: A proposed settlement in the Clearview AI Illinois Biometric Information Privacy Act (“BIPA”) litigation is facing opposition from 22 states and the District of Columbia. The states’ Attorneys General argue that the settlement, which received preliminary approval in June 2024, lacks meaningful injunctive relief and offers plaintiffs an unusual financial stake in Clearview AI. The settlement would grant the class of consumers a 23 percent stake in Clearview AI, potentially worth $52 million, based on a September 2023 valuation. Alternatively, the class could opt for 17 percent of the company’s revenue through September 2027. The AGs contend the settlement does not adequately address consumer privacy concerns and that the proposed 39 percent attorney fee award is excessive. Clearview AI has filed a motion to dismiss the states’ opposition, arguing it was submitted after the deadline for objections. A judge will consider granting final approval for the settlement at a hearing scheduled for January 30, 2025.
Federal Court Upholds New Jersey’s Daniel’s Law, Dismissing Free Speech Challenges: A federal judge affirmed the constitutionality of New Jersey’s Daniel’s Law, dismissing First Amendment objections raised by data brokers. Enacted following the murder of Daniel Anderl, son of U.S. District Judge Esther Salas, the law permits covered individuals—including active, retired, and former judges, prosecutors, law enforcement officers, and their families—to request the removal of personal details, such as home addresses and unpublished phone numbers, from online platforms. Data brokerage firms that find themselves on the receiving end of such requests are mandated by the statute to comply within ten (10) business days, with penalties for non-compliance including actual damages or a $1,000 fine for each violation, as well as potential punitive damages for instances of willful disregard. Notably, in 2023, Daniel’s Law was amended to allow claim assignments to third parties, resulting in over 140 lawsuits filed by a single consumer data protection company: Atlas Data Privacy Corporation. Atlas Data, a New Jersey firm specializing in data deletion, has emerged as a significant force in this litigation, utilizing Daniel’s Law to challenge data brokers on behalf of around 19,000 individuals. The court, in upholding Daniel’s Law, emphasized its critical role in safeguarding public officials while concurrently ensuring public oversight remains strong. Although data brokers contended that the law infringed on free speech and unfairly targeted their operations, the court dismissed these claims as lacking merit, instead placing significant emphasis on the statute’s relatively focused scope and substantial state interest at play. Although unquestionably a significant victory for advocates of privacy rights, the judge permitted an immediate appeal by the data brokers. 
GoodRx Settles Class Action Suit Over Alleged Data Sharing Violations: GoodRx has agreed to a $25 million settlement in a class-action lawsuit alleging the company violated privacy laws by sharing users’ sensitive health data with advertisers like Meta Platforms, Google, and Criteo Corp. The settlement, if approved, would resolve a lawsuit filed in February 2023. The lawsuit followed an FTC action alleging that GoodRx shared information about users’ prescriptions and health conditions with advertising companies. GoodRx settled the FTC matter for $1.5 million. The proposed class in the class-action lawsuit is estimated to be in the tens of millions and would give each class member an average recovery ranging from $3.31 to $11.03. The settlement also allows the plaintiffs to use information from GoodRx to pursue their claims against the other defendants, including Meta, Google, and Criteo.
23andMe Data Breach Suit Settlement Approved: A federal judge approved a settlement to resolve claims that 23andMe Inc. failed to secure sensitive personal data, resulting in a 2023 data breach. According to 23andMe, a threat actor was able to access roughly 14,000 user accounts through credential stuffing, which further enabled access to the personal information that approximately 6.9 million users made available through 23andMe’s DNA Relative and Family Tree profile features. Under the terms of the $30 million settlement, class members will receive cash compensation and three years of data monitoring services, including genetic services.

U.S. ENFORCEMENT
FTC Takes Action Against Company for Deceptive Claims Regarding Facial Recognition Software: The Federal Trade Commission (“FTC”) announced that it has entered into a settlement with IntelliVision Technologies Corp. (“IntelliVision”), which provides facial recognition software used in home security systems and smart home touch panels. The FTC alleged that IntelliVision’s claims that it had one of the highest accuracy rates on the market, that its software was free of gender or racial bias, and that its software was trained on millions of faces were false or misleading. The FTC further alleged that IntelliVision did not have adequate evidence to support its claim that its anti-spoofing technology ensures the system cannot be tricked by a photo or video image. The proposed order specifically prohibits IntelliVision from misrepresenting the effectiveness, accuracy, or lack of bias of its facial recognition technology and its technology to detect spoofing, as well as the comparative performance of the technology with respect to individuals of different genders, ethnicities, and skin tones.
FTC Settles Enforcement Actions with Data Brokers for Selling Sensitive Location Data: The FTC announced settlements with data brokers Gravy Analytics Inc. (“Gravy Analytics”) and Mobilewalla, Inc. (“Mobilewalla”) related to the tracking and sale of sensitive consumer location data. According to the FTC, Gravy Analytics violated the FTC Act by unfairly selling sensitive consumer location data, by collecting and using consumers’ location data without obtaining verifiable user consent for commercial and government uses, and by selling data regarding sensitive characteristics, such as health or medical decisions, political activities, and religious views, derived from location data. Under the proposed settlement, Gravy Analytics will be prohibited from selling, disclosing, or using sensitive location data in any product or service, must delete all historic location data and data products using such data, and must establish a sensitive location data compliance program. Separately, the FTC settled allegations that Mobilewalla collected location data from real-time bidding exchanges and third-party aggregators, including data related to health clinic visits and visits to places of worship, without the knowledge of consumers, and subsequently sold such data. According to the FTC, when Mobilewalla bid to place an ad for its clients on a real-time advertising bidding exchange, it unfairly collected and retained the information in the bid request even when it did not have a winning bid. Under the proposed settlement, Mobilewalla will be prohibited from selling sensitive location data and from collecting consumer data from online advertising auctions for purposes other than participating in those auctions.
Texas Attorney General Issues New Warnings Under State’s Comprehensive Privacy Law: The Texas Attorney General issued warnings to satellite radio broadcaster Sirius XM and three mobile app providers that they appear to be sharing consumers’ sensitive data, including location data, without proper notification or consent. The warning letters were not accompanied by a press release or other public statement; they were reported by Recorded Future News, which obtained the notices through a public records request. The letter to Sirius XM stated that the Attorney General’s office had found a number of violations of the Texas Data Privacy and Security Act in Sirius XM’s privacy notice, including failure to provide reasonably clear notice of the categories of sensitive data being processed and processing of sensitive data without appropriate consent. Similar letters sent to the mobile app providers stated that the providers failed to obtain consumer consent for data sharing or to include information on how consumers could exercise their rights under Texas law.
Texas Attorney General Launches Investigations Into 15 Companies for Children’s Privacy Practices: The Texas Attorney General’s office announced that it had launched investigations into Character.AI and 14 other companies, including Reddit, Instagram, and Discord. The Attorney General’s press release stated that the investigations relate to the companies’ privacy and safety practices for minors under the Securing Children Online through Parental Empowerment (“SCOPE”) Act and the Texas Data Privacy and Security Act (“TDPSA”); details of the allegations were not provided in the announcement. The TDPSA requires companies to provide notice and obtain consent to collect and use minors’ personal data. The SCOPE Act prohibits digital service providers from sharing, disclosing, or selling a minor’s personal identifying information without permission from the child’s parent or legal guardian and provides parents with tools to manage privacy settings on their child’s account.
HHS Imposes Penalty Against Medical Provider for Impermissible Access to PHI and Security Rule Violations: The U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) announced that it imposed a $1.19 million civil penalty against Gulf Coast Pain Consultants, LLC d/b/a Clearway Pain Solutions Institute (“GCPC”) for violations of the HIPAA Security Rule arising from a data breach. GCPC’s former contractor had impermissibly accessed GCPC’s electronic medical record system to retrieve protected health information (“PHI”) for use in potentially fraudulent Medicare claims. OCR’s investigation determined that the impermissible access occurred on three occasions and affected approximately 34,310 individuals. The compromised PHI included patient names, addresses, phone numbers, email addresses, dates of birth, Social Security numbers, chart numbers, insurance information, and primary care information. The investigation also revealed multiple potential violations of the HIPAA Security Rule, including failures to conduct a compliant risk analysis, to implement procedures to regularly review records of activity in information systems, and to terminate former workforce members’ access to electronic PHI.
HHS Settles with Health Care Clearinghouse for HIPAA Security Rule Violations: OCR announced a settlement with Inmediata Health Group, LLC (“Inmediata”), a healthcare clearinghouse, for potential violations of the HIPAA Security Rule, following OCR’s receipt of a 2018 complaint that PHI was accessible on the Internet to search engines such as Google. OCR’s investigation determined that from May 2016 through January 2019, the PHI of 1,565,338 individuals was made publicly available online, including patient names, dates of birth, home addresses, Social Security numbers, claims information, diagnoses/conditions, and other treatment information. The investigation also identified multiple potential HIPAA Security Rule violations, including failures to conduct a compliant risk analysis and to monitor and review activity in Inmediata’s health information systems. Under the settlement, Inmediata paid OCR $250,000. OCR determined that a corrective action plan was not necessary because Inmediata had previously agreed to a settlement with 33 states that included corrective actions addressing OCR’s findings.
New York State Healthcare Provider Settles with Attorney General Regarding Allegations of Cybersecurity Failures: HealthAlliance, a division of Westchester Medical Center Health Network (“WMCHealth”), has agreed to pay a $1.4 million fine, with $850,000 suspended, in connection with a 2023 data breach affecting over 240,000 patients and employees in New York State. The breach, which occurred between September and October 2023, was reportedly caused by an unpatched security flaw in Citrix NetScaler, a tool used by many organizations to optimize web application performance and availability by reducing server load. Although HealthAlliance was aware of the vulnerability, it was unable to patch the flaw due to technical difficulties, ultimately resulting in the exposure of 196 gigabytes of data, including particularly sensitive information such as Social Security numbers and medication records. As part of its agreement with New York State, HealthAlliance must enhance its cybersecurity practices by implementing a comprehensive information security program, developing a data inventory, and enforcing a patch management policy that addresses critical vulnerabilities within 72 hours.
HHS Settles with Children’s Hospital for HIPAA Privacy and Security Violations: OCR announced a $548,265 civil monetary penalty against Children’s Hospital Colorado (“CHC”) for violations of the HIPAA Privacy and Security Rules arising from data breaches in 2017 and 2020. The 2017 breach involved a phishing attack that compromised an email account containing the PHI of 3,370 individuals; the 2020 breach compromised three email accounts containing the PHI of 10,840 individuals. OCR’s investigation determined that the 2017 breach occurred because multifactor authentication was disabled on the affected email account, and that the 2020 breach occurred, in part, when workforce members gave unknown third parties permission to access their email accounts. OCR found that CHC violated the HIPAA Privacy Rule by failing to train workforce members on its requirements, and the HIPAA Security Rule by failing to conduct a compliant risk analysis to determine the potential risks and vulnerabilities to ePHI in its systems.

INTERNATIONAL LAWS & REGULATIONS
Italy Imposes Landmark GDPR Fine on AI Provider for Data Violations: In the first reported EU penalty under the General Data Protection Regulation (“GDPR”) relating to generative AI, Italy’s data protection authority, the Garante, fined OpenAI 15 million euros. The penalty was linked to three specific findings: (1) unauthorized use of personal data for ChatGPT training without user consent, (2) inadequate age verification that risked exposing minors to inappropriate content, and (3) failure to report a March 2023 data breach that exposed users’ contact and payment information. The investigation, which began after the Garante was made aware of the March 2023 breach, initially resulted in Italy temporarily blocking access to ChatGPT; access was reinstated after OpenAI made concrete improvements to its age verification and privacy policies. Alongside the monetary penalty, OpenAI must conduct a six-month public awareness campaign in Italy to educate the Italian public about data collection practices and individual user rights under the GDPR. OpenAI has said that it plans to appeal the Garante’s decision, arguing that the fine exceeds its revenue in Italy.
Australian Parliament Approves Privacy Act Reforms and Bans Social Media Use by Minors: The Australian Parliament passed a number of privacy bills in December, including reforms to the Australian Privacy Act, a law requiring age verification by social media platforms, and a law banning social media use by minors under the age of 16. The Privacy Act reforms give the Office of the Australian Information Commissioner (“OAIC”) new enforcement powers, clarifying when “serious” breaches of the Privacy Act occur and allowing the OAIC to bring civil penalty proceedings for lesser breaches. Other reforms require entities that use personal data for automated decision-making to describe in their privacy notices what data is used for automated decision-making and what types of decisions are made using that technology.
EDPB Releases Opinion on Personal Data Use in AI Models: In response to a formal request from Ireland’s Data Protection Commission asking how the GDPR applies to the training of large language models with personal data, the European Data Protection Board (“EDPB”) released an opinion regarding the lawful use of personal data for the development and deployment of artificial intelligence models (the “Opinion”). The Irish Data Protection Commission specifically asked the EDPB to opine on: (1) when and how an AI model can be considered anonymous, (2) how legitimate interests can serve as the legal basis in the development and deployment phases of an AI model, and (3) the consequences of unlawful processing during the development phase of an AI model for its subsequent operation. With respect to anonymity, the EDPB stated that this should be analyzed on a case-by-case basis, taking into account both the likelihood of obtaining personal data of individuals whose data was used to build the model and the likelihood of extracting personal data through queries; the Opinion describes certain methods controllers can use to demonstrate anonymity. With respect to legitimate interest as a legal basis for processing, the EDPB restated the three-part test for assessing legitimate interest from its earlier guidance. Finally, the EDPB reviewed several scenarios in which personal data may be unlawfully processed to develop an AI model.
Second Draft of General-Purpose AI Code of Practice Published: The European Commission announced that independent experts have published the second draft of the General-Purpose AI Code of Practice. The Code is designed to be a guiding document for providers of general-purpose AI models, allowing them to demonstrate compliance with the EU AI Act, under which providers are persons or entities that develop an AI system or general-purpose AI model and place it on the market. The second draft incorporates the responses and comments received on the first draft and is designed to provide a “future-proof” code. The first part of the Code details transparency and copyright obligations for all providers of general-purpose AI models. The second part applies to providers of advanced general-purpose AI models that could pose systemic risks and outlines measures for systemic risk assessment and mitigation, including model evaluations, incident reporting, and cybersecurity obligations. The second draft is open for comments until January 15, 2025.
NOYB Approved to Bring Collective Redress Claims: The Austrian non-profit organization None of Your Business (“NOYB”) has been approved as a Qualified Entity in Austria and Ireland, enabling it to pursue collective redress actions across the European Union. Known for challenging the EU-US data transfer framework through its Schrems I and II actions, NOYB intends to use the EU’s collective redress system to challenge what it describes as unlawful processing without consent, use of deceptive dark patterns, data sales, international data transfers, and use of “absurd” language in privacy policies. Unlike US class actions, these EU actions are strictly non-profit; however, they provide for both injunctive and monetary redress. NOYB intends to bring its first actions in 2025.
EDPB Issues Guidelines on Third Country Authority Data Requests: The EDPB published draft guidelines on Article 48 of the GDPR relating to the transfer or disclosure of personal data to a governmental authority in a third country (the “Guidelines”). The Guidelines state that, as a general rule, judgments or decisions of third-country authorities cannot automatically be recognized or enforced in the EU; they may be recognized or enforced only where based on an applicable international agreement. The Guidelines further state that any such transfer must also comply with Article 6 with respect to the legal basis for processing and with Article 46 with respect to the legal mechanism for the international data transfer. The Guidelines are available for public consultation until January 27, 2025.
Irish DPC Fines Meta €251 Million for Violations of the GDPR: The Irish Data Protection Commission (“DPC”) fined Meta €251 million following a 2018 data breach that affected 29 million Facebook accounts globally, including 3 million in the European Union. The breach exposed personal data such as names, contact information, locations, birthdates, religious and political beliefs, and children’s data. The DPC found that Meta Ireland violated GDPR Articles 33(3) and 33(5) by failing to provide complete information in its breach notification and to properly document the breach. Meta Ireland also infringed GDPR Articles 25(1) and 25(2) by failing to incorporate data protection principles into the design of its processing systems and by processing more personal data than necessary by default.
Additional Authors: Daniel R. Saeedi, Rachel L. Schaller, Gabrielle N. Ganze, Ana Tagvoryan, P. Gavin Eastgate, Timothy W. Dickens, Jason C. Hirsch, Tianmei Ann Huang, Adam J. Landy, Amanda M. Noonan, and Karen H. Shin
