Growing Role of Natural Gas in Supporting Hyperscale AI Data Center Development
Data centers are foundational to the artificial intelligence (AI) ecosystem, providing the computational power necessary to train complex algorithms and manage massive data flows. The surge in data center construction across the United States is estimated to require an additional 47 gigawatts of power capacity by 2030.
Notably, approximately 60 percent of this U.S. demand is expected to be met by natural gas, creating significant growth opportunities for oil and gas suppliers. Globally, the demand for natural gas is projected to rise substantially, with some analysts forecasting up to 50 percent market growth over the next five years.
Oil companies, which historically produced electricity solely to support their own operations, are now poised to enter the broader power market amidst surging demand. This increased demand is driven in part by the rapid growth of technologies such as generative AI, which is expected to push U.S. electricity consumption to unprecedented levels in 2025 following two decades of stagnation.
In response to this growing need for power, the U.S. energy sector has invested heavily in new natural gas infrastructure and advocated for delaying the retirement of fossil-fuel power plants.
For major oil companies, this presents a promising opportunity. Some oil giants have already emphasized that the future of AI will depend as much on natural gas production from areas like the Southwest’s Permian Basin as it will on technological innovation in Silicon Valley. While data center developers have historically demonstrated a strong interest in using renewable energy sources, the intermittent nature of renewables and the slow pace of their buildout pose significant challenges. As a result, developers – particularly those constructing hyperscale data centers – are increasingly prioritizing energy solutions that offer speed and reliability.
This shift presents substantial growth opportunities for midstream natural gas transmission companies, which possess extensive pipeline networks capable of delivering consistent and efficient energy to meet the demands of large-scale data centers. By leveraging these established infrastructure assets, midstream companies are well-positioned to address the pressing energy needs of the rapidly expanding data center industry, even as the sector continues to explore pathways toward long-term sustainability.
Key Applications
One key area of impact is the advancement of battery technologies. These innovations can store surplus energy from renewable sources and provide reliable backup power during outages. Additionally, the industry’s knowledge of optimizing energy transmission networks can minimize energy losses and ensure maximum power delivery to data centers. Through strategic partnerships with technology companies, the energy sector is playing an essential role in creating more sustainable and efficient AI data centers.
Another critical contribution is the application of carbon capture and storage (CCS) technologies to reduce AI data centers’ environmental footprint. These facilities, which support large-scale AI models and applications, consume vast amounts of energy and contribute to greenhouse gas emissions. By implementing CCS solutions, energy companies can capture and store CO2 emissions generated by data centers, significantly reducing their carbon footprint. This approach aligns with the broader goals of both industries to lower environmental impact and transition toward cleaner energy solutions.
The intensifying competition for electricity has also led some major technology companies to reconsider their climate-focused commitments, which previously emphasized reliance exclusively on renewable energy sources, such as wind and solar, for powering energy-intensive AI data centers. This shift underscores the tension between the rising energy demands of emerging technologies and efforts to achieve sustainability goals.
Building an AI data center facility is a complex and capital-intensive endeavor that requires careful planning and trusted collaborators in multiple areas, including infrastructure, power, cooling, networking, tax considerations, security, and compliance. Site selection considerations include access to reliable power and cooling, minimal risk of natural disasters, and proximity to research hubs, cloud regions, or high-tech industries. Government incentives, access to capital, tax breaks, and local data protection laws also factor into data center growth and expansion.
Takeaways
The collaboration between the energy and technology sectors offers mutual benefits. The energy industry’s expertise in infrastructure, energy management, and renewable technologies can optimize AI data center operations, drive cost efficiencies, and support the shift toward greener energy. Simultaneously, technology companies can achieve more sustainable and energy-efficient facilities, advancing their environmental objectives. This partnership not only fosters innovation, but also contributes to a more sustainable future in both the energy and technology landscapes.
Key Insights on President Trump’s New AI Executive Order and Policy and Regulatory Implications
In recent days, the Biden administration’s reliance on export controls to curb China’s AI advancements has come under increasing scrutiny, particularly following the release of China’s DeepSeek R1 chatbot. This development raises concerns that prior U.S. restrictions have failed to slow China’s progress while potentially undermining U.S. global competitiveness in AI hardware and computing ecosystems. President Trump’s early actions—rescinding Biden’s AI executive order and emphasizing innovation over risk mitigation—signal a shift away from restrictive policies toward a more pro-innovation, market-driven approach.
In its final days, the Biden administration issued an interim final rule seeking to “regulate the global diffusion” of AI by imposing export licensing restrictions on advanced chips to 150 “middle-tier” countries, while maintaining existing embargoes on China, Russia, and Iran. The rule was widely viewed as an acknowledgment that previous AI export controls, dating back to 2022, had failed to fully prevent China’s access to AI-enabling technologies through third parties. Critics argue that these restrictions will reduce global demand for U.S. chips and incentivize non-U.S. computing ecosystems, weakening America’s long-term AI leadership rather than protecting national security.
Some have called DeepSeek R1’s recent emergence a “Sputnik moment,” highlighting the inadequacy of past U.S. controls and reinforcing calls in some quarters for a strategic overhaul. David Sacks, President Trump’s AI and crypto advisor, has framed the chatbot’s release as evidence that Biden’s policies constrained American AI companies without effectively restricting China’s advancements.
Adding to these concerns, DeepSeek’s rapid rise has also triggered significant international regulatory scrutiny. Italy’s data protection authority, the Garante, has formally demanded explanations on how DeepSeek handles Italian users’ personal data, following accusations that its privacy policy violates multiple provisions of the EU General Data Protection Regulation (GDPR). Consumer advocacy groups argue that DeepSeek lacks transparency regarding data retention, profiling, and user rights, while also storing user data in China, raising potential security risks.
Notably, the White House has also begun reviewing DeepSeek over possible national security threats, echoing past scrutiny faced by Chinese tech firms like TikTok. Indeed, Sacks acknowledged evidence that DeepSeek improperly used data from U.S. company models to train its own, stating that leading U.S. AI companies would take further steps to prevent that practice going forward. This aligns with the Trump administration’s broader economic philosophy, which favors deregulation and private-sector empowerment over government-imposed constraints.
At this point, we believe that Trump’s approach to AI export controls may include: (1) repealing or significantly revising Biden-era AI restrictions to avoid isolating U.S. firms from global markets, (2) prioritizing domestic AI and semiconductor growth through deregulation and incentives, (3) refining export control enforcement to focus on targeted national security threats rather than broad-based restrictions, and (4) recalibrating international AI coordination to ensure U.S. competitiveness rather than emphasizing strict regulatory alignment with allies.
This shift could accelerate AI innovation in the U.S., but it also raises concerns about how to balance national security with economic competitiveness. If AI export controls are loosened too aggressively, China may find easier pathways to advanced AI technologies. Additionally, the divergence between the U.S.’s deregulated AI approach and the EU’s AI Act, which prioritizes stringent governance and risk mitigation, may create regulatory challenges for U.S. companies operating in international markets.
The Trump administration’s AI strategy marks a decisive departure from Biden’s risk-focused policies, opting instead for a more aggressive push to strengthen U.S. AI dominance through industry-led growth. While this shift may create opportunities for American companies, it also requires careful navigation of global regulatory landscapes and national security considerations.
The Changing Landscape of AI: Federal Guidance for Employers Reverses Course With New Administration
Amid the multiple executive orders issued in the first days of the Trump administration, the White House on 23 January 2025 issued an executive order entitled Removing Barriers to American Leadership in Artificial Intelligence (AI EO). At a high level, President Trump issued the AI EO to (1) implement the revocation of President Biden’s executive order on artificial intelligence (AI), entitled the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) and (2) create President Trump’s AI action plan to ensure that AI systems are “free from ideological bias or engineered social agendas.” As a result of the AI EO, the Equal Employment Opportunity Commission (EEOC) and Department of Labor (DOL) pulled or updated a number of AI-related publications and other documents from their websites. This alert provides an overview of the AI EO as well as the changes to guidance from the EEOC and DOL. It also outlines best practices for employers to ensure compliance at both the federal and state levels.
Revocation of President Biden’s EO 14110
Similar to the White House’s executive order entitled Protecting Civil Rights and Expanding Individual Opportunity (DEI EO)1 issued on 21 January 2025, the AI EO implemented the revocation of an executive order on the same subject issued during the Biden administration, EO 14110. EO 14110 required the implementation of safeguards (i) for the development of AI and (ii) to ensure AI policies were consistent with the advancement of “equity and civil rights.” EO 14110 also directed agencies to develop plans, policies, and guidance on AI, which they did before the end of 2024.
In connection with rescinding EO 14110, the White House issued a fact sheet asserting that EO 14110 “hinder[ed] AI innovation and impose[d] onerous and unnecessary government control over the development of AI.” The AI EO directs executive departments and agencies to revise or rescind all actions, including policies, directives, regulations, and orders, taken under EO 14110 that are inconsistent with the AI EO. Further, the AI EO mandates that the director of the Office of Management and Budget (OMB) revise OMB Memoranda M-24-10 and M-24-18 (which address the federal government’s acquisition and governance of AI) within 60 days. To the extent that an action cannot be immediately suspended, revised, or rescinded, the AI EO authorizes agencies to grant exemptions until the action can be suspended, revised, or rescinded.
AI Action Plan
As part of its stated goal to “enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security,” the AI EO directs the assistant to the president for science and technology, the special advisor for AI and crypto, and the assistant to the president for national security affairs to develop an AI action plan within 180 days. Such AI action plan will be developed in coordination with the assistant to the president for economic policy, the assistant to the president for domestic policy, the director of the OMB, and any relevant heads of executive departments and agencies.
Updates to Existing Federal Agency Guidance on AI
Dovetailing with the AI EO, on 27 January 2025, the EEOC removed AI-related guidance2 from its website. This guidance, published in May 2023, addressed how existing federal anti-discrimination law may apply to employers’ use of AI when hiring, firing, or promoting employees.
Similarly, the DOL noted on its website that the “AI & Inclusive Hiring Framework”3 published in September 2024 by the Partnership on Employment & Accessible Technology, and the DOL’s October 2024 “Artificial Intelligence Best Practices”4 guidance may now be outdated or not reflective of current policies. Both publications are also unavailable in certain locations in which they were previously accessible. While the DOL and the EEOC updated and removed their AI-related guidance, respectively, the Office of Federal Contract Compliance Programs (OFCCP) website still maintains its 29 April 2024 nonbinding guidance on how federal contractors and subcontractors should use AI to ensure compliance with existing equal employment opportunity obligations under federal law.5
Best Practices for Employers
As with any change to executive branch and federal agency guidance, employers should continue to monitor developments that may impact AI-related policies. Further, employers should review current AI policies for compliance with the recent executive branch equal employment guidance related to diversity, equity, and inclusion.6 While guidance and enforcement priorities at the federal level have changed, employers must still comply with the various state-level regulations on AI, as many states have passed regulations addressing the use of AI in employment decisions.7 For example, in 2019, Illinois enacted the Artificial Intelligence Video Interview Act, under which employers that use AI to analyze video interviews must provide notice to candidates, obtain their consent, and explain the AI technology used. Illinois also amended the Illinois Human Rights Act8 to prohibit discrimination by employers that utilize AI in recruitment, hiring, promotion, professional development, and other employment decisions. More recently, in 2024, Colorado enacted a law that prohibits algorithmic discrimination in “consequential decisions,” defined to include those related to employment or employment opportunity.9 Moreover, with this shift in guidance at the federal level, employers should anticipate increased state and local regulation of AI in employment.10
Conclusion
There are likely to be many more developments in the coming days and weeks. Our Labor, Employment, and Workplace Safety practice regularly counsels clients on the issues discussed above and is well-positioned to provide guidance and assistance to clients on these significant developments.
Footnotes
1 See K&L Gates LLP Legal Alert, Uncharted Waters: Employers Brace for Significant and Unprecedented Changes to Employment Law Enforcement Under New Administration, January 24, 2025, https://www.klgates.com/Uncharted-Waters-Employers-Brace-for-Significant-and-Unprecedented-Changes-to-Employment-Law-Enforcement-Under-New-Administration-1-24-2025.
2 See K&L Gates Legal Alert, Employer Use of Artificial Intelligence to Avoid Adverse Impact Liability Under Title VII, May 31, 2023, https://www.klgates.com/EEOC-Issues-Nonbinding-Guidance-on-Permissible-Employer-Use-of-Artificial-Intelligence-to-Avoid-Adverse-Impact-Liability-Under-Title-VII-5-31-2023.
3 See K&L Gates Legal Alert, DOL’s AI Hiring Framework Offers Employers Helpful Guidance on Combatting Algorithmic Bias, November 12, 2024, https://www.klgates.com/DOLs-AI-Hiring-Framework-Offers-Employers-Helpful-Guidance-on-Combatting-Algorithmic-Bias-11-12-2024.
4 See K&L Gates Legal Alert, The DOL Publishes Best Practices That Employers Can Follow to Decrease the Legal Risks Associated With Using AI in Employment Decisions, December 16, 2024, https://www.klgates.com/The-DOL-Publishes-Best-Practices-That-Employers-Can-Follow-to-Decrease-the-Legal-Risks-Associated-With-Using-AI-in-Employment-Decisions.
5 See K&L Gates Legal Alert, OFCCP Guidance Expands Federal Scrutiny of Artificial Intelligence Use by Employers, July 16, 2024, https://www.klgates.com/OFCCP-Guidance-Expands-Federal-Scrutiny-of-Artificial-Intelligence-Use-by-Employers-7-16-2024.
6 Supra note 1.
7 820 ILCS 42/1.
8 775 ILCS 5/2-101-102.
9 Colo. Rev. Stat. §§ 6-1-1701 to 6-1-1707.
10 See K&L Gates Legal Alert, The Texas Responsible AI Governance Act and Its Potential Impact on Employers, January 13, 2025, https://www.klgates.com/The-Texas-Responsible-AI-Governance-Act-and-Its-Potential-Impact-on-Employers-1-13-2025.
FDA Releases Draft Guidance on AI-Enabled Medical Devices
Go-To Guide:
The FDA issued draft guidance on AI-enabled medical devices, emphasizing a total product life cycle approach from design to post-market monitoring.
The guidance outlines recommended documentation for marketing submissions, including device descriptions, performance validation, and risk management plans.
Transparency and bias mitigation are highlighted as crucial elements in fostering trust and ensuring equitable outcomes for AI-enabled devices.
The FDA encourages manufacturers to provide clear, user-friendly labeling that explains AI functionality, limitations, and instructions for use.
This guidance may be subject to review and revision in light of President Trump’s recent AI-focused Executive Order.
On Jan. 7, 2025, the FDA issued its draft guidance, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” The draft guidance sets out the FDA’s recommendations on the documentation and information that should be included in marketing submissions for devices that include AI-enabled device software functions.
The guidance emphasizes the FDA’s holistic total product life cycle (TPLC) approach, which requires manufacturers to consider the entire lifespan of an AI-enabled device—from initial concept and design to post-market performance monitoring. The guidance also underscores the importance of transparency and bias mitigation in AI-enabled devices to foster trust and equitable outcomes. By addressing the unique challenges AI poses, the guidance establishes standards for transparency, accountability, and flexibility in managing AI-enabled devices across their TPLC.
Total Product Life Cycle Approach
The guidance highlights the importance of managing AI-enabled devices using a TPLC approach. This method seeks to ensure continuous oversight, from design and development through post-market performance. The FDA’s recommendations for manufacturers at each TPLC phase include:
Design and Development: Integrate risk management and human factors engineering early in the design process to mitigate potential risks associated with AI functionalities.
Validation and Testing: Utilize rigorous methodologies to validate AI performance, ensuring effectiveness across diverse patient populations and real-world settings.
Post-Market Monitoring: Continuously monitor in real-time to identify and address performance deviations or safety concerns, supported by mechanisms for timely updates.
Marketing Submission Requirements
The FDA emphasizes the critical elements that sponsors should provide in premarket submissions for AI-enabled devices. These include:
Device Description: Clear, comprehensive details about the device’s inputs and outputs; an explanation of how AI is used to achieve the device’s intended use; a description of the intended users, the level and type of training they have or will receive, the intended use environment, and the intended workflow for use of the device; and a description of installation and maintenance procedures, as well as any calibration or configuration procedures that users must regularly perform.
User Interface Information: Information that demonstrates the device workflow and how that information is presented to users, which may be accomplished through graphical representations, written descriptions, example reports, and recorded videos.
Labeling: Explanations, at an appropriate reading level, that the device includes AI; how AI is used to achieve the device’s intended use; model inputs and outputs; any automated functions; model architecture; development and performance data and metrics; performance monitoring; any known limitations of the device; and instructions for use. Appendix E provides exemplar communication models for sponsors to consider when developing labeling.
Training and Testing Data: Descriptions of data collection, data cleaning and processing, test data independence, reference standards, and representativeness.
Performance Validation: Evidence to demonstrate accuracy, reliability, and repeatability in clinical and non-clinical settings, including testing for specific populations. Appendix C includes recommendations for clinical performance validation, while Appendix D describes human factors considerations.
Change Management Plans: Information regarding performance monitoring plans, including measures to capture device performance after deployment, as well as updates, mitigations, and corrective actions.
Risk Management: A risk management file that includes a risk management plan and robust assessments to evaluate the risks of AI functions and their impact on patient safety, considering biases, software malfunctions, or data inaccuracies.
Cybersecurity and Data Integrity: Information regarding the measures taken to protect against data breaches and ensure the integrity of AI models.
Public Submission Summary: A summary with details about the AI-enabled device’s characteristics for use in public-facing documents. Appendix F provides examples for communicating the required information.
Appendix B includes recommendations for developing a transparent, user-centered device. The draft guidance encourages sponsors to take a holistic, user-centered approach to transparency, beginning at the design phase of the TPLC, to ensure important information is both accessible and functionally understandable. Because transparency is contextually dependent, the appropriate information to include will vary across devices, and the draft guidance provides examples for sponsors to consider.
Conclusion
By focusing on lifecycle management, transparency, bias mitigation, and flexibility, the FDA aims to balance innovation with public safety. Aligning with these recommendations may help manufacturers accelerate AI technology deployment in healthcare. The FDA actively seeks input from stakeholders, including manufacturers, healthcare professionals, and the public, to refine this draft guidance. Comments on the guidance may be submitted through April 7, 2025.
Although its effect is currently uncertain, President Trump’s rescission of President Biden’s AI Executive Order No. 14110 and issuance of his own AI-focused Executive Order entitled “Removing Barriers to American Leadership in Artificial Intelligence” on Jan. 23, 2025, may lead to a widespread reevaluation of AI policies and guidance issued by agencies such as the FDA. Accordingly, relevant stakeholders should monitor the viability and advancement of this draft guidance.
Geolocation Data in AI: Lessons from Niantic’s “Pokémon Go”
The use of geolocation data in AI development is rapidly evolving, with its applications expanding across various industries. In this advisory, members of Varnum’s Data Privacy and Cybersecurity team examine a key AI use case: Niantic’s “Pokémon Go”. This case highlights critical considerations that businesses must address as they leverage vast amounts of data for new applications. These considerations include the protection of children’s data and the compliance requirements necessary to safeguard sensitive information.
What is “Pokémon Go”?
“Pokémon Go”, launched in 2016 by Niantic, is an augmented reality (AR) mobile game that overlays digital creatures on real-world locations. Players interact with the game by traveling to specific geolocated spots to capture virtual Pokémon, participate in battles and explore their surroundings. With over one billion downloads globally, “Pokémon Go” has gathered vast amounts of geolocation data as users traverse real-world environments.
What Did Niantic Do with This Data?
Recently, Niantic revealed that it has been leveraging data collected from “Pokémon Go” to develop a large-scale geospatial AI model. This model uses anonymized and aggregated location data to better understand geographic patterns and improve AR experiences. According to Niantic, the model not only aids in enhancing its existing products but also paves the way for broader applications in geospatial intelligence, urban planning and beyond. Niantic’s efforts underscore the value of real-world data in building sophisticated AI systems, potentially revolutionizing industries ranging from gaming to infrastructure.
Why Is This Valuable for Companies?
The integration of real-world geolocation data into AI systems offers significant advantages:
Enhanced AI Models: Access to extensive geospatial data allows companies to train AI systems that better understand spatial relationships and human movement patterns.
Improved Customer Experiences: Applications powered by such AI models can offer personalized and context-aware services, leading to increased user engagement and satisfaction.
New Revenue Streams: Companies can monetize insights derived from location data across industries such as retail, real estate and logistics.
Special Considerations for Children’s Data
The ability to use data collected from a globally popular app highlights the potential for gaming companies and other businesses to pivot into data-driven AI innovation. However, leveraging such data raises critical privacy considerations. For example, when leveraging a mobile gaming application, companies should recognize that the game may be used largely by younger audiences, increasing the likelihood that the company will collect children’s data and be subject to regulations such as the Children’s Online Privacy Protection Act (COPPA). As such, companies must address several key issues to ensure compliance with privacy regulations and maintain public trust:
Transparency: Companies should clearly disclose how geolocation data is collected, processed and used. For example, concise and accessible privacy policies tailored to both parents and children can help end users understand how the company leverages the data collected through the app and can foster trust.
Consent: In many cases, companies should obtain explicit parental consent before collecting or processing data from children. This step is crucial to comply with regulations in the United States and similar laws globally. For example, COPPA not only mandates that a company obtain verifiable parental consent before collecting personal information from minors, but also requires that the parent is given the opportunity to prohibit the company from disclosing that information to third parties (unless disclosure is integral to the site or service, in which case, this must be made clear to parents).
Opt-Out Mechanisms: Companies should give parents the opportunity to prevent further use or online collection of a child’s personal information. Providing users, especially parents, with the ability to opt out of data collection or usage for AI development ensures greater control over personal information.
Protections and Guardrails: Companies should implement safeguards to prevent misuse or unauthorized access to children’s data. This includes anonymizing datasets, restricting data sharing and adhering to data minimization principles, as illustrated in the sketch following this list. Companies should also have mechanisms in place that allow parents to access, review and, if desired, delete their child’s personal information.
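To make the anonymization and data-minimization points concrete, the following is a minimal, illustrative sketch in Python of one common pattern: coarsening raw location fixes into grid cells and suppressing any cell visited by too few distinct users. The parameters (grid resolution, user threshold) are hypothetical policy choices, and the example is not a description of Niantic’s actual pipeline, whose details are not public.

```python
from collections import defaultdict

# Hypothetical illustration only: coarsen raw GPS fixes into roughly
# 1 km grid cells and keep a cell only if enough distinct (pseudonymous)
# users visited it, so no bucket can be tied to a small group of people.

K_ANON_THRESHOLD = 25   # minimum distinct users per cell (policy choice)
GRID_DECIMALS = 2       # 0.01 degree is ~1.1 km of latitude

def coarsen(lat: float, lon: float) -> tuple[float, float]:
    """Truncate coordinates to a coarse grid cell."""
    return (round(lat, GRID_DECIMALS), round(lon, GRID_DECIMALS))

def aggregate(points: list[tuple[str, float, float]]) -> dict:
    """points: (pseudonymous_user_id, lat, lon) tuples.
    Returns visit counts per grid cell, suppressing sparse cells."""
    users_per_cell = defaultdict(set)
    visits_per_cell = defaultdict(int)
    for user_id, lat, lon in points:
        cell = coarsen(lat, lon)
        users_per_cell[cell].add(user_id)
        visits_per_cell[cell] += 1
    # Suppress cells below the k-anonymity threshold.
    return {cell: visits_per_cell[cell]
            for cell, users in users_per_cell.items()
            if len(users) >= K_ANON_THRESHOLD}

sample = [("u1", 42.33172, -83.04562), ("u2", 42.33190, -83.04601)]
print(aggregate(sample))  # {} because only 2 distinct users share the cell
```

A real deployment would layer on further safeguards, such as differential privacy, retention limits, and exclusion of data flagged as belonging to children, but even this simple pattern shows anonymization, aggregation and data minimization working together.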
As companies increasingly leverage tracking technologies, such as geolocation data or online behavior, to enhance their AI models, it is imperative to address privacy concerns proactively. Sensitive data, particularly information related to children, must be handled with care to comply with regulatory requirements and uphold ethical standards.
Niantic’s use of “Pokémon Go” data serves as a compelling example of how innovative applications of real-world data can drive advancements in AI. However, it also emphasizes the need for organizations to prioritize transparency, consent and robust data protection. By doing so, businesses can unlock the potential of cutting-edge technology while safeguarding user trust and meeting their legal obligations.
Illinois Supreme Court Announces Policy on Artificial Intelligence
Last year, the Illinois Judicial Conference (IJC) created a Task Force on Artificial Intelligence to develop recommendations for how the Illinois Judicial Branch should regulate and use artificial intelligence (AI) in the court system. The task force made recommendations to the Illinois Supreme Court, which adopted a policy on AI effective January 1, 2025.
The policy is consistent with the American Bar Association’s AI Policy. The policy states that “the Illinois Courts will be vigilant against AI technologies that jeopardize due process, equal protection, or access to justice. Unsubstantiated or deliberately misleading AI generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making will not be tolerated.” In addition, the Illinois Supreme Court reiterated that “The Rules of Professional Conduct and the Code of Judicial Conduct apply fully to the use of AI technologies. Attorneys, judges, and self-represented litigants are accountable for their final work product. All users must thoroughly review AI-generated content before submitting it in any court proceeding to ensure accuracy and compliance with legal and ethical obligations. Prior to employing any technology, including generative AI applications, users must understand both general AI capabilities and the specific tools being utilized.”
Simultaneously, the Illinois Supreme Court published a judicial reference sheet that explains what AI and generative AI are, and what judges should watch for if litigants are using AI technology, including hallucinations, deepfakes, and extended reality. We anticipate more state courts will develop and adopt policies for AI use in the court system. Judges, lawyers, and pro se litigants should stay apprised of the court rules in the states in which they are active.
FINRA Publishes 2025 Annual Regulatory Oversight Report
On January 28, the Financial Industry Regulatory Authority (FINRA) published the 2025 update to its annual Regulatory Oversight Report.1 The report collects recent observations and findings from FINRA’s oversight programs – Member Supervision, Market Regulation and Transparency Services, and Enforcement – and provides FINRA member firms with a helpful resource to evaluate compliance on a number of cutting-edge topics. Through the report, member firms get to “see what FINRA sees” when it examines firms, conducts enforcement actions throughout the industry, and engages with firms throughout the year in providing regulatory guidance.
While FINRA’s report is not new, the 2025 edition is particularly noteworthy. First, it addresses new topics (like third-party risk/vendor management and extended-hours trading) and adds new findings and effective practices for prior topics. More importantly, the report highlights practices and topics deemed important to FINRA. With Trump’s second administration in Washington, a likely change in regulatory priorities among federal securities regulators, and Supreme Court and other federal court decisions limiting the role of federal administrative agencies, FINRA may very well fill any regulatory vacuum. For these reasons, the topics deemed important to FINRA and the best practices that FINRA highlights and encourages may take on outsized importance. The report is not a list of enforcement priorities (which FINRA published through 2020), but it still provides a helpful window into the topics that FINRA is considering and, therefore, what member firms should similarly consider to the extent applicable to their business.
Using FINRA’s Regulatory Oversight Report
Given the wide range of business that FINRA member firms conduct, it is impossible to provide a one-size-fits-all document. Retail firms, for example, will find greater use for topics such as ACH transfer fraud, Regulation BI compliance, and issues relating to senior investors. Institutional firms and firms with trading execution businesses will make better use of guidance on the Market Access Rule and Regulation SHO bona fide market-making compliance. And all firms will benefit from observations on core compliance issues such as books and records, net capital, outside business activities and private securities transactions.
Regardless of the applicable topic, the report is organized as a helpful resource on the topics that it covers and the regulatory requirements applicable to them. A firm currently engaged in a particular subject business can evaluate whether it has experienced or considered any of the emerging threats that FINRA lists. It can conduct a gap analysis of the firm’s current supervisory systems and written supervisory procedures to see how the firm supervises the applicable business in the face of those threats. Members can review FINRA’s recent findings on topics from two perspectives: if the firm has experienced issues similar to a recent finding, it can evaluate the remedial steps taken and any new policies and procedures implemented. A firm not yet affected by a recent finding can ask “what if” and evaluate whether its supervisory and other systems are well-designed to address or prevent the issue. Further, armed with an answer to the age-old question of “What do other firms do here?,” a firm can critically assess the best practices highlighted by FINRA to determine whether the firm can and/or should implement any of them in its compliance and supervisory systems. Above all, the report provides a good opportunity to prompt informed discussion among applicable stakeholders in the organization and a helpful resource to lead that discussion.
New Topics Added to the Report
As described in greater detail below, FINRA added two new sections to the 2025 report to address third-party risks and extended hours trading. FINRA also added new information addressing generative artificial intelligence (Generative AI).
Third-Party Risk Landscape. As noted in the report, cyberattacks on, and outages at, third-party vendors are on the rise. The report reminds firms that their supervisory obligations extend to activities and functions performed by third-party vendors. FINRA recommends effective practices to address third-party vendor risks, including:
maintaining a list of third-party vendor-provided services, systems and software components;
adopting supervisory controls and conducting risk assessments on the effects of a third-party vendor failure;
taking reasonable steps to help ensure that third-party vendors do not utilize Generative AI in a manner that would ingest the firm’s or customers’ sensitive information;
periodically reviewing third-party vendor tool default features and settings;
assessing third-party vendors’ ability to protect sensitive firm and customer non-public information and data; and
revoking a third-party vendor’s access when the relationship ends.
While it is unclear which specific regulatory requirement obligates a firm to supervise its use of third-party vendors, it would certainly be prudent to take the steps described to avoid or minimize any service interruptions or other deficiencies that a third-party vendor might introduce.2
Extended Hours Trading. Trading outside of regular trading hours has become increasingly popular in US securities markets. The report reminds firms that offer extended hours trading to provide their customers with a risk disclosure statement addressing extended hours trading, as required under FINRA Rule 2265. The report also recommends effective practices to address risks associated with extended hours trading, including:
conducting best execution reviews that properly evaluate execution quality during extended hours;
reviewing customer disclosures to help ensure that the disclosures properly address extended hours trading risks;
establishing and maintaining appropriate supervision that addresses the unique risks of extended hours trading; and
evaluating operational readiness, customer support and business continuity planning associated with extended hours trading.
Focus on Generative AI. The report focuses on the risks of artificial intelligence and particularly notes how Generative AI can be, and is being, used to further account takeovers and other forms of fraud. The report highlights a number of emerging cybercrime-related threats, including the use of Generative AI to produce fake content and to create malware that constantly changes to avoid detection.
Next Steps
We encourage firms to begin with the report’s table of contents to identify the topics most applicable to their business.
1 The full report is available at https://www.finra.org/rules-guidance/guidance/reports/2025-finra-annual-regulatory-oversight-report.
2 Separately, FINRA requested that its member firms provide FINRA with information about their vendors and banks by February 25, 2025.
Italian Garante Investigates DeepSeek’s Data Practices
On January 28, 2025, the Italian Data Protection Authority (“Garante”) announced that it had launched an investigation into the data processing practices of Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence (collectively, “DeepSeek”). The investigation focuses on the collection, use and storage of personal data in relation to DeepSeek’s chatbot services.
Key Areas of Inquiry
The Garante indicated that it has formally requested information from DeepSeek with regard to the following:
Details on how DeepSeek collects personal data, including the specific methods and channels used, and which personal data are collected. The Garante is also seeking information on the nature of the data used to train the AI system, including whether personal data is included.
Clarification on whether data is sourced directly from users, third parties or other mechanisms. Particularly, the Garante is interested in understanding whether personal data is collected through web scraping, and how both registered and unregistered users are informed about such data processing activities.
Identification of the legal basis relied on to legitimize the processing of personal data.
Confirmation of whether personal data is stored on servers located in China and of compliance with applicable international data transfer requirements.
Next Steps
DeepSeek is required to provide the requested information to the Garante within 20 days. Failure to do so could lead to further regulatory action, including potential enforcement actions. The Garante previously sanctioned OpenAI’s ChatGPT for infringements of certain requirements of the EU General Data Protection Regulation following a similar fact-finding inquiry.
Read the Garante’s press release (in Italian and English).
AI Regulation in Financial Services: US House Report
In December 2024, the US House of Representatives Bipartisan Task Force on Artificial Intelligence released a comprehensive report examining artificial intelligence’s (AI) impact across various sectors, including a significant focus on financial services. The report provides important insights into both the opportunities and challenges of AI adoption in the financial sector that will be a focus of the next Congress.
Key findings from the report
The task force highlighted several critical aspects of AI in financial services:
AI decision-making risks: AI automated decision-making tools trained on flawed or biased data can produce harmful outputs that may disproportionately affect certain groups. This risk is particularly heightened in areas such as lending and credit decisions, credit scoring models, and compliance with the Equal Credit Opportunity Act and Regulation B, and has been a strong focus of the Consumer Financial Protection Bureau’s supervisory highlights.
Consumer data privacy: Given AI’s reliance on large datasets, data privacy has emerged as a major concern. Financial institutions must carefully balance data utilization for AI systems with robust privacy protections.
Access to financial services: AI has the potential to increase access to financial services, particularly for underserved communities, through innovations such as alternative data underwriting and automated customer service.
Institution size disparity: Smaller financial institutions often lack the resources to develop and implement sophisticated AI tools, potentially creating competitive disadvantages against larger institutions.
Legacy integration: The financial sector has been utilizing AI technologies for decades, with applications ranging from fraud detection to algorithmic trading. However, recent advances in generative AI have introduced new considerations for regulation and oversight.
Practical takeaways for financial institutions
1. Governance and oversight
Establish internal AI governance bodies to oversee AI implementation
Maintain human oversight of AI systems, particularly for critical decisions
Document AI decision-making processes and maintain clear audit trails
2. Data management
Implement robust data quality controls for AI training data
Ensure compliance with privacy regulations when collecting and using customer data
Regularly audit AI systems for potential bias or discriminatory outcomes
3. Risk management
Develop comprehensive AI risk assessment frameworks
Maintain clear processes for monitoring and validating AI model outputs
Create contingency plans for AI system failures or errors
4. Regulatory compliance
Stay informed about evolving regulatory guidance on AI use
Ensure AI systems comply with existing antidiscrimination and consumer protection laws
Maintain transparency in AI-driven decisions affecting customers
5. Customer protection
Implement clear disclosure practices for AI-driven services
Consider alternative service options for customers who prefer non-AI interactions
Develop processes for addressing AI-related customer complaints
Looking ahead
The report suggests that future legislation will likely take a principles-based approach to AI regulation in financial services, focusing on existing regulatory frameworks while addressing new challenges posed by AI technology. Financial institutions should prepare for increased scrutiny of their AI systems while continuing to innovate responsibly.
For financial institutions considering or expanding their use of AI, the key message is clear: Embrace innovation while maintaining robust controls and oversight. Success will require balancing technological advancement with consumer protection and regulatory compliance.
This report serves as a valuable road map for financial institutions navigating the evolving landscape of AI regulation. Banks and financial services firms should review their current AI practices against these findings and prepare for potential regulatory developments in this space.
Two Takeaways from the U.S. Copyright Office’s Jan. 2025 Report on AI-Created Works
The U.S. Copyright Office released Part 2 of its report on Copyright and Artificial Intelligence on January 29, 2025. Part 2 focuses on the copyrightability of outputs created using generative AI. (The highly anticipated Part 3, which is forthcoming, will address copyright infringement and fair use issues involving generative AI.)
Two key takeaways from Part 2 are as follows.
1. No softening of the “human authorship” requirement.
Consistent with the positions taken in its recent registration decisions and in court cases, the Copyright Office reiterated, and did not soften, its view that human control over the creative expression in generative AI outputs is required for copyright registration. The Office explained that prompts (user instructions) used to create works with generative AI do not provide sufficient human control over the output the AI produces. As such, even exhaustive prompt engineering will not result in copyrightable expression using today’s generative AI technology. The key wording from the Office is as follows:
In theory, AI systems could someday allow users to exert so much control over how their expression is reflected in an output that the system’s contribution would become [protectable]. The evidence as to the operation of today’s AI systems indicates that this is not currently the case. Prompts do not appear to adequately determine the expressive elements produced, or control how the system translates them into an output.
The Office reiterated its position that copyright protection may currently be available for: (a) human-created works of authorship used as inputs/prompts that are perceptible in AI-generated outputs; (b) creative selection, coordination, or arrangement of material in the outputs (i.e., compilations); (c) creative modifications of the outputs; and (d) the prompts themselves if they are sufficiently creative (but not the outputs created in response to the prompts).
2. The Office believes foreign laws are mostly consistent with its positions.
The Office undertook a comparative analysis of the copyrightability of AI-generated works in South Korea, Japan, China, the EU, the UK, Hong Kong, India, New Zealand, Canada, and Australia. The Office concluded that other countries “that have addressed this issue so far have agreed that copyright requires human authorship.”
Interestingly, the Office pointed to a 2023 decision of the Beijing Internet Court that allowed copyright protection for an AI-generated image in an infringement case. According to the Office, that case decided that “the selection of over 150 prompts combined with subsequent adjustments and modifications demonstrated that the image was the result of the author’s ‘intellectual achievements,’ reflecting his personalized expression.” Given the Office’s position on prompts discussed above, it is unclear whether this Chinese decision is truly consistent with the U.S. view—perhaps it is partially consistent with respect to the “adjustments and modifications.” Indeed, commentators often cite that same case to show that China differs from the U.S. in allowing the protection of AI-generated images.
The Office did note that the legal positions in many countries are evolving and, in some cases, unclear.
We now eagerly await Part 3 of the Office’s report, which is being prepared while dozens of AI copyright infringement lawsuits come to a head in courts around the country. We should soon start to see some answers to AI infringement and fair use questions, even if only preliminary ones.
Trump 2.0 Executive Orders: Shock and Awe
Overview
Since his inauguration on 20 January 2025, President Donald J. Trump has signed dozens of executive orders and presidential memoranda on topics including, but not limited to, energy and the environment; immigration; international trade; foreign policy; diversity, equity and inclusion (DEI); transforming the civil service and federal government; and technology. These presidential actions include orders rescinding Biden-era regulations, orders withdrawing from international organizations and agreements, and orders implementing the administration’s affirmative policy objectives. These actions are indicative of the broader “America First” policy agenda set forth by President Trump throughout his campaign and signal key priority areas for his administration in the coming months.
End of the Biden Era
It is typical for an incoming president to issue a series of executive orders rescinding prior executive actions that conflict with the new administration’s agenda. For example, in former President Biden’s first 100 days in office, he reversed 62 of President Trump’s executive orders from his first term in office. In his first week back in office, President Trump has rolled back over 50 of President Biden’s executive actions. These Biden-era policies included topics such as ethics requirements for presidential appointees, COVID-19 response mechanisms, the enactment of health equity task forces, labor protections for federal workers, and efforts to mitigate climate change, among others.
In addition to rescinding former President Biden’s executive orders, President Trump issued a series of orders that temporarily suspended pending rules and programs of the Biden administration. First, President Trump imposed a regulatory freeze pending review on all proposed, pending, or finalized agency rules that have not yet been enacted. The freeze includes finalized rules that went unpublished in the Federal Register before the end of the Biden administration, published rules that have not yet taken effect, and any “regulatory actions . . . guidance documents . . . or substantive action” from federal agencies. President Trump also issued a hiring freeze on new federal civilian employees, exempting military and immigration enforcement positions.
Moreover, the Office of Management and Budget (OMB) issued an internal memo on Monday temporarily pausing federal grants, loans, and other financial assistance programs. However, the OMB later clarified that the pause applies only to programs implicated by seven of President Trump’s executive orders and was particularly meant to target, among other things, policies such as “DEI, the green new deal, and funding nongovernmental organizations that undermine the national interest.” Shortly before the pause was to take effect, the U.S. District Court for the District of Columbia issued a temporary stay through 3 February 2025. On Wednesday, the OMB announced that the original memo had been rescinded in light of widespread confusion over its potential implications. The seven executive orders originally mentioned remain in effect.
Looking Ahead: President Trump’s Agenda
Through this round of executive actions, President Trump has demonstrated his intention to utilize the full power of the presidency, in tandem with Republican control of Congress, to quickly enact his “America First” agenda. President Trump has identified key policy areas he plans to address in his second term, which primarily include energy dominance, immigration enforcement, global competition, undoing “woke” Biden-era policies, and American independence.
In keeping with these campaign priorities, he has ordered an end to DEI within the federal government and directed federal agencies to investigate DEI efforts in the private sector, declared national emergencies on energy and immigration, ordered a review of U.S. trade imbalances in preparation for widespread tariffs, delayed the ban on TikTok, withdrawn from the World Health Organization and the Paris Climate Agreement, elevated domestic artificial intelligence (AI) technology, and renamed the Gulf of Mexico the “Gulf of America.”
Conclusion
Although the Trump administration has a clear and focused policy and regulatory agenda and can work alongside a Republican-led Congress, narrow majorities in both chambers will, at times, necessitate bipartisanship. As such, we expect President Trump to continue taking action on issues where he can act unilaterally, narrowing the scope of policies that congressional Republicans must either enact via the budget reconciliation process or pass by building consensus with their Democratic counterparts.
Additional Authors: Lauren E. Hamma, Neeki Memarzadeh, and Jasper G. Noble
EU AI Regulation: When AI Literacy Becomes an Obligation
For many, AI tools such as Copilot and ChatGPT are already part of everyday life. Precisely for that reason, the EU AI Regulation (KIVO) will have significant consequences for many companies, particularly in the employment context. The regulation aims to govern the use of artificial intelligence (AI) in the EU and to ensure that AI systems are deployed safely and transparently. Below, we highlight the key points companies should keep in mind.
Risk-Based Approach
The KIVO follows a risk-based approach under which AI systems are divided into three categories:
Prohibited practices: Techniques that impair individuals’ ability to make decisions or exploit their behavior.
High-risk AI systems: Systems used in critical areas such as employment and human resources management.
Limited-risk AI systems: Systems subject to less stringent requirements.
High-Risk AI Systems in the Employment Context
Particularly relevant for employers are AI systems used for human resources decisions, such as:
hiring or selecting applicants;
decisions on promotions and terminations; or
assigning tasks based on individual behavior or personal characteristics.
AI systems used for these purposes are regularly classified as high-risk AI systems, so special requirements apply. It is important to become familiar with the KIVO’s requirements early and to take appropriate implementation measures.
Beyond that, AI systems may be embedded in a wide variety of applications that companies use to carry out their tasks and workflows.
Specific Obligations for Providers and Deployers
Providers and deployers of AI systems must fulfill various obligations, including:
ensuring human oversight of AI systems by persons with AI literacy;
implementing risk management and quality management systems;
meeting transparency and information obligations toward affected persons; and
conducting fundamental rights impact assessments for high-risk AI systems.
Unless they offer AI systems themselves, companies will be affected primarily in the role of deployer.
Data Protection and AI
The KIVO and the General Data Protection Regulation (GDPR) work hand in hand. While the KIVO focuses strongly on product safety aspects, the GDPR covers individuals’ rights in the processing of personal data. Companies must ensure that they comply with both regulations.
Co-Determination Rights
The introduction of AI systems may trigger co-determination rights of employee representative bodies. Companies should observe the relevant participation rights and, where appropriate, conclude framework works agreements on the use of AI.
Roadmap for Implementing the KIVO
The KIVO has been in force since 1 August 2024. To give companies sufficient time for implementation, its provisions take effect in stages. Key dates include:
2 February 2025: Company employees must have sufficient AI literacy, meaning companies must train their staff accordingly. Prohibitions on certain AI practices take effect.
2 August 2026: A further set of KIVO provisions takes effect, including specific requirements for high-risk AI systems.
2 August 2027: The rules for high-risk AI systems apply to specifically regulated products.
For companies acting as employers and “deployers” within the meaning of the KIVO, 2 February 2025 is a key date by which to establish employees’ AI literacy.