The Proskauer Brief Episode 52: AI at Work – Design Use Mismatches [Podcast]

In the final installment of our AI at Work series, partner Guy Brenner and senior counsel Jonathan Slowik tackle a critical issue: mismatches between how artificial intelligence (or AI) tools are designed and how they are actually used in practice. Many AI developers emphasize their rigorous efforts to eliminate bias, reassuring employers that their tools are fair and objective, but a system designed to be bias-free can still produce biased outcomes if used improperly. Tune in as we explore real-world examples of these risks and what employers can do to ensure they are leveraging AI responsibly.

UK ICO Sets Out Proposals to Promote Sustainable Economic Growth

On January 24, 2025, the UK Information Commissioner’s Office (“ICO”) published the letter it sent to the UK Prime Minister, Chancellor of the Exchequer, and Secretary of State for Business and Trade, in response to their request for proposals to boost business confidence, improve the investment climate, and foster sustainable economic growth in the UK. In the letter, the ICO sets out its proposals for doing so, including:

New rules for AI: The ICO recognizes that regulatory uncertainty can be a barrier to innovation, so it proposes a single set of rules for those developing or deploying AI products, supporting the UK government in legislating for such rules.
New guidance on other emerging technologies: The ICO will support businesses and “innovators” by publishing innovation-focused guidance in areas such as neurotech, cloud computing and Internet of Things devices.
Reducing costs for small and medium-sized companies (“SMEs”): Focusing on the administrative burden that SMEs face when complying with a complex regulatory framework, the ICO commits to simplifying existing requirements and easing the burden of compliance, including by launching a Data Essentials training and assurance programme for SMEs during 2025/26.
Sandboxes: The ICO will expand on its previous sandbox services by launching an “experimentation program” where companies will get a “time-limited derogation” from specific legal requirements, under the strict control of the ICO, to test new ideas. The ICO would support legislation from the UK government in this area.
Privacy-preserving digital advertising: The ICO recognizes the financial and societal benefits provided by the digital advertising economy but notes there are aspects of the regulatory landscape that businesses find difficult to navigate. The ICO wishes to help reduce the burdens for both businesses and customers related to digital advertising. To do so, the ICO, amongst other things, referred to its approach to regulating digital advertising as detailed in the 2025 Online Tracking Strategy (as discussed here).
International transfers: Recognizing the importance of international transfers to the UK economy, the ICO will, amongst other things, publish new guidance to enable quicker and easier transfers of data, and work through international fora, such as the G7, to build international agreement on increasing data transfer mechanisms.
Promoting information sharing between regulators: The ICO acknowledges that engaging with multiple regulators can be resource intensive, especially for SMEs. The ICO will work with the Digital Regulation Cooperation Forum to simplify this process, and would encourage legislation to simplify information sharing between regulators.

Read the letter from the ICO.

Rules on AI Literacy and Prohibited Systems Under the EU AI Act Become Applicable

On February 2, 2025, the EU AI Act’s rules on AI literacy, along with the prohibition of certain types of AI system, became applicable in the EU.
Under the new AI literacy obligations, providers and deployers will be required to ensure a sufficient level of AI literacy for their staff and other persons working with AI systems on their behalf. For this purpose, organizations should put in place robust AI training programs.
Additionally, under the new rules, the placing on the market, the putting into service or the use of AI systems that present unacceptable risks to the fundamental rights of individuals will be prohibited in the EU. AI systems covered by the new prohibition include AI used for:

social scoring for public and private purposes;
exploitation of vulnerabilities of persons through the use of subliminal techniques;
real time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions;
biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs or sexual orientation;
individual predictive policing;
emotion recognition in the workplace and education institutions, unless for medical or safety reasons; and
untargeted scraping of the Internet or CCTV footage for facial images to build up or expand facial recognition databases.

The AI literacy obligations and the prohibitions on the abovementioned AI systems are the first obligations under the AI Act to become applicable. The remaining obligations will apply to in-scope entities in stages following a transition period. The length of the transition period will vary depending on the type of AI system or model.

Specific obligations applicable to general-purpose AI models will become applicable on August 2, 2025.
Most obligations under the AI Act, including the rules applicable to high-risk AI systems under Annex III and systems subject to specific transparency requirements will become applicable on August 2, 2026.
Obligations related to high-risk systems included in Annex I of the AI Act will become applicable on August 2, 2027.

Certain AI systems and models already on the market may be exempt or have longer compliance deadlines.
Read the AI Act.

The Copyright Office’s Latest Guidance on AI and Copyrightability

The U.S. Copyright Office’s January 2025 report on AI and copyrightability reaffirms the longstanding principle that copyright protection is reserved for works of human authorship. Outputs created entirely by generative artificial intelligence (AI), with no human creative input, are not eligible for copyright protection. The Office offers a framework for assessing human authorship for works involving AI, outlining three scenarios: (1) using AI as an assistive tool rather than a replacement for human creativity, (2) incorporating human-created elements into AI-generated output, and (3) creatively arranging or modifying AI-generated elements.
A key takeaway from the report is that text prompts alone – even detailed ones – cannot currently provide sufficient human control over their execution to attribute authorship rights in the resulting output to the user. While highly detailed, iterative prompts may describe the user’s desired expressive elements, the same prompt can generate widely different results each time, demonstrating a lack of meaningful human control over the AI’s internal “black box” processes, at least as the technology stands now. The Office acknowledged that this conclusion could change if future AI tools offer users a higher degree of direct control over the expressive aspects of the final output.
To illustrate its stance, the Office points to Jackson Pollock’s paintings as examples of copyrightable works even where the output of the creative process appears random or unpredictable. While the final arrangement of paint in a Pollock piece may not have been fully predictable, Pollock himself chose the colors, number of layers, texture, and used his own body movements to physically execute these choices. By contrast, AI-generated works often involve an algorithmic process largely outside the user’s direct control. Pollock’s work, the Office notes, came from tangible and deliberate human decisions—rather than from a system where the user simply prompts and then observes. The key issue is the degree of human control over the creative process, rather than the predictability of the result.
The Office distinguishes between simple text prompts and other forms of creative input provided by a user to an AI system (such as a photograph or drawing). If the user’s creative input is perceptible in the AI-generated output, or the user makes creative modifications to AI-generated material, those portions of the work may be eligible for copyright protection. Similarly, where AI is used to augment or enhance human-created works (like films using AI visual effects), the overall work may remain copyrightable as a whole, even if the AI-generated components would not be individually protectable.
Many AI platforms now allow users to select, edit, and re-arrange individual elements of AI-generated content, offering a greater level of human engagement than text prompts alone. The Office reiterates that a case-by-case analysis of the creation process is necessary to determine whether the human contributions are sufficient for copyright protection, leaving it to the courts to provide further guidance on the extent of human authorship required in specific contexts.
The Office believes the existing legal frameworks are flexible enough to address emerging AI-related copyright issues, and that enacting new regulations would not provide the desired clarity given the inherently fact-specific nature of the analysis and AI’s wide and evolving role in creative processes. The Office also raises policy concerns that extending blanket copyright protection to AI-generated works could flood the market with AI-generated content, potentially devaluing or disincentivizing human-created works.
In light of this guidance, it is essential for creators and businesses to document their creative process, including human-created elements, modifications, and arrangements of AI-generated outputs, and how the AI tool is being used—whether as an assistive technology or as the principal decision-maker in shaping the creative elements of the final output. This documentation may be required or helpful during the copyright application process and in potential ownership or infringement disputes.
While AI continues to transform creative industries, the Office’s guidance is a reminder of the fundamental role of human creativity in works seeking U.S. copyright protection. Notably, the Office’s position largely aligns with emerging international consensus—jurisdictions including Korea, Japan, and EU member states have similarly reaffirmed that meaningful human creative contribution is a prerequisite for copyright protection. Looking ahead, Part 3 of the report will address the training of AI models on copyrighted works, licensing considerations, fair use, and allocation of liability. These topics are among the most complex and closely watched issues at the intersection of AI and copyright.

Growing Role of Natural Gas in Supporting Hyperscale AI Data Center Development

Data centers are foundational to the artificial intelligence (AI) ecosystem, providing the computational power necessary to train complex algorithms and manage massive data flows. The surge in data center construction across the United States is estimated to require an additional 47 gigawatts of power capacity by 2030.
Notably, approximately 60 percent of this U.S. demand is expected to be met by natural gas, creating significant growth opportunities for oil and gas suppliers. Globally, the demand for natural gas is projected to rise substantially, with some analysts forecasting up to 50 percent market growth over the next five years.
Oil companies, which historically produced electricity solely to support their own operations, are now poised to enter the broader power market amidst surging demand. This increased demand is driven in part by the rapid growth of technologies such as generative AI, which is expected to push U.S. electricity consumption to unprecedented levels in 2025 following two decades of stagnation.
In response to this growing need for power, the U.S. energy sector has invested heavily in new natural gas infrastructure and advocated for delaying the retirement of fossil-fuel power plants.
For major oil companies, this presents a promising opportunity. Some oil giants have already emphasized that the future of AI will depend as much on natural gas production from areas like the Southwest’s Permian Basin as it will on technological innovation in Silicon Valley. While data center developers have historically demonstrated a strong interest in using renewable energy sources, the intermittent nature of renewables and the slow pace of their buildout pose significant challenges. As a result, developers – particularly those constructing hyperscale data centers – are increasingly prioritizing energy solutions that offer speed and reliability.
This shift presents substantial growth opportunities for midstream natural gas transmission companies, which possess extensive pipeline networks capable of delivering consistent and efficient energy to meet the demands of large-scale data centers. By leveraging these established infrastructure assets, midstream companies are well-positioned to address the pressing energy needs of the rapidly expanding data center industry, even as the sector continues to explore pathways toward long-term sustainability.
Key Applications
One key area of impact is the advancement of battery technologies. These innovations can store surplus energy from renewable sources and provide reliable backup power during outages. Additionally, the industry’s knowledge of optimizing energy transmission networks can minimize energy losses and ensure maximum power delivery to data centers. Through strategic partnerships with technology companies, the energy sector is playing an essential role in creating more sustainable and efficient AI data centers.
Another critical contribution is the application of carbon capture and storage (CCS) technologies to reduce AI data centers’ environmental footprint. These facilities, which support large-scale AI models and applications, consume vast amounts of energy and contribute to greenhouse gas emissions. By implementing CCS solutions, energy companies can capture and store CO2 emissions generated by data centers, significantly reducing their carbon footprint. This approach aligns with the broader goals of both industries to lower environmental impact and transition toward cleaner energy solutions.
The intensifying competition for electricity has also led some major technology companies to reconsider their climate-focused commitments, which previously emphasized reliance exclusively on renewable energy sources, such as wind and solar, for powering energy-intensive AI data centers. This shift underscores the tension between the rising energy demands of emerging technologies and efforts to achieve sustainability goals.
Building an AI data center facility is a complex and capital-intensive endeavor that requires careful planning and working with trusted collaborators in multiple areas, including infrastructure, power, cooling, networking, tax considerations, security, and compliance. Site selection considerations include access to reliable power and cooling, minimal risk of natural disasters, and proximity to research hubs, cloud regions, or high-tech industries. Government incentives, raising capital, tax breaks and local data protection laws are also factors that play into data center growth and expansion.
Takeaways
The collaboration between the energy and technology sectors offers mutual benefits. The energy industry’s expertise in infrastructure, energy management, and renewable technologies can optimize AI data center operations, drive cost efficiencies, and support the shift toward greener energy. Simultaneously, technology companies can achieve more sustainable and energy-efficient facilities, advancing their environmental objectives. This partnership not only fosters innovation, but also contributes to a more sustainable future in both the energy and technology landscapes.

Key Insights on President Trump’s New AI Executive Order and Policy and Regulatory Implications

In recent days, the Biden administration’s reliance on export controls to curb China’s AI advancements has come under increasing scrutiny, particularly following the release of China’s DeepSeek R1 chatbot. This development raises concerns that prior U.S. restrictions have failed to slow China’s progress while potentially undermining U.S. global competitiveness in AI hardware and computing ecosystems. President Trump’s early actions—rescinding Biden’s AI executive order and emphasizing innovation over risk mitigation—signal a shift away from restrictive policies toward a more pro-innovation, market-driven approach.
In its final days, the Biden administration issued an interim final rule seeking to “regulate the global diffusion” of AI by imposing export licensing restrictions on advanced chips to 150 “middle-tier” countries, while maintaining existing embargoes on China, Russia, and Iran. The rule was widely viewed as an acknowledgment that previous AI export controls, dating back to 2022, had failed to fully prevent China’s access to AI-enabling technologies through third parties. Critics argue that these restrictions will reduce global demand for U.S. chips and incentivize non-U.S. computing ecosystems, weakening America’s long-term AI leadership rather than protecting national security.
Some have called DeepSeek R1’s recent emergence a “Sputnik moment,” highlighting the inadequacy of past U.S. controls and reinforcing calls in some quarters for a strategic overhaul. David Sacks, President Trump’s AI and crypto advisor, has framed the chatbot’s release as evidence that Biden’s policies constrained American AI companies without effectively restricting China’s advancements.
Adding to these concerns, DeepSeek’s rapid rise has also triggered significant international regulatory scrutiny. Italy’s data protection authority, the Garante, has formally demanded explanations on how DeepSeek handles Italian users’ personal data, following accusations that its privacy policy violates multiple provisions of the EU General Data Protection Regulation (GDPR). Consumer advocacy groups argue that DeepSeek lacks transparency regarding data retention, profiling, and user rights, while also storing user data in China, raising potential security risks.
Notably, the White House has also begun reviewing DeepSeek over possible national security threats, echoing past scrutiny faced by Chinese tech firms like TikTok. Indeed, Sacks acknowledged evidence that DeepSeek improperly used data from U.S. company models to train its own, stating that leading U.S. AI companies would take further steps to prevent that practice going forward. This aligns with the Trump administration’s broader economic philosophy, which favors deregulation and private-sector empowerment over government-imposed constraints.
At this point, we believe that Trump’s approach to AI export controls may include: (1) repealing or significantly revising Biden-era AI restrictions to avoid isolating U.S. firms from global markets, (2) prioritizing domestic AI and semiconductor growth through deregulation and incentives, (3) refining export control enforcement to focus on targeted national security threats rather than broad-based restrictions, and (4) recalibrating international AI coordination to ensure U.S. competitiveness rather than emphasizing strict regulatory alignment with allies.
This shift could accelerate AI innovation in the U.S., but it also raises concerns about how to balance national security with economic competitiveness. If AI export controls are loosened too aggressively, China may find easier pathways to advanced AI technologies. Additionally, the divergence between the U.S.’s deregulated AI approach and the EU’s AI Act, which prioritizes stringent governance and risk mitigation, may create regulatory challenges for U.S. companies operating in international markets.
The Trump administration’s AI strategy marks a decisive departure from Biden’s risk-focused policies, opting instead for a more aggressive push to strengthen U.S. AI dominance through industry-led growth. While this shift may create opportunities for American companies, it also requires careful navigation of global regulatory landscapes and national security considerations.

The Changing Landscape of AI: Federal Guidance for Employers Reverses Course With New Administration

In the midst of the multiple executive orders issued in the first days of the Trump administration, on 23 January 2025, the White House issued an executive order entitled Removing Barriers to American Leadership in Artificial Intelligence (AI EO). At a high level, President Trump issued the AI EO to (1) implement the revocation of President Biden’s executive order on artificial intelligence (AI), entitled the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (EO 14110) and (2) create President Trump’s AI action plan to ensure that AI systems are “free from ideological bias or engineered social agendas.” As a result of the AI EO, the Equal Employment Opportunity Commission (EEOC) and Department of Labor (DOL) pulled or updated a number of AI-related publications and other documents from their websites. This alert provides an overview of the AI EO as well as the changes to guidance from the EEOC and DOL. It also outlines best practices for employers to ensure compliance at both the federal and state levels.
Revocation of President Biden’s EO 14110
Similar to the White House’s executive order entitled Protecting Civil Rights and Expanding Individual Opportunity (DEI EO)1 issued on 21 January 2025, the AI EO implemented the revocation of an executive order on the same subject issued during the Biden administration, EO 14110. EO 14110 required the implementation of safeguards (i) for the development of AI and (ii) to ensure AI policies were consistent with the advancement of “equity and civil rights.” EO 14110 also directed agencies to develop plans, policies, and guidance on AI, which they did before the end of 2024. 
In connection with rescinding EO 14110, the White House issued a fact sheet and asserted that EO 14110 “hinder[ed] AI innovation and impose[d] onerous and unnecessary government control over the development of AI”. It directed executive departments and agencies to revise or rescind all actions, including policies, directives, regulations, and orders, taken under EO 14110 that are inconsistent with the AI EO. Further, the AI EO mandates that the director of the Office of Management and Budget (OMB) revise OMB Memoranda M-24-10 and M-24-18 (which address the federal government’s acquisition and governance of AI) within 60 days. To the extent that an action cannot be immediately suspended, revised, or rescinded, then the AI EO authorizes agencies to allow exemptions until the actions may be suspended, revised, or rescinded.
AI Action Plan
As part of its stated goal to “enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security,” the AI EO directs the assistant to the president for science and technology, the special advisor for AI and crypto, and the assistant to the president for national security affairs to develop an AI action plan within 180 days. Such AI action plan will be developed in coordination with the assistant to the president for economic policy, the assistant to the president for domestic policy, the director of the OMB, and any relevant heads of executive departments and agencies. 
Updates to Existing Federal Agency Guidance on AI
Dovetailing with the AI EO, on 27 January 2025, the EEOC removed AI-related guidance2 from its website. This guidance, published in May 2023, addressed how existing federal anti-discrimination law may apply to employers’ use of AI when hiring, firing, or promoting employees. 
Similarly, the DOL noted on its website that the “AI & Inclusive Hiring Framework”3 published in September 2024 by the Partnership on Employment & Accessible Technology, and the DOL’s October 2024 “Artificial Intelligence Best Practices”4 guidance, may now be outdated or not reflective of current policies. Both publications are also unavailable in certain locations in which they were previously accessible. While the DOL and the EEOC updated and removed their AI-related guidance, respectively, the Office of Federal Contract Compliance Programs (OFCCP) website still maintains its 29 April 2024 nonbinding guidance on how federal contractors and subcontractors should use AI to ensure compliance with existing equal employment opportunity obligations under federal law.5
Best Practices for Employers 
As with any change to executive branch and federal agency guidance, employers should continue to monitor developments that may impact AI-related policies. Further, employers should review current AI policies for compliance with the recent executive branch equal employment guidance related to diversity, equity, and inclusion.6 While guidance and enforcement priorities at the federal level have changed, employers must still comply with the various state-level regulations on AI, as many states have passed regulations addressing the use of AI in employment decisions.7 For example, in 2019, Illinois enacted the Artificial Intelligence Video Interview Act, whereby employers who use AI to analyze video interviews must provide notice to the candidate, obtain their consent, and provide an explanation of the AI technology used. Illinois also amended the Illinois Human Rights Act8 to prohibit discrimination by employers who utilize AI in recruitment, hiring, promotion, professional development, and other employment decisions. More recently, in 2024, Colorado enacted a law that prohibits algorithmic discrimination in “consequential decisions,” which is defined to include those related to employment or employment opportunity.9 Moreover, with this shift in guidance at the federal level, employers should anticipate increased state and local regulation of AI in employment.10
Conclusion
There are likely to be many more developments in the coming days and weeks. Our Labor, Employment, and Workplace Safety practice regularly counsels clients on the issues discussed above and is well-positioned to provide guidance and assistance to clients on these significant developments.

Footnotes

1 See K&L Gates LLP Legal Alert, Uncharted Waters: Employers Brace for Significant and Unprecedented Changes to Employment Law Enforcement Under New Administration, January 24, 2025, https://www.klgates.com/Uncharted-Waters-Employers-Brace-for-Significant-and-Unprecedented-Changes-to-Employment-Law-Enforcement-Under-New-Administration-1-24-2025.
2 See K&L Gates Legal Alert, Employer Use of Artificial Intelligence to Avoid Adverse Impact Liability Under Title VII, May 31, 2023, https://www.klgates.com/EEOC-Issues-Nonbinding-Guidance-on-Permissible-Employer-Use-of-Artificial-Intelligence-to-Avoid-Adverse-Impact-Liability-Under-Title-VII-5-31-2023.
3 See K&L Gates Legal Alert, DOL’s AI Hiring Framework Offers Employers Helpful Guidance on Combatting Algorithmic Bias, November 12, 2024, https://www.klgates.com/DOLs-AI-Hiring-Framework-Offers-Employers-Helpful-Guidance-on-Combatting-Algorithmic-Bias-11-12-2024.
4 See K&L Gates Legal Alert, The DOL Publishes Best Practices That Employers Can Follow to Decrease the Legal Risks Associated With Using AI in Employment Decisions, December 16, 2024, https://www.klgates.com/The-DOL-Publishes-Best-Practices-That-Employers-Can-Follow-to-Decrease-the-Legal-Risks-Associated-With-Using-AI-in-Employment-Decisions.
5 See K&L Gates Legal Alert, OFCCP Guidance Expands Federal Scrutiny of Artificial Intelligence Use by Employers, July 16, 2024, https://www.klgates.com/OFCCP-Guidance-Expands-Federal-Scrutiny-of-Artificial-Intelligence-Use-by-Employers-7-16-2024.
6 Supra note 1.
7 820 ILCS 42/1.
8 775 ILCS 5/2-101-102.
9 Colo. Rev. Stat. §§ 6-1-1701 to 6-1-1707.
10 See K&L Gates Legal Alert, The Texas Responsible AI Governance Act and Its Potential Impact on Employers, January 13, 2025, https://www.klgates.com/The-Texas-Responsible-AI-Governance-Act-and-Its-Potential-Impact-on-Employers-1-13-2025.

FDA Releases Draft Guidance on AI-Enabled Medical Devices

Go-To Guide:

The FDA issued draft guidance on AI-enabled medical devices, emphasizing a total product life cycle approach from design to post-market monitoring. 
The guidance outlines recommended documentation for marketing submissions, including device descriptions, performance validation, and risk management plans. 
Transparency and bias mitigation are highlighted as crucial elements in fostering trust and ensuring equitable outcomes for AI-enabled devices. 
The FDA encourages manufacturers to provide clear, user-friendly labeling that explains AI functionality, limitations, and instructions for use. 
This guidance may be subject to review and revision in light of President Trump’s recent AI-focused Executive Order.

On Jan. 7, 2025, the FDA issued its draft guidance, “Artificial Intelligence-Enabled Device Software Functions: Lifecycle Management and Marketing Submission Recommendations.” In its latest draft guidance on medical devices, the FDA provides recommendations on the documentation and information that should be included in marketing submissions for devices that include AI-enabled device software functions.
The guidance emphasizes the FDA’s holistic total product life cycle (TPLC) approach, which requires manufacturers to consider the entire lifespan of an AI-enabled device—from initial concept and design to post-market performance monitoring. The guidance also underscores the importance of transparency and bias mitigation in AI-enabled devices to foster trust and equitable outcomes. By addressing the unique challenges AI poses, the guidance establishes standards for transparency, accountability, and flexibility in managing AI-enabled devices across their TPLC.
Total Product Life Cycle Approach
The guidance highlights the importance of managing AI-enabled devices using a TPLC approach. This method seeks to ensure continuous oversight, from design and development through post-market performance. The FDA’s recommendations for manufacturers at each TPLC phase include:

Design and Development: Integrate risk management and human factors engineering early in the design process to mitigate potential risks associated with AI functionalities. 
Validation and Testing: Utilize rigorous methodologies to validate AI performance, ensuring effectiveness across diverse patient populations and real-world settings. 
Post-Market Monitoring: Continuously monitor in real-time to identify and address performance deviations or safety concerns, supported by mechanisms for timely updates.

Marketing Submission Requirements
The FDA emphasizes the critical elements that sponsors should provide in premarket submissions for AI-enabled devices. These include:

Device Description: Clear, comprehensive details about the device’s inputs and outputs, an explanation of how AI is used to achieve the device’s intended use, a description of the intended users, the level and type of training intended users have or will receive, the intended use environment, the intended workflow of the use of the device, and a description of installation and maintenance procedures, as well as any calibration or configuration procedures that must be regularly performed by users. 
User Interface Information: Information that demonstrates the device workflow and how that information is presented to users, which may be accomplished through graphical representations, written descriptions, example reports, and recorded videos. 
Labeling: Explanations, at an appropriate reading level, that the device includes AI, how AI is used to achieve the device’s intended use, model inputs and outputs, any automated functions, model architecture, development and performance data and metrics, performance monitoring, any known limitations of the device, and instructions for use. Appendix E provides exemplar communication models for sponsors to consider when developing labeling. 
Training and Testing Data: Descriptions of data collection, data cleaning and processing, test data independence, reference standards, and representativeness. 
Performance Validation: Evidence to demonstrate accuracy, reliability, and repeatability in clinical and non-clinical settings, including testing for specific populations. Appendix C includes recommendations for clinical performance validation, while Appendix D describes human factors considerations. 
Change Management Plans: Information regarding performance monitoring plans, including measures to capture device performance after deployment, as well as updates, mitigations, and corrective actions. 
Risk Management: A risk management file that includes a risk management plan and robust assessments to evaluate the risks of AI functions and their impact on patient safety, considering biases, software malfunctions, or data inaccuracies. 
Cybersecurity and Data Integrity: Information regarding the measures taken to protect against data breaches and ensure the integrity of AI models. 
Public Submission Summary: A summary with details about the AI-enabled device’s characteristics for use in public-facing documents. Appendix F provides examples for communicating the required information.

Appendix B includes recommendations for developing a device that is transparent to its users. The draft guidance encourages sponsors to take a holistic, user-centered approach to transparency, beginning at the design phase of the TPLC, to ensure important information is both accessible and functionally understandable. Because transparency is contextually dependent, the appropriate information to include will vary across devices, and the draft provides examples for sponsors to consider.
Conclusion
By focusing on lifecycle management, transparency, bias mitigation, and flexibility, the FDA aims to balance innovation with public safety. Aligning with these recommendations may help manufacturers accelerate AI technology deployment in healthcare. The FDA actively seeks input from stakeholders, including manufacturers, healthcare professionals, and the public, to refine this draft guidance. Comments on the guidance are welcomed through April 7, 2025.
While the impact remains uncertain, President Trump’s rescission of President Biden’s AI Executive Order No. 14110 and issuance of his own AI-focused Executive Order entitled “Removing Barriers to American Leadership in Artificial Intelligence” on Jan. 23, 2025, may lead to a widespread reevaluation of the AI policies and guidance documents that agencies such as the FDA have issued. Accordingly, relevant stakeholders should monitor the viability and advancement of this draft guidance.

Geolocation Data in AI: Lessons from Niantic’s “Pokémon Go”

The use of geolocation data in AI development is rapidly evolving, with its applications expanding across various industries. In this advisory, members of Varnum’s Data Privacy and Cybersecurity team examine a key AI use case: Niantic’s “Pokémon Go”. This case highlights critical considerations that businesses must address as they leverage vast amounts of data for new applications. These considerations include the protection of children’s data and the compliance requirements necessary to safeguard sensitive information.
What is “Pokémon Go”?
“Pokémon Go”, launched in 2016 by Niantic, is an augmented reality (AR) mobile game that overlays digital creatures on real-world locations. Players interact with the game by traveling to specific geolocated spots to capture virtual Pokémon, participate in battles and explore their surroundings. With over one billion downloads globally, “Pokémon Go” has gathered vast amounts of geolocation data as users traverse real-world environments.
What Did Niantic Do with This Data?
Recently, Niantic revealed that it has been leveraging data collected from “Pokémon Go” to develop a large-scale geospatial AI model. This model uses anonymized and aggregated location data to better understand geographic patterns and improve AR experiences. According to Niantic, the model not only aids in enhancing its existing products but also paves the way for broader applications in geospatial intelligence, urban planning and beyond. Niantic’s efforts underscore the value of real-world data in building sophisticated AI systems, potentially revolutionizing industries ranging from gaming to infrastructure.
Why Is This Valuable for Companies?
The integration of real-world geolocation data into AI systems offers significant advantages:

Enhanced AI Models: Access to extensive geospatial data allows companies to train AI systems that better understand spatial relationships and human movement patterns.
Improved Customer Experiences: Applications powered by such AI models can offer personalized and context-aware services, leading to increased user engagement and satisfaction.
New Revenue Streams: Companies can monetize insights derived from location data across industries such as retail, real estate and logistics.

Special Considerations for Children’s Data
The ability to use data collected from a globally popular app highlights the potential for gaming companies and other businesses to pivot into data-driven AI innovation. However, leveraging such data raises critical privacy considerations. For example, when leveraging a mobile gaming application, companies should be cognizant that the game may be used largely by younger audiences, increasing the likelihood that the company will collect children’s data and be subject to regulations such as the Children’s Online Privacy Protection Act (COPPA). As such, companies must address several key issues to ensure compliance with privacy regulations and maintain public trust:

Transparency: Companies should clearly disclose how geolocation data is collected, processed and used. For example, concise and accessible privacy policies tailored to both parents and children can help end users understand how the company is leveraging the data collected through the app and can foster trust.
Consent: In many cases, companies should obtain explicit parental consent before collecting or processing data from children. This step is crucial to comply with regulations in the United States and similar laws globally. For example, COPPA not only mandates that a company obtain verifiable parental consent before collecting personal information from minors, but also requires that the parent is given the opportunity to prohibit the company from disclosing that information to third parties (unless disclosure is integral to the site or service, in which case, this must be made clear to parents).
Opt-Out Mechanisms: Companies should give parents the opportunity to prevent further use or online collection of a child’s personal information. Providing users, especially parents, with the ability to opt out of data collection or usage for AI development ensures greater control over personal information.
Protections and Guardrails: Companies should implement safeguards to prevent misuse or unauthorized access to children’s data. This includes anonymizing datasets, restricting data sharing and adhering to data minimization principles. Companies should also have mechanisms in place to allow parents access to their child’s personal information to review and/or have the information deleted.
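To make the anonymization and data-minimization guardrails above more concrete, the following is a minimal, illustrative Python sketch. It is not Niantic’s actual pipeline, and every function name, field and threshold here is a hypothetical assumption; it simply shows two common techniques: coarsening raw GPS coordinates into grid cells and suppressing any cell observed for fewer than a minimum number of distinct users, so that aggregate insights can be derived without retaining individual movement traces.

```python
from collections import defaultdict

# Hypothetical raw record: (user_id, latitude, longitude)
RawPing = tuple[str, float, float]


def coarsen(lat: float, lon: float, precision: int = 2) -> tuple[float, float]:
    """Round coordinates to a coarse grid cell (roughly 1 km at two decimal
    places), discarding fine-grained location detail (data minimization)."""
    return round(lat, precision), round(lon, precision)


def aggregate(pings: list[RawPing], min_users: int = 5) -> dict[tuple[float, float], int]:
    """Aggregate pings into per-cell user counts and suppress any cell seen by
    fewer than min_users distinct users (a k-anonymity-style threshold)."""
    users_per_cell: dict[tuple[float, float], set[str]] = defaultdict(set)
    for user_id, lat, lon in pings:
        users_per_cell[coarsen(lat, lon)].add(user_id)
    # Only cell-level counts leave this function; user identifiers are dropped.
    return {cell: len(users) for cell, users in users_per_cell.items() if len(users) >= min_users}


if __name__ == "__main__":
    sample = [
        ("u1", 42.361, -71.057), ("u2", 42.362, -71.058), ("u3", 42.359, -71.061),
        ("u4", 42.363, -71.059), ("u5", 42.358, -71.062), ("u6", 40.713, -74.006),
    ]
    # The five Boston-area pings survive as one aggregated cell; the lone
    # New York ping is suppressed because it fails the min_users threshold.
    print(aggregate(sample))  # {(42.36, -71.06): 5}
```

In practice, the grid precision and the minimum-user threshold are design choices that should be calibrated to the sensitivity of the data and the population involved (for example, data likely to include children), and technical measures like these complement, rather than replace, the transparency, consent and access obligations described above.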

As companies increasingly leverage tracking technologies, such as geolocation data or online behavior, to enhance their AI models, it is imperative to address privacy concerns proactively. Sensitive data, particularly information related to children, must be handled with care to comply with regulatory requirements and uphold ethical standards.
Niantic’s use of “Pokémon Go” data serves as a compelling example of how innovative applications of real-world data can drive advancements in AI. However, it also emphasizes the need for organizations to prioritize transparency, consent and robust data protection. By doing so, businesses can unlock the potential of cutting-edge technology while safeguarding user trust and meeting their legal obligations.

Illinois Supreme Court Announces Policy on Artificial Intelligence

Last year, the Illinois Judicial Conference Task Force on Artificial Intelligence (IJC) was created to develop recommendations for how the Illinois Judicial Branch should regulate and use artificial intelligence (AI) in the court system. The IJC made recommendations to the Illinois Supreme Court, which adopted a policy on AI effective January 1, 2025.
The policy is consistent with the American Bar Association’s AI Policy. The policy states that “the Illinois Courts will be vigilant against AI technologies that jeopardize due process, equal protection, or access to justice. Unsubstantiated or deliberately misleading AI generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making will not be tolerated.” In addition, the Illinois Supreme Court reiterated that “The Rules of Professional Conduct and the Code of Judicial Conduct apply fully to the use of AI technologies. Attorneys, judges, and self-represented litigants are accountable for their final work product. All users must thoroughly review AI-generated content before submitting it in any court proceeding to ensure accuracy and compliance with legal and ethical obligations. Prior to employing any technology, including generative AI applications, users must understand both general AI capabilities and the specific tools being utilized.”
Simultaneously, the Illinois Supreme Court published a judicial reference sheet that explains what AI and generative AI are, and what judges should watch for if litigants are using AI technology, including hallucinations, deepfakes, and extended reality. We anticipate more state courts will develop and adopt policies for AI use in the court system. Judges, lawyers, and pro se litigants should stay apprised of the court rules in the states in which they are active.

FINRA Publishes 2025 Annual Regulatory Oversight Report

On January 28, the Financial Industry Regulatory Authority (FINRA) published the 2025 update to its annual Regulatory Oversight Report.1 The report collects recent observations and findings from FINRA’s oversight programs – Member Supervision, Market Regulation and Transparency Services, and Enforcement – and provides FINRA member firms with a helpful resource to evaluate compliance on a number of cutting-edge topics. Through the report, member firms get to “see what FINRA sees” when it examines firms, conducts enforcement actions throughout the industry, and engages with firms throughout the year in providing regulatory guidance.
While FINRA’s report is not new, the 2025 edition is particularly noteworthy. First, it addresses new topics (like third-party risk/vendor management and extended-hours trading) and adds new findings and effective practices for prior topics. But more importantly, the report highlights practices and topics deemed important to FINRA. With Trump’s second administration in Washington and the likely change in regulatory priorities from federal securities regulators, coupled with the Supreme Court and other federal court decisions limiting the role of federal administrative agencies, FINRA may very well fill any vacuum of regulation. For these reasons, the topics deemed important to FINRA and the best practices that FINRA highlights and encourages may take on outsized importance. The report is not a list of enforcement priorities (which FINRA published through 2020), but it still provides a helpful window into the topics that FINRA is considering and, therefore, what member firms should similarly consider to the extent applicable to their business.
Using FINRA’s Regulatory Oversight Report
Given the wide range of business that FINRA member firms conduct, it is impossible to provide a one-size-fits-all document. Retail firms, for example, will find greater use for topics such as ACH transfer fraud, Regulation BI compliance, and issues relating to senior investors. Institutional firms and firms with trading execution businesses will make better use of guidance on the Market Access Rule and Regulation SHO bona fide market-making compliance. And all firms will benefit from observations on core compliance issues such as books and records, net capital, outside business activities and private securities transactions.
Regardless of the applicable topic, the report is organized as a helpful resource on the topics that it covers and the regulatory requirements applicable to them. A firm currently engaged in a particular subject business can evaluate whether it has experienced or considered any of the emerging threats that FINRA lists. It can conduct a gap analysis of the firm’s current supervisory systems and written supervisory procedures to see how the firm supervises the applicable business in the face of those threats. Members can review FINRA’s recent findings on topics from two perspectives: if the firm has experienced issues similar to a recent finding, it can evaluate the remedial steps taken and any new policies and procedures implemented. A firm not yet affected by a recent finding can ask “what if” and evaluate whether its supervisory and other systems are well-designed to address or prevent the issue. Further, armed with an answer to the age-old question of “What do other firms do here?,” a firm can critically assess the best practices highlighted by FINRA to determine whether the firm can and/or should implement any of them in its compliance and supervisory systems. Above all, the report provides a good opportunity to prompt informed discussion among applicable stakeholders in the organization and a helpful resource to lead that discussion.
New Topics Added to the Report
As described in greater detail below, FINRA added two new sections to the 2025 report to address third-party risks and extended hours trading. FINRA also added new information addressing generative artificial intelligence (Generative AI).
Third-Party Risk Landscape. As noted in the report, cyberattacks on and outages at third-party vendors are on the rise. The report reminds firms that their supervisory obligations extend to activities and functions performed by third-party vendors. FINRA recommends effective practices to address third-party vendor risks, including:

maintaining a list of third-party vendor-provided services, systems and software components;
adopting supervisory controls and conducting risk assessments on the effects of a third-party vendor failure;
taking reasonable steps to help ensure that third-party vendors do not utilize Generative AI in a manner that would ingest the firm’s or customers’ sensitive information;
periodically reviewing third-party vendor tool default features and settings;
assessing third-party vendors’ ability to protect sensitive firm and customer non-public information and data; and
revoking a third-party vendor’s access when the relationship ends.

While it is unclear which specific regulatory requirement obligates a firm to supervise its use of third-party vendors, it would certainly be prudent to take the steps described to avoid or minimize any service interruptions or other deficiencies that a third-party vendor might introduce.2
Extended Hours Trading. Trading in US securities markets outside of regular trading hours has become increasingly popular. The report reminds firms that offer extended hours trading to provide their customers with a risk disclosure statement that addresses extended hours trading under FINRA Rule 2265. The report also recommends effective practices to address risks associated with extended hours trading, including:

conducting best execution reviews that properly evaluate execution quality during extended hours;
reviewing customer disclosures to help ensure that the disclosures properly address extended hours trading risks;
establishing and maintaining appropriate supervision that addresses the unique risks of extended hours trading; and
evaluating operational readiness, customer support and business continuity planning associated with extended hours trading.

Focus on Generative AI. The report focuses on the risks associated with artificial intelligence and particularly notes how Generative AI can be, and is being, used to further account takeovers and other forms of fraud. The report highlights a number of emerging cybercrime-related threats, including the use of Generative AI to produce fake content and to create malware that can constantly change to avoid detection.
Next Steps
We encourage firms to begin with the report’s table of contents to identify the topics most applicable to their business. 

1 The full report is available at https://www.finra.org/rules-guidance/guidance/reports/2025-finra-annual-regulatory-oversight-report.
2 Separately, FINRA requested that its member firms provide FINRA with information about their vendors and banks by February 25, 2025.

Italian Garante Investigates DeepSeek’s Data Practices

On January 28, 2025, the Italian Data Protection Authority (“Garante”) announced that it had launched an investigation into the data processing practices of Hangzhou DeepSeek Artificial Intelligence and Beijing DeepSeek Artificial Intelligence (collectively, “DeepSeek”). The investigation focuses on the collection, use and storage of personal data in relation to DeepSeek’s chatbot services.
Key Areas of Inquiry
The Garante indicated that it has formally requested information from DeepSeek with regard to the following:

Details on how DeepSeek collects personal data, including the specific methods and channels used, and which personal data are collected. The Garante is also seeking information on the nature of the data used to train the AI system, including whether personal data is included.
Clarification on whether data is sourced directly from users, third parties or other mechanisms. Particularly, the Garante is interested in understanding whether personal data is collected through web scraping, and how both registered and unregistered users are informed about such data processing activities.
Identification of the legal basis relied on to legitimize the processing of personal data.
Confirmation of whether personal data is stored on servers located in China, and of compliance with applicable international data transfer requirements.

Next Steps
DeepSeek is required to provide the requested information to the Garante within 20 days. Failure to do so could lead to further regulatory action, including potential enforcement proceedings. The Garante previously sanctioned OpenAI over ChatGPT for infringements of certain requirements of the EU General Data Protection Regulation following a similar fact-finding inquiry.
Read the Garante’s press release (in Italian and English).