AI Regulation in Financial Services: US House Report

In December 2024, the US House of Representatives Bipartisan Task Force on Artificial Intelligence released a comprehensive report examining artificial intelligence's (AI) impact across various sectors, with a significant focus on financial services. The report provides important insights into both the opportunities and challenges of AI adoption in the financial sector, which will be a focus of the next Congress.
Key findings from the report
The task force highlighted several critical aspects of AI in financial services:

AI decision-making risks: Automated AI decision-making tools trained on flawed or biased data can produce harmful outputs that may disproportionately affect certain groups. This risk is particularly heightened in areas such as lending and credit decisions, credit scoring models, and compliance with the Equal Credit Opportunity Act and Regulation B, and it has been a strong focus of the Consumer Financial Protection Bureau's Supervisory Highlights.
Consumer data privacy: Given AI’s reliance on large datasets, data privacy has emerged as a major concern. Financial institutions must carefully balance data utilization for AI systems with robust privacy protections.
Access to financial services: AI has the potential to increase access to financial services, particularly for underserved communities, through innovations such as alternative data underwriting and automated customer service.
Institution size disparity: Smaller financial institutions often lack the resources to develop and implement sophisticated AI tools, potentially creating competitive disadvantages against larger institutions.
Legacy integration: The financial sector has been utilizing AI technologies for decades, with applications ranging from fraud detection to algorithmic trading. However, recent advances in generative AI have introduced new considerations for regulation and oversight.

Practical takeaways for financial institutions
1. Governance and oversight

Establish internal AI governance bodies to oversee AI implementation
Maintain human oversight of AI systems, particularly for critical decisions
Document AI decision-making processes and maintain clear audit trails

2. Data management

Implement robust data quality controls for AI training data (a minimal sketch follows this list)
Ensure compliance with privacy regulations when collecting and using customer data
Regularly audit AI systems for potential bias or discriminatory outcomes
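To make the first item above concrete, here is a minimal sketch of automated training-data quality checks in Python with pandas. The dataset, column names, and checks are hypothetical illustrations, not anything prescribed by the task force report:

```python
# Minimal sketch of automated quality checks on an AI training dataset.
# Column names ("income", "approved") are hypothetical placeholders.
import pandas as pd

def quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Flag common data-quality problems before a dataset is used for training."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        # Share of missing values per column, worst first.
        "missing_by_column": df.isna().mean().sort_values(ascending=False).to_dict(),
        # Severe class imbalance in the label can signal an unrepresentative sample.
        "label_balance": df[label_col].value_counts(normalize=True).to_dict(),
    }

if __name__ == "__main__":
    sample = pd.DataFrame({
        "income": [52000, None, 48000, 52000, 61000],
        "approved": [1, 0, 1, 1, 1],
    })
    for check, result in quality_report(sample, label_col="approved").items():
        print(f"{check}: {result}")
```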

3. Risk management

Develop comprehensive AI risk assessment frameworks
Maintain clear processes for monitoring and validating AI model outputs (see the sketch after this list)
Create contingency plans for AI system failures or errors
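As one illustration of what monitoring model outputs can look like in practice, the sketch below compares a deployed model's current approval rate against a validated baseline and flags material drift for escalation. The baseline, tolerance, and decision data are hypothetical assumptions, not figures from the report:

```python
# Minimal sketch of ongoing output validation for a deployed decision model:
# compare the current period's approval rate with the rate observed during
# validation and flag shifts beyond a tolerance. All numbers are illustrative.
from dataclasses import dataclass

@dataclass
class DriftCheck:
    baseline_rate: float      # approval rate observed when the model was validated
    tolerance: float = 0.05   # maximum acceptable absolute shift

    def evaluate(self, decisions: list[int]) -> tuple[float, bool]:
        """Return the observed approval rate and whether it breaches the tolerance."""
        observed = sum(decisions) / len(decisions)
        return observed, abs(observed - self.baseline_rate) > self.tolerance

check = DriftCheck(baseline_rate=0.62)
rate, breached = check.evaluate([1, 0, 1, 1, 0, 0, 1, 1, 1, 0])
print(f"approval rate={rate:.2f}, escalate to model risk review={breached}")
```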

4. Regulatory compliance

Stay informed about evolving regulatory guidance on AI use
Ensure AI systems comply with existing antidiscrimination and consumer protection laws
Maintain transparency in AI-driven decisions affecting customers

5. Customer protection

Implement clear disclosure practices for AI-driven services
Consider alternative service options for customers who prefer non-AI interactions
Develop processes for addressing AI-related customer complaints

Looking ahead
The report suggests that future legislation will likely take a principles-based approach to AI regulation in financial services, focusing on existing regulatory frameworks while addressing new challenges posed by AI technology. Financial institutions should prepare for increased scrutiny of their AI systems while continuing to innovate responsibly.
For financial institutions considering or expanding their use of AI, the key message is clear: Embrace innovation while maintaining robust controls and oversight. Success will require balancing technological advancement with consumer protection and regulatory compliance.
This report serves as a valuable road map for financial institutions navigating the evolving landscape of AI regulation. Banks and financial services firms should review their current AI practices against these findings and prepare for potential regulatory developments in this space.

Two Takeaways from the U.S. Copyright Office’s Jan. 2025 Report on AI-Created Works

The U.S. Copyright Office released Part 2 of its report on Copyright and Artificial Intelligence on January 29, 2025. Part 2 focuses on the copyrightability of outputs created using generative AI. (The highly anticipated Part 3, which is forthcoming, will address copyright infringement and fair use issues involving generative AI.) 
Two key takeaways from Part 2 are as follows.
1. No softening of the “human authorship” requirement.
Consistent with the stance it has taken in recent registration decisions and court cases, the Copyright Office reiterated, and did not soften, its position that human control over the creative expression in generative AI outputs is required for copyright registration. The Office explained that prompts (user instructions) to create works using generative AI do not provide sufficient human control over the output created by the AI. As such, even exhaustive prompt engineering will not result in copyrightable expression using today's generative AI technology. The key wording from the Office is as follows:
In theory, AI systems could someday allow users to exert so much control over how their expression is reflected in an output that the system’s contribution would become [protectable]. The evidence as to the operation of today’s AI systems indicates that this is not currently the case. Prompts do not appear to adequately determine the expressive elements produced, or control how the system translates them into an output.
The Office reiterated its position that copyright protection may currently be available for: (a) human-created works of authorship used as inputs/prompts that are perceptible in AI-generated outputs; (b) creative selection, coordination, or arrangement of material in the outputs (i.e., compilations); (c) creative modifications of the outputs; and (d) the prompts themselves if they are sufficiently creative (but not the outputs created in response to the prompts).
2. The Office believes foreign laws are mostly consistent with its positions.
The Office undertook a comparative analysis of the copyrightability of AI-generated works in South Korea, Japan, China, the EU, the UK, Hong Kong, India, New Zealand, Canada, and Australia. The Office concluded that other countries “that have addressed this issue so far have agreed that copyright requires human authorship.” 
Interestingly, the Office pointed to a 2023 decision of the Beijing Internet Court that allowed copyright protection for an AI-generated image in an infringement case. According to the Office, that case decided that “the selection of over 150 prompts combined with subsequent adjustments and modifications demonstrated that the image was the result of the author’s ‘intellectual achievements,’ reflecting his personalized expression.” Given the Office’s position on prompts discussed above, it is unclear whether this Chinese decision is truly consistent with the U.S. view—perhaps it is partially consistent with respect to the “adjustments and modifications.” Indeed, commentators often cite that same case to show that China differs from the U.S. in allowing the protection of AI-generated images. 
The Office did note that the legal positions in many countries are evolving and, in some cases, unclear.
We now eagerly await Part 3 of the Office's report, which is being prepared while dozens of AI copyright infringement lawsuits are coming to a head in courts around the country. We should soon start to see some answers to AI infringement and fair use questions, even if only preliminary ones.

Trump 2.0 Executive Orders: Shock and Awe

Overview
Since his inauguration on 20 January 2025, President Donald J. Trump has signed dozens of executive orders and presidential memoranda on topics including, but not limited to, energy and the environment; immigration; international trade; foreign policy; diversity, equity and inclusion (DEI); transforming the civil service and federal government; and technology. These presidential actions include rescission orders of Biden-era regulations, withdrawal orders from international organizations and agreements, and orders implementing the administration's affirmative policy objectives. These actions are indicative of the broader "America First" policy agenda set forth by President Trump throughout his campaign and signal key priority areas for his administration in the coming months.
End of the Biden Era
It is typical for an incoming president to issue a series of executive orders rescinding prior executive actions that conflict with the new administration's agenda. For example, in former President Biden's first 100 days in office, he reversed 62 of President Trump's executive orders from his first term. In his first week back in office, President Trump rolled back over 50 of President Biden's executive actions. These Biden-era policies addressed topics such as ethics requirements for presidential appointees, COVID-19 response mechanisms, the creation of health equity task forces, labor protections for federal workers, and efforts to mitigate climate change, among others.
In addition to rescinding former President Biden's executive orders, President Trump issued a series of orders that temporarily suspended pending rules and programs of the Biden administration. First, President Trump instituted a regulatory freeze pending review on all proposed, pending, or finalized agency rules that have not yet taken effect. The freeze covers finalized rules that went unpublished in the Federal Register before the end of the Biden administration, published rules that have not yet taken effect, and any "regulatory actions . . . guidance documents . . . or substantive action" from federal agencies. President Trump also issued a hiring freeze on any new federal civilian employees, exempting military and immigration enforcement positions.
Moreover, the Office of Management and Budget (OMB) issued an internal memo on Monday temporarily pausing federal grants, loans, and other financial assistance programs. However, the OMB later clarified that the pause applies only to programs implicated by seven of President Trump's executive orders and was particularly meant, among other things, to end policies such as "DEI, the green new deal, and funding nongovernmental organizations that undermine the national interest." Shortly before the pause was to take effect, the U.S. District Court for the District of Columbia issued a temporary stay through 3 February 2025. On Wednesday, OMB announced that the original memo had been rescinded in light of widespread confusion over its potential implications. The seven individual EOs originally mentioned remain in effect.
Looking Ahead: President Trump’s Agenda
Through this round of executive actions, President Trump has demonstrated his intention to utilize the full power of the presidency, in tandem with Republican control of Congress, to quickly enact his “America First” agenda. President Trump has identified key policy areas he plans to address in his second term, which primarily include energy dominance, immigration enforcement, global competition, undoing “woke” Biden-era policies, and American independence. 
In keeping with these campaign priorities, he has ordered the end of DEI within the federal government and directed federal agencies to investigate DEI efforts in the private sector, declared national emergencies on energy and immigration, ordered a review of U.S. trade imbalances in preparation for widespread tariffs, delayed the ban on TikTok, withdrawn from the World Health Organization and the Paris Climate Agreement, elevated domestic artificial intelligence (AI) technology, and renamed the Gulf of Mexico the "Gulf of America."
Conclusion
Although the Trump administration has a clear and focused policy and regulatory agenda and can work alongside a Republican-led Congress, narrow majorities in both chambers will, at times, necessitate bipartisanship. As such, we expect President Trump to continue acting unilaterally where he can, so as to narrow the scope of policies that congressional Republicans must either enact via the budget reconciliation process or pass by building consensus with their Democratic counterparts.
Additional Authors: Lauren E. Hamma, Neeki Memarzadeh, and Jasper G. Noble

EU AI Regulation: When AI Competence Becomes an Obligation

For many, AI tools such as Copilot and ChatGPT are already part of everyday working life. Precisely for that reason, the EU AI Regulation (KI-Verordnung, "KIVO") will have significant implications for many companies, particularly in the employment context. The regulation aims to govern the use of artificial intelligence (AI) in the EU and to ensure that AI systems are deployed safely and transparently. Below, we highlight the key points companies should keep in mind.
Risk-Based Approach
The KIVO takes a risk-based approach, under which AI systems are divided into three categories:

Prohibited practices: Techniques that impair individuals' ability to make decisions or exploit their behavior.
High-risk AI systems: Systems used in critical areas such as employment and personnel management.
AI systems with limited risk: Systems subject to less stringent requirements.

High-Risk AI Systems in the Employment Context
Particularly relevant for employers are AI systems used for human resources decisions, such as:

hiring or selecting applicants;
decisions on promotions and terminations; or
assigning tasks based on individual behavior or personal characteristics.

AI systems used for these purposes are regularly classified as high-risk AI systems, which means special requirements apply. It is important to become familiar with the KIVO's requirements early and to take appropriate implementation measures.
In addition, AI systems may be embedded in a wide variety of applications that companies use to carry out their tasks and workflows.
Specific Obligations for Providers and Deployers
Providers and deployers of AI systems must fulfill various obligations, including:

Ensuring human oversight of AI systems by persons with AI competence.
Implementing risk management and quality management systems.
Meeting transparency obligations and duties to inform affected persons.
Conducting fundamental rights impact assessments for high-risk AI systems.

Unless they offer AI systems themselves, companies will primarily be affected in their role as deployers.
Data Protection and AI
The KIVO and the General Data Protection Regulation (GDPR) work hand in hand. While the KIVO focuses heavily on product safety, the GDPR covers individuals' rights in the processing of personal data. Companies must ensure that they comply with both regulations.
Co-Determination Rights
The introduction of AI systems may trigger co-determination rights of employee representative bodies. Companies should observe the relevant participation rights and, where appropriate, conclude framework works agreements on the use of AI.
Roadmap for Implementing the KIVO
The KIVO has been in force since August 1, 2024. To give companies sufficient time to implement it, its provisions take effect in stages. Here is an excerpt of the most important dates:

February 2, 2025: Company employees must have sufficient AI competence, meaning companies must train their employees accordingly. Prohibitions on certain AI practices take effect.
August 2, 2026: A further tranche of KIVO provisions takes effect, including specific requirements for high-risk AI systems.
August 2, 2027: The rules for high-risk AI systems begin to apply to specifically regulated products.

For companies in their capacity as employers and "deployers" within the meaning of the KIVO, February 2, 2025 is the key date by which to establish their employees' AI competence.

Alibaba Launches Qwen 2.5 AI Model

Alibaba's latest innovation, Qwen 2.5, is a powerful upgrade to the company's previously released Qwen model, designed to push the boundaries of generative AI. With impressive advancements in performance, data processing, and multi-modal capabilities, Qwen 2.5 is set to transform various industries by enabling businesses to harness the full […]

USPTO Issues Artificial Intelligence Strategy

Artificial Intelligence (AI) in intellectual property is as big – and as fast-changing – a topic as ever. On January 14, 2025, the U.S. Patent and Trademark Office (USPTO) published an Artificial Intelligence Strategy (“USPTO’s AI Strategy”) document which discusses how the USPTO “aim[s] to address AI’s promise and challenges across intellectual property (IP) policy, agency operations, and the broader innovation ecosystem.” 
The precise direction that the USPTO will take is still uncertain. The USPTO’s AI Strategy was developed in alignment with President Biden’s October 2023 Executive Order on AI, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”[1] However, the October 2023 Executive Order was revoked by President Trump’s January 20, 2025 Executive Order entitled “Initial Rescissions of Harmful Executive Orders and Actions.” On January 23, 2025, President Trump issued a new Executive Order on AI entitled “Removing Barriers to American Leadership in Artificial Intelligence,” which calls for the development of an Artificial Intelligence Action Plan within 180 days of the order. The January 23, 2025 Executive Order also calls for the suspension, revision, or rescinding of actions taken pursuant to President Biden’s October 2023 Executive Order that are inconsistent with a policy “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” At Mintz, we are closely following these developments and will continue to monitor how (and if) these policies and strategies indicated in the USPTO’s AI Strategy may be impacted by the institution of a new administration.[2] 
As it currently stands, the USPTO’s AI Strategy sets forth the USPTO’s AI vision and mission across the following five focus areas: 
1. Advance the development of IP policies that promote inclusive AI innovation and creativity.
The USPTO’s AI Strategy reiterates the USPTO’s commitment to advancing a positive future for AI and acknowledges that the USPTO plays a critical role in advancing emerging technologies such as AI by providing IP protection in the United States for AI-based inventions in a manner that incentivizes and supports innovation in AI. 
With this in mind, the USPTO discusses the need to anticipate and effectively respond to emerging AI-related IP policy issues such as the implications generative AI may play in the inventive process and its impacts on inventorship, subject matter eligibility, obviousness, enablement, and written description. The development and use of AI systems also impacts policy considerations for trademarks, copyrights, and trade secret laws. 
The USPTO also aims to study the interplay between AI innovation, economic activity and IP policy by conducting economic and legal research on the impacts of IP policy on AI-related innovation and through direct engagement with AI researchers, practitioners, and other stakeholders. The USPTO also encourages inclusion in the AI innovation ecosystem by fostering involvement with educational institutions and their participants and by contributing towards broader IP policymaking. 
2. Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development.
The USPTO's AI Strategy also involves using AI innovation to strengthen the USPTO's IT portfolio in order to increase operational efficiency and empower its workforce.
In its AI Strategy, the USPTO discusses AI-driven systems already implemented at the agency, including AI systems for analyzing nonprovisional utility patent applications to help identify patent classifications, assisting patent examiners in retrieving potential prior art, and providing virtual assistants to entrepreneurs interacting with the USPTO. The USPTO anticipates extending the use of AI tools into the trademark examination and design patent examination processes. At Mintz, we have been tracking the use of AI tools in patent examination, including the USPTO's AI-implemented "similarity search"[3] as well as the development of AI technology that could help improve the productivity of your business's internal IP processes.
To build upon its AI capabilities, the USPTO indicates that it will need to improve the USPTO’s computational infrastructure and IT systems. 
3. Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
In view of the principles of responsible AI (safety, fairness, transparency, privacy, reliability, and accountability), the USPTO aims to promote responsible use of AI within the agency through value-aligned product development, risk mitigation, and transparent stakeholder communication. To uphold public trust, the USPTO aims to ensure that the sourcing, selection, and use of data across its AI initiatives uphold equity, rights, and civil liberties in a manner that is lawful, ethical, and transparent. Similarly, the USPTO will put responsible AI development practices into place and clearly communicate the benefits and limitations of its AI systems to stakeholders.
The USPTO will also work to promote respect for IP laws and policies as a part of responsible AI practice. 
4. Develop AI expertise within the USPTO’s workforce. 
The USPTO's AI Strategy includes providing expanded training to USPTO Examiners in order to address AI-related subject matter in patent and trademark examination.[4] This will include developing foundational curricula made available to all Examiners, not just those who examine core AI technologies. Examiners will be given access to technical training to expand their AI knowledge, and the USPTO will aim to attract and recruit Examiners with backgrounds in AI-related matters. Additional training will be provided to each USPTO business unit, including Patent Trial and Appeal Board (PTAB) judges, in order to support their individual needs.
5. Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.
The USPTO’s AI Strategy signals a dedication to a collaborative approach to developing the USPTO’s AI policy and technology. The USPTO aims to collaborate with the public, other agencies, and international partners on AI matters impacting the global IP system. 
Conclusion 
With these key focus areas, the USPTO’s AI Strategy emphasizes the USPTO’s vision to unleash American potential through the adoption of AI in order to drive and scale U.S. innovation, inclusive capitalism, and global competitiveness. The USPTO’s AI Strategy emphasizes the unique considerations that development in AI technology brings to policy and legal considerations. While the change in Presidential administrations is expected to affect how the USPTO’s AI Strategy is implemented, there is no question that this will continue to be an important topic for the foreseeable future.

[1] Biden’s Executive Order on Artificial Intelligence — AI: The Washington Report | Mintz
[2] President Trump Starts First Week with AI Executive Orders and Investments – AI: The Washington Report | Mintz
[3] Artificial Intelligence (AI) Takes a Role in USPTO Patent Searches | Mintz
[4] Navigating AI Integration: USPTO’s New Guidance for Patent and Trademark Practices | Mintz

The Impact of AI Executive Order’s Revocation Remains Uncertain, but New Trump EO Points to Path Forward

On January 20, 2025, President Trump revoked a number of Biden-era Executive Orders, including Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO 14110”). We previously reported on EO 14110. The full impact of this particular revocation is still being assessed, but Trump’s newly published Executive Order on Removing Barriers to American Leadership in Artificial Intelligence (“Trump EO”), issued on January 23, specifically directs his advisors to “identify any actions taken pursuant to Executive Order 14110 that are or may be inconsistent with, or present obstacles to, the policy set forth in . . . this order.”
EO 14110, issued by President Biden in 2023, called for a plethora of evaluations, reports, plans, frameworks, guidelines, and best practices related to the development and deployment of “safe, secure, and trustworthy AI systems.” While much of the directive demanded action from federal agencies, it also directed private companies to share with the federal government the results of “red-team” safety tests for foundation models that pose certain risks.
Many EO 14110-inspired actions have already been initiated by both the public and private sectors, but it is unclear the extent to which any such actions should be or have already been halted. It is also unclear whether final rules based, even in part, on EO 14110’s directives—such as the Department of Commerce’s Framework for Artificial Intelligence Diffusion and Health & Human Services’ Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing—are or will be affected.
The as-yet unnumbered Trump EO, issued on January 23, directs the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs, to “review, in coordination with the heads of all agencies as they deem relevant, all policies, directives, regulations, orders, and other actions taken pursuant to the revoked Executive Order 14110 . . . and identify any actions taken pursuant to Executive Order 14110 that are or may be inconsistent with, or present obstacles to, the policy set forth in section 2 of this order.”
Section 2 of the Trump EO provides: “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Hunton will continue to monitor for more specific indications associated with Executive Order 14110’s revocation and the Trump EO’s implementation and will share updates accordingly.

5 Key Takeaways | SI’s Downtown ‘Cats Discuss Artificial Intelligence (AI)

Recently, we brought together over 100 alumni and parents of the St. Ignatius College Preparatory community, aka the Downtown (Wild)Cats, to discuss the impact of Artificial Intelligence (AI) on the Bay Area business community.
On a blustery evening in San Francisco, I was joined on a panel by fellow SI alumni Eurie Kim of Forerunner Ventures and Eric Valle of Foundry1 and by my Mintz colleague Terri Shieh-Newton. Thank you to my firm Mintz for hosting us.
There are a few great takeaways from the event:

What makes a company an “AI Company”?  
The panel confirmed that you cannot just put “.ai” at the end of your web domain to be considered an AI company. 
Eurie Kim shared that there are two buckets of AI companies: (i) AI-boosted and (ii) AI-enabled.
Most tech companies in the Bay Area are AI-boosted in some way – it has become table stakes, like a website 25 years ago. The AI-enabled companies are doing things you could not do before, from AI personal assistants (Duckbill) to autonomous driving (Waymo).   
What is the value of AI to our businesses?
In the future, companies that use AI to accelerate growth and reduce costs will be infinitely more interesting to investors.
Forerunner, which has successfully invested in direct-to-consumer darlings like Bonobos, Warby Parker, Oura, Away, and Chime, is investing in companies using AI to win on quality.
Eurie explained that we do not need more information from companies on the internet; we need the answer. Eurie believes that AI can deliver on the era of personalization in consumer purchasing that we have been talking about for the last decade.
What are the limitations of AI?
The panel discussed the difference between how AI handles simple human problems and complex ones. Right now, AI can replace humans for simple problems, like gathering all of the data you need to make a decision. But AI has struggled to solve more complex human problems, like driving an 18-wheeler from New York to California.
This means that we will need humans using AI to effectively solve complex human problems. Or, as NVIDIA CEO Jensen Huang says, "AI won't take your job, it's somebody using AI that will take your job."
What is one of the most unique uses of AI today? 
Terri Shieh-Newton shared a fascinating use of AI in life sciences called "Digital Twinning": the use of a digital twin for the placebo group in a clinical trial. Terri explained that we would be able to see the effect of a drug being tested without testing it on humans. This reduces the cost of a clinical trial and the number of people required to enroll. It would also have a profound human effect, because patients would not be disappointed at the end of the trial to learn that they were taking the placebo and not receiving the treatment.
Why is so much money being invested in AI companies?
Despite the still nascent AI market, a lot of investors are pouring money into building large language models (LLMs) and investing in AI startups. 
Eric Valle noted that early in his career the tech market generally delivered outsized returns to investors, but the maturing market and competition among investors have moderated those returns. AI could be the kind of investment that generates those 20x+ returns.
Eric also talked about the rise of AI venture studios like his Foundry1. Venture studios are a combination of accelerator, incubator, and traditional fund, where the fund partners play a direct role in formulating the idea and navigating the fragile early stages. This venture studio model is a great fit for AI because the studio can take small ideas and expand them exponentially, and then raise the substantial amount of money it takes to operationalize an AI company.

Retailers: Questions About 2024’s AI-Assisted Invention Guidance? The USPTO May Have Answers

In February 2024, we posted about the USPTO’s Inventorship Guidance for AI-assisted Inventions and how that guidance might affect a retailer in New USPTO AI-Assisted Invention Guidance Will Affect Retailers and Consumer Goods Companies. With a year now having passed, it is likely you have questions about the guidance.
In mid-January 2025, the USPTO released a series of FAQs relating to this guidance that may answer certain questions. Specifically, there are three questions and responses in the FAQs. The USPTO characterized these FAQs as being issued to “provide additional information for stakeholders and examiners on how inventorship is analyzed, including for artificial intelligence (AI)-assisted inventions.” The USPTO further stated that “[w]e issued the FAQs in response to feedback from stakeholders. The FAQs explain that the guidance does not create a heightened standard for inventorship when technologies like AI are used in the creation of an invention, and the inventorship analysis should focus on the human contribution to the conception of the invention.” The FAQs appear to stem, at least in part, from written comments the USPTO received from the public on the guidance.
The FAQs serve to clarify key issues, including that:
1) there is no heightened standard for inventorship of AI-assisted inventions;
2) examiners do not typically make inquiries into inventorship during patent examination and the guidance does not create any new standards or responsibilities on examiners in this regard; and
3) there is no additional duty to disclose information, beyond what is already mandated by existing rules and policies.
A key statement by the USPTO in the FAQ responses is: “The USPTO will continue to presume that the named inventor(s) in a patent application or patent are the actual inventor(s).”
Even though the USPTO will most likely not make an inventorship inquiry during patent examination, IP counsel should still ensure that appropriate inventorship inquiries are made during the patent application drafting process. A best practice is to maintain all applicable records after drafting to support an inventorship inquiry, which may not come until after a patent issues, such as during litigation.
While the FAQs may not address every question about the AI inventorship guidance, they are a step toward demonstrating how the USPTO will handle AI-related patent issues moving forward.

Happy Privacy Day: Emerging Issues in Privacy, Cybersecurity, and AI in the Workplace

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.
We explore here practical use cases raising these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.
1. Wearables and the Intersection of Privacy, Security, and Disability Law
Wearable devices have a wide range of use cases including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for doling out insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.
Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate their use of wearables and whether they raise ADA issues, such as voluntary use of such devices when collecting confidential medical information, making disability-related inquiries, and using aggregated or anonymized data to prevent discrimination claims.
Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit it to third-party vendors. Employers must assess these vendors' data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.
Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.
2. Performance Management Platforms and Employee Monitoring
Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.
While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.
Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.
3. AI-Powered Dash Cams in Fleet Management
AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.
State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.
Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.
4. Assessing Vendor Cybersecurity for Employee Benefits Plans
Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. In recent guidance, the Department of Labor (DOL) emphasized ERISA plan fiduciaries' responsibility to assess the cybersecurity practices of such service providers.
The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.
Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.
5. Biometrics for Access, Time Management, and Identity Verification
Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.
BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.
Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.
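For teams weighing what "encryption and secure storage" can mean in practice, the following is a minimal sketch of encrypting a biometric template at rest using the Python cryptography package's Fernet recipe (authenticated symmetric encryption). The template value is a placeholder, and in a real deployment the key would be held in a managed key vault or HSM rather than generated alongside the data:

```python
# Minimal sketch of encrypting a biometric template at rest with Fernet
# (authenticated symmetric encryption from the "cryptography" package).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: store in a KMS/HSM, never with the data
cipher = Fernet(key)

template = b"<binary fingerprint template>"   # placeholder for real sensor output
encrypted = cipher.encrypt(template)          # ciphertext safe to persist
restored = cipher.decrypt(encrypted)          # recoverable only with the key
assert restored == template
```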
6. HIPAA Updates Affecting Group Health Plan Compliance
Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.
Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.
Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.
7. Data Breach Notification Laws and Incident Response Plans
Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.
Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.
8. AI Deployment in Recruiting and Retention
AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.
The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.
Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.
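As an illustration of what a basic bias audit can involve, the sketch below applies the four-fifths rule used in EEOC adverse-impact analysis, under which a selection rate for any group that falls below 80% of the highest group's rate generally warrants review. The group labels and records are hypothetical, and a real audit would require counsel's involvement and statistically meaningful sample sizes:

```python
# Minimal sketch of a disparate-impact screen using the four-fifths rule:
# each group's selection rate should be at least 80% of the highest group's
# rate. The records below are hypothetical.
from collections import defaultdict

def impact_ratios(records: list[tuple[str, bool]]) -> dict[str, float]:
    """records: (group, selected) pairs. Returns each group's impact ratio."""
    selected, total = defaultdict(int), defaultdict(int)
    for group, was_selected in records:
        total[group] += 1
        selected[group] += was_selected
    rates = {group: selected[group] / total[group] for group in total}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

ratios = impact_ratios([("A", True), ("A", True), ("A", False),
                        ("B", True), ("B", False), ("B", False)])
for group, ratio in ratios.items():
    print(f"group {group}: impact ratio {ratio:.2f}"
          + (" - REVIEW (below 0.80)" if ratio < 0.8 else ""))
```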
9. Employee Use of AI Tools
Moving beyond the HR department, AI tools are fundamentally changing how people work. Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI. The benefits of AI tools are undeniable, but so too are the associated risks. Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.
Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use. Equally important is establishing clear ground rules for how employees can use those tools. For instance: what company information are employees permitted to use to prompt the tool; what processes ensure the tool's output is accurate and consistent with company policies and objectives; and should employee use of AI tools be limited to internal functions, or should employees also be permitted to use these tools to generate work product for external audiences?
10. Data Minimization Across the Employee Lifecycle
At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.
From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.
Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.
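One lightweight way to operationalize a retention review against such a data inventory is a periodic sweep that flags records held past their limits. In this sketch the record types and retention periods are hypothetical placeholders; actual retention schedules must come from counsel and applicable law:

```python
# Minimal sketch of a periodic retention sweep over a data inventory.
# Record types and retention periods are hypothetical placeholders.
from datetime import date, timedelta

RETENTION_LIMITS = {                    # record type -> maximum time to retain
    "resume": timedelta(days=365 * 2),
    "badge_access_log": timedelta(days=90),
}

inventory = [
    {"type": "resume", "subject": "applicant-114", "collected": date(2022, 3, 1)},
    {"type": "badge_access_log", "subject": "emp-007", "collected": date(2025, 1, 2)},
]

today = date.today()
for record in inventory:
    if today - record["collected"] > RETENTION_LIMITS[record["type"]]:
        # In production this would trigger a documented secure-deletion workflow.
        print(f"EXPIRED: {record['type']} for {record['subject']} - schedule secure deletion")
```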
Conclusion
The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.
By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.

The AI Workplace: Understanding the EU Platform Work Directive [Podcast]

In this episode of our new podcast series, The AI Workplace, Patty Shapiro (shareholder, San Diego) and Sam Sedaei (associate, Chicago) discuss the European Union’s (EU) Platform Work Directive, which aims to regulate gig work and the use of artificial intelligence (AI). Patty outlines the directive’s goals, including the classification of gig workers and the establishment of AI transparency requirements. In addition, Sam and Patty address the directive’s overlap with the EU AI Act and the potential consequences of non-compliance.

The Growth of AI Law: Exploring Legal Challenges in Artificial Intelligence

Artificial intelligence (AI) is reshaping industries, decision-making processes, and creative fields. Its influence spans healthcare, transportation, communication, and entertainment, introducing unique challenges for existing legal systems.
Traditional laws often fail to address the complexities AI introduces, resulting in the rise of a specialized legal field: AI law. Attorneys in this area must tackle intricate issues, such as regulating machine-generated content, ensuring data privacy, and assigning accountability when AI systems fail.
What Is AI Law?
Generally, AI law deals with the legal implications of artificial intelligence. In practice, this specialty covers any legal area that AI touches, including intellectual property disputes, privacy regulations, bias in algorithms, and liability concerns. AI's integration into business and daily life drives the need for legal professionals with deep expertise in both law and technology. Lawyers in AI law often work with companies developing AI tools, governments crafting regulations, and individuals affected by AI-driven decisions.
AI law also bridges gaps between technological advancements and ethical considerations. For example, legal systems must decide how to handle decisions made by autonomous systems, which are neither human nor bound by the same rules. This evolving area provides a unique opportunity for legal professionals to influence the future of technology policy.
Key Challenges in AI Law
Ownership of AI-Generated Content
AI systems like ChatGPT and DALL-E generate creative works, but questions remain about who owns these outputs. Current copyright laws require human authorship for protection. For example, the U.S. Copyright Office recently adopted a policy that purely AI-generated art cannot be copyrighted. Under Copyright Office policy, applicants for registration have a “duty to disclose the inclusion of AI-generated content in a work submitted for registration.”
Ownership disputes complicate business operations. Developers, users, and organizations may all claim rights to AI-generated works. Attorneys must draft contracts clarifying these rights to prevent litigation. This issue also raises broader questions about whether existing intellectual property laws need reform to accommodate AI.
Data Privacy Issues
AI depends on vast amounts of data to function, much of which is personal and sensitive. For instance, AI-powered healthcare tools analyze patient data to predict diseases, while social media platforms use algorithms to infer user preferences. These applications expose gaps in current privacy laws, which were designed without AI’s capabilities in mind.
Lawyers specializing in AI law must address compliance with regulations like GDPR and CCPA while considering AI-specific risks. For example, an AI tool might infer health risks from social media activity, bypassing traditional privacy safeguards. Attorneys help organizations balance innovation with consumer trust by drafting policies that align with legal requirements and ethical standards.
Algorithmic Bias and Accountability
Bias in AI algorithms presents a serious legal and ethical challenge. Historical data used to train AI often reflects societal inequalities, which AI systems can perpetuate. For example, hiring algorithms may favor male candidates over females, while predictive policing tools disproportionately target minority communities.
Accountability for biased outcomes is unclear. Should the blame fall on developers, organizations deploying the AI, or those who provided the data? Attorneys working in this field advocate for greater transparency in AI decision-making processes. They also push for policies requiring regular audits of algorithms to identify and mitigate bias.
Liability for AI Failures
As AI systems gain autonomy, determining liability becomes increasingly complex. When a self-driving car causes an accident, who is responsible—the manufacturer, the software developer, or the owner? Similar dilemmas arise in healthcare, where AI tools assist in diagnosis and treatment but may provide harmful advice.
Current liability frameworks are not designed for these scenarios. Lawyers specializing in AI law must navigate these gaps, helping establish clear rules for assigning responsibility. They also work with insurers to develop policies that account for AI-related risks.
Why AI Law Requires Specialization
AI law requires a unique blend of legal expertise, technological knowledge, and ethical insight. Traditional legal training does not fully prepare attorneys to address AI’s complexities, making specialization essential. Lawyers must understand how AI systems work, interpret evolving regulations, and address ethical implications.
Education for AI Lawyers
Leading universities now offer courses focusing on AI and its legal challenges. For example, the University of California, Berkeley, provides specialized training to equip legal professionals with the skills needed in this emerging field through the Berkeley Law AI Institute and the Berkeley AI Policy Hub. Continuing education is also critical: AI evolves rapidly, and attorneys must stay updated on technological advancements and regulatory changes. Seminars, certifications, and workshops help legal professionals remain effective in this dynamic area.
Ethics in AI
Ethics play a central role in AI law. The American Bar Association released its first ever guidance for lawyers on the use of AI on July 29, 2024. Beyond ensuring compliance, lawyers must advise clients on responsible AI use. This includes promoting fairness, preventing harm, and aligning technology with societal values. For instance, attorneys may recommend policies to increase transparency in decision-making algorithms, fostering trust between companies and users. Ethical considerations also influence regulatory frameworks. Governments and organizations are increasingly prioritizing ethical AI practices, making expertise in this area crucial for legal professionals.
Opportunities for Lawyers in AI Law
As AI continues to develop, knowledgeable AI lawyers become more necessary, and opportunities for lawyers to apply this specialization grow. “This is an emerging and necessary practice area in law, spurred by rapid development and integration of AI into society and business at all levels,” urges Jay McAllister, CEO of Paragon Tech, Inc. “Attorneys who opt to ignore these developments will find themselves at an ever-increasing disadvantage when compared to those who embrace AI and seek to understand its mechanics and implications.”
Advising Companies
Businesses adopting AI face complex legal and ethical challenges. From data privacy compliance to intellectual property disputes, companies need guidance to navigate these issues. Lawyers specializing in AI law help organizations develop governance frameworks, draft contracts, and manage risks. Startups and tech companies often seek legal advice during the development of AI tools. Attorneys play a key role in ensuring that these technologies comply with regulations while maintaining ethical standards. This advisory role is essential for fostering innovation in a responsible manner.
Resolving Legal Disputes
Disputes involving AI are becoming more common. These range from copyright claims over AI-generated content to liability cases involving autonomous vehicles. Lawyers with expertise in AI law handle these cases, often setting new legal precedents. For example, they may argue whether a user’s input into an AI system constitutes co-authorship, shaping how courts interpret intellectual property laws.
Shaping Policy
AI law is still in its infancy, and legal frameworks are far from complete. Lawyers have the opportunity to influence how these regulations are written. By participating in policy discussions, they help ensure that AI technologies are governed in a way that balances innovation with accountability. Policy work also includes advocating for greater transparency and fairness in AI systems. Legal professionals can contribute to creating guidelines that protect individual rights while fostering technological progress.
The Future of AI Law
AI law is a rapidly growing field with immense potential. It challenges lawyers to adapt traditional legal principles to a technology-driven world. Attorneys must combine legal expertise with technical literacy and ethical awareness to address AI’s unique challenges.
The demand for AI law specialists is only expected to grow as AI becomes more integrated into society. Legal professionals in this field have the chance to shape how AI is developed, regulated, and used. By addressing key issues in data privacy, bias, and liability, they ensure that AI serves society responsibly.
AI law represents a transformative opportunity for the legal profession. Attorneys who embrace this field can lead in creating policies and frameworks that protect human rights while enabling technological progress. The journey to commit to this developing area of legal practice requires dedication and collaboration, but a career in AI law could offer the chance to make a lasting impact on society.