Alibaba Launches Qwen 2.5 AI Model

Alibaba’s latest innovation, Qwen 2.5, is a powerful upgrade to the company’s previously released Qwen model, designed to push the boundaries of generative AI. With impressive advancements in performance, data processing, and multi-modal capabilities, Qwen 2.5 is set to transform various industries by enabling businesses to harness the full […]

USPTO Issues Artificial Intelligence Strategy

Artificial Intelligence (AI) in intellectual property is as big – and as fast-changing – a topic as ever. On January 14, 2025, the U.S. Patent and Trademark Office (USPTO) published an Artificial Intelligence Strategy (“USPTO’s AI Strategy”) document, which discusses how the USPTO “aim[s] to address AI’s promise and challenges across intellectual property (IP) policy, agency operations, and the broader innovation ecosystem.”
The precise direction that the USPTO will take is still uncertain. The USPTO’s AI Strategy was developed in alignment with President Biden’s October 2023 Executive Order on AI, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.”[1] However, the October 2023 Executive Order was revoked by President Trump’s January 20, 2025 Executive Order entitled “Initial Rescissions of Harmful Executive Orders and Actions.” On January 23, 2025, President Trump issued a new Executive Order on AI entitled “Removing Barriers to American Leadership in Artificial Intelligence,” which calls for the development of an Artificial Intelligence Action Plan within 180 days of the order. The January 23, 2025 Executive Order also calls for the suspension, revision, or rescinding of actions taken pursuant to President Biden’s October 2023 Executive Order that are inconsistent with a policy “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” At Mintz, we are closely following these developments and will continue to monitor whether, and how, the policies and strategies set out in the USPTO’s AI Strategy are affected by the new administration.[2]
As it currently stands, the USPTO’s AI Strategy sets forth the USPTO’s AI vision and mission across the following five focus areas: 
1. Advance the development of IP policies that promote inclusive AI innovation and creativity.
The USPTO’s AI Strategy reiterates the USPTO’s commitment to advancing a positive future for AI and acknowledges that the USPTO plays a critical role in advancing emerging technologies such as AI by providing IP protection in the United States for AI-based inventions in a manner that incentivizes and supports innovation in AI. 
With this in mind, the USPTO discusses the need to anticipate and effectively respond to emerging AI-related IP policy issues, such as the role generative AI may play in the inventive process and its impact on inventorship, subject matter eligibility, obviousness, enablement, and written description. The development and use of AI systems also affects policy considerations for trademark, copyright, and trade secret law.
The USPTO also aims to study the interplay between AI innovation, economic activity, and IP policy by conducting economic and legal research on the impacts of IP policy on AI-related innovation and through direct engagement with AI researchers, practitioners, and other stakeholders. It also encourages inclusion in the AI innovation ecosystem by fostering involvement with educational institutions and their participants and by contributing to broader IP policymaking.
2. Build best-in-class AI capabilities by investing in computational infrastructure, data resources, and business-driven product development.
The USPTO’s AI Strategy also involves using AI innovation to boost the USPTO’s IT portfolio in order to increase operational efficiencies and empower its workforce.
In its AI Strategy, the USPTO discusses AI-driven systems that have already been implemented at the agency, including systems for analyzing nonprovisional utility patent applications to help identify patent classifications, assisting patent examiners in retrieving potential prior art, and providing virtual assistants to entrepreneurs interacting with the USPTO. The USPTO anticipates extending the use of AI tools into the trademark examination and design patent examination processes. At Mintz, we have been tracking the use of AI tools in patent examination, including the USPTO’s AI-implemented “similarity search,”[3] as well as the development of AI technology that could help improve the productivity of your business’s internal IP processes.
To build upon its AI capabilities, the USPTO indicates that it will need to improve its computational infrastructure and IT systems.
3. Promote the responsible use of AI within the USPTO and across the broader innovation ecosystem.
In view of the principles of responsible AI – safety, fairness, transparency, privacy, reliability, and accountability – the USPTO aims to promote the responsible use of AI within the agency through value-aligned product development, risk mitigation, and transparent stakeholder communication. To uphold public trust, the USPTO aims to ensure that the sourcing, selection, and use of data across its AI initiatives uphold equity, rights, and civil liberties in a manner that is lawful, ethical, and transparent. Similarly, the USPTO will put responsible AI development practices into place and clearly communicate the benefits and limitations of its AI systems to stakeholders.
The USPTO will also work to promote respect for IP laws and policies as a part of responsible AI practice. 
4. Develop AI expertise within the USPTO’s workforce. 
The USPTO’s AI Strategy includes providing expanded training to USPTO Examiners in order to address AI-related subject matter in patent and trademark examination.[4] This will include developing foundational curricula that are made available to all Examiners – not just those who examine core AI technologies. Examiners will be provided with access to technical training to expand their AI knowledge, and the USPTO will aim to attract and recruit Examiners with backgrounds in AI-related matters. Additional training will be provided to each USPTO business unit, including Patent Trial and Appeal Board (PTAB) judges, in order to support their individual needs.
5. Collaborate with other U.S. government agencies, international partners, and the public on shared AI priorities.
The USPTO’s AI Strategy signals a dedication to a collaborative approach to developing the USPTO’s AI policy and technology. The USPTO aims to collaborate with the public, other agencies, and international partners on AI matters impacting the global IP system. 
Conclusion 
With these key focus areas, the USPTO’s AI Strategy emphasizes the USPTO’s vision to unleash American potential through the adoption of AI in order to drive and scale U.S. innovation, inclusive capitalism, and global competitiveness. It also highlights the unique considerations that developments in AI technology bring to policy and legal analysis. While the change in Presidential administrations is expected to affect how the USPTO’s AI Strategy is implemented, there is no question that this will continue to be an important topic for the foreseeable future.

[1] Biden’s Executive Order on Artificial Intelligence — AI: The Washington Report | Mintz
[2] President Trump Starts First Week with AI Executive Orders and Investments – AI: The Washington Report | Mintz
[3] Artificial Intelligence (AI) Takes a Role in USPTO Patent Searches | Mintz
[4] Navigating AI Integration: USPTO’s New Guidance for Patent and Trademark Practices | Mintz

The Impact of AI Executive Order’s Revocation Remains Uncertain, but New Trump EO Points to Path Forward

On January 20, 2025, President Trump revoked a number of Biden-era Executive Orders, including Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (“EO 14110”). We previously reported on EO 14110. The full impact of this particular revocation is still being assessed, but Trump’s newly published Executive Order on Removing Barriers to American Leadership in Artificial Intelligence (“Trump EO”), issued on January 23, specifically directs his advisors to “identify any actions taken pursuant to Executive Order 14110 that are or may be inconsistent with, or present obstacles to, the policy set forth in . . . this order.”
EO 14110, issued by President Biden in 2023, called for a plethora of evaluations, reports, plans, frameworks, guidelines, and best practices related to the development and deployment of “safe, secure, and trustworthy AI systems.” While much of the directive demanded action from federal agencies, it also directed private companies to share with the federal government the results of “red-team” safety tests for foundation models that pose certain risks.
Many EO 14110-inspired actions have already been initiated by both the public and private sectors, but the extent to which any such actions should be, or have already been, halted is unclear. It is also unclear whether final rules based, even in part, on EO 14110’s directives—such as the Department of Commerce’s Framework for Artificial Intelligence Diffusion and Health & Human Services’ Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing—are or will be affected.
The as-yet unnumbered Trump EO, issued on January 23, directs the Assistant to the President for Science and Technology, the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs, to “review, in coordination with the heads of all agencies as they deem relevant, all policies, directives, regulations, orders, and other actions taken pursuant to the revoked Executive Order 14110 . . . and identify any actions taken pursuant to Executive Order 14110 that are or may be inconsistent with, or present obstacles to, the policy set forth in section 2 of this order.”
Section 2 of the Trump EO provides: “It is the policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” Hunton will continue to monitor for more specific indications associated with Executive Order 14110’s revocation and the Trump EO’s implementation and will share updates accordingly.

5 Key Takeaways | SI’s Downtown ‘Cats Discuss Artificial Intelligence (AI)

Recently, we brought together over 100 alumni and parents of the St. Ignatius College Preparatory community, aka the Downtown (Wild)Cats, to discuss the impact of Artificial Intelligence (AI) on the Bay Area business community.
On a blustery evening in San Francisco, I was joined on a panel by fellow SI alumni Eurie Kim of Forerunner Ventures and Eric Valle of Foundry1 and by my Mintz colleague Terri Shieh-Newton. Thank you to my firm Mintz for hosting us.
Here are a few great takeaways from the event:

What makes a company an “AI Company”?  
The panel confirmed that you cannot just put “.ai” at the end of your web domain to be considered an AI company. 
Eurie Kim shared that there are two buckets of AI companies: (i) AI-boosted and (ii) AI-enabled.
Most tech companies in the Bay Area are AI-boosted in some way – it has become table stakes, like a website 25 years ago. The AI-enabled companies are doing things you could not do before, from AI personal assistants (Duckbill) to autonomous driving (Waymo).   
What is the value of AI to our businesses?
In the future, companies using AI to accelerate growth and reduce costs will be infinitely more interesting.
Forerunner, which has successfully invested in direct-to-consumer darlings like Bonobos, Warby Parker, Oura, Away, and Chime, is investing in companies using AI to win on quality.
Eurie explained that we do not need more information from companies on the internet; we need the answer. Eurie believes that AI can deliver on the era of personalization in consumer purchasing that we have been talking about for the last decade.
What are the limitations of AI?
The panel discussed the difference between how AI handles simple human problems and complex ones. Right now, AI can replace humans for simple problems, like gathering all of the data you need to make a decision. But AI has struggled to solve more complex human problems, like driving an 18-wheeler from New York to California.
This means that we will need humans using AI to effectively solve complex human problems. Or, as NVIDIA CEO Jensen Huang says, “AI won’t take your job, it’s somebody using AI that will take your job.”
What is one of the most unique uses of AI today? 
Terri Shieh-Newton shared a fascinating use of AI in life sciences called “Digital Twinning” – the use of a digital twin for the placebo group in a clinical trial. Terri explained that we would be able to see the effect of a drug being tested without testing it on humans. This reduces the cost and the number of people required to enroll in a clinical trial. It would also have profound human effects because patients would not be disappointed at the end of the trial to learn that they were taking the placebo and not receiving the treatment.
Why is so much money being invested in AI companies?
Despite the still-nascent AI market, investors are pouring money into building large language models (LLMs) and into AI startups.
Eric Valle noted that early in his career the tech market generally delivered outsized returns to investors, but the maturing market and competition among investors have moderated those returns. AI could be the kind of investment that generates those 20x+ returns again.
Eric also talked about the rise in AI of venture studios like his Foundry1. Venture studios are a combination of accelerator, incubator, and traditional fund, where the fund partners play a direct role in formulating the idea and navigating the fragile early stages. This venture studio model is great for AI because the studio can take small ideas and expand them exponentially – and then raise the substantial amount of money it takes to operationalize an AI company.

Retailers: Questions About 2024’s AI-Assisted Invention Guidance? The USPTO May Have Answers

In February 2024, we posted about the USPTO’s Inventorship Guidance for AI-assisted Inventions and how that guidance might affect a retailer in New USPTO AI-Assisted Invention Guidance Will Affect Retailers and Consumer Goods Companies. With a year now having passed, it is likely you have questions about the guidance.
In mid-January 2025, the USPTO released a series of FAQs relating to this guidance that may answer certain questions. Specifically, there are three questions and responses in the FAQs. The USPTO characterized these FAQs as being issued to “provide additional information for stakeholders and examiners on how inventorship is analyzed, including for artificial intelligence (AI)-assisted inventions.” The USPTO further stated that “[w]e issued the FAQs in response to feedback from stakeholders. The FAQs explain that the guidance does not create a heightened standard for inventorship when technologies like AI are used in the creation of an invention, and the inventorship analysis should focus on the human contribution to the conception of the invention.” The FAQs appear to stem, at least in part, from written comments the USPTO received from the public on the guidance.
The FAQs serve to clarify key issues, including that:
1) there is no heightened standard for inventorship of AI-assisted inventions;
2) examiners do not typically make inquiries into inventorship during patent examination and the guidance does not create any new standards or responsibilities on examiners in this regard; and
3) there is no additional duty to disclose information, beyond what is already mandated by existing rules and policies.
A key statement by the USPTO in the FAQ responses is: “The USPTO will continue to presume that the named inventor(s) in a patent application or patent are the actual inventor(s).”
Although the USPTO most likely will not make an inventorship inquiry during patent examination, IP counsel should still ensure that appropriate inventorship inquiries are made during the patent application drafting process. A best practice is to maintain all applicable records after drafting to support an inventorship inquiry, which may not come until after a patent issues, such as during litigation.
While the FAQs may not address all questions about the AI inventorship guidance, they are a step towards demonstrating how the USPTO will handle AI-related patent issues moving forward.

Happy Privacy Day: Emerging Issues in Privacy, Cybersecurity, and AI in the Workplace

As the integration of technology in the workplace accelerates, so do the challenges related to privacy, cybersecurity, and the ethical use of artificial intelligence (AI). Human resource professionals and in-house counsel must navigate a rapidly evolving landscape of legal and regulatory requirements. This National Privacy Day, it’s crucial to spotlight emerging issues in workplace technology and the associated implications for data privacy, cybersecurity, and compliance.
We explore here practical use cases raising these issues, highlight key risks, and provide actionable insights for HR professionals and in-house counsel to manage these concerns effectively.
1. Wearables and the Intersection of Privacy, Security, and Disability Law
Wearable devices have a wide range of use cases, including interactive training, performance monitoring, and navigation tracking. Wearables such as fitness trackers and smartwatches became more popular in HR and employee benefits departments when they were deployed in wellness programs to monitor employees’ health metrics, promote fitness, and provide a basis for doling out insurance premium incentives. While these tools offer benefits, they also collect sensitive health and other personal data, raising significant privacy and cybersecurity concerns under the Health Insurance Portability and Accountability Act (HIPAA), the Americans with Disabilities Act (ADA), and state privacy laws.
Earlier this year, the Equal Employment Opportunity Commission (EEOC) issued guidance emphasizing that data collected through wearables must align with ADA rules. More recently, the EEOC withdrew that guidance in response to an Executive Order issued by President Trump. Still, employers should evaluate their use of wearables and whether they raise ADA issues, such as ensuring that use of the devices is voluntary when they collect confidential medical information, avoiding improper disability-related inquiries, and using aggregated or anonymized data to prevent discrimination claims.
Beyond ADA compliance, cybersecurity is critical. Wearables often collect sensitive data and transmit it to third-party vendors. Employers must assess these vendors’ data protection practices, including encryption protocols and incident response measures, to mitigate the risk of breaches or unauthorized access.
Practical Tip: Implement robust contracts with third-party vendors, requiring adherence to privacy laws, breach notification, and security standards. Also, ensure clear communication with employees about how their data will be collected, used, and stored.
2. Performance Management Platforms and Employee Monitoring
Platforms like Insightful and similar performance management tools are increasingly being used to monitor employee productivity and/or compliance with applicable law and company policies. These platforms can capture a vast array of data, including screen activity, keystrokes, and time spent on tasks, raising significant privacy concerns.
While such tools may improve efficiency and accountability, they also risk crossing boundaries, particularly when employees are unaware of the extent of monitoring and/or where the employer doesn’t have effective data minimization controls in place. State laws like the California Consumer Privacy Act (CCPA) can place limits on these monitoring practices, particularly if employees have a reasonable expectation of privacy. They also can require additional layers of security safeguards and administration of employee rights with respect to data collected and processed using the platform.
Practical Tip: Before deploying such tools, assess the necessity of data collection, ensure transparency by notifying employees, and restrict data collection to what is strictly necessary for business purposes. Implement policies that balance business needs with employee rights to privacy.
3. AI-Powered Dash Cams in Fleet Management
AI-enabled dash cams, often used for fleet management, combine video, audio, GPS, telematics, and/or biometrics to monitor driver behavior and vehicle performance, among other things. While these tools enhance safety and efficiency, they also present significant privacy and legal risks.
State biometric privacy laws, such as Illinois’s Biometric Information Privacy Act (BIPA) and similar laws in California, Colorado, and Texas, impose stringent requirements on biometric data collection, including obtaining employee consent and implementing robust data security measures. Employers must also assess the cybersecurity vulnerabilities of dash cam providers, given the volume of biometric, location, and other data they may collect.
Practical Tip: Conduct a legal review of biometric data collection practices, train employees on the use of dash cams, and audit vendor security practices to ensure compliance and minimize risk.
4. Assessing Vendor Cybersecurity for Employee Benefits Plans
Third-party vendors play a crucial role in processing data for retirement plans, such as 401(k) plans, as well as health and welfare plans. The Department of Labor (DOL) emphasized in recent guidance the importance of ERISA plan fiduciaries assessing the cybersecurity practices of such service providers.
The DOL’s guidance underscores the need to evaluate vendors’ security measures, incident response plans, and data breach notification practices. Given the sensitive nature of data processed as part of plan administration—such as Social Security numbers, health records, and financial information—failure to vet vendors properly can lead to breaches, lawsuits, and regulatory penalties, including claims for breach of fiduciary duty.
Practical Tip: Conduct regular risk assessments of vendors, incorporate cybersecurity provisions into contracts, and document the due diligence process to demonstrate compliance with fiduciary obligations.
5. Biometrics for Access, Time Management, and Identity Verification
Biometric technology, such as fingerprint or facial recognition systems, is widely used for identity verification, physical access, and timekeeping. While convenient, the collection of biometric data carries significant privacy and cybersecurity risks.
BIPA and similar state laws require employers to obtain written consent, provide clear notices about data usage, and adhere to stringent security protocols. Additionally, biometrics are uniquely sensitive because they cannot be changed if compromised in a breach.
Practical Tip: Minimize reliance on biometric data where possible, ensure compliance with consent and notification requirements, and invest in encryption and secure storage systems for biometric information. Check out our Biometrics White Paper.
6. HIPAA Updates Affecting Group Health Plan Compliance
Recent changes to the HIPAA Privacy Rule, including provisions related to reproductive healthcare, significantly impact group health plans. The proposed HIPAA Security Rule amendments also signal stricter requirements for risk assessments, access controls, and data breach responses.
Employers sponsoring group health plans must stay ahead of these changes by updating their HIPAA policies and Notice of Privacy Practices, training staff, and ensuring that business associate agreements (BAAs) reflect the new requirements.
Practical Tip: Regularly review HIPAA compliance practices and monitor upcoming changes to ensure your group health plan aligns with evolving regulations.
7. Data Breach Notification Laws and Incident Response Plans
Many states have updated their data breach notification laws, lowering notification thresholds, shortening notification timelines, and expanding the definition of personal information. Employers should revise their incident response plans (IRPs) to align with these changes.
Practical Tip: Ensure IRPs reflect updated laws, test them through simulated breach scenarios, and coordinate with legal counsel to prepare for reporting obligations in case of an incident.
8. AI Deployment in Recruiting and Retention
AI tools are transforming HR functions, from recruiting to performance management and retention strategies. However, these tools require vast amounts of personal data to function effectively, increasing privacy and cybersecurity risks.
The EEOC and other regulatory bodies have cautioned against discriminatory impacts of AI, particularly regarding protected characteristics like disability, race, or gender. (As noted above, the EEOC recently withdrew its AI guidance under the ADA and Title VII following an Executive Order by the Trump Administration.) For example, the use of AI in hiring or promotions may trigger compliance obligations under the ADA, Title VII, and state laws.
Practical Tip: Conduct bias audits of AI systems, implement data minimization principles, and ensure compliance with applicable anti-discrimination laws.
9. Employee Use of AI Tools
Moving beyond the HR department, AI tools are fundamentally changing how people work. Tasks that used to require time-intensive manual effort—creating meeting minutes, preparing emails, digesting lengthy documents, creating PowerPoint decks—can now be completed far more efficiently with assistance from AI. The benefits of AI tools are undeniable, but so too are the associated risks. Organizations that rush to implement these tools without thoughtful vetting processes, policies, and training will expose themselves to significant regulatory and litigation risk.
Practical Tip: Not all AI tools are created equal—either in terms of the risks they pose or the utility they provide—so an important first step is developing criteria to assess, and then going through the process of assessing, which AI tools to permit employees to use. Equally important is establishing clear ground rules for how employees can use those tools. For instance: What company information are employees permitted to use to prompt the tool? What are the processes for ensuring the tool’s output is accurate and consistent with company policies and objectives? And should employee use of AI tools be limited to internal functions, or should employees also be permitted to use these tools to generate work product for external audiences?
10. Data Minimization Across the Employee Lifecycle
At the core of many of the above issues is the principle of data minimization. The California Privacy Protection Agency (CPPA) has emphasized that organizations must collect only the data necessary for specific purposes and ensure its secure disposal when no longer needed.
From recruiting to offboarding, HR professionals must assess whether data collection practices align with the principle of data minimization. Overcollection not only heightens privacy risks but also increases exposure in the event of a breach.
Practical Tip: Develop a data inventory mapping employee information from collection to disposal. Regularly review and update policies to limit data retention and enforce secure deletion practices.
Conclusion
The rapid adoption of emerging technologies presents both opportunities and challenges for employers. HR professionals and in-house counsel play a critical role in navigating privacy, cybersecurity, and AI compliance risks while fostering innovation.
By implementing robust policies, conducting regular risk assessments, and prioritizing data minimization, organizations can mitigate legal exposure and build employee trust. This National Privacy Day, take proactive steps to address these issues and position your organization as a leader in privacy and cybersecurity.

The AI Workplace: Understanding the EU Platform Work Directive [Podcast]

In this episode of our new podcast series, The AI Workplace, Patty Shapiro (shareholder, San Diego) and Sam Sedaei (associate, Chicago) discuss the European Union’s (EU) Platform Work Directive, which aims to regulate gig work and the use of artificial intelligence (AI). Patty outlines the directive’s goals, including the classification of gig workers and the establishment of AI transparency requirements. In addition, Sam and Patty address the directive’s overlap with the EU AI Act and the potential consequences of non-compliance.

The Growth of AI Law: Exploring Legal Challenges in Artificial Intelligence

Artificial intelligence (AI) is reshaping industries, decision-making processes, and creative fields. Its influence spans healthcare, transportation, communication, and entertainment, introducing unique challenges for existing legal systems.
Traditional laws often fail to address the complexities AI introduces, resulting in the rise of a specialized legal field: AI law. Attorneys in this area must tackle intricate issues, such as regulating machine-generated content, ensuring data privacy, and assigning accountability when AI systems fail.
What Is AI Law?
Generally, AI law deals with the legal implications of artificial intelligence. In practice, this specialty focuses on any legal area that AI touches, including intellectual property disputes, privacy regulations, bias in algorithms, and liability concerns. AI’s integration into business and daily life drives the need for legal professionals with deep expertise in both law and technology. Lawyers in AI law often work with companies developing AI tools, governments crafting regulations, and individuals affected by AI-driven decisions.
AI law also bridges gaps between technological advancements and ethical considerations. For example, legal systems must decide how to handle decisions made by autonomous systems, which are neither human nor bound by the same rules. This evolving area provides a unique opportunity for legal professionals to influence the future of technology policy.
Key Challenges in AI Law
Ownership of AI-Generated Content
AI systems like ChatGPT and DALL-E generate creative works, but questions remain about who owns these outputs. Current copyright laws require human authorship for protection. For example, the U.S. Copyright Office recently adopted a policy that purely AI-generated art cannot be copyrighted. Under Copyright Office policy, applicants for registration have a “duty to disclose the inclusion of AI-generated content in a work submitted for registration.”
Ownership disputes complicate business operations. Developers, users, and organizations may all claim rights to AI-generated works. Attorneys must draft contracts clarifying these rights to prevent litigation. This issue also raises broader questions about whether existing intellectual property laws need reform to accommodate AI.
Data Privacy Issues
AI depends on vast amounts of data to function, much of which is personal and sensitive. For instance, AI-powered healthcare tools analyze patient data to predict diseases, while social media platforms use algorithms to infer user preferences. These applications expose gaps in current privacy laws, which were designed without AI’s capabilities in mind.
Lawyers specializing in AI law must address compliance with regulations like GDPR and CCPA while considering AI-specific risks. For example, an AI tool might infer health risks from social media activity, bypassing traditional privacy safeguards. Attorneys help organizations balance innovation with consumer trust by drafting policies that align with legal requirements and ethical standards.
Algorithmic Bias and Accountability
Bias in AI algorithms presents a serious legal and ethical challenge. Historical data used to train AI often reflects societal inequalities, which AI systems can perpetuate. For example, hiring algorithms may favor male candidates over female candidates, while predictive policing tools disproportionately target minority communities.
Accountability for biased outcomes is unclear. Should the blame fall on developers, organizations deploying the AI, or those who provided the data? Attorneys working in this field advocate for greater transparency in AI decision-making processes. They also push for policies requiring regular audits of algorithms to identify and mitigate bias.
Liability for AI Failures
As AI systems gain autonomy, determining liability becomes increasingly complex. When a self-driving car causes an accident, who is responsible—the manufacturer, the software developer, or the owner? Similar dilemmas arise in healthcare, where AI tools assist in diagnosis and treatment but may provide harmful advice.
Current liability frameworks are not designed for these scenarios. Lawyers specializing in AI law must navigate these gaps, helping establish clear rules for assigning responsibility. They also work with insurers to develop policies that account for AI-related risks.
Why AI Law Requires Specialization
AI law requires a unique blend of legal expertise, technological knowledge, and ethical insight. Traditional legal training does not fully prepare attorneys to address AI’s complexities, making specialization essential. Lawyers must understand how AI systems work, interpret evolving regulations, and address ethical implications.
Education for AI Lawyers
Leading universities now offer courses focusing on AI and its legal challenges. For example, the University of California, Berkeley, provides specialized training to equip legal professionals with the skills needed in this emerging field through the Berkeley Law AI Institute and the Berkeley AI Policy Hub. Continuing education is also critical, because AI evolves rapidly, and attorneys must stay updated on technological advancements and regulatory changes. Seminars, certifications, and workshops help legal professionals remain effective in this dynamic area.
Ethics in AI
Ethics play a central role in AI law. The American Bar Association released its first ever guidance for lawyers on the use of AI on July 29, 2024. Beyond ensuring compliance, lawyers must advise clients on responsible AI use. This includes promoting fairness, preventing harm, and aligning technology with societal values. For instance, attorneys may recommend policies to increase transparency in decision-making algorithms, fostering trust between companies and users. Ethical considerations also influence regulatory frameworks. Governments and organizations are increasingly prioritizing ethical AI practices, making expertise in this area crucial for legal professionals.
Opportunities for Lawyers in AI Law
As AI continues to develop, knowledgeable AI lawyers become more necessary, and opportunities for lawyers to apply this specialization grow. “This is an emerging and necessary practice area in law, spurred by rapid development and integration of AI into society and business at all levels,” urges Jay McAllister, CEO of Paragon Tech, Inc. “Attorneys who opt to ignore these developments will find themselves at an ever-increasing disadvantage when compared to those who embrace AI and seek to understand its mechanics and implications.”
Advising Companies
Businesses adopting AI face complex legal and ethical challenges. From data privacy compliance to intellectual property disputes, companies need guidance to navigate these issues. Lawyers specializing in AI law help organizations develop governance frameworks, draft contracts, and manage risks. Startups and tech companies often seek legal advice during the development of AI tools. Attorneys play a key role in ensuring that these technologies comply with regulations while maintaining ethical standards. This advisory role is essential for fostering innovation in a responsible manner.
Resolving Legal Disputes
Disputes involving AI are becoming more common. These range from copyright claims over AI-generated content to liability cases involving autonomous vehicles. Lawyers with expertise in AI law handle these cases, often setting new legal precedents. For example, they may argue whether a user’s input into an AI system constitutes co-authorship, shaping how courts interpret intellectual property laws.
Shaping Policy
AI law is still in its infancy, and legal frameworks are far from complete. Lawyers have the opportunity to influence how these regulations are written. By participating in policy discussions, they help ensure that AI technologies are governed in a way that balances innovation with accountability. Policy work also includes advocating for greater transparency and fairness in AI systems. Legal professionals can contribute to creating guidelines that protect individual rights while fostering technological progress.
The Future of AI Law
AI law is a rapidly growing field with immense potential. It challenges lawyers to adapt traditional legal principles to a technology-driven world. Attorneys must combine legal expertise with technical literacy and ethical awareness to address AI’s unique challenges.
The demand for AI law specialists is only expected to grow as AI becomes more integrated into society. Legal professionals in this field have the chance to shape how AI is developed, regulated, and used. By addressing key issues in data privacy, bias, and liability, they ensure that AI serves society responsibly.
AI law represents a transformative opportunity for the legal profession. Attorneys who embrace this field can lead in creating policies and frameworks that protect human rights while enabling technological progress. Committing to this developing area of practice requires dedication and collaboration, but a career in AI law offers the chance to make a lasting impact on society.

Trump Alters AI Policy with New Executive Order

On January 23, 2025, President Trump issued an Executive Order entitled “Removing Barriers to American Leadership in Artificial Intelligence.” The Executive Order seeks to maintain US leadership in AI innovation. To that end, the Order “revokes certain existing AI policies and directives that act as barriers to American AI innovation,” but does not identify the impacted policies and directives. Rather, it appears those policies and directives are to be identified by the Assistant to the President for Science and Technology, working with agency heads. The Order also requires the development of a new AI action plan within 180 days. Although the details of the new AI action plan are forthcoming, the Order states that the development of AI systems must be “free from ideological bias or engineered social agendas.”
Earlier in the week, Trump also signed an executive order revoking 78 executive orders signed by President Biden, including Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, issued on October 30, 2023. Biden’s Executive Order sought to regulate the development, deployment, and governance of artificial intelligence within the United States, and the document offered insight into the types of issues that concerned the previous Administration (specifically, AI security, privacy and discrimination). More information on Biden’s Executive Order can be found here.
As relevant to employers and developers of AI tools for employers, the revocation of Biden’s Executive Order is largely symbolic, because it did not directly impose requirements on employers who use AI. Instead, it directed federal agencies to prepare reports or publish non-binding guidance on topics such as:

“the labor-market effects of AI,”
“the abilities of agencies to support workers displaced by the adoptions of AI and other technological advancements,” and
“principles and best practices for employers” to “mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.”

Biden’s Executive Order had also directed agencies to provide anti-discrimination guidance to federal benefits programs and federal contractors over their use of AI algorithms and to coordinate on best practices for investigating and enforcing civil rights violations related to AI.
While employers may not experience any immediate effects from the two new Executive Orders this week, taken together, they lend support to predictions that the new Administration will take a more hands-off approach to regulating AI. We will continue to monitor how the AI legal landscape evolves under the new Administration and to report on AI developments that affect employers.

Don’t Forget the EU: Italy Issued First GenAI Fine of €15 Million Alleging GDPR Violations

At the end of 2024, the Italian Data Protection Authority issued a €15 million fine in the first generative AI-related case brought under GDPR. According to the Garante (the Italian authority), OpenAI trained ChatGPT with users’ personal data without first identifying a proper legal basis for the activity, as required under GDPR. The Order also alleges that OpenAI failed to notify the Garante about a data breach the company experienced in March 2023. Additionally, the Order states that OpenAI did not provide proper age verification mechanisms for users under age 13.
In addition to the fine, OpenAI must also conduct a six-month public education campaign on how ChatGPT works and how data is used to train AI products. The campaign must also provide individuals with information about their rights and how to exercise their rights. OpenAI intends to appeal the decision.
This decision follows the March 2023 temporary ban of ChatGPT in Italy. And in July 2023, the FTC issued a Civil Investigative Demand to OpenAI.
Putting it into Practice: While it is unclear the extent to which AI will receive the same type of scrutiny in the US that it did under the prior administration, this decision is a reminder that the EU regulators are keeping a close eye on AI activities, especially when personal data is used to train the tool.

Energy Demand for AI Drives the Midwest’s Focus on Resource Adequacy

As presidential administrations change and policy priorities shift, the steady hum of electricity demand from artificial intelligence (AI) and data centers presses forward. Last week, the President signed several executive orders to realign federal energy priorities. One recent executive order is crucial to data centers and artificial intelligence: Declaring a National Energy Emergency.
The executive order focuses on improving grid reliability and ensuring a reliable supply of energy (though not wind- or solar-powered energy). The emergency was declared, in part, “due to a high demand for energy and natural resources to power the next generation of technology.” The emergency declaration unlocks several powers for the President. Executive agencies were directed to exercise those powers to facilitate the siting, production, transportation and generation of domestic energy resources on federal (or even private) lands. These resources include fossil fuels, uranium, geothermal heat, hydropower and certain critical minerals. Agency heads are permitted to recommend the use of federal eminent domain authority if necessary to achieve these objectives.
This order comes at a time when AI-driven technology is rapidly developing. Some of the most popular AI models require massive computational resources. Training these models involves processing enormous amounts of data across thousands of servers, each consuming significant amounts of electricity, much of it to keep hardware cool. Data centers, which house these servers, are at the heart of the AI revolution. As the executive order notes, “the United States’ ability to remain at the forefront of technological innovation depends on a reliable supply of energy and the integrity of our Nation’s electrical grid.”
Reliable, abundant, and affordable electricity is a critical reason why data centers are targeting the Midwest region for future development. The region has a diverse energy mix, including coal, natural gas, and nuclear, along with an increasing share of wind and solar. However, the boom in demand from AI and associated technology has complicated the region’s reliability and affordability picture. As data centers proliferate across the plains, demand during peak periods is intensified.
Investor-owned utilities’ public statements illustrate the scale of this new demand. The average project size in Ameren Missouri’s service territory increased from 3.2 megawatts (MW) in 2019 to 181.2 MW in 2024. Oklahoma Gas & Electric expects over 20% growth in its energy forecast for the next five years. In Evergy’s service territory in Missouri and Kansas, roughly 6 gigawatts (GW) of projects sit in its economic development queue. For context, the Wolf Creek nuclear plant in Kansas has a nameplate capacity of roughly 1.2 GW.
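To put these reported figures on a common scale, here is a minimal back-of-envelope sketch in Python. It simply restates the numbers quoted above – the Evergy queue, Wolf Creek’s nameplate capacity, and Ameren Missouri’s average project sizes – and has not been verified against any utility filing; treat it as an illustration, not analysis.

```python
# Illustrative back-of-envelope math only, using the figures quoted in this post;
# these are reported numbers, not values independently verified by the authors.

WOLF_CREEK_NAMEPLATE_GW = 1.2   # reported nameplate capacity of the Wolf Creek nuclear plant
EVERGY_QUEUE_GW = 6.0           # reported projects in Evergy's economic development queue

plants_equivalent = EVERGY_QUEUE_GW / WOLF_CREEK_NAMEPLATE_GW
print(f"Evergy's queue is roughly {plants_equivalent:.0f} Wolf Creek plants' worth of capacity")

# Reported growth in average project size in Ameren Missouri's territory, 2019 -> 2024
avg_mw_2019, avg_mw_2024 = 3.2, 181.2
print(f"Average project size grew about {avg_mw_2024 / avg_mw_2019:.0f}x in five years")
```

Run as written, the sketch shows the queue equals roughly five Wolf Creek plants and that average project size grew about 57-fold in five years – the scale regulators and grid operators must plan around.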
Increasing data center demand comes at a time when the region is also experiencing growth in other energy-intensive industries, such as electric vehicle manufacturing and semiconductor production. The electrical grid needs to be able to manage these surges in demand without compromising reliability, which poses a challenge for regulators and grid operators. More data centers operating in the region means that the peak demand could shift in new directions, with potential implications for the overall energy system.
Regulators, customers and developers must consider rate design and cost allocation to manage this new demand picture and ensure resource adequacy. While the federal government is staking out its position, state regulators, data center developers and utilities can also approach this task with several strategies:

Effective Rate Design: Managing increased demand will require significant investments in new energy infrastructure. State regulators should ensure developers can access reliable energy at a just and reasonable rate when data centers need it without expecting other customers to cover more than their fair share of new upgrades. Utilities and developers should craft tariffs that balance these needs.
Investment in Grid Infrastructure: Upgrading and modernizing the electrical grid will be essential to handle increased demand. Additional development of electric transmission infrastructure is vital to dispatch regional generation resources and meet growing demand. Smart grid technologies, which use digital communications to monitor and manage electricity flow, can also help improve efficiency and resilience.
Energy Efficiency in Data Centers: Data center operators can reduce their impact on peak demand by investing in energy-efficient technologies and practices. Many data center operators are already pursuing advanced cooling systems and optimizing server workloads to mitigate their electricity consumption. As the technology behind AI continues to evolve, the efficiency of the infrastructure supporting it will need to improve.
Demand Response Programs: Utilities can implement demand response programs, which incentivize consumers—including data centers—to reduce their electricity usage during peak periods. This could help balance the grid during times of high demand, ensuring that the system remains reliable.

The increasing demands placed on the electricity grid by AI and new data centers represent a significant challenge for resource adequacy in the Midwest region of the United States. However, with thoughtful planning, strategic investments in infrastructure and energy efficiency, the region can continue to support its technology-driven economy while ensuring the reliability and sustainability of its energy supply.

2025 Outlook: Recent Changes in Construction Law, What Contractors Need to Know

The construction industry is at a crossroads, influenced by shifting economic landscapes, technological advancements, and evolving workforce dynamics. With 2025 under way, businesses must stay ahead of key trends to remain competitive and resilient. Understanding these industry shifts is critical—not just for growth, but for long-term sustainability and safety.
Here’s what to expect in 2025:
Job Market
According to Michael Bellaman, president and CEO of the Associated Builders and Contractors (“ABC”) trade organization, the U.S. construction industry needed to “attract about a half million new workers in 2024 to balance supply and demand.” This estimate considers the 4.6% unemployment rate, the second-lowest rate on record, and the nearly 400,000 average job openings per month. A primary concern entering 2025 is growing the younger employee pool, as 1 in 5 construction workers are 55 or older and nearing retirement.
While commercial construction has not yet been as heavily impacted by the lack of workers as residential construction, demand for commercial construction will increase as more industries are anchored on U.S. soil. Think of bills such as the CHIPS and Science Act, which allocated billions in tax benefits, loan guarantees, and grants to build chip manufacturing plants here. This is true regardless of political party; investing in American goods and manufacturing appears to be a bipartisan priority.
AI and Robotics
At the end of 2024, PCL Construction noted that AI will be an integral part of the construction industry. Demand for data centers will drive up commercial production, though the labor shortage may present some challenges for a construction company’s productivity and workload capacity.
AI will not just change the supply-and-demand market; it will also be integrated into the day-to-day mechanics and sensors for safety measures within a construction zone. On top of the demand for microchips catalyzed by the CHIPS and Science Act, AI is used to “monitor real-time activities to identify safety hazards.” AI-assisted robotics can take on meticulous work such as “bricklaying, concrete pouring, and demolition while drones assist in surveying large areas.” We will start to see where the line is drawn between which jobs require a skilled worker and which can be handled by AI without disrupting the workforce.
Economic Factors
The theme of the years following COVID-19 has been returning the economy to its pre-pandemic state, including slashing interest rates and controlling inflation. With this favorable economic outlook for 2025, construction companies can look to increase their project pipelines. On the residential side, the economic boom may drive housing construction to meet demand. On the commercial side, lower inflation and lower interest rates can lead to more development projects such as megaprojects and major public works. Economist Anirban Basu believes that construction companies may not reap these benefits until 2026 due to the financing and planning required.
Bringing production supply chains back to U.S. soil can help alleviate some of the global concerns such as the crisis in the Red Sea, international wars, and the high tariffs proposed by the Trump Administration. Again, economists are predicting this bountiful harvest in a few years rather than immediately.
Environmental Construction
Trends toward sustainability are leading the construction industry toward greener initiatives such as modular and prefab structures. Both options involve the construction company building structures away from the building site.
AI can also play a hand in developing Building Information Modeling (“BIM”) to better understand the nuances, possible pitfalls, and visualization of a project before construction begins. Tech-savvy construction companies are already using programs such as The Metaverse or Unreal Engine for BIM, which can significantly reduce project time, resources, and operational costs.
Employee Safety and PPE: Smart PPE and Advanced Monitoring Systems
PPE requirements will far surpass traditional protective gear (such as helmets, masks, and gloves). Construction sites may soon be required to supply smart PPE products that can scan a worker’s biometrics and environment to prevent medical anomalies or hazardous environmental conditions. Smart PPE devices will be enabled with Internet of Things (“IoT”) connectivity to ensure real-time data transmission and to use data analytics to track patterns or predict risks.
Conclusion
The construction industry’s future hinges on adaptability and innovation. By addressing workforce shortages, integrating AI-driven solutions, and adopting sustainable practices, companies can position themselves for success in a dynamic market. Whether it’s preparing for the long-term economic upswing or enhancing employee safety through smart PPE, proactive measures today can lead to stronger, more resilient operations tomorrow. Staying informed and prepared will be crucial for navigating the challenges and seizing the opportunities ahead.