We Get AI for Work™: Where to Start When Evaluating AI Tools [Video, Podcast]
Although it is tempting to rush to implement the newest AI tools, taking inventory of the tools your organization uses, identifying which laws you are subject to, and determining which obligations flow from those laws are all critical steps toward maintaining legal compliance.
CNIPA November 2025 Press Conference: More Details on Examination of AI Patents and the Chinese IP Firm Rectification Campaign

On November 28, 2025, China’s National Intellectual Property Administration (CNIPA) held a press conference that provided more details about the recently revised Guidelines for Patent Examination for Artificial Intelligence (AI) and the recently announced Rectification Campaign targeting IP firms and IP practitioners, which is being conducted jointly with the Ministry of Public Security. Excerpts follow.
Regarding the Examination Guidelines:
Q: Lizhi News Reporter:
Director Jiang just mentioned that the recent revisions to the Patent Examination Guidelines further improve the examination standards for patent applications in the field of artificial intelligence. We have noticed that several recent revisions of the Guidelines have addressed related content. What is the relationship between this revision and previous ones? What is the specific significance of these adjustments and changes?
A: Jiang Tong, Director of the Examination Business Management Department of the Patent Office
Thank you for your question. Accelerating the development of next-generation artificial intelligence is a strategic issue concerning whether China can seize the opportunities presented by the new round of technological revolution and industrial transformation. Intellectual property plays a crucial dual role in serving the development of the artificial intelligence industry, providing both institutional and technological support. To respond promptly to the urgent needs of innovation entities for the protection of AI-related technologies, we have revised and improved the examination standards for patent applications in the field of artificial intelligence three times, and clarified them in the “Patent Examination Guidelines.”
The 2019 revision added a dedicated chapter on algorithmic features and clarified the standards for examining subject matter eligibility, novelty, and inventiveness. The 2023 revision clarified the standards for examining the subject matter eligibility of artificial intelligence and big data inventions, enriched the types of protectable subject matter, and for the first time introduced “user experience enhancement” as a factor in assessing inventiveness. This revision further improves the examination standards in the field of artificial intelligence, establishing for the first time a dedicated chapter on “artificial intelligence and big data,” covering three main aspects:
First, we must strengthen ethical review of artificial intelligence. We should enhance the role of policy and legal safeguards, and in accordance with Article 5 of the Patent Law, which addresses the legality and ethical requirements of patent authorization, clarify that the implementation of AI-related technical solutions such as data collection and rule setting should comply with legal, social, and public interest requirements, thus solidifying the safety bottom line and guiding the development of AI towards “intelligent good.”
Second, the requirements for disclosing technical solutions in application documents are clarified. Addressing the “black box” nature of artificial intelligence models—that is, the public only knows the model’s inputs and outputs but not the logical relationship between them—and the potential problem of insufficient disclosure of technical solutions, the requirements for writing detailed descriptions in scenarios such as model construction and training are clarified, and the criteria for judging full disclosure are refined, thereby promoting the dissemination and application of artificial intelligence technologies.
Third, the rules for judging inventiveness are improved. The revision clarifies the inventiveness examination standards for artificial intelligence technologies in different application scenarios and for different objects of processing, and uses case studies to illustrate how to weigh the contribution of algorithmic features to the technical solution, further improving the objectivity and predictability of examination conclusions.
Going forward, we will continue to track the development of emerging industries and new business models such as artificial intelligence and will improve our examination standards in a timely manner to better adapt to the needs of technological development, providing a solid institutional guarantee for supporting the national layout of strategic emerging industries and serving high-level scientific and technological self-reliance. Thank you.
Regarding the Rectification Campaign:
Q: China Intellectual Property News Reporter:
Recently, the CNIPA, the Ministry of Public Security, and the State Administration for Market Regulation jointly launched a special campaign to rectify the intellectual property agency industry. What is the background to this special campaign? What are the main rectification measures to be taken?
A: Wang Peizhang, Director of the Department of Intellectual Property Utilization and Promotion
Thank you for your question. The intellectual property agency industry is a core component of the intellectual property service industry. High-quality agency services are crucial for achieving high-quality creation, efficient utilization, and high-standard protection of intellectual property. In recent years, the intellectual property agency industry has continued to develop and its service level has been continuously improving, making significant contributions to building a strong intellectual property nation. However, driven by profit, illegal and irregular activities in the intellectual property agency industry have also become increasingly frequent, seriously damaging the legitimate rights and interests of innovation entities, disrupting normal market order, and hindering the healthy development of the intellectual property cause. In response, the CNIPA attaches great importance to this issue and, in conjunction with the Ministry of Public Security and the State Administration for Market Regulation, has launched a three-month special rectification campaign.
This special rectification campaign adheres to the principles of problem-oriented approach, addressing both symptoms and root causes, implementing differentiated policies, and striving for practical results. It deploys 14 tasks across three aspects: severely cracking down on illegal and irregular activities, focusing on rectifying non-standard professional practices, and strengthening source governance. First, it severely cracks down on seven prominent illegal agency behaviors, including falsifying patent applicant information, fabricating patent applications, acting as an agent for a large number of abnormal patent applications, engaging in fraud, acting as an agent for malicious trademark applications, acting as an unqualified patent agent, and soliciting agency business through improper means. It also strengthens the connection between administrative and criminal law, transferring cases constituting crimes to public security organs for legal prosecution. Second, it focuses on rectifying the practice of agencies and personnel renting or lending their qualifications, accelerating the cleanup of agencies that have obtained agency qualifications through fraudulent means or no longer meet the conditions for practicing, and promoting the streamlining and improvement of the industry. Third, it comprehensively regulates the application, transfer, and operation behaviors of three types of entities: innovation entities, intellectual property buyers, and transaction operation platforms, and accelerates the optimization and adjustment of various assessment and evaluation policies related to patents. By strictly investigating and punishing illegal patent agencies and practitioners, rectifying and eliminating irregular practices, and publicly exposing typical illegal cases, a rapid deterrent effect is achieved. Simultaneously, the campaign emphasizes coordinated efforts from the application, examination, and agency ends to combat improper patent trading, curbing patent applications not based on genuine invention and patent trading not aimed at industrialization at the source. Overall, this special rectification campaign is not only a precise rectification of the problems exposed in the industry but also a powerful impetus for the long-term healthy development of the industry.
Next, we will strengthen organization and implementation, intensify rectification efforts, deepen source governance, and improve publicity and guidance to ensure that the rectification work achieves tangible results, promote the development of the intellectual property agency industry through standardization and improvement, and provide strong support for the high-quality development of the intellectual property cause. Thank you.
A full transcript of the press conference is available here (Chinese only).
Protecting Personal Data in the Age of AI: Lessons from the Latest EDPS Guidance
The European Data Protection Supervisor’s (EDPS) AI guidance for EU institutions holds lessons for businesses, including for when personal information is input into AI tools. The recommendations in the guidance fall into five categories, which businesses can treat as guiding principles. Namely:
Do your due diligence. Know where personal information enters AI processes. Personal information can show up in training data, during use, and in the results the AI produces. It is important to check every step for risks to personal data.
Be transparent. Do not just use public data and hope for the best. Privacy laws impose obligations to tell people why their information is being collected and how it will be used. They also require telling people who will handle their personal data.
Be accountable. This means making it clear who is responsible for decisions about personal data and keeping accurate records. In the guidance, the EDPS reminds EU institutions that as AI changes, security risks like hacking become more common, so businesses need to update their defenses often.
Respect the rights of individuals. Let people see, fix, or remove their data, even if the data is hidden in AI systems. This can be technically demanding, but the burden is on the business to make it possible.
Be thoughtful. Do not use a check-the-box approach to risk assessments. Before deploying a new generative AI system, conduct a full Data Protection Impact Assessment, question whether all data collection is genuinely necessary, and prefer anonymized or synthetic data where possible. Keeping up with regular checks for accuracy and bias, plus open communication with staff and users, helps build compliance.
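These principles are procedural, but the diligence and thoughtfulness points have a practical engineering edge: knowing where personal data enters an AI workflow and minimizing it before it gets there. Below is a minimal Python sketch of that idea; the regex patterns and the redact_before_prompt helper are our own illustrative assumptions, not anything prescribed by the EDPS, and a production system would need far more robust detection.

```python
import re

# Illustrative patterns only: real deployments need far more robust
# detection (names, addresses, identifiers) and a documented DPIA behind them.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact_before_prompt(text: str) -> tuple[str, dict[str, int]]:
    """Replace likely personal data with placeholders and count what was found."""
    counts: dict[str, int] = {}
    for label, pattern in PATTERNS.items():
        text, n = pattern.subn(f"[{label.upper()} REDACTED]", text)
        counts[label] = n
    return text, counts

prompt = "Summarize this note from jane.doe@example.com, phone +1 555 010 9999."
safe_prompt, found = redact_before_prompt(prompt)
print(safe_prompt)  # personal data replaced with placeholders
print(found)        # counts double as an audit record (accountability principle)
```

Even a simple screen like this, paired with the counts it produces, gives a business contemporaneous records to point to when demonstrating accountability.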
Putting it into Practice: These recommendations were directed to EU Institutions, not private businesses. However, they may signal what regulators expect of businesses when implementing AI tools. As AI laws and obligations continue to develop, consider basing your privacy program on these principles from diligence to thoughtfulness. Taking a principle-based approach to compliance can allow your company to more nimbly react as laws develop and change.
A Look at Current Healthtech VC Trends
There is some good news in the healthtech space, with PitchBook’s new Emerging Tech Research showing a rebound in venture capital (VC) funding for the sector. Startups in this space raised an impressive $3.9 billion in Q3 of this year. While this was slightly lower than in the previous two quarters, it was enough to move the year-to-date total ahead of 2024 values. According to PitchBook, this signals a strong rebound for healthtech.
Below is an overview of some key takeaways from the report and trends to watch in the healthtech sector.
Increase in Deal Volume and Size
Deal volume rose 12% over the previous quarter. The analytics, operations, and telehealth segments were standouts, with each bringing in more than $800 million in funding. Even with the elevated deal count, median healthtech VC deal size still hit a record $7.7 million, an indication that higher valuations are leading to larger deal sizes.
A Mixed Report on Exit Activity
While there was a sharp increase in exits in Q3 (a record 42), total exit value came in at only $200 million. PitchBook attributes this gap between exit count and value to the significant number of acquisitions of smaller, early-stage startups. There were also no major IPOs in the sector this quarter, and PitchBook’s analysts do not expect any other major healthtech listings as we wrap up the year. However, they are tracking several as we move into 2026.
The Continued Impact of AI
As with many other sectors, artificial intelligence (AI) continues to define healthtech. One of the areas that is quickly expanding in terms of commercial use is ambient scribes. These AI-powered tools listen to conversations between patients and providers and generate clinical notes. This is already being adopted across the healthcare space, from physicians’ offices to large health systems, demonstrating a strong demand for these kinds of AI solutions. AI-driven revenue cycle management (RCM) vendors were also a focus for investors. These tools help to reduce claim denials and improve cash flow for healthcare providers.
PitchBook notes that AI adoption for this sector is mostly on the provider side, rather than the payor side, and providers are already “realizing meaningful AI-driven revenue gains” as they deploy these tools. They do expect payors to increasingly utilize AI tools in the future as they seek to identify overpayments and make prior-authorization workflows more efficient.
This sector is expected to gain significant attention as we enter the new year and the adoption of AI grows increasingly prevalent. Notable advancements have the potential to reshape the industry, and investors are likely prepared to support technologies that will drive the future of healthcare.
Illinois Becomes the First State to Regulate the Use of AI Mental Health Therapy Services
In early August, Illinois enacted the Wellness and Oversight for Psychological Resources Act (HB 1806, or the “Act”), making it the first state to pass a law regulating the use of AI[1] in the delivery of therapy and psychotherapy services. The Act, which took immediate effect, imposes guardrails on the use of AI to provide decision-making therapeutic support services, but permits the use of AI for administrative and supplementary tasks, subject to certain consent requirements. This blog post summarizes the Act and addresses its potential implications for the use of agentic AI by Illinois therapy providers.
Scope of the Act
Under the Act, only licensed professionals may provide, advertise, or otherwise offer therapy or psychotherapy in Illinois.[2] A “licensed professional” includes any individual who is licensed in Illinois to provide therapy or psychotherapy, such as clinical psychologists, social workers, professional counselors, and marriage and family therapists.[3] The Act addresses three categories of AI-supported services: (i) administrative support; (ii) supplementary support; and (iii) independent therapeutic decision-making or therapeutic communication.
Permitted Uses of AI in Therapy Services
The Act allows Illinois licensed professionals to use AI to provide administrative support, such as scheduling appointments, processing insurance and billing claims, and drafting general communications related to therapy logistics that do not include therapeutic advice.[4] Additionally, such professionals may continue to use AI to provide supplementary support, such as maintaining client records including therapy notes, analyzing anonymized data to track client progress, and identifying referrals.[5] However, if a client session is recorded or transcribed, the Act requires licensed professionals to obtain written consent from clients to use AI for supplementary support.[6] The client or their legally authorized representative must be informed in writing that AI will be used and the specific purpose of the AI tool or system.[7] The written consent must be affirmative and unambiguous – a general terms of use agreement (e.g., a general consent to treatment form) incorporating information about the use of AI is insufficient to establish consent under the Act.[8]
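As a practical illustration of these consent mechanics, the sketch below models the elements the Act appears to require: a named tool, a disclosed purpose, an affirmative signature, and consent that is not buried in a general terms-of-use or consent-to-treatment form. The AiSupplementaryConsent class and consent_is_valid check are hypothetical constructs for illustration only, not a compliance tool.

```python
from dataclasses import dataclass

@dataclass
class AiSupplementaryConsent:
    client_name: str
    tool_named: str                # the specific AI tool or system disclosed in writing
    purpose_disclosed: str         # the specific purpose of the AI tool or system
    affirmative_signature: bool    # affirmative, unambiguous written consent
    buried_in_general_terms: bool  # folded into a general consent-to-treatment form?

def consent_is_valid(c: AiSupplementaryConsent) -> bool:
    """Illustrative check mirroring the Act's consent elements as summarized above."""
    return (
        c.affirmative_signature
        and not c.buried_in_general_terms
        and bool(c.tool_named.strip())
        and bool(c.purpose_disclosed.strip())
    )

consent = AiSupplementaryConsent(
    client_name="J. Client",
    tool_named="ScribeBot v2 (hypothetical)",
    purpose_disclosed="Transcribing recorded sessions into therapy notes",
    affirmative_signature=True,
    buried_in_general_terms=False,
)
assert consent_is_valid(consent)
```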
Prohibited Uses of AI in Therapy Services
Most significantly, the Act prohibits the use of AI to: (i) make independent therapeutic decisions; (ii) directly provide therapeutic communication to clients; (iii) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (iv) detect emotions or mental states.[9] “Therapeutic communication” is defined broadly, to include “any verbal, non-verbal, or written interaction conducted in a clinical or professional setting that is intended to diagnose, treat, or address an individual’s mental, emotional, or behavioral health concerns.”[10] Any individual, corporation, or entity found to be in violation of the Act will be subject to a civil penalty of up to $10,000 per violation, and may be subject to an investigation by the Illinois Department of Financial and Professional Regulation.[11] The Act also specifies that therapy or psychotherapy records may not be disclosed except as required under the Mental Health and Developmental Disabilities Confidentiality Act.[12]
Implications for Agentic AI
Agentic AI – autonomous AI systems capable of performing a wide range of tasks, including providing lab results, recognizing emotions and mental health concerns, and even contacting emergency services if a user is in crisis – is being deployed in therapy and psychotherapy practices across the country. The Act’s prohibition on independent therapeutic decision-making by AI poses challenges for providers and businesses looking to use agentic AI in Illinois to recognize and act upon a user’s mental health status. Providers and businesses using this technology will need to ensure that their agentic AI’s capabilities do not amount to independent therapeutic decision-making or therapeutic communication. Moreover, businesses will need to obtain clear and affirmative written consent from clients before using these tools for supplementary support tasks. While agentic AI holds immense promise, providers and businesses must ensure their use of it falls within the bounds of Illinois’s novel restrictions.
FOOTNOTES
[1] The Act references the Illinois Human Rights Act to define “artificial intelligence”: “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” 775 Ill. Comp. Stat. 5/2-101(M).
[2] HB 1806 § 20(a).
[3] HB 1806 § 10.
[4] HB 1806 § 15(a).
[5] HB 1806 § 10.
[6] HB 1806 § 15(b).
[7] HB 1806 § 15(b).
[8] HB 1806 § 10.
[9] HB 1806 § 20(b).
[10] HB 1806 § 10.
[11] HB 1806 § 30.
[12] HB 1806 § 25.
Key Takeaways from the Life Sciences Investment Forum 2025
On November 13, 2025, the Life Sciences Investment Forum brought together more than 180 leaders in the life sciences investment community for a day of dynamic conversations and strategic insights. Top decision-makers shared emerging trends and opportunities, how they’re navigating market headwinds, and what they expect for the industry in 2026 and beyond. Below are four takeaways from the discussions.
Growth is the dominant narrative and the key valuation driver
Across biopharma, private equity (PE), and public markets, growth was repeatedly emphasized as the most important factor for valuation, fundraising, and multiple expansion.
Large-cap pharma has driven most recent market value expansion, fueled by demand for large-population therapies (e.g., GLP-1s).
Investors expect outsized returns for companies with predictable, rapid top-line growth.
“Keep your eye on the top line” emerged as the #1 message to investors heading into 2026.
Structural headwinds are mounting: Policy, pricing pressure, and healthcare costs
Regulation – not science – will be the primary gating factor in investment decisions over the next 12 to 24 months:
Healthcare spending outpacing gross domestic product (GDP) growth is intensifying pressure for cost controls.
Most Favored Nation (MFN) pricing, drug price negotiations, and reimbursement shifts are creating real concern for biotech P&Ls, driving the need for tight forecasting and clean data.
With transitions at the US Food and Drug Administration (FDA) affecting predictability, companies must plan for worst-case regulatory scenarios and establish multiple backup plans.
Early-stage is taking center stage – and rising costs are accelerating the shift
A scarcity of late-stage, commercial-ready assets—combined with escalating clinical and manufacturing costs—is pushing investors and strategics further upstream. This environment is driving:
Intense competition and high valuations for Phase 1 and 2 “best-in-class” assets.
Pipeline “herding,” especially in oncology, as buyers converge on similar early plays.
More programmatic mergers and acquisitions (M&A) and “shots on goal” earlier in the pipeline, particularly as Big Pharma seeks to fill looming patent-cliff gaps.
Greater reliance on structured and synthetic royalty financing as equity remains expensive and uncertain pricing environments make long-duration capital harder to commit.
AI is no longer optional: it’s becoming foundational across the value chain
Artificial intelligence (AI) brings transformational promise, but investors are demanding proof of utility and robust data infrastructure, not just “AI-enabled” labels:
AI tools have the potential to transform domains across the life sciences value chain.
For AI to succeed, the data used to train and retrain models needs to be highly curated and fit for purpose, and investors need to carefully interrogate the adequacy of that data.
Regulatory friction and clinical validation remain among the biggest development, deployment and adoption barriers.
The New Legal Code: Hiring in the Age of Artificial Intelligence
Throughout 2025, artificial intelligence has shifted from a buzzword to infrastructure critical to workflows across nearly every industry. Generative-AI tools have moved from experiments to everyday workplace utilities, with Microsoft integrating Copilot into Office, and Google and OpenAI rolling out enterprise-grade assistants. NVIDIA has become the poster child for the AI boom as governments and Fortune 500 companies race to secure computing power for their own initiatives, making NVIDIA the world’s most valuable company. Analysts view AI investments as driving the stock market, but will there be a similar bump in job growth, particularly in the legal profession? Against this backdrop, The National Law Review spoke with Garrett Rosen, a senior vice president of legal recruiting at Larson Maddox, about how AI’s rapid ascent is reshaping the legal hiring market.
Eli: To start us off, could you give a quick introduction to who you are and what you do?
Garrett: Sure. I’m a recruiter focused on in-house legal roles across tech, media, and telecom, and I effectively cover all major U.S. markets. Most of my work is in New York, San Francisco, Los Angeles, Denver, Chicago, Dallas/Austin, and Boston, among others. At Larson Maddox, we have offices in New York, Charlotte, Tampa, Dallas, Chicago, and Los Angeles, and I work closely with colleagues in those locations on things like product counsel roles, privacy, IP/compliance, and adjacent positions in the broader tech and infrastructure space.
Eli: Thank you for taking the time to speak with me. From where you sit, how would you describe the legal hiring market right now, especially for tech and AI-related roles? Are things looking good? Not so good? What’s going on?
Garrett: It’s a loaded question.
The last few years have been a real swing. Coming out of 2020–2022 we had a very candidate-friendly market. There was heavy investment, tons of growth hiring, and it felt like people could move around pretty freely.
Then we hit a cooldown—you saw the big downturn, a lot of uncertainty, and waves of layoffs. Over the last 12 months or so, it’s started to stabilize and pick back up. I’m definitely busier this year across industries than I was last year.
That said, it’s still competitive. You have people who were laid off, people who moved during that 2020–2022 window who are now questioning those moves, and people who are employed but cautiously testing the market. So there’s a lot of candidate activity relative to the number of truly great roles.
Eli: You mentioned product counsel, privacy, IP, and compliance. When you look at AI-related legal roles right now—AI product counsel, AI governance, “ethical tech” officers, that sort of thing—what are companies actually hiring for? What are you seeing on the ground?
Garrett: A lot of what I’m seeing is hybrid.
There’s growing demand for AI-focused product counsel roles—people who sit at the intersection of product development, privacy, and regulatory risk. You also see roles framed as AI governance, AI policy, or broader “responsible AI” positions, but often they’re essentially privacy-plus: privacy and data protection, plus AI policy and compliance layered on top.
In many companies, especially those earlier in their AI product development, AI is being folded into existing privacy, product, or commercial roles. Larger, more mature organizations are more likely to spin out explicit “AI” or “governance” titles, particularly where there’s a heavy regulatory or compliance overlay.
Eli: Are employers mostly trying to upskill their existing in-house lawyers into AI roles, or are they going out and hiring people with deeper AI backgrounds?
Garrett: It’s a mix, but the market has definitely shifted toward wanting proven experience.
There are companies willing to say, “We’ll take a really strong tech or privacy lawyer who’s genuinely interested in AI and help them grow into it.” That happens.
But right now, I’d say more clients want someone who can demonstrate they’ve already done the work—that they’ve handled AI-related issues, worked with product and engineering teams on these questions, and can hit the ground running.
The risk tolerance is lower than it was a few years ago. Candidates who want to pivot into AI from adjacent fields need to make a very strong case for how their existing experience translates, and they need to show they’ve invested in learning—not just that they’re “curious about AI.”
Eli: That ties into my next question. When companies talk to you about ideal candidates, what are they looking for in terms of background, undergrad, practice area, certifications?
Garrett: There are a few layers to it.
First, there’s the core legal training: strong law school credentials, time at a reputable firm, and experience with privacy, product counseling, consumer protection, or regulatory work. That’s still the baseline.
Then there’s the cross-functional piece. The best product and AI lawyers are deeply plugged into the business. They work closely with product, engineering, data science, marketing, and trust & safety. They understand how the product actually works, who the internal stakeholders are, and what their day-to-day looks like. That cross-functional experience is huge.
On the credential side, we’re seeing more interest in things like privacy certifications—CIPP and related certifications—and the same will likely happen with AI. AI-specific certifications or structured coursework signal that someone has put in the effort to formalize their knowledge rather than just reading headlines. Over the next couple of years, I expect those to become a more common way for candidates to stand out.
Eli: Let’s say a candidate already has some of that background. From your perspective, what do recruiters and hiring managers look for most as a sign of true technical fluency?
Garrett: It’s less about writing code and more about being able to “speak the language.”
Hiring managers want lawyers who can sit in a room with product and engineering and ask intelligent questions—who understand data flows, how a model is trained and deployed at a high level, where the data is coming from, and what the user journey looks like.
The candidates who do well are the ones who can tell concrete stories:
“Here’s a product I supported.”
“Here’s how we were using machine learning or AI.”
“Here are the risks we identified and the guardrails we put in place.”
It’s that ability to articulate the work, connect it to business outcomes, and explain how their legal advice actually shaped the product. That kind of narrative really resonates.
Eli: Another thing I’m curious about: given all the buzz and opportunity, do you feel like the market is too hot for candidates, in the sense that people might be tempted to jump around a lot? Or is it actually more constrained than it looks from the outside?
Garrett: I don’t think we’re in an “everyone’s job-hopping constantly” phase right now.
If anything, job-hopping has cooled compared to five years ago. With the macro environment and recent layoffs, there’s more caution on both sides. Companies are wary of candidates who look like they’ve bounced too much, and candidates are more thoughtful about whether they really want to move.
There are lots of interesting roles, especially in AI and privacy, but it’s not unlimited. And because there’s so much interest in the space, the bar is higher. You’re competing not only with folks who are actively unemployed, but also with well-credentialed people who are secure in their jobs and just selectively looking for the “right” next step.
Eli: You mentioned geography earlier—markets like New York, the Bay Area, LA, and so on. How has remote work changed the competition for AI and tech-adjacent legal roles?
Garrett: Remote and hybrid work have definitely reshaped things.
On one hand, remote roles let companies tap into broader talent pools. Someone sitting in, say, the Midwest can now compete for jobs at a Bay Area or New York company that used to hire only locally. On the other hand, that also means a candidate in a smaller market is now competing with people from every major tech hub.
Some clients are still committed to particular hubs—they want people in-office in New York, San Francisco, or Seattle a certain number of days a week. Others are more flexible and will hire fully remote if they find the right person.
So for candidates, the question is often: “Am I willing to relocate or commit to a hub city?” If not, they can still find opportunities, but the competition for fully remote, high-end AI/privacy roles is intense.
Eli: For law students or early-career lawyers who are watching all this AI change and feeling anxious about their careers, what advice would you give them?
Garrett: First, don’t panic.
AI is changing a lot, but the fundamentals still matter: strong training, solid writing and analysis, good judgment, and the ability to work well with people. If you build that core skill set, you’ll be able to adapt as the technology evolves.
Second, be intentional about exposure. If you’re at a firm, try to get staffed on matters involving privacy, data, product counseling, or emerging tech. If you’re in-house, volunteer for projects that touch AI or data governance.
Third, show that you’re investing in yourself. That could mean taking relevant courses, getting a privacy or AI-related certification, writing or speaking about the issues, or just building a thoughtful point of view about the space.
The candidates who will do best are the ones who can say, “I understand the basics, I’ve seen some of this work up close, and I’m genuinely engaged with how AI is reshaping my practice area.”
Eli: My final big question is about the road ahead. We’ve talked about uncertainty. We’ve talked about growth. What do you think the path forward looks like for AI-adjacent legal roles over the next few years?
Garrett: I think we’re still in a growth phase, but it won’t be a straight line.
There are a lot of forces at play: regulation catching up, companies figuring out sustainable business models around AI, and startups competing hard for market share. Before we ever hit a true “bubble bursting,” I suspect we’ll see a wave of consolidation—more M&A as larger players acquire smaller ones with strong technology or teams.
For legal, that means continued demand for people who understand AI, privacy, and regulatory frameworks—especially around safety, consumer protection, and data. I’d expect 2026 and 2027 to be pretty active years from an M&A and regulatory standpoint, which usually translates to sustained need for strong in-house counsel in this space.
Could things slow at some point? Sure. But I don’t see AI-related legal work disappearing. I see it becoming more embedded in how companies operate over time.
DISCLAIMER:
The views and opinions expressed in this interview are those of the speaker and not necessarily those of The National Law Review (NLR). The NLR does not answer legal questions, nor will we refer you to an attorney or other professional if you request such information from us. If you require legal or professional advice, kindly contact an attorney or other suitable professional advisor. Please see NLR’s terms of use.
Trump’s Executive Order on AI and Pediatric Cancer Creates New EB-2 NIW Opportunities
On September 30, 2025, President Donald J. Trump signed the Executive Order “Unlocking Cures for Pediatric Cancer with Artificial Intelligence,” establishing AI-driven pediatric cancer research as a national priority. The Order directs federal agencies and private partners to accelerate research and empower clinicians and researchers with the tools to translate data into improved diagnoses, treatment, and cures.
In the context of U.S. immigration policy, this development opens new opportunities for professionals seeking classification under the EB-2 National Interest Waiver (NIW) and other employment-based categories. By affirming the national importance of work in AI, medical research, data science, and biotechnology, the Order provides strong policy support for applicants seeking to demonstrate eligibility under the EB-2 NIW.
Understanding the Executive Order
Pediatric cancer remains the leading cause of disease-related death among children in the United States, with incidence increasing significantly over the past four decades. Traditional treatment has seen limited progress, highlighting the need for new and innovative approaches.
To address this, the Executive Order calls for the use of artificial intelligence to drive advancements in pediatric cancer care. It builds on federal efforts, such as the Childhood Cancer Data Initiative (CCDI), which collects and integrates pediatric cancer data to accelerate breakthroughs, and it encourages greater collaboration between public and private sectors.
The Order mobilizes agencies, including the Department of Health and Human Services (HHS), the National Institutes of Health (NIH), and the Make America Healthy Again (MAHA) Commission, to accelerate AI-driven medical research and infrastructure.
Key Provisions of the Executive Order
The Order directs federal agencies to:
Invest in AI-Driven Biomedical Innovation: Enhance research infrastructure and accelerate AI integration in cancer data analysis.
Fund Research with National Cancer Institute (NCI)-Designated Centers: Prioritize projects involving predictive analytics, multi-omics research, and therapeutic optimization.
Advance Data Sharing and Interoperability: Improve access to privacy-protected clinical datasets to support research and clinical trial recruitment.
Promote Public-Private Collaboration: Encourage biotechnology firms, digital health companies, and AI startups to contribute tools and solutions.
Strengthen U.S. Leadership in Health Technology: Position the United States as a global center for AI-enabled medical discovery.
How the Executive Order Supports EB-2 NIW Petitions
The EB-2 NIW allows foreign nationals with advanced degrees or exceptional ability to self-petition for permanent residency without requiring an employer sponsor or labor certification. Petitioners must demonstrate that their work serves the national interest of the United States. In recent years, the “national importance” element has become the area USCIS most consistently challenges in these petitions, requiring substantial documentation and strategic argumentation to overcome increased scrutiny.
The new Executive Order provides a powerful avenue for defining the national importance of the work of professionals in AI, data science, oncology research, biotechnology, and other related fields. It establishes clear and direct policy evidence that:
AI research and applications in healthcare are strategic national priorities.
Pediatric cancer innovation is a public health objective of the United States.
AI and data science professionals, not only medical doctors, contribute to national healthcare goals.
There is a national interest in attracting and retaining advanced researchers and AI innovators.
Fields Strengthened by the Executive Order
The Executive Order has implications across numerous professional fields, offering professionals a powerful pathway to articulate their work and align it with this national policy priority.
Artificial Intelligence: Machine learning researchers, AI engineers building healthcare tools
Biomedical Research: Cancer biologists, immunotherapy and drug discovery scientists
Health Data Analytics: Bioinformaticians, clinical data scientists, data architects
Medical Innovation: Digital diagnostics developers, AI-enabled imaging specialists
Clinical Practice: Pediatric oncologists, hematology researchers
Computational Sciences: Predictive modeling experts, cloud computing specialists in healthcare
Biotechnology: Translational researchers, precision medicine developers
Professionals in these fields can align their EB-2 NIW petitions with the Executive Order by demonstrating how their work can contribute to advancing AI innovation and data science to improve pediatric oncology diagnosis, treatment, and cures, in line with the goals of the Order.
EB-2 NIW Strategy: Building a Strong Legal Case
While the Executive Order sets the policy context, working in the implicated fields alone does not automatically establish the requisite national importance. Petitioners must still establish eligibility under the three-prong framework set forth in Matter of Dhanasar: (1) the proposed endeavor has substantial merit and national importance, (2) the petitioner is well positioned to advance the endeavor, and (3) it would benefit the United States to waive the job offer and labor certification requirements.
While the Executive Order provides a favorable policy backdrop for demonstrating national importance of the work related to advancing pediatric oncology care through AI, it is still crucial to present clear and detailed plans and strategies for implementing this work.
Defining a Clear, Specific and Innovative Proposed Endeavor
USCIS frequently denies NIW cases when the proposed endeavor is broad or vague. Under this Executive Order, general statements like “I will use AI to improve cancer research” are not enough. The petitioner must describe a specific, credible plan of work that aligns with U.S. priorities. Examples include:
Developing AI tools to improve early diagnosis of pediatric brain tumors
Designing predictive analytics to optimize pediatric chemotherapy dosing
Building data platforms facilitating nationwide pediatric cancer trials
Developing machine learning tools for rare childhood cancer genomics
Petitioners should be able to describe, in a detailed yet concise manner, what they will do, how they will do it, and why it represents an advancement in a field recognized as a U.S. national priority.
Demonstrating Record of Success with a Broad Impact
Recent USCIS trends place increasing emphasis on whether the petitioner has a demonstrated record of success that has contributed to the broader advancement of their field. While the list below does not represent rigid requirements, strong evidence of broad impact may include:
Published, peer-reviewed research demonstrating impact on the field
Adoption or replication of proven AI or data science models by others in the healthcare field
Roles held in collaborative or interdisciplinary initiatives related to pediatric oncology or AI in healthcare
Presentations or invited talks at conferences
Letters from recognized experts in AI, oncology, or medical research attesting to the influence and the widespread dissemination of the work
Presenting Concrete Plans for Advancing the Proposed Endeavor in the U.S.
USCIS has increasingly focused on the feasibility and scalability of the proposed endeavor. Petitioners must present credible, detailed plans showing how their work will be implemented and scaled within the United States. Examples of strong evidence of concrete plans include:
A clear, step-by-step plan for collaborating with U.S.-based institutions and organizations in the field of developing AI or data solutions to advance oncology research
Letters from institutions and organizations in the field expressing interest in collaborating with you to develop AI solutions that advance oncology research
Specific mechanisms for broad dissemination of your work such as professional presentations and open-source initiatives
Resource support from institutions and organizations demonstrating the feasibility of expanding the work
Recognition by U.S. experts or professional organizations through letters validating your ability to contribute to the advancement of the fields in the United States
Key Takeaways
The Executive Order has explicitly recognized AI-driven innovation and health data modernization as national priorities. In the immigration context, the Order establishes a powerful policy backdrop that opens a new strategic pathway for EB-2 NIW petitioners working at the intersection of medical research, AI and data science by providing a clear framework for demonstrating national importance in these fields.
As the United States advances its leadership in medical innovation, it will increasingly rely on researchers, engineers, physicians, bioinformaticians, and technologists capable of delivering measurable impact. The Order’s integration of AI, health data, and biomedical research makes interdisciplinary expertise a strategic advantage, positioning such candidates as valuable to U.S. national interests.
Those developing AI solutions that improve patient outcomes, accelerate cancer discovery, or advance the integration of health data can now benefit from both a national mission and policy environment that recognize the significance of their work.
USPTO’s Revised Inventorship Guidance for AI-Assisted Inventions: What Changed, What Stayed, and What Practitioners Should Do Now
The U.S. Patent and Trademark Office (USPTO) has issued updated examination guidance (“New Guidance”) on inventorship in applications involving artificial intelligence (AI). The document rescinds and replaces the February 13, 2024 guidance and clarifies how inventorship should be determined when AI is used in the inventive process. The New Guidance jettisons the Pannu test, which focuses on joint inventorship issues, for this purpose and instead focuses on conception. This action is another step by the new USPTO leadership to bolster the patent system. It remains to be seen whether the courts will agree with this approach; it is possible that some patents granted by the USPTO under this guidance will be found invalid by the courts, and the outcome will remain highly fact dependent. Below is a detailed breakdown of the key changes and practical implications for patent strategy across utility, design, and plant filings.
The Big Reset: Prior AI Inventorship Guidance Withdrawn
The USPTO expressly rescinds the February 13, 2024 “Inventorship Guidance for AI-Assisted Inventions” in its entirety and withdraws its application of the Pannu joint inventorship factors to AI-assisted inventions as a general inventorship framework. The Office emphasizes that Pannu was and remains a doctrine to determine joint inventorship among multiple natural persons; it does not apply where the only other “participant” is an AI system, which by definition is not a person. In short, the prior approach is off the table, and the agency has refocused the analysis on traditional legal principles of conception and inventorship.
No “Joint Inventor” Question When AI Is the Only Other Actor
The New Guidance squarely states that when a single natural person uses AI during development, the joint inventorship inquiry does not arise because AI systems are not “persons” and therefore cannot be joint inventors. This resolves prior confusion over whether to run a joint-inventorship analysis in single-human-plus-AI scenarios; the answer is no. The presence of AI tools does not, by itself, trigger multi-inventor analysis.
One Uniform Standard: Traditional Conception Governs AI-Assisted Work
The USPTO underscores that the legal standard for inventorship is the same for all inventions—there is no separate or modified standard for AI-assisted inventions. Only natural persons can be inventors under Federal Circuit precedent. Accordingly, AI—regardless of sophistication—cannot be named as an inventor or joint inventor. The touchstone of inventorship remains “conception,” defined as the formation in the inventor’s mind of a definite and permanent idea of the complete and operative invention. Conception requires a specific, settled idea and a particular solution to the problem at hand; a general goal or research plan is insufficient.
Conception Requires Particularity and Possession
The New Guidance reiterates that inventorship is a highly fact-intensive inquiry focused on whether the inventor possessed knowledge of all limitations of the claimed invention such that only ordinary skill would be necessary to reduce it to practice, without extensive research or experimentation. The analysis turns on the ability to describe the invention with particularity; absent such a description, an inventor cannot objectively substantiate possession of a complete mental picture. This framing reinforces the benefit of robust, contemporaneous documentation of the inventor’s mental steps and the concrete claims that flow from them—especially when AI has been used to generate or refine inputs that inform the claimed features.
AI as a Tool: Presumptions, Naming Practices, and Rejections
As a practical matter, the USPTO maintains its presumption that inventors named on the application data sheet or oath/declaration are the actual inventors. However, the Office instructs that a rejection under 35 U.S.C. §§ 101 and 115 (or other appropriate action) should be applied to all claims in any application that lists an AI system or other non-natural person as an inventor or joint inventor. Conceptually, AI systems—including generative AI and computational models—are “instruments” used by human inventors, analogous to laboratory equipment, software, and databases. Inventors may use the “services, ideas, and aid of others” without those sources becoming co-inventors; the same principle applies to AI systems. When a single natural person is involved, the only question is whether that person conceived the invention under the traditional conception standard.
Multiple Human Contributors: Joint Inventorship Still Uses Pannu
When multiple natural persons are involved in creating an invention with AI assistance, traditional joint inventorship principles, including the Pannu factors, apply to determine whether each person qualifies as a joint inventor. Each purported inventor must: (1) contribute in some significant manner to the conception or reduction to practice; (2) make a contribution that is not insignificant in quality relative to the full invention; and (3) do more than merely explain well-known concepts or the current state of the art. Importantly, the mere involvement of AI tools does not alter the joint inventorship analysis among human contributors. Practitioners should evaluate and document each human’s inventive contribution against the claims, as usual.
Beyond Utility: Design and Plant Patents Are Covered
The New Guidance confirms that the same inventorship inquiry applies to design patents and utility patents. For plant patents, the statute and Federal Circuit precedent require that the inventor contributed to the creation of the plant (not just discovered and asexually reproduced it). The USPTO clarifies that these principles apply equally when AI assists the development of designs or plant varieties. As a result, applicants should treat AI involvement as they would any other tool across all patent types—demonstrating human conception and contribution consistent with the governing statutes and case law.
Priority and Benefit Claims: Inventor Identity Must Align
For applications and patents claiming benefit or priority (U.S. or foreign) under 35 U.S.C. §§ 119, 120, 121, 365, or 386, the named inventors must be the same—or at least one named joint inventor must be in common—and must be natural persons. Priority claims to foreign applications that name an AI tool as the sole inventor will not be accepted. Where a foreign application names both a natural person and a non-natural person (e.g., an AI) as joint inventors, the U.S. filing must list only the natural person(s), including at least one in common with the foreign filing. The same approach applies to national stage entries under 35 U.S.C. § 371: name the natural person(s) as inventor(s) in the application data sheet accompanying the initial U.S. submission.
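The alignment rule reduces, in effect, to a set check: strip out any non-natural “inventors,” confirm at least one natural person remains on the foreign filing, and confirm at least one named U.S. inventor is in common. The sketch below is our own restatement of that logic for illustration, not USPTO tooling.

```python
def us_priority_claim_acceptable(
    us_inventors: set[str],
    foreign_inventors: set[str],
    non_natural_persons: set[str],  # e.g., AI tools named on the foreign filing
) -> bool:
    """Illustrative restatement of the inventor-alignment rule described above."""
    foreign_naturals = foreign_inventors - non_natural_persons
    if not foreign_naturals:
        return False  # a foreign filing naming only an AI tool is not accepted
    if us_inventors & non_natural_persons:
        return False  # the U.S. filing must list natural persons only
    return bool(us_inventors & foreign_naturals)  # at least one inventor in common

# Foreign filing names a human and an AI tool; U.S. filing lists the human only.
assert us_priority_claim_acceptable({"A. Human"}, {"A. Human", "DABUS"}, {"DABUS"})
```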
Practical Implications and Action Items for Patent Strategy
Document human conception thoroughly. Maintain detailed inventor notebooks (electronic or paper) that describe, in the inventor’s own words, the definite and permanent idea of the complete and operative invention—especially the claim limitations—before or contemporaneous with AI use. Capture prompts, inputs, iterations, and the human analysis that distills AI outputs into specific claim language or design features. This helps establish the inventor’s possession and particularity; a minimal record-keeping sketch follows these action items.
Treat AI outputs like lab results, where feasible. If AI “conceives” part of the “invention” to be claimed, conduct careful analysis of whether there is sufficient human conception. Position AI as an instrument that provides data or suggestions. The decisive step is the human’s mental formation of the complete and operative invention—a specific solution and claimable features—not the mere receipt of AI-generated content.
Adjust cross-border filing strategies. In jurisdictions that allow non-human inventors, plan ahead to avoid misalignment with U.S. requirements. Ensure that priority chains retain at least one natural-person inventor in common and that U.S. filings exclude non-natural persons from inventorship. Coordinate with foreign counsel on naming conventions to preserve U.S. priority.
Prepare for examiner scrutiny. Anticipate that examiners may question inventorship in filings that discuss AI extensively. Have ready documentation demonstrating human conception and the particular claim limitations conceived by the human inventor(s), as well as appropriate inventorship declarations.
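To make the documentation advice above concrete, here is a minimal sketch of the kind of contemporaneous record an inventor might keep for each AI interaction. The ConceptionRecord structure and all of its fields are hypothetical illustrations; the USPTO prescribes no particular format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConceptionRecord:
    """Hypothetical log entry pairing an AI interaction with the human
    analysis that turned its output into claimable subject matter."""
    inventor: str
    ai_tool: str                  # e.g., model name and version
    prompt: str                   # exactly what was asked of the AI
    ai_output_summary: str        # what the tool returned, in brief
    human_contribution: str       # the inventor's own analysis and refinement
    claim_limitations: list[str]  # specific limitations traceable to the human
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: documenting that the definite, complete idea came from the human.
record = ConceptionRecord(
    inventor="A. Researcher",
    ai_tool="generative-model-v1 (hypothetical)",
    prompt="Suggest candidate linker chemistries for compound X.",
    ai_output_summary="Listed five known linker families.",
    human_contribution=(
        "Selected family 3, modified the linker length, and recognized the "
        "unexpected stability gain that supports the claimed range."
    ),
    claim_limitations=["linker of 4-6 carbon atoms", "stability > 90% at pH 7"],
)
print(record.timestamp)
```

Whatever form such records take, the point is the same: a timestamped trail showing that the specific, settled idea behind each claim limitation was formed in the inventor's mind, not merely received from the tool.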
What Did Not Change
Despite evolving capabilities, AI cannot be named as an inventor in the U.S., and the inventorship standard is unchanged: it centers on human conception.
Traditional inventorship doctrines continue to apply. Conception, joint inventorship, and priority requirements remain grounded in statutory and Federal Circuit principles. AI is treated as a tool, not as a co-inventor.
Claims drive the analysis. Whether one inventor or many, the question is who (or what) conceived the limitations of the claimed invention. The presence of AI does not alter the claim-centric nature of inventorship assessments.
Bottom Line
The USPTO’s New Guidance simplifies and clarifies the path forward: there is no AI-specific inventorship test, AI cannot be named as an inventor, and traditional conception standards govern. For single-inventor, AI-assisted cases, focus on documenting human conception with particularity. For multi-inventor scenarios, continue to apply Pannu among human contributors. Extend these principles across utility, design, and plant filings, and ensure priority claims align with natural person inventorship to avoid fatal defects. With careful documentation and claim-focused analysis, AI can be a powerful instrument in innovation while maintaining clear, defensible human inventorship.
SILENT SWITCH? New Lawsuit Alleges Google Uses Gemini AI to “Secretly” Read Gmail, Chat, and Meet Conversations
In the latest of a spate of lawsuits targeting AI tools, a new putative class action filed in the Northern District of California alleges that tech giant Google activated its Gemini AI features across its portfolio of services without obtaining user consent, in violation of the California Invasion of Privacy Act (“CIPA”).
According to the complaint, Google previously offered Gemini “Smart features” as an opt-in tool, but allegedly switched this setting on for all Gmail, Chat, and Meet accounts on or around October 10, 2025, enabling its AI to track users’ private communications on those platforms without their knowledge or consent, in violation of CIPA Section 632, which prohibits the recording of confidential communications without consent. The filing states that Google tracks these private communications with Gemini by default, requiring users to affirmatively find this data privacy setting and shut it off, despite never “agreeing” to such AI tracking in the first place. The complaint alleges that despite this setting being in default “opt out” status since October 10, the setting is still worded as an “opt in” feature: “When you turn this setting on, you agree . . .” According to the complaint, this renders the privacy settings offered by Google effectively meaningless.
The plaintiff, Thomas Thele, alleges he did not turn on this setting, was not notified of the change, and did not consent to the collection or analysis of information contained in his communications. While Thele does not identify the precise Gmail, Chat, and/or Meet communications that he sent or received with the “Smart features” setting turned on, the complaint identifies the categories of information that could allegedly be derived from these communications, including financial records, employment information, medical information, political and religious affiliations, the identities of family members and contacts, and social habits and activities.
Plaintiff purports to represent the following potentially massive class: “All natural persons residing in the United States with Google accounts whose private communications in Gmail, Chat, and/or Meet were tracked by Google’s Gemini AI after Google turned on ‘Smart features’ in those persons’ data privacy account settings.”
In response to viral social media posts accusing Google of automatically opting Gmail users into AI model training through its “smart features,” Google has issued a statement refuting claims that it uses Gmail content to train the Gemini AI model. However, the sufficiency and truth of Plaintiff Thele’s allegations are yet to be tested. We’ll keep a close eye on this one.
The case is Thele v. Google, LLC, No. 5:25-CV-09704 (N.D. Cal. Nov. 11, 2025).
New Inventorship Guidance on AI-Assisted Inventions: AI Can’t Be an Inventor, But AI Can Be a Tool in the Inventive Process (For Now…)
As readers may recall, in February 2024, the USPTO issued guidance on inventorship in AI-assisted inventions, which we wrote about here. On November 26, 2025, the USPTO rescinded that guidance and replaced it with new guidance.
By way of background, the February 2024 Guidance analyzed the naming of inventors for AI-assisted inventions using the Pannu factors, which state that an inventor must (1) contribute in some significant manner to the conception or reduction to practice of an invention, (2) make a contribution to the claimed invention that is not insignificant in quality when measured against the full invention, and (3) do more than merely explain well-known concepts and/or the current state of the art. In the February 2024 Guidance, the USPTO noted that in the context of AI-assisted inventions, the Pannu factors were informed by the following considerations:
Use of an AI system in creating an AI-assisted invention does not categorically negate the natural person’s contributions as an inventor, if the natural person “contributes significantly” to the invention
Providing a recognized problem or a general goal or research plan to an AI system does not rise to the level of conception
Reduction of the output of an AI system to practice alone is not a significant contribution that rises to the level of inventorship
A natural person who designs, builds, or trains an AI system in view of a specific problem to generate a particular solution could be an inventor
Owning or overseeing an AI system used in the creation of an invention does not constitute a significant contribution to conception
The November 2025 Guidance withdraws the analysis of the Pannu factors, indicating that the Pannu factors only apply when determining when multiple natural persons qualify as inventors. The November 2025 Guidance further emphasizes that the same legal standard for determining inventorship applies, whether or not AI systems were used in the inventive process. AI systems are described as tools, analogous to laboratory equipment, computer software, research databases, and the like. And under well-established practice, inventors can use the services of others – including AI systems – without those sources becoming co-inventors.
This Guidance significantly simplifies the analysis of how AI systems can be used in the inventive process. By removing the Pannu factors from the inventorship analysis, the November 2025 Guidance allows inventors to use AI systems extensively in developing an invention without needing to scrutinize the extent of that use – such as tracking the prompts an inventor used to generate an invention, or characterizing whether those prompts describe a general problem or direct the AI system toward a particular solution.
This change may also reduce the likelihood of AI creators being deemed co-inventors, because they are considered only tool-builders. For example, in an AI-assisted drug discovery collaboration between AI experts and drug discovery scientists, under the new Guidance the drug discovery scientists may use the services, ideas, and aid of the AI system without any concern that the AI system or its creators could become co-inventors. However, whether the AI experts themselves qualify as inventors should still be assessed.
In short, the November 2025 Guidance reiterates that AI systems themselves cannot be named as inventors on a patent application or issued patent. This is in line with the Federal Circuit’s decision in Thaler v. Vidal, 43 F.4th 1207 (Fed. Cir. 2022), which held that only natural persons can be listed as inventors on a patent application or issued patent. Thus, the past policy of rejecting claims under 35 U.S.C. §§ 101 and 115 in any application that lists an AI system as an inventor or joint inventor remains in place. Relatedly, because only natural persons can be listed as inventors, priority claims to foreign applications that name an AI tool as the sole inventor will not be accepted, and the application data sheet (ADS) for U.S. patent applications should name only natural persons, even when claiming priority to a foreign application that lists natural persons and AI tools as joint inventors.
We would observe that this is an administrative change, subject to rescission by subsequent notice and comment. Because this may not be a permanent change, it may still be worth preserving evidence of how inventors use AI systems in the inventive process to defend against future challenges to patent validity.
Navigating GPU Export Controls and AI Use Restrictions in Data Center Operations
Within data centers, Graphics Processing Units (GPUs) have emerged as key components, transforming how complex computations are handled. GPUs are employed for their ability to perform parallel data processing, making them ideal for a range of tasks, including scientific computations, machine learning algorithms, and processing large-scale data. As demand for infrastructure capable of supporting AI model training and inference has grown, the ability to host GPU servers has become increasingly important for data centers.
The increase in processing power that GPUs provide as compared to central processing units (CPUs) has, however, given rise to disquiet amongst Western governments. In particular, the United States — where the biggest producers of GPUs are based — has expressed concern over their potential application for military and malign uses, and the Biden administration in January 2025 introduced comprehensive restrictions on the export and use of GPUs (the January 2025 AI Diffusion Rule). The Trump administration has emphasized as a policy imperative the continuation (and even tightening) of export restrictions, but it has revoked the Biden-era rule and indicated that it will replace it with new restrictions, which as of the date of this GT Advisory have not yet been issued. This regulatory uncertainty leaves industry in an interim phase, questioning how best to manage current and possible future restrictions on GPU exports and use.
Historically, data center operators that merely hosted the GPU servers of their tenants (rather than exporting or providing GPUs as a service) may have assumed U.S. export controls were not a material compliance concern. That assumption, however, may no longer be appropriate. U.S. export controls apply to the GPU hardware in perpetuity, meaning that even non-U.S. operators may face liability under the Export Administration Regulations (EAR) if restricted GPUs, controlled technology, or sanctioned end users are present in their facilities, even indirectly through tenants or sub-tenants. As regulators focus increasingly on the downstream use and custody of advanced computing hardware, data center operators should be prepared to demonstrate robust compliance measures and control frameworks. This includes knowing what GPUs are being hosted, where they were developed and manufactured, who owns and accesses them, and for what purposes they are used. This GT Advisory considers how data center operators who merely host GPU servers might navigate this hugely sensitive area.
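As a rough illustration of what such a control framework might track, the sketch below screens a hosted GPU server against an internal inventory record. The ECCN values, the RESTRICTED_ECCNS set, the screening list, and the screen_hosted_gpu function are illustrative placeholders of our own; actual classification and screening must follow the EAR and current BIS rules, with export counsel involved.

```python
from dataclasses import dataclass

# Illustrative placeholders only; real screening uses official BIS and
# sanctions lists and counsel-reviewed classifications.
RESTRICTED_ECCNS = {"3A090", "4A090"}       # example advanced-computing ECCNs
SANCTIONED_PARTIES = {"Blocked Corp Ltd"}   # stand-in for real screening lists

@dataclass
class HostedGpu:
    model: str
    eccn: str      # export classification of the hardware
    owner: str     # tenant or sub-tenant that owns the server
    end_use: str   # declared purpose, e.g., "AI model training"
    location: str  # facility country

def screen_hosted_gpu(gpu: HostedGpu) -> list[str]:
    """Return a list of compliance flags for a hosted GPU server."""
    flags = []
    if gpu.eccn in RESTRICTED_ECCNS:
        flags.append("restricted ECCN: verify a license or exception applies")
    if gpu.owner in SANCTIONED_PARTIES:
        flags.append("owner appears on a screening list: escalate immediately")
    if not gpu.end_use:
        flags.append("no declared end use: obtain end-use certification")
    return flags

gpu = HostedGpu("H100-class accelerator", "3A090", "Tenant A", "AI inference", "DE")
for flag in screen_hosted_gpu(gpu):
    print(flag)
```

The value of even a toy check like this is the record it forces the operator to keep: for every hosted server, there is a documented classification, owner, and end use to show regulators.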
We have produced this GT Advisory to give an overview of the current U.S. export and use restrictions and to offer insights that participants in this sector may want to consider to mitigate regulatory and reputational risk and prepare for future regulatory changes.
Continue reading the full GT Advisory.