AI Patent Law: Navigating the Future of Inventorship
As a patent attorney experienced in transformer-based AI architectures and large language models (LLMs), I want to share insights on the evolving landscape of AI-assisted inventions. This is particularly relevant in view of the USPTO’s 2024 AI Inventorship Guidance (“Inventorship Guidance for AI-Assisted Inventions,” published February 13, 2024, 89 FR 10043, available at https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions), which informs practitioners and the public about inventorship requirements for AI-assisted patent claims.
To paraphrase the AI Inventorship Guidance, every patent claim must reflect a significant contribution from a human inventor: each claim requires at least one natural person who made a significant contribution to the claimed invention. When AI systems are used in claim drafting, practitioners must be particularly vigilant if the AI introduces alternate embodiments not contemplated by the named inventors, as this requires careful reevaluation of inventorship to ensure proper attribution. Additionally, if any claims are found to lack proper human inventorship, meaning no natural person made a significant contribution, those claims must be canceled or amended to reflect proper human inventorship.
Human Contribution to Invention
The USPTO requires that at least one human inventor demonstrates significant involvement in the invention process. This contribution must extend beyond presenting a problem to the AI or merely recognizing the AI’s output.
Compliance with Pannu Factors
To qualify as an inventor, a person must meet the Pannu Factors:
Significant contribution to conception or reduction to practice of the invention.*
Contributions that are not insignificant relative to the entire invention.
Activities beyond explaining well-known concepts or reiterating prior art.
Substantial Contribution to the Claimed Invention
The human inventor’s input must be meaningful when evaluated against the complete scope of the claimed invention. Examples of substantial contributions include:
Constructing specific AI prompts designed to solve targeted problems.
Expanding on AI-generated outputs to develop a patentable invention.
Designing or training AI systems tailored for specific problem-solving purposes.
What Constitutes Inventorship in AI-Assisted Innovations?
Several activities can establish inventorship in AI-assisted technologies:
Creating detailed prompts intended to generate targeted solutions from AI systems.
Contributing substantively beyond AI outputs to finalize the invention.
Conducting experiments based on AI results in unpredictable fields and recognizing the inventive outcomes.
Designing, building, or training AI systems to address specific challenges.
What Does Not Constitute Inventorship?
Certain activities fail to meet the threshold for inventorship, such as:
Recognizing a problem or presenting a general goal to the AI.
Providing only basic input without significant engagement in problem-solving.
Simply reducing AI-generated outputs to practice.
Claiming inventorship based solely on oversight or ownership of the AI system.
Practical Strategies for Patent Practitioners
Document Human Contributions: Maintain detailed records of human involvement in the invention process to establish inventorship.
Evaluate Claim Scope: Ensure each claimed element is supported by sufficient human input to meet the USPTO’s requirements.
Correct Inventorship Issues Promptly: Address discrepancies in inventorship to protect the patent’s validity and enforceability.
Drawing from my experience guiding AI-assisted innovations through the patent process, I have seen how vital these strategies are for robust IP protection.
* While the Pannu factors do mention reduction to practice, the Federal Register clarifies that “[t]he fact that a human performs a significant contribution to reduction to practice of an invention conceived by another is not enough to constitute inventorship” (https://www.federalregister.gov/documents/2024/02/13/2024-02623/inventorship-guidance-for-ai-assisted-inventions). Reduction to practice without simultaneous conception (such as in unpredictable arts) is insufficient to demonstrate inventorship. Inventorship continues to require human conception of the invention.
More States Ban Foreign AI Tools on Government Devices
Alabama and Oklahoma have become the latest states to ban from state-owned devices and networks certain AI tools with links to foreign governments.
In a memorandum issued to all state agencies on March 26, 2025, Alabama Governor Kay Ivey announced new policies banning from the state’s IT network and devices the AI platforms DeepSeek and Manus due to “their affiliation with the Chinese government and vast data-collection capabilities.” The Alabama memo also addressed a new framework for identifying and blocking “other harmful software programs and websites,” focusing on protecting state infrastructure from “foreign countr[ies] of concern,” including China (but not Taiwan), Iran, North Korea, and Russia.
Similarly, on March 21, 2025, Oklahoma Governor Kevin Stitt announced a policy banning DeepSeek on all state-owned devices due to concerns regarding security risks, regulatory compliance issues, susceptibility to adversarial manipulation, and lack of robust security safeguards.
These actions are part of a larger trend, with multiple states and agencies having announced similar policies banning or at least limiting the use of DeepSeek on state devices. In addition, 21 state attorneys general recently urged Congress to pass the “No DeepSeek on Government Devices Act.”
As AI technologies continue to evolve, we can expect more government agencies at all levels to conduct further reviews, issue policies or guidance, and/or enact legislation regarding the use of such technologies with potentially harmful or risky affiliations. Likewise, private businesses should consider undertaking similar reviews of their own policies (particularly if they contract with any government agencies) to protect themselves from potential risks.
Uncertainty Means AI Training Can Continue
Over 30 lawsuits challenging the training of Generative AI on copyrighted materials are pending, most in federal courts across the country. The copyrighted materials range from news stories to photographs to music. The law is unsettled whether such training violates copyright law.
However, the uncertainty means that training can continue until we get final guidance from the courts (or the legislature), both of which take time. For example, Thomson Reuters, which provides Westlaw, sued a competitor for copyright infringement in May 2020. Last month, the court partially granted summary judgment, finding that the headnotes and numbering were copied. The case is still pending trial.
Of course, it will be impossible to “untrain” the GenAI engines, which are and will be in use. This may leave the courts to grapple with the difficult question of what remedy is appropriate, if and when it is determined that such training does not constitute “fair use” or fall within an exception for data mining.
In a decision issued Tuesday, Judge Eumi K. Lee ruled that it remained an “open question” whether using copyrighted materials to train AI is illegal – meaning UMG and other music companies could not show that they faced the kind of “irreparable harm” necessary to win such a drastic remedy.
www.billboard.com/…
The Carbon Cost of AI and the Legal Risk of Data Center Energy Use

Artificial intelligence is advancing at breakneck speed, transforming sectors from healthcare to finance. But behind the innovation lies a lesser-known consequence: AI’s enormous energy appetite. In 2023 alone, U.S. data centers consumed roughly 4% of the country’s electricity. By 2030, that […]
Ghosted by Your Insurer? The Truth Behind Instant Claim Rejections

Article written by JJ Palmer, Consumer Law Specialist, Lawyer Monthly – Updated April 2025. More than a year ago, UnitedHealthcare Group Inc.’s CEO, Brian Thompson, faced intense backlash after it emerged that the company had implemented an artificial intelligence (AI) system designed to automatically reject […]
FTC Alleges Fintech Cleo AI Deceived Consumers
On March 27, 2025, the Federal Trade Commission (FTC) filed a lawsuit and proposed settlement order resolving claims against Cleo AI, a fintech that operates a personal finance mobile banking application through which it offers consumers instant or same-day cash advances. The FTC alleges that Cleo deceived consumers about how much money they could get and how fast that money could be available, and that Cleo made it difficult for consumers to cancel its subscription service.
Pointing to those allegations, the FTC alleges Cleo (1) violated Section 5 of the Federal Trade Commission Act (FTC Act) by misrepresenting that consumers would receive—or would be likely to receive—a specific cash advance amount “today” or “instantly” and (2) violated the Restore Online Shoppers’ Confidence Act (ROSCA) by failing to conspicuously disclose all material transaction terms before obtaining consumers’ billing information and by failing to provide simple mechanisms to stop recurring charges.
“Cleo misled consumers with promises of fast money, but consumers found they received much less than the advertised hundreds of dollars promised, had to pay more for same day delivery, and then had difficulty canceling,” said Christopher Mufarrige, Director of the FTC’s Bureau of Consumer Protection.
The FTC cites to consumer complaints in support of its action against Cleo, including one stating: “There’s no other way for me to say it. I need my money right now to pay my rent. I have no other option I can’t wait 3 days. I can’t wait 1 day I need it now. I would never have used Cleo if I would have thought I would ever be in this situation.”
The FTC’s Allegations
In its complaint, filed in the U.S. District Court for the Southern District of New York, the FTC alleges that Cleo violated Section 5 of the FTC Act by:
“Up To” Claims. Advertising that its customers would receive “up to $250 in cash advances,” when the consumer is informed of the cash advance amount they can actually receive only after subscribing to a plan and after Cleo sets the payment date for the subscription. For “almost all consumers, that amount is much lower than the amount promised in Cleo’s ads.”
Undisclosed Fees. Advertising that its customers would obtain cash advances “today” or “instantly,” when Cleo actually charges an “express fee”—sometimes disclosed in a footnote—of $3.99 to get the cash same-day, and, even then, the cash may not arrive until the next day.
In addition, the FTC’s complaint alleges that Cleo violated Section 4 of ROSCA by:
Inadequate Disclosures. Failing to clearly and conspicuously disclose all material terms before obtaining customers’ billing information.
Inadequate Cancellation Mechanisms. Failing to permit consumers with an outstanding cash advance to cancel their subscriptions through the app.
Proposed Consent Agreement
The FTC’s proposed consent order would be in effect for 10 years and require that Cleo pay $17 million to provide refunds to consumers harmed by the company’s practices. The consent order would restrict Cleo from misleading consumers about material terms of its advances and require that it obtain consumers’ express, informed consent before imposing charges. More specifically, the consent order:
Prohibits Cleo from misrepresenting the amount of funds available to a consumer, when funds will be available, any applicable fees (including the nature, purpose, amount, or use of a fee), consumers’ ability to cancel charges, or the terms of any negative option feature.
Requires Cleo to clearly and conspicuously disclose, prior to obtaining the consumer’s billing information, all material terms, including any charges after a trial period ends, when a consumer must act to prevent charges, the amount the consumer will be charged unless steps are taken to prevent the charge, and information for consumers to find the simple cancellation mechanism.
Requires Cleo to provide a simple mechanism for a consumer to cancel the negative option feature, avoid being charged, and immediately stop recurring charges. The cancellation method must be through the same medium the consumer used to consent to the negative option feature.
The Commission voted 2-0 to issue the Cleo complaint and accept the proposed consent agreement.
Takeaways
The FTC has increased enforcement activities for negative options, such as last year’s enforcement action against Dave, Inc., another cash advance fintech company, which we wrote about previously. This attention on negative options, and consumers’ ability to easily cancel negative options, may provide insight into the FTC’s regulatory agenda, given that the remainder of its Click-to-Cancel Rule takes effect on May 14, 2025.
The FTC recently filed a brief in defense of its Click-to-Cancel Rule, vigorously defending the FTC’s rulemaking against trade association challenges consolidated in the Eighth Circuit. The FTC’s brief puts an end to speculation that the Commission may rethink or roll back the rule given the recent administration change and shifts in FTC leadership.
Businesses should be preparing to adopt changes to implement the Click-to-Cancel Rule, to the extent not already in process. The FTC’s complaint against Cleo should also serve as a reminder that businesses that employ “up to” claims, complex fee structures, or negative option offers should be careful to monitor their conduct in light of developments within the FTC and the other federal and state agencies that police advertising and marketing practices.
Virginia Governor Vetoes AI Bill As States Struggle to Approve Regulations
Virginia Governor Glenn Youngkin vetoed an artificial intelligence (“AI”) bill on March 24 that would have regulated how employers used automation in the hiring process. While the veto relieves employers of a new layer of regulation, the bill represented one of several state-level efforts to prevent potential harmful uses of AI in the employment context.
The Virginia General Assembly passed the “High-Risk Artificial Intelligence Developer and Deployer Act” during the 2025 legislative session. The bill would have regulated both creators and users of AI technology across multiple use cases, including employment. It defined “high-risk artificial intelligence” to cover any AI system intended to make autonomous consequential decisions or to serve as a substantial factor in making consequential decisions. As relevant to the employment context, “consequential decisions” included decisions about “access to employment.”
The law would have required Virginia employers to implement safeguards to prevent potential harm from “high-risk” AI, including adopting a risk management policy and conducting an impact assessment for the use of the technology. It also would have required users of covered AI systems to disclose their use to affected consumers, including employment applicants. The bill called for enforcement by the Virginia Attorney General only, with designated civil penalties for violations and no private right of action. But it also specified that each violation would be treated separately, so it created the potential for significant penalties if, for example, an employer failed to disclose its use of AI to a large group of applicants, resulting in a $1,000 penalty for every applicant impacted.
Youngkin said he vetoed the bill because he feared it would undermine Virginia’s progress in attracting AI innovators to the Commonwealth, including thousands of new tech startups. He also said existing laws related to discrimination, privacy and data use already provided necessary consumer protections related to AI. Had the bill avoided the governor’s veto pen, Virginia would have joined Colorado as the first two states to approve comprehensive statutes specifically governing the use of AI in the employment context. The Colorado law, passed in 2024, will become effective on February 1, 2026 and has many similarities to the bill Youngkin vetoed, including requirements that users of high-risk AI technology exercise reasonable care to prevent algorithmic discrimination.
Other states have laws that touch on AI-related topics, but lack the level of detail and specificity contained in the Colorado law. In several more states, attempts to regulate the use of AI in the employment context are meeting similar fates to Virginia’s law. For example, Texas legislators recently abandoned efforts to pass an AI bill modeled after the Colorado legislation. Similar bills have failed or appear likely to fail in Georgia, Hawaii, Maryland, New Mexico and Vermont. And even in states with more employment-related regulations like Connecticut, Democratic Governor Ned Lamont has resisted efforts by lawmakers to push through AI regulations. The exception to the trend may be California, where legislators are continuing to pursue legislation – A.B. 1018 – that closely resembles both the Colorado and Virginia bills with even steeper penalties.
In all, states remain interested in regulation of emerging AI tools, but have yet to align on the best way to handle such regulation in the employment context. Still, employers should use caution when using automated tools or outsourcing decision-making to third parties that use such technology. Existing laws, including the Fair Credit Reporting Act and Title VII of the Civil Rights Act, still apply to these new technologies. And while momentum for new state-level AI regulation seems stalled, employers should monitor state level developments as similar proposed laws proceed through state legislatures.
Cybernetic Teammate: A Practical Playbook for Harnessing Generative AI for Your Law Firm
Can generative AI function not merely as a tool but as a genuine teammate? This question was answered in a recently published Harvard Business School working paper,[1] led by researchers from HBS, Procter & Gamble, and The Wharton School, including Professor Ethan Mollick, among others.
The key results were striking. In a large-scale experiment with 776 professionals at Procter & Gamble (P&G), AI-assisted individuals performed at a level comparable to two-person teams without AI. Moreover, AI seemed to “bridge silos,” helping participants produce solutions outside their usual domain of expertise, and it also had a surprisingly positive impact on users’ emotional experience. These findings suggest that generative AI may transform how professionals collaborate, share expertise, and innovate.
In this article, I distill the core insights from the study, then explore how these lessons might be generalized for the legal sector. Finally, I propose an actionable roadmap for law firm partners. At a time when legal services are becoming ever more specialized, these findings offer a glimpse into how AI can help lawyers accelerate research, navigate complexity, and even increase overall job satisfaction.
Key Lessons from the Study
AI as a Substitute for Team Collaboration. Traditionally, P&G employees work in small cross-functional teams (e.g., pairing R&D with Commercial) to develop new product ideas. This environment mirrors law firms’ multi-practice teams, which often bring together litigators, transactional attorneys, compliance specialists, and more. In the experiment, individuals with AI support produced outputs whose quality matched that of two-person teams without AI. Put plainly, AI replaced part of the collaborative “benefit” that usually arises when more than one human is involved. Although human teamwork still has intrinsic value, the study showed that GenAI could function much like a teammate, an ever-present collaborator available on demand.
AI as a Knowledge and Expertise “Equalizer.” The paper also found that AI helped non-specialists produce specialized solutions. For instance, professionals lacking R&D experience still performed at high levels when aided by GenAI. Equally striking, the AI encouraged more balanced thinking: R&D employees generated commercial-focused proposals just as frequently as Commercial employees did, and vice versa. AI effectively bridged domain gaps. For law firms, a similar dynamic may be at play. Young associates, or attorneys venturing beyond their typical focus areas, may leverage GenAI to expand their range: drafting, refining, or ideating on topics outside their usual “comfort zone.”
Emotional and Motivational Boost. Contrary to common fears that technology can depersonalize work, participants using AI reported significantly higher positive emotions (e.g., enthusiasm, energy) and fewer negative emotions (e.g., frustration). The interactive nature of large language models (LLMs) may explain why. Simply put, workers felt more supported and less isolated. For the legal profession, where high stress and burnout are all too familiar, this aspect may hold particular promise. If AI can absorb some of the more routine or painstaking aspects of legal work, attorneys may feel more energized to handle the complex, human-centric tasks that truly require their expertise.
Why This Matters for Law Firms
Law firms today confront a complex blend of client demands, rapid regulatory changes, and cost pressures. Major client matters often require specialized skill sets across multiple legal domains: mergers and acquisitions, IP, antitrust, data privacy, environmental law, and so forth. Teams spend countless hours coordinating each piece of the puzzle. Moreover, the business side of law (marketing, client development, and knowledge management) often runs in parallel silos. In many firms, partners and practice leaders lament that cross-practice synergy does not happen as seamlessly as they would like.
The P&G study signals that GenAI can potentially step in as a “universal collaborator.” Instead of simply churning out boilerplate text, AI can:
Offer real-time expertise: Summarize new legislation, check case law, or offer multiple lines of argument as an “always-on collaborator.”
Enhance cross-practice synergy: Encourage a more balanced approach by exposing attorneys to perspectives outside their usual specialty.
Streamline work, reduce stress: Save hours of routine drafting or research and “hand off” partial tasks to the AI, freeing attorneys to focus on higher-level strategy or client engagement.
From Theory to Action: A Playbook for Law Firm Partners
Great insights, but how to put them into practice? Below is a suggested roadmap, based on the study’s findings, for law firms looking to integrate AI into their practice in a way that enhances teamwork, expertise, and job satisfaction.
Start Small: Pilot AI in a Targeted Practice Group. Begin by running a carefully designed pilot in a discrete practice area, say, corporate or litigation support, where tasks are high-volume and somewhat repeatable (e.g., drafting standard agreements, initial research, or summarizing depositions).
Assign an AI “Champion”: Choose a partner (or senior associate) who believes in the potential of AI and can serve as the go-to resource.
Define “Success” Metrics: For instance, measure (1) average time saved, (2) quality of deliverables, and (3) user satisfaction.
Redesign Workflows to Treat AI as a Team Member
If the P&G experiment is any indication, GenAI can step into roles traditionally filled by junior attorneys or paralegals. That doesn’t mean replacing people; rather, it means reshaping how tasks are allocated or how their job duties are defined.
Task Decomposition: Break down legal work into smaller chunks. Let AI handle discrete elements (e.g., searching for relevant cases, generating multiple versions of a clause).
Iteration & Review: Humans add the final legal judgment and personal expertise. As the study showed, the best outputs came from iterative, back-and-forth engagement with AI.
Invest in Prompt Crafting and AI Training
Those who benefited the most from AI in the P&G study were those who engaged iteratively. “Ask AI a question, incorporate its feedback, refine the prompt, and ask again.” This is a skill that can be honed.
Workshops and Templates: Provide attorneys with example prompts for different tasks: brief writing, interrogatories, contract drafting, client memos.
Build Confidence & Caution: Encourage lawyers to double-check references, confirm sources, and use AI outputs as a springboard rather than a final answer.
Foster an “AI + Human” Culture to Bridge Silos
The study’s participants found that AI broadened their horizons, making them more comfortable addressing topics beyond their functional expertise. Law firms can build on this by designing cross-practice “collaboration labs.”
Virtual Collaboration Sprints: Pair attorneys from different specialties, equip them with AI, and let them co-develop novel solutions, for instance, a “privacy + employment” approach to compliance.
Sharing Success Stories: Publicize how AI helped cross-practice collaboration, for example, a corporate lawyer who used AI to weigh in on IP issues or a litigator who used AI to refine transactional language. This normalizes cross-practice learning.
Monitor Emotional Engagement
One of the study’s most surprising findings is that AI can actually increase enthusiasm and decrease frustration. Law firm leadership might consider adding a short “emotional check-in” to pilot programs or post-project reviews.
Simple Pulse Surveys: Ask attorneys whether AI tools eased or exacerbated stress.
Emphasize Mentoring: If junior lawyers feel more energized using AI, ensure the firm invests that extra energy into training, professional development, and mentorship, rather than merely piling on more work.
Plan for Ethical and Risk Mitigation
Finally, as attorneys well know, legal practice demands rigorous adherence to client confidentiality, privilege, and ethical guidelines. Incorporate the following safeguards:
Confidentiality Protocols: Ensure AI platforms are approved, secure, and configured to avoid inadvertent disclosure of client data.
Transparent Boundaries: Partners should clarify how AI was used in drafting or research, especially for documents shared externally.
Ongoing Supervision: Even the best AI can produce flawed or “hallucinated” references. A structured quality-control protocol is essential for accuracy and ethical compliance.
Closing Thoughts
The Cybernetic Teammate study offers an eye-opening glimpse of how AI might reshape legal work. While lawyers have long relied on collaboration (juniors, seniors, experts, paralegals, librarians, and more), this new research suggests that generative AI can serve as a flexible, on-demand collaborator.
Far from merely automating tasks, AI’s language-based interface can stimulate creativity, bridge silos, and encourage greater engagement. In a legal world where time and expertise are invaluable, the potential to standardize routine tasks, extend specialized knowledge, and reduce stress is profound.
Yet, the lawyers and the firms that stand to gain the most from generative AI will be those that treat it as a genuine teammate. That means investing in prompt-crafting skills, rethinking staffing structures, systematically incorporating AI into everyday workflows, and ensuring ethical guardrails. By combining the best of human judgment with the scale and speed of AI, law firms can position themselves at the vanguard of modern legal practice.
In short, the arrival of the “cybernetic teammate” points not just to incremental improvement but to a chance for genuine transformation. As the study shows, AI may not only amplify the business side of law firms but also improve the experience of practicing attorneys. Done right, it can help create a culture of continuous learning, broader expertise, and higher-impact client service, creating a win-win-win: benefiting partners, associates, and clients alike.
[1] Dell’Acqua, Fabrizio, Charles Ayoubi, Hila Lifshitz, Raffaella Sadun, Ethan Mollick, Lilach Mollick, Yi Han, Jeff Goldman, Hari Nair, Stew Taub, and Karim R. Lakhani. “The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise.” Harvard Business School Working Paper, No. 25-043, March 2025.
Driving AI Adoption at Top Law Firms: Candid Insights from Innovation Leaders
In an industry known for tradition and caution, generative AI is rapidly transforming the legal profession. While the promise of greater efficiency is compelling, concerns around AI ethics, data security, billable hours, and hallucinations persist across many law firms. This tension between innovation and resistance is playing out in real time—and at the center of it are Chief Innovation Officers and Chief Information Officers, tasked with guiding law firms through this fast-evolving landscape.
These innovation leaders are responsible for evaluating the risks and benefits of new technology, deciding which tools are worth exploring, testing, and potentially deploying to attorneys across a firm.
It’s no small feat.
They must align innovation strategy with firm economics, coordinate closely with leadership, assess use cases at both the firmwide and practice-group levels, evaluate vendor offerings and security protocols, manage pilot programs, obtain attorney buy-in, and help assess ROI.
To better understand the internal dynamics behind AI adoption at some of the top law firms, we spoke directly with innovation leaders from Blank Rome LLP, Fisher Phillips LLP, Honigman LLP, and Brownstein Hyatt Farber Schreck.
At Blank Rome LLP, AI was a central focus at the firm’s most recent partner retreat, according to Chief Innovation & Value Officer Ashton Batchelor and Chief Information Officer Frank Spadafino. Attorneys had the opportunity to experiment with AI tools in a low-stakes environment and heard directly from the founder of a leading generative AI platform.
Blank Rome also operates an internal Innovation Lab dedicated to evaluating generative AI tools. Last year, the Lab conducted pilot programs with four AI tools, enlisting 250 of its attorneys in the process. The firm continues to reinforce education and adoption through initiatives like “AI Saturdays,” practice-group-specific AI meetings, and private AI learning sessions.
Fisher Phillips has long been on the cutting edge of AI testing and adoption. According to Evan Shenkman, the firm’s Chief Knowledge and Innovation Officer, Fisher Phillips was involved in the early design and testing of CoCounsel and became its first law firm customer. The firm also holds the distinction of being the first law firm client for both Hebbia and Trellis AI.
But the journey from vendor pitch to firmwide deployment is far from simple. It’s a deliberate, strategic process requiring deep use-case analysis, internal buy-in, security vetting, robust pilot testing, and thoughtful implementation.
Esther Bowers, Chief Practice Innovation Officer at Honigman LLP, noted that not all pilots make it through that process. “The pilots that succeed,” she explained, “are the ones backed by strong change management, clear communication about the ‘why,’ and use cases that are firmly rooted in how lawyers actually work.”
For Bowers, successful adoption doesn’t come from a single announcement. “Attorneys often need to hear about something seven times before it sticks,” she said, emphasizing the importance of clear, consistent messaging delivered across multiple channels.
Andrew Johnson, Chief Information Officer at Brownstein Hyatt Farber Schreck, highlighted the importance of close collaboration between AI innovation teams and practice group leadership. Timing, he added, is also a critical factor. “A vendor may have an outstanding product or service,” Johnson remarked, “but it may just be the wrong time for the firm.”
Shenkman added that the most important factor is simple: “The absolute most important thing is to have a great GenAI tool that adds a ton of value, and is easy to use.”
When asked what advice they would give legal tech vendors, these innovation leaders were candid: do your homework—and respect the process.
Blank Rome’s Batchelor and Spadafino noted that “broad solicitations” and direct pitches to attorneys are “rarely effective.” The most successful vendors, they emphasized, are those who “respect the procurement and onboarding process,” making it easier for the innovation team to properly evaluate the tool.
Bowers of Honigman LLP echoed this sentiment, expressing a desire for legal tech vendors to invest more time in understanding the firm’s internal procurement procedures, decision-making dynamics, and key stakeholders.
But even a deep understanding of firm dynamics isn’t enough if a vendor cannot meet essential security and governance standards. Shenkman of Fisher Phillips was direct: failure to meet those requirements is a dealbreaker. “We can’t proceed—and most law firms won’t either,” he stated.
The insights below provide a rare window into how law firm innovation leaders are driving change and navigating the real-world challenges of AI adoption in the legal industry:
What is your firm’s current approach to adopting AI? Are there any specific tools or use cases you’re most excited about?
Andrew Johnson, Chief Information Officer @ Brownstein Hyatt Farber Schreck
Similar to others, we are interested in automating rote, low-risk tasks while freeing attorneys and policy professionals to work on more high-level client issues. The perception of value and the relative risk and reward of AI is not fixed across the industry, however, so we encourage dialogue with clients.
Esther Bowers, Chief Practice Innovation Officer @ Honigman LLP
Our firm takes a phased, practice-informed approach to adopting AI, recognizing that groups have different needs and are at varying levels of readiness—from those that have systematized AI into their workflows to others just beginning to explore its potential. We’re supporting all practices with training, governance, and strategic guidance to ensure responsible and meaningful adoption.
What excites us most isn’t a single tool, but the evolution toward agentic AI—systems that can reason, automate complex legal tasks, and eventually work proactively within the flow of legal work. We’re already seeing promising use cases in contract review, redlining, and approval routing, where AI can replicate demonstrated processes without coding or complicated setup. This is an inflection point where thoughtful integration of AI has the potential to reshape not just efficiency, but the entire model of legal service delivery.
Ashton Batchelor, Chief Innovation & Value Officer @ Blank Rome LLP | Frank Spadafino, Chief Information Officer @ Blank Rome LLP
Blank Rome’s AI adoption is driven by strong collaboration between business teams and attorneys, focusing on maximizing client value and optimizing operations in a responsible and scalable way. Last year, the firm’s Innovation Lab focused exclusively on evaluation of generative AI. The Lab vetted leading AI applications for law firms through four large-scale pilots involving over 250 lawyers and business professionals, focused on defining use cases and putting the capabilities of each application to the test. The pilots succeeded due to a use-case-first approach, empowering attorneys to integrate AI practically into their practices. Following the pilots, the firm adopted three AI technologies for long-term study and workflow integration. As we enter the next phase, we look forward to evolving from prompt-based AI interactions to advanced workflow integration through agentic AI and creation of practice-specific playbooks.
Evan Shenkman, Chief Knowledge and Innovation Officer @ Fisher Phillips
Since late 2022, our firm has been committed to leveraging GenAI to deliver higher-quality legal work more efficiently for our clients. We moved quickly—not only to get our attorneys directly engaged with the technology, but also to help shape the responsible development of leading GenAI tools. We collaborated with Casetext to help design and test CoCounsel, and became the first firm in the world to deploy it. We then became the first law firm customer of both Hebbia and TrellisAI, as well as a design partner and first customer of Verbit’s GenAI-powered LegalVisor. With over two years of hands-on experience designing and applying GenAI in legal practice, our approach remains the same: keep our attorneys at the forefront of innovation, partner with vendors to help deploy the best new products, and turn that advantage into smarter, faster, and more effective service for our clients.
When evaluating AI tools, what criteria matter most to your team?
Andrew Johnson, Chief Information Officer @ Brownstein Hyatt Farber Schreck
The questions we must answer affirmatively before adopting any tool are: 1) Based on the sensitivity of the data processed by this tool, do we trust the related security controls? 2) Do we understand how the tool works and can we conceive of appropriate steps to monitor accuracy and completeness? 3) Can we articulate the benefits of this tool to internal and external stakeholders?
Esther Bowers, Chief Practice Innovation Officer @ Honigman LLP
The top criteria that tend to matter most to our firm include: 1) platform security and maintaining the integrity of our client’s data, which is paramount in any legal technology decision; 2) whether the AI has been trained on legal precedents and grounded in case law to minimize the risk of hallucinations and ensure reliable outputs; 3) the ability to demonstrate a clear business case for procurement, showing how the tool will drive efficiency, support better outcomes, and deliver measurable value to the practice and our clients; and 4) ease of use and low barriers to adoption—specifically, how intuitive the interface is and whether the tool can integrate seamlessly into existing workflows and platforms. We look for solutions that not only meet our technical and ethical standards but also support practical, real-world use by our lawyers and teams.
Ashton Batchelor, Chief Innovation & Value Officer @ Blank Rome LLP | Frank Spadafino, Chief Information Officer @ Blank Rome LLP
AI applications must create measurable value to our attorneys and clients by streamlining processes or improving outcomes. We do not pursue new technology solely for its own sake. Blank Rome has an established and carefully considered technology strategic plan, selecting best-in-class applications for our attorneys and business professionals. Synergies between applications amplify benefits and reduce friction, so we focus on AI tools that complement our existing applications and can benefit from close integrations. Finally, given the rapid pace of AI advancement, we look for vendors that will partner and collaborate with us, helping drive adoption and considering our input for their roadmap and future direction.
Evan Shenkman, Chief Knowledge and Innovation Officer @ Fisher Phillips
When evaluating new GenAI tools, the most important criteria to me are: (1) functionality – does the tool align with our business needs and use cases; (2) security – does it comply with our firm’s data protection, governance, and regulatory standards; (3) value – what is the total cost of ownership, and does the return justify the investment; (4) differentiation – is the tool built on proprietary content or capabilities that are difficult to replicate, rather than being just a basic GPT wrapper; and (5) stability – is the vendor trustworthy, and committed to the long-term support of the product.
What are your top lessons or best practices for successfully selecting and implementing AI tools within a law firm environment?
Andrew Johnson, Chief Information Officer @ Brownstein Hyatt Farber Schreck
Attorneys and law firms are typically risk averse. AI is disruptive. It’s not an easy marriage. Beginning the journey at the most basic level, building up institutional awareness and trust, is essential. The next step is partnering with tech-adept and innovative personalities to find discrete use cases and demonstrate success, while showcasing how related risks can be identified and managed. This creates momentum for more opportunities.
Esther Bowers, Chief Practice Innovation Officer @ Honigman LLP
For us, successful AI adoption really starts with attorney leadership—teams identifying needs in their practice, spotting client-facing opportunities, and thinking through how AI can solve real problems or make work better for their people. We’ve found that a hands-on, well-structured pilot makes all the difference: feedback is collected in multiple ways, shared across participants, and creates a sense of community and momentum. The pilots that stick are the ones with strong change management, clear communication around the “why,” and use cases that are actually grounded in the way lawyers work. At the end of the day, tech alone doesn’t drive success—it’s the people, process, and willingness to lean in. And just as important, we look for vendors who act like true partners—invested in our success, not just in selling us something.
Ashton Batchelor, Chief Innovation & Value Officer @ Blank Rome LLP | Frank Spadafino, Chief Information Officer @ Blank Rome LLP
You need an experienced team at the table, bringing a broad spectrum of attorneys and business professionals. Making decisions about AI tools in isolation by individual departments is a recipe for failure. Blank Rome’s deeply ingrained culture of collaboration is a significant strength in an effort that truly requires a multi-disciplinary focus. We make decisions about AI as a team and collectively commit to the success of our chosen direction. We also commit to the feedback and follow-up process ensuring we remain nimble enough to make adjustments as our client needs and AI landscape continue to evolve.
What approaches have you found most effective in driving internal buy-in and sustained use of AI tools within your firm?
Andrew Johnson, Chief Information Officer @ Brownstein Hyatt Farber Schreck
Like anything, you need to start with why. Connecting the dots between technology, client service, and value gets attention and may result in budget allocations, but adoption and effective use will suffer if you aren’t aligned with leadership at the practice level and getting people to believe “I can do this.”
Esther Bowers, Chief Practice Innovation Officer @ Honigman LLP
Driving internal buy-in and sustained use of AI tools takes more than just rolling something out and making an announcement—we know it doesn’t work that way. I often say people need to hear about something seven times before it sticks, and it has to come through different channels with value-based messaging that speaks directly to what they care about. That takes creativity, active listening, and a team that’s fully aligned around a shared mission. Everyone on our team, even those outside of practice technology, is expected to understand how these tools support the work so they can confidently “sell” the benefits to attorneys. We reach people from multiple angles—training, case studies, client panels, innovation events, and day-to-day engagement—to make sure adoption is both meaningful and lasting.
Ashton Batchelor, Chief Innovation & Value Officer @ Blank Rome LLP | Frank Spadafino, Chief Information Officer @ Blank Rome LLP
Coordination is essential to attorney buy-in and adoption. The firm prioritized making space for learning and discussion about AI across all levels of the organization, including bringing in industry experts to help lawyers evaluate the intersection of AI and legal practice. AI was a key topic at the firm’s partner retreat, with an engaging session by the founder of a leading legal generative AI application. Attorneys experimented with AI in a low-stakes environment and shared use cases through various mediums, fostering peer-driven buy-in. Further, a multi-modal training approach, including practice group sessions, “AI Saturdays,” and private coaching, was designed to meet lawyers where they were on their AI journey. Training focused on practical use cases, making it easier for attorneys to connect AI to their work.
Evan Shenkman, Chief Knowledge and Innovation Officer @ Fisher Phillips
The absolute most important thing is to have a great GenAI tool, that adds a ton of value, and is easy to use. Once you have that, start by mandating firm-wide training that focuses on practical, high-impact use cases. Next find attorney champions—both associates and partners—and periodically share their success stories. To win over remaining skeptics, point directly to RFPs and outside counsel guidelines showing how the firm’s clients increasingly expect innovation and responsible GenAI use.
What are some things you wish more AI or legal tech vendors understood before reaching out to your team?
Andrew Johnson, Chief Information Officer @ Brownstein Hyatt Farber Schreck
You may have an outstanding product or service, but it may just be the wrong time. The best partnership will occur if we can realize sustained success with your platform, and that requires connecting at the right time on our AI journey.
Esther Bowers, Chief Practice Innovation Officer @ Honigman LLP
I wish more AI and legal tech vendors took the time to understand our internal procurement process, how we make decisions, and who our key stakeholders are. It would go a long way if they came in with a better understanding of our goals, integration requirements, and the real pain points we’re trying to solve. Every firm is different, and so are our practice strengths—what works for one may not work for us. A little upfront homework can make the conversation far more productive.
Ashton Batchelor, Chief Innovation & Value Officer @ Blank Rome LLP | Frank Spadafino, Chief Information Officer @ Blank Rome LLP
Sending broad solicitations or directly approaching attorneys with a sales pitch is rarely effective. Our best vendors respect the procurement and onboarding process, making it easier for us. The business team at a firm is your best ally in a partnership, but they manage competing priorities. Ethical, contractual, and regulatory requirements create unique challenges, especially with AI-based technologies. Vendors who haven’t prioritized learning the business of law are often the most challenging to partner with, making collaboration and understanding the firm’s process essential.
Evan Shenkman, Chief Knowledge and Innovation Officer @ Fisher Phillips
First, if you can’t meet our security and information governance standards, we can’t proceed—and most law firms won’t either. If you’re serious about selling to legal, get that in order up front. Also, take time to understand my firm, our practice areas, and our GenAI experience before reaching out. A thoughtful, tailored approach is far more likely to get my attention—and a response. Generic pitches are far less effective.
Many attorneys express hesitancy around using AI due to concerns like hallucinations, quality control, and potential impacts on billable hours. How do you think firms should weigh the tradeoffs between efficiency, accuracy, and economics when adopting AI?
Andrew Johnson, Chief Information Officer @ Brownstein Hyatt Farber Schreck
There is no question our industry will continue to evolve, but the related pieces will advance at different speeds, creating tension along the way. Most clients are becoming educated on these topics and forming their own opinions. We should be prepared to listen and align while doing our part to craft novel solutions and demonstrate increasing value.
Esther Bowers, Chief Practice Innovation Officer @ Honigman LLP
Concerns like hallucinations, quality control, and the impact on billable hours are valid—but they can’t be the reason we stall progress. The real risk lies in not adopting AI and new ways of working, which will leave firms behind as the industry moves forward. Name an industry that isn’t exploring or implementing AI in some facet of its business—it’s hard to find one. At Honigman, we want to lead and partner with clients on this journey, not find ourselves playing catch-up because we were overly cautious. Those concerns are precisely why human talent remains essential to delivering high-quality legal services, and why we must build the right processes and guardrails to manage risk. Additionally, firm and practice strategies need to thoughtfully integrate AI into both short- and long-term planning—because standing still isn’t a viable business strategy.
Ashton Batchelor, Chief Innovation & Value Officer @ Blank Rome LLP | Frank Spadafino, Chief Information Officer @ Blank Rome LLP
“Ready, fire, aim” is a recipe for disaster when it comes to AI in a law firm. Having a roadmap to guide your AI strategy will keep you focused on the most relevant use cases and applications. Establishing rules for each AI application is crucial, including considerations like appropriate use, application selection, training, data governance, and compliance. When exploring ROI, evaluate whether the application saves costs, manages risk, or creates new revenue streams. Answering these questions before making long-term commitments will help align your AI strategy with business priorities. This approach ensures your AI initiatives are both effective and sustainable.
Evan Shenkman, Chief Knowledge and Innovation Officer @ Fisher Phillips
Prudent firms should weigh the tradeoffs in favor of responsibly using GenAI. When it comes to accuracy and quality, neither human lawyers nor GenAI tools are flawless, and recent, credible case studies show that attorneys aided by well-vetted legal GenAI consistently outperform those working manually, in both accuracy and speed. On the efficiency/economic front, at the end of the day our obligation is to deliver the highest quality work as efficiently as possible, and that includes using GenAI if it adds value. Most clients already expect—or will soon expect—their lawyers to use these tools. That said, I can report that our attorneys have been even busier and more productive in the GenAI era than in the years preceding it, and I expect that trend to continue.
Legal AI Unfiltered: Legal Tech Execs Speak on Privacy and Security
With increasing generative AI adoption across the legal profession, prioritizing robust security and privacy measures is critical. Before using any generative AI tool, lawyers must fully understand the underlying technology, beginning with thorough due diligence of legal tech vendors.
In July 2024, the American Bar Association issued Formal Opinion 512, which provides guidance on the proper review and use of generative AI in legal practice. The opinion underscores several ABA Model Rules of Professional Conduct that are implicated by lawyers’ use of generative AI tools, including the duties to provide competent representation, keep client information confidential, communicate generative AI use to clients, properly supervise subordinates’ use of generative AI, and charge only reasonable fees.
Even before deploying generative AI tools, however, lawyers must understand a vendor’s practices. This includes verifying vendor credentials and fully reviewing policies related to data storage and confidentiality.
According to Formal Opinion 512, “all lawyers should read and understand the Terms of Use, privacy policy, and related contractual terms and policies of any GAI tool they use to learn who has access to the information that the lawyer inputs into the tool or consult with a colleague or external expert who has read and analyzed those terms and policies.” Lawyers may also need to consult IT and cybersecurity professionals to understand terminology and assess any potential risks.
In practice, this means carefully reviewing vendor contract terms related to a vendor’s limitation of liability, understanding if a vendor’s tool “trains” on your client’s data, assessing data retention policies (before, during, and after using the tool), and identifying appropriate notification requirements in the event of a data breach.
To further explore these ethical guidelines in practice, we spoke with legal technology executives about the security and privacy measures they implement, as well as best practices for lawyers when evaluating and vetting legal tech vendors.
What security measures do you take to protect client data?
Troy Doucet, Founder @ AI.Law
We take the security measures enterprises expect, including SOC 2 compliance, HIPAA compliance, and robust encryption of data at rest and in transit. We also follow ABA guidance on AI, including confidentiality, not training our models on our users’ data, and making it clear that we do not own the data users input.
Jordan Domash, Founder & CEO @ Responsiv
The foundation must be the traditional security and privacy controls that have always been important in enterprise software. On top of that, we’ve built a de-identification process to strip out PII and corporate identifiable content before processing by an LLM. We also have a commitment to not have access to or train on client questions and content.
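The de-identification step described above can be illustrated with a toy sketch. This is not Responsiv’s actual implementation—real pipelines typically use NER models and reversible token maps rather than regexes—but it shows the general idea of replacing PII with placeholder tokens before text ever reaches an LLM:

```python
import re

# Illustrative only: a toy de-identification pass that replaces common
# PII patterns with bracketed placeholder tokens before text is sent to
# an LLM. Production systems use NER models and reversible token maps.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched PII spans with placeholder tokens."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(deidentify("Reach the client at jane.doe@example.com or 555-867-5309."))
```

A reversible version would store each replaced span keyed by a unique token, so the original values can be restored in the model’s output after processing.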
Michael Grupp, CEO & Co-founder @ BRYTER
We have an entire team focused on security and compliance, so the answer is, of course, all of them: SOC 2 Type II, ISO 27001, GDPR, CCPA, the EU AI Act, etc. And BRYTER does not use client data to develop, train, or fine-tune the AI models we use.
Gil Banyas, Co-Founder & COO @ Chamelio
Chamelio safeguards client data through industry-standard encryption, SOC 2 Type II certified security controls, and strict access management with multi-factor authentication. We maintain zero data retention arrangements with third-party LLMs and employ continuous security monitoring with ML-based anomaly detection. Our comprehensive security framework ensures data remains protected throughout its entire lifecycle.
Khalil Zlaoui, Founder & CEO @ CaseBlink
Client data is encrypted in transit and at rest, and is not used to train AI models. We enforce a strict zero data retention policy – no user data is stored after processing. A SOC 2 audit is nearing completion to certify that our security and data handling practices meet industry standards, and customers can request permanent deletion of their data at any time.
Dorna Moini, CEO & Founder @ Gavel
Gavel was built for legal documents, so our security standards exceed those typical of software platforms. We use end-to-end encryption, private AI environments, and enterprise-grade access controls—backed by SOC 2 compliance and third-party audits. Client data is never used for training, and our retention policies give firms full control, ensuring compliance and peace of mind.
Ted Theodoropoulos, CEO @ Infodash
Infodash is built on Microsoft 365 and Azure and deployed directly into each customer’s own tenant, which means we host no client data whatsoever. This unique architecture ensures that law firms always maintain full control over their data. Microsoft’s enterprise-grade security includes encryption at rest and in transit, identity management via Azure Active Directory, and compliance with certifications like ISO/IEC 27001 and SOC 2.
Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Wisedocs uses services that implement strict access controls, including role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to prevent unauthorized access to your data. Our organization employs configurable data retention policies as agreed upon with our clients. Wisedocs has achieved its SOC 2 Type 2 attestation and has established an information security and privacy program in accordance with SOC 2, HIPAA, PIPEDA, and PHIPA, supported by annual risk assessments and continual vulnerability scans.
Daniel Lewis, CEO @ LegalOn
Security and privacy are top priorities for us. We are SOC 2 Type II, GDPR, and CCPA compliant, follow industry-standard encryption protocols, and use state-of-the-art infrastructure and practices to ensure customer data is secure and private.
Gila Hayat, CTO & Co-Founder @ Darrow
Darrow works mostly in the open web realm, utilizing as much publicly available data as possible to surface potential matters. Our clients’ confidentiality and privacy are a must, so we adhere to security standards and regulations and collect as little data as possible to maintain trust. We take client confidentiality and privacy very seriously.
Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
We exclusively use reputable, secure providers and AI models that never store or log data, with no human review or monitoring permitted. All vendors are contractually bound to ensure data is never retained or used for training in any form. This, in combination with ISMS certifications and adherence to industry standards, ensures robust data security and privacy.
Gary Sangha, CEO @ LexCheck Inc.
We are SOC 2 compliant and follow rigorous cybersecurity standards to ensure client data is protected. Our AI tools do not retain any personally identifiable information (PII), and all data processing is handled securely within Microsoft Word, leveraging Azure’s built-in data protection. This ensures client data remains encrypted, confidential, and under the highest level of enterprise-grade security.
Tom Martin, CEO & Founder @ Lawdroid
As a lawyer myself, I understand the fiduciary responsibility we have to handle our client data responsibly. At LawDroid, we use bank-grade data encryption, do not train on your data, and provide you with fine grain control over how long your data is retained. We also just implemented browser-side masking of personally identifiable information to prevent it from ever being seen.
Lawyers are very concerned about data privacy. What would you tell a lawyer who doesn’t use legal-specific AI tools due to privacy concerns?
Troy Doucet, Founder @ AI.Law
You have control over what you input into AI, so do not input data that you do not feel comfortable inputting. AI products vary in their functionality too, meaning different levels of concern. For example, asking AI about the difference between issue and claim preclusion is a low-risk event, versus mentioning where Jonny buried mom and dad in the woods.
Jordan Domash, Founder & CEO @ Responsiv
You’re right to be skeptical and critically consider a vendor before giving them confidential or privileged information! The risk is vendor-specific – not with the category. The right vendor designs the platform with robust data privacy measures in mind.
Michael Grupp, CEO & Co-founder @ BRYTER
We have been working with the biggest law firms and corporates for years, and we know that trust is earned, not given. This means that first, we try to be over-compliant – so this means agreements with providers to protect attorney-client privilege. Second, we make compliance transparent. Third, we provide references to those who are already advanced in the journey.
Gil Banyas, Co-Founder & COO @ Chamelio
Adopting new technology inevitably involves some privacy trade-offs compared to staying offline, but this calculated risk enables lawyers to leverage significant competitive advantages that AI offers to legal practice. Finding the right risk-reward balance means embracing innovation responsibly by selecting vendors who prioritize security, maintain zero data retention policies, and understand legal confidentiality requirements. Success comes from implementing AI tools strategically with appropriate safeguards rather than avoiding valuable technology that competitors are already using to enhance client service.
Khalil Zlaoui, Founder & CEO @ CaseBlink
Not all AI tools treat data the same, and legal-specific platforms like ours are built with strict safeguards and guardrails. Data is never used to train models, and everything is encrypted, access-controlled, and siloed. Only clients can access their own data. They retain full ownership and control at all times, with the ability to keep information private even across internal teams.
Dorna Moini, CEO & Founder @ Gavel
With consumer AI tools, your data may be stored, analyzed, or even used to train models—often without clear safeguards. Professional-grade and legal-specific tools like Gavel are built with attorney-client confidentiality at the core: no data sharing, no training on your client data inputs, and full control over retention. Avoiding AI entirely isn’t safer—it’s just riskier with the wrong tools (and that’s not specific to AI!).
Ted Theodoropoulos, CEO @ Infodash
Legal-specific platforms like Infodash are purpose-built with confidentiality at the core, unlike general-purpose consumer AI tools. These solutions are built with the privacy requirements of legal teams in mind. With new competitors like KPMG entering the market, delaying AI adoption poses a real competitive risk for firms.
Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Legal-specific AI tools are designed to be both secure and transparent, helping legal professionals understand and trust how AI processes their data while maintaining strict privacy controls. With human-in-the-loop (HITL) oversight, AI becomes a tool for efficiency rather than a risk, ensuring that outputs are accurate and reliable. By adopting AI solutions that follow strict security protocols such as SOC 2 Type 2, HIPAA, PIPEDA, and PHIPA compliance standards, legal teams can confidently leverage technology while maintaining control over their data through role-based access control (RBAC), multi-factor authentication (MFA), and configurable data retention policies.
Daniel Lewis, CEO @ LegalOn
Ask questions about how your data may be used — will it touch generative AI (where, without the right protections, your content could display to others), or non-generative AI? If it’s being processed by LLMs like OpenAI, understand whether your data is being used to train those models and if it’s being used in non-generative AI use cases, understand how. The use of your data might make the product you use better, so consider the risk/benefit trade-offs.
Gila Hayat, CTO & Co-Founder @ Darrow
Pro tip for privacy preservation and worry-free experimentation with various AI tools: have a non-sensitive or redacted document or use case ready for which you already know the expected answers, and benchmark the various tools against it. That gives you a fair comparison with no stress over leaking real work documents.
Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
Make sure to use a trusted vendor where no model training or fine-tuning is happening on client input.
Gary Sangha, CEO @ LexCheck Inc.
Lawyers should first understand what information they are actually sharing when using legal-specific AI tools; often it is not personally identifiable information or sensitive client data. In many cases, you are not disclosing anything subject to confidentiality, especially when working with redlined drafts or standard contract language. That said, if you are sharing sensitive information, it is important to review your firm’s protocols—though depending on what you are sharing, it may not be a concern.
Tom Martin, CEO & Founder @ Lawdroid
Lawyers should be concerned about data privacy. But, steering away from legal-specific AI tools due to privacy concerns would be a mistake. If anything, legal AI vendors take greater security precautions than consumer-facing tools, given our exacting customer base: lawyers.
For security and privacy purposes, what should lawyers and law firms know about a legal AI vendor before using their product?
Troy Doucet, Founder @ AI.Law
Knowing what they do to protect data, how they use your data, certifications they have, and encryption efforts are smart. However, knowing what your privacy and security needs are before using the product is probably the best first step.
Jordan Domash, Founder & CEO @ Responsiv
I’d start with a traditional security and privacy review process like you’d run for any enterprise software platform. On top of that, I’d ask: Do they train on your data? Do they have access to your data? What is your data retention policy?
Michael Grupp, CEO & Co-founder @ BRYTER
Even the early-adopters and fast-paced firms ask their vendors three questions: Where is the client data stored? Do you use the firm’s data, or client data, to train or fine-tune your models? How is legal privilege protected?
Gil Banyas, Co-Founder & COO @ Chamelio
Before adopting legal AI tools, lawyers should verify the vendor has strong data encryption, clear retention policies, and SOC 2 compliance or similar third-party security certifications. They should understand how client data flows through the system, whether information is stored or used for model training, and if data sharing with third parties occurs. Additionally, they should confirm the vendor maintains appropriate legal expertise to understand attorney-client privilege implications and offers clear documentation of privacy controls that align with relevant bar association guidance.
Dorna Moini, CEO & Founder @ Gavel
I did a post on what to ask your vendors here: https://www.instagram.com/p/C9h5jVYK5Zc/. Lawyers need clear answers on what happens to their data and how it's being used. When choosing a vendor, it's also important to understand output accuracy and the AI product roadmap as it relates to legal work; you are entering a long-term relationship with a software company, so you want confidence that it will continue to improve for your purposes.
Ted Theodoropoulos, CEO @ Infodash
Firms should ask where and how data is stored, whether it’s isolated by client, and if it’s used for training. Look for vendors that run on secure environments like Microsoft Azure and support customer-managed encryption keys. Transparency around data flows and integration with existing infrastructure is essential.
Jenna Earnshaw, Co-Founder & COO @ Wisedocs
Lawyers and law firms should ensure that any legal AI vendor follows strict security protocols, such as SOC 2 Type 2, HIPAA, PIPEDA, and PHIPA compliance, along with role-based access control (RBAC), multi-factor authentication (MFA), and regular security audits to protect sensitive legal data. They should ensure the AI vendor is not using third party models or sharing data with AI model providers and the deployment of their AI is secure and limited. Additionally, firms should assess whether the AI system includes human-in-the-loop (HITL) oversight to mitigate hallucinations and organizational risks, ensuring accuracy and reliability in legal workflows.
Gila Hayat, CTO & Co-Founder @ Darrow
When choosing a legal AI vendor, it's important to make sure it follows top-tier security standards and has a solid track record of protecting data. Don't forget the contract: make sure it includes strong confidentiality terms so your clients' data stays protected and compliant. Trust the human and know the team: the legal tech scene is tight-knit and personal, so hop on a call with one of the team members to make sure you're doing business with a trustworthy partner.
Sigge Labor, CTO & Co-Founder @ Legora | Jonathan Williams, Head of France @ Legora
You should understand whether a vendor's AI models are trained on user data; this is a critical distinction. Vendors that fine-tune or improve their models using client input may pose significant privacy risks, especially if sensitive information is involved. It's important to evaluate whether specially trained or fine-tuned models offer enough added value to justify the potential trade-off in privacy.
Gary Sangha, CEO @ LexCheck Inc.
Lawyers and law firms should understand what information they are sharing through the AI tool, as it is often personally identifiable information or subject to confidentiality. They should confirm whether the vendor is compliant with frameworks like SOC 2, which ensures rigorous controls for data protection, and verify that data is encrypted and securely processed. Reviewing how the tool handles data protection helps ensure it aligns with the firm's security and privacy policies.
Tom Martin, CEO & Founder @ Lawdroid
Lawyers need to ask questions: 1) Do you employ encryption? 2) Do you train on data I submit to you? 3) Do you take precautions to mask PII? 4) Can I control how long the data is retained?
By carefully evaluating security credentials, vendor practices, and model usage policies, lawyers can responsibly and confidently employ generative AI tools to improve their delivery of legal services. As these technologies evolve, best practices for security and implementation will evolve with them, so lawyers should continue to follow industry updates and emerging best practices.
“No Robo Bosses Act” Proposed in California to Limit Use of AI Systems in Employment Decisions
A new bill in California, SB 7, proposed by State Senator Jerry McNerney, seeks to limit and regulate the use of artificial intelligence (AI) decision making in hiring, promotion, discipline, or termination decisions. Also known as the “No Robo Bosses Act,” SB 7 applies a broad definition of “automated decision system,” or “ADS”: any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decision making and materially impacts natural persons. An automated decision system does not include a spam email filter, firewall, antivirus software, identity and access management tools, calculator, database, dataset, or other compilation of data.
Specifically, SB 7 would:
Require employers to provide a plain-language, standalone notice to employees, contractors, and applicants that the employer is using ADS in employment-related decisions at least 30 days before the introduction of the ADS (or by February 1, 2026, if the ADS is already in use).
Require employers to maintain a list of all ADS in use and include that list in the notice to employees, contractors, and applicants.
Prohibit employers from relying primarily on ADS for hiring, promotion, discipline, or termination decisions.
Prohibit employers from using ADS that prevents compliance with or violates the law or regulations, obtains or infers a protected status, conducts predictive behavior analysis, predicts or takes action against a worker for exercising legal rights, or uses individualized worker data to inform compensation.
Allow workers to access the data collected and correct errors.
Allow workers to appeal an employment-related decision for which ADS was used, and require an employer to have a human reviewer.
Create enforcement provisions against discharging, discriminating, or retaliating against workers for exercising their rights under SB 7.
Similar to SB 7, the California Civil Rights Council has proposed regulations that would protect employees from discrimination, harassment, and retaliation related to an employer’s use of ADS. The Civil Rights Council identifies several examples, such as predictive assessments that measure skills or personality traits and tools that screen resumes or direct advertising, that may discriminate against employees, contractors, or applicants based on a protected class. The proposed rule and SB 7 would work in tandem if both are passed by their respective government bodies.
The bill is still in the beginning stages. It is set for its first committee hearing, before the Senate Labor, Public Employment, and Retirement Committee, on April 9, 2025. How the bill may transform before (and if) it becomes law is still unknown, but given the potential reach of this bill and the possibility that other states may emulate it, SB 7 is one to watch.
SEC’s Approach to Artificial Intelligence Begins to Take Shape
On 27 March 2025, the US Securities and Exchange Commission (SEC) hosted a roundtable on Artificial Intelligence (AI) in the financial industry that was designed to solicit feedback on the risks, benefits and governance of AI.
The roundtable served, in part, to “reset” the SEC’s approach to AI after the prior administration’s highly criticized attempt to regulate the use of predictive data analytics by broker-dealers and investment advisers. Acting Chair Uyeda emphasized the importance of fostering a “commonsense and reasoned approach to AI and its use in financial markets and services.”
The Roundtable discussion focused on a few common themes:
The Commissioners as well as many panel participants emphasized the need to take a technology-neutral approach to regulation and to avoid placing unnecessary barriers on the use of innovative technology.
While generative AI presents tremendous opportunities, there are various risks, including around fraud, market manipulation, authentication, privacy and data security, and cybersecurity. Many of the benefits of generative AI (e.g., the ability to access and synthesize enormous amounts of data and to hyper-personalize content) also make it an effective tool for fraud.
Governance and risk management of AI is critical and there are different approaches to managing and mitigating risk, including through data management, sensitivity analysis, bias testing, and keeping a “human in the loop” to validate the output of generative AI models.
Any control structure should be risk-based and should take into consideration the type of AI and the specific use cases. In particular, advisers should consider the risks of employing “black box” algorithms, where it’s not always clear how inputs are weighed or outputs derived.
This is the first public engagement regarding AI under the current administration, and although the SEC is taking a deliberative approach, the statements of Commissioners Uyeda and Peirce suggest that the SEC will act if it sees gaps in current regulation or a need for guidance in this area.