On September 11–12, 2025, at the Hyatt Regency in Reston, Virginia, The Sedona Conference (TSC) Working Group 13 held its sold-out Midyear Meeting with more than 135 participants. The event brought together judges, legal scholars, practitioners, technologists, ethicists, and policymakers to examine artificial intelligence’s growing impact on legal systems, regulatory structures, and societal norms.
Organized to spark dialogue across disciplines, the conference featured panels on a wide range of topics aimed at identifying emerging risks, spotlighting gaps in existing legal frameworks, and brainstorming work products that could provide practical guidance to practitioners, courts, and legislators. The meeting also included smaller workgroup sessions focused on developing consensus definitions of AI, a governance framework, a regulatory crosswalk, and TSC's own AI tool.
Over the course of two days, participants emphasized that AI is here to stay yet continuously evolving, which adds to the complexity of addressing its impact on the legal profession. The conference underscored both the benefits of AI and the importance of determining how TSC can best help move the law forward by creating and publishing consensus guidance for practitioners, courts, and legislators.
Benefits of AI in the Legal Profession
A panel of subject-matter experts briefed participants on emerging technologies such as agentic artificial intelligence, transformers, AI-generated video tools, and the growth of open-access models. Several potential benefits were highlighted:
- Expanded Access to Justice – GenAI may help open doors to courts and legal services. For example, the number of properly formulated patent applications filed with the U.S. Patent and Trademark Office has increased as professionals without legal training leverage GenAI.
- Enhanced Legal Research and Writing – Models trained on case law now provide analytical outputs that go beyond what traditional research engines offer.
- Improved Accuracy – Advances in GenAI have reduced “hallucinations” and improved reliability.
- Firm-Level Integration – Some firms are building structured processes to integrate AI, with guardrails to protect conventional associate training while improving client service.
Concerns About Overreliance and Professional Skills
Participants warned that overreliance on GenAI could erode lawyers' critical reasoning skills. One participant noted that GenAI outputs are predictive rather than reasoned: the models generate statistically likely text, not the product of legal analysis. TSC was left considering whether to develop guidance for law schools and practitioners on integrating AI while safeguarding the cultivation of core legal reasoning.
Concerns About Development Speed and Transparency
The pace of AI development raised significant unease:
- Products are often released without adequate testing, leaving harm to be identified post-deployment.
- Bias and hallucinations remain concerns, especially when models are trained on incomplete or skewed datasets.
- Many systems remain "black boxes" that offer no clear explanation of their outputs, creating client-counseling challenges for lawyers and admissibility questions for judges.
- Transparency mechanisms were seen as essential to align AI with legal principles of accountability.
- Participants called for robust evaluation methods to assess harms and mitigation strategies before wide release.
Debate on AI-Specific Laws
A panel posed the question: “Do we need AI-specific laws?”
- Existing Law – Skeptics argued that existing statutes (e.g., Title VII, the ADEA, and the Equal Pay Act) already address many issues and that new laws could create confusion.
- Innovation Risks – Others warned that premature regulation could stifle innovation and add compliance costs.
- Harmonization – A jurist dissented, urging federal agencies to use existing authority to issue AI regulations or guidance in areas such as intellectual property, rather than leaving harmonization to patchwork judicial decisions.
- Industry Standards – Some suggested industry-led standards could fill gaps, but cautioned against premature schemes that create costs for consumers or fuel a “cottage industry” of AI-safety certifications.
Gaps in Current Legal Frameworks
Participants flagged areas where existing law may be insufficient:
- Intellectual Property Rights – Questions of authorship, ownership, and infringement remain unsettled when AI contributes to content creation.
- Tort Liability – Responsibility for AI-caused harm remains unclear under current tort frameworks.
In these areas, many agreed, targeted statutes, regulations, or standards may be needed.
Additional Cross-Cutting Concerns
Recurring concerns included:
- Data Privacy and Security – Complex compliance challenges across state, federal, and international privacy and security frameworks.
- Judicial System Impact – From AI-generated briefs citing non-existent cases to evaluating AI-produced evidence, jurists noted inconsistent approaches across lower courts.
- Environmental Impact – The energy and water demands of data centers powering AI could strain natural resources without proper oversight.
In sum, while AI presents real opportunities for the legal community, significant risks and unknowns remain. These discussions set the stage for the conference’s other major focus: the concrete work products being developed to guide practitioners, courts, and legislators in navigating AI’s challenges.
Project Takeaways and Proposed Work Products
The conference was not limited to surfacing issues. A central aim was to review work products under development by four chartered subgroups:
- Consensus Definitions
- Governance Framework
- Regulatory Crosswalk
- TSC AI Tool
Each subgroup’s work is intended to serve as a resource for the broader legal community.
Practice Guides
- Consensus Definitions – A survey confirmed that there is no common definition of "AI." Participants agreed that TSC is on the right track in developing a consensus definition for courts, practitioners, and legislators.
- Governance Framework – A practical guide to help deployers integrate AI systems into their organizations was identified as a priority.
- Regulatory Crosswalk – Input emphasized the need for resources serving multiple audiences—courts, legislators, and practitioners—either separately or in one unified tool.
- Additional Resources – Practical checklists and best-practice manuals (e.g., on prompt development) were also recommended for lawyers and clients.
Judicial Guidance
Judges described two categories of AI-related evidentiary problems:
- Acknowledged AI – disclosed AI use in court filings or proposed evidence.
- Unacknowledged AI – undisclosed AI use, such as forged evidence or deepfakes.
Judges agreed that guidance to help standardize approaches to authentication would be valuable, especially on disclosure requirements and standing orders. Concerns were raised, however, about creating rigid rules amid AI's rapid evolution. TSC will explore establishing a subgroup to develop court guidance on deepfakes and related issues.
Cross-Disciplinary Collaboration
Participants emphasized the importance of sustained collaboration across disciplines. The Midyear Meeting confirmed the value of the work already underway and highlighted the need for additional working groups and drafting committees to address new challenges as they arise.
Conclusion
AI is rapidly transforming the practice of law, bringing both opportunities and profound challenges. TSC’s Working Group 13 Midyear Meeting provided a vital forum for identifying risks and charting solutions. By spotlighting concerns ranging from lack of harmonization to evidentiary reliability—and by advancing tangible work products such as practice guides and judicial resources—the event delivered a roadmap for the profession to move forward responsibly.