Key Takeaways

Secretary of the Interior Doug Burgum recently issued a Secretarial Order on Artificial Intelligence addressing the use of AI across several domains, including “energy and resource development” and “permitting efficiency.” The Order asserts that DOI is “already seeing results,” including “streamlined environmental reviews.” It directs DOI staff to ensure that the Department retains “oversight and accountability” and requires a “human-in-the-loop,” a safeguard commonly applied in AI systems.

The Administration’s efforts to expedite agency reviews and expand resource development and infrastructure projects have placed increasing strain on agency resources, especially as agencies cut staff. DOI’s AI initiative aligns with other Administration efforts to bridge this gap by streamlining agency processes and making them more efficient. It is also part of a broader Administration effort to support and enhance American AI dominance, as set forth in Executive Order 14179 and accompanying OMB guidance.

In April, a Presidential Memorandum, “Updating Permitting Technology for the 21st Century,” directed agencies to “make maximum use of technology in environmental review and permitting processes.” The Council on Environmental Quality (CEQ) then released a “Permitting Technology Action Plan,” which built upon CEQ’s earlier “E-NEPA Report to Congress” recommending technological options for streamlining NEPA processes. Several agencies have invested in related technologies, including the Department of Energy, the Federal Permitting Council, and the Air Force. States are also experimenting with AI tools, including a Minnesota project to streamline environmental permitting and a California project focused on permits for reconstruction after the Los Angeles fires.

AI models promise to simplify document drafting, data analysis, and review of public comments, potentially shortening federal review timelines. But their adoption raises concerns about error rates, bias, and explainability. For example, commenters suggested[1] that the Trump administration’s high-profile “Make America Healthy Again” report contained errors likely attributable to the use of AI tools. Even in the absence of such errors, project opponents may seek to exploit concerns about the use of AI tools in litigation, and it remains to be seen whether, and how, courts will defer to agency decision-making that relies on AI. Early adopters should ensure that contractors and agencies have safeguards in place and well documented, so that the administrative record in any litigation provides sufficient data and explanation of (human) decision-making to survive judicial review.

Next Steps

As public and private actors alike integrate AI into the NEPA process, businesses and project proponents should engage with agencies and support the use of recognized best practices[2] to mitigate legal and technical risks. These best practices include:


[1] White House Acknowledges Problems in RFK Jr.’s “Make America Healthy Again” Report, NPR (May 29, 2025), https://www.npr.org/2025/05/29/nx-s1-5417346/white-house-acknowledges-problems-in-rfk-jr-s-make-america-healthy-again-report.

[2] For example, the American Bar Association has released a formal ethics opinion on generative AI and professional responsibility. See ABA Formal Opinion 512, https://www.americanbar.org/content/dam/aba/administrative/professional_responsibility/ethics-opinions/aba-formal-opinion-512.pdf.
