Bipartisan AI Bill Reintroduced; Senate Challenges China Chip Reversal — AI: The Washington Report
On July 30, the Unleashing AI Innovation in Financial Services Act, a bipartisan, bicameral bill, was reintroduced to “establish regulatory guardrails at financial regulatory agencies for regulated entities to test AI projects.” The bill also directs each of the financial regulatory agencies to create in-house AI Innovation Labs “to enable regulated entities to experiment with AI test projects.”
The reintroduction of this bill is consistent with the greater deregulatory stance the federal government has taken, aligning with President Trump’s recently released AI Action Plan, which champions regulatory flexibility as key to maintaining US leadership in AI, as we’ve previously covered.
Separately, on July 28, top Senate Democrats voiced their “grave concern” in a joint letter to the Commerce Secretary, denouncing “the Trump administration’s decision to reverse course and allow US companies to sell certain advanced semiconductors to the People’s Republic of China (PRC), despite evidence that these chips have proved critical for artificial intelligence (AI) development in the PRC.” The reversal of the previously imposed sales restrictions against China underscores the tension and shifting balance between market access and national security concerns in the ongoing US-China AI race.
While the letter expresses anxiety over China’s access to AI chips, it emphasizes enduring concerns in Congress about technological advantage and national security in the AI race. The bipartisan support in Congress for “regulatory sandboxes” in financial services reflects a growing consensus within the federal government to deregulate and promote AI adoption.
Bipartisan, Bicameral Bill Reintroduced to Promote AI in Financial Institutions
On July 30, the Unleashing AI Innovation in Financial Services Act, a bipartisan, bicameral bill, was reintroduced to “establish regulatory guardrails at financial regulatory agencies for regulated entities to test AI projects.” The bill seeks to create regulatory sandboxes that would allow financial institutions to “test AI-enabled products and services without immediate risk of enforcement action, as long as they meet transparency, consumer protection and national security requirements.”
“By creating a safe space for experimentation, we can help firms innovate and regulators learn without applying outdated rules that don’t fit today’s technology,” said Senator Mike Rounds (R-SD) in reintroducing the bill at a Securities, Insurance, and Investment Subcommittee hearing.
Sen. Rounds specifically criticized rules such as the Securities and Exchange Commission’s (SEC) proposed 2023 Predictive Data Analytics Rule, arguing that it “would have imposed sweeping, unclear restrictions on financial firms developing or deploying AI, without a workable framework” and “would have slowed innovation, raised compliance costs, and locked out smaller players.”
In contrast, the bill seeks to establish AI Innovation Labs within each of the financial regulatory agencies “to enable regulated entities to experiment with AI test projects without unnecessary or unduly burdensome regulation or expectation of enforcement actions.” The seven financial regulatory agencies — the Securities and Exchange Commission (SEC), Consumer Financial Protection Bureau (CFPB), Office of the Comptroller of the Currency (OCC), National Credit Union Administration (NCUA), Federal Housing Finance Agency (FHFA), Federal Deposit Insurance Corporation (FDIC), and Federal Reserve — would “evaluate and potentially waive or modify existing rules for approved AI test projects.”
“This commonsense bill will allow for experimentation while putting guardrails in place to strengthen,” said Representative Josh Gottheimer (D-NJ-5), a leading House co-sponsor.
The reintroduction of this bill echoes the federal government’s greater deregulatory stance, exemplified by President Trump’s recently released AI Action Plan, which champions regulatory flexibility as key to maintaining US leadership in AI. The plan explicitly endorses the use of regulatory sandboxes across federal agencies, framing them as essential to innovation and global competitiveness.
Private sector leaders in finance, one of the most advanced adopters of AI, have largely welcomed the approach, emphasizing that existing laws can address misuse without stifling innovation by overregulating the technology itself. Committee hearings and markups in both chambers have yet to be scheduled; despite its bipartisan sponsorship, its ultimate fate in Congress is uncertain.
Top Senate Democrats Denounce Resuming AI Chip Sales to China
On July 28, top Senate Democrats voiced their “grave concern” in a joint letter to Commerce Secretary Howard Lutnick, denouncing “the Trump administration’s decision to reverse course and allow US companies to sell certain advanced semiconductors to the People’s Republic of China (PRC), despite evidence that these chips have proved critical for artificial intelligence (AI) development in the PRC.”
The reversal of the previously imposed sales restrictions on a leading AI chipmaker serving the Chinese market underscores the tension between market access and national security concerns in the ongoing US-China AI race. The reversal, Democrats argue, directly contradicts the administration’s AI Action Plan, “which purports to strengthen export control efforts on AI compute.”
“Restricting access to leading-edge chips has been the defining barrier for the PRC’s efforts to achieve AI parity,” the senators wrote, underscoring Congress’s bipartisan stance on limiting strategic technologies to geopolitical rivals.
At the heart of the criticism is a broader concern that the administration’s evolving approach to export controls is undermining US strategic priorities in AI. The decision to reverse the initial April ban, along with the earlier decision to repeal the AI Diffusion Rule, reflects a shift toward a more industry-friendly stance that may address concerns from stakeholders worried that export restrictions could stifle domestic innovation and free trade.
While the letter expresses anxiety over China’s access to AI chips and emphasizes enduring concerns in Congress about technological advantage and national security in the AI race, the bipartisan support in Congress for regulatory sandboxes in financial services reflects a growing consensus within the federal government to deregulate and promote AI adoption.
Privacy Tip #454 – Students Sue Kansas School District Over AI Surveillance Tool
Current and former students at Lawrence High School and Free State High School, located in Lawrence, Kansas, have sued the school district, alleging that its use of an AI surveillance tool violates their privacy.
The allegations revolve around the school district’s use of Gaggle, which is an AI tool that mines the district’s Google Workspace, including Gmail, Drive, and other Google products used by students through the public schools’ network. Gaggle is designed to “flag content it deems a safety risk, such as allusions to self-harm, depression, drug use and violence.”
The plaintiffs are student journalists, artists, and photographers who reported on Gaggle or had their work flagged and removed by the AI tool. They allege that Gaggle could access their notes, thereby allowing the district to access them as well, in violation of journalists’ legal protections. They allege that “[s]tudents’ journalism drafts were intercepted before publication, mental health emails to trusted teachers disappeared, and original artwork was seized from school accounts without warning or explanation.”
They further allege that the district’s use of Gaggle is a “sweeping, suspicionless monitoring program” that “violated student rights by flagging and seizing student artwork.” They allege that “Gaggle undermines the mental health goals it attempts to address by intercepting appeals for help students may send to teachers or other trusted adults.”
The lawsuit requests a permanent injunction to stop the use of Gaggle in the district, along with compensatory, nominal, and punitive damages as well as attorney’s fees.
AI tools have their place in today’s business environment, but without careful protocols implemented to protect user privacy, organizations can find themselves in lawsuits that will drain resources and time away from more critical areas of need.
Use Natural Intelligence Before Artificial Intelligence
The cutting-edge technology encompassing Artificial Intelligence (AI) solutions is astonishing, and this technology has led to a steady increase in organizations adopting or developing their own AI solutions.
Several healthcare and customer service organizations are using AI technology to streamline business processes by mimicking or replacing humans with robotics, and this has led to noteworthy cost savings as a byproduct of early AI adoption.
Not all early adopters of AI were able to reap these rewards. Because of AI’s inherent security risks, some organizations experienced unplanned business disruptions, significant reputational damage, and financial loss. For example, when Samsung employees used ChatGPT for internal code review purposes, they accidentally leaked confidential information, which resulted in Samsung banning the use of generative AI tools.
Is It Time to Embrace AI, and Do Its Strengths Outweigh Its Weaknesses?
According to a publicly accessible AI solution, its most significant information security risks are:
Phishing Attacks
Ransomware
Advanced Persistent Threats (APTs)
Zero-Day Exploits
Man-in-the-Middle (MitM) Attacks
Insider Threats
DDoS Attacks
Misconfigured Access Controls
SQL Injection Vulnerabilities
AI-Enabled Attacks
Most of these attack methods have been around for years, and none should be taken lightly: each can expose an organization to unauthorized access to its network and information systems. In turn, unexpected information system downtime, significant disruptions to business services, reputational damage, and financial loss could result.
Moreover, AI’s mainstream usage has heightened the data privacy and security risks that result from deceptive practices. Take, for example, AI-enabled attacks that leverage generative models to create deepfake news, videos, and audio, misleading people into thinking that real events have occurred when in fact they have not.
Other AI-enabled attack methods use weaponized malware that mimics legitimate network traffic, making it much harder for network operations teams to detect and defend against. The byproduct of these efforts can include accidental misconfiguration of security controls (e.g., antivirus software) and increased susceptibility to malware, allowing an adversary to gain unauthorized access to Protected Health Information (PHI) and exfiltrate data.
How Does an Organization Defend Against These Attack Methods?
Deploying a customized AI solution that integrates predictive behavioral analysis into network monitoring is one approach that can enable timelier detection of unusual network activity (a minimal detection sketch follows the list below). For supplemental support, organizations should consider:
Creating an AI governance policy
Implementing strict information access controls
Using secure coding practices
Employing data encryption to prevent unauthorized data manipulation
Providing relevant security awareness training
Conducting continuous IT audits and network monitoring activities to detect behavioral anomalies such as unauthorized AI use
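To make the behavioral-anomaly item above concrete, the following is a minimal, illustrative sketch of unsupervised anomaly detection over per-host network features, assuming Python with NumPy and scikit-learn. The feature names, synthetic baseline, and thresholds are hypothetical stand-ins, not a production monitoring design.

```python
# Minimal sketch: flag unusual network activity with an unsupervised anomaly detector.
# Feature names and values are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-host features: [bytes_out_mb, distinct_destinations, after_hours_requests]
baseline = rng.normal(loc=[50, 20, 2], scale=[10, 5, 1], size=(500, 3))

# A few hosts exfiltrating data or calling unapproved AI APIs at odd hours.
suspicious = np.array([[400, 150, 40], [350, 120, 35]])

model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

for host_id, features in enumerate(suspicious):
    if model.predict(features.reshape(1, -1))[0] == -1:
        print(f"host {host_id}: anomalous activity, route to security review")
```

In practice, the features would come from flow logs or a SIEM export, and flagged hosts would feed an investigation queue rather than an automated block.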
Managing AI Risks
The adoption of a comprehensive AI framework is essential for managing AI risks and will help ensure proper governance of AI solutions.
Below is a brief outline of notable frameworks worth considering.
In January 2023, the National Institute of Standards and Technology released its NIST AI Risk Management Framework (AI RMF), which addresses how to manage new and emerging risks related to AI.
Published by the European Commission, the Ethics Guidelines for Trustworthy AI require AI systems to be lawful, ethical, and robust.
Practices for responsible and secure use of AI systems are detailed in Google’s Secure AI Framework (SAIF).
Conclusion
Although AI risks can be prevented and mitigated, failure to govern and deploy a secure AI system can result in significant fines imposed by regulatory bodies such as the Department of Health and Human Services (when protected health information is misused) or European Union authorities (if an organization fails to adequately implement data protection safeguards).
Before deploying AI solutions, organizations should establish AI ethical use committees to govern information security initiatives such as the deployment of guardrails that permit early detection and prevention of AI-related risks, secure system development life cycle practices, and alignment of controls with AI framework requirements and security control standards (e.g., NIST Cybersecurity Framework version 2.0).
This article was originally published in Financial Poise™ and is subject to that publication’s disclaimers.
Illinois Bans AI Therapy, Preserves Human Oversight in Care
On August 4, 2025, Illinois Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act into law. The Act, which took effect immediately, “prohibits anyone from using AI to provide mental health and therapeutic decision-making, while allowing the use of AI for administrative and supplementary support services for licensed behavioral health professionals.” The Act passed each chamber of the Illinois General Assembly before being signed by Governor Pritzker.
The law is designed “to protect patients from unregulated and unqualified AI products, while also protecting the jobs of Illinois’ thousands of qualified behavioral health providers. This will also protect vulnerable children amid the rising concerns over AI chatbot use in youth mental health services.”
The Act comes in response to increasing reports of the risks AI chatbots pose in the mental health space. A news agency reported one example that found “an AI-powered chatbot recommended a ‘small hit of meth to get through this week’ to a fictional former addict.”
The Act does not prohibit the use of AI tools for certain tasks. It specifically allows AI tools to be used for administrative support, which includes: (1) managing appointment scheduling and reminders; (2) processing billing and insurance claims; and (3) drafting general communications related to therapy logistics that do not include therapeutic advice. It also allows the use of AI tools for supplementary support in the provision of mental health services, as long as the licensed professional “maintains full responsibility for all interactions, outputs, and data use associated with the system.”
Before using an AI tool for supplementary support in therapy, the licensed professional must provide written notice to the patient that AI will be used, explain the purpose of the AI tool, and obtain consent to the use of AI during the therapy session. The Act further prohibits a licensed professional from using AI to “(1) make independent therapeutic decisions; (2) directly interact with clients in any form of therapeutic communication; (3) generate therapeutic recommendations or treatment plans without review and approval by the licensed professional; or (4) detect emotions or mental states.”
The Act does not apply to those providing religious counseling, peer support, or self-help materials and education resources available to the public that “do not purport to offer therapy or psychotherapy services.”
The Act provides the Illinois Department of Financial and Professional Regulation with enforcement authority to investigate violations and levy a fine against violators of up to $10,000 per violation.
Tasked with Troubling Content: AI Model Training and Workplace Implications
The discussion of Artificial Intelligence (“AI”) in the workplace typically focuses on whether the AI tool and model has a discriminatory impact.
This means examining whether the AI output creates an unlawful disparate impact against individuals belonging to a protected category.
However, that discussion rarely centers on the types of training data used, and whether the training data itself could have a harmful effect on the workers tasked with training the AI model.
Background
To be trained effectively, an AI model must first recognize the entire scope of data input—the good and the bad. For the AI model to recognize traumatic and harmful content and distinguish it from beneficial and safe content, humans are often required to identify and label the traumatic and harmful content—over and over and over—until the model learns it and can filter it out from good and safe content. This coding work is not only tedious for the human coders; it can also pose a danger of psychological harm, inadvertently creating an abusive and unsafe work environment.
Schuster v. Scale AI
In Schuster v. Scale AI, the Northern District of California is currently confronted with evaluating the psychological harm and potentially hostile working conditions that workers may experience when coding violent and toxic content in an AI model. In this case, a group of independent AI input contractors—known as “taskers”—filed a complaint alleging classwide claims of workplace psychological injury (e.g., depression, anxiety, and PTSD); “moral injury,” the emotional, behavioral, and relational problems that can develop when someone acts in ways that go against deeply held values; and “institutional betrayal,” the purported betrayal that can arise when an employer fails to take appropriate steps to prevent or respond appropriately to highly distressing workplace circumstances. The plaintiffs brought causes of action for negligence and violations of California’s Unfair Competition Law.
The plaintiffs in Schuster allege that defendants—operators of generative AI services—required them to input and then monitor psychologically harmful information in AI models. This harmful information, according to the complaint, pertained to suicidal ideation, predation, child sexual assault, violence, and other highly violent and disturbing topics. In some instances, the plaintiffs were purportedly required to engage in hours-long traumatic conversations with the AI, demanding complete mental focus as the AI prompted multiple follow-up questions pertaining to disturbing scenarios. Plaintiffs contend that, because of this repeated exposure to traumatic content, they developed PTSD, depression, anxiety, and other mental functioning problems. Plaintiffs further claim that they were not provided sufficient warning, support, or workplace safeguards.
Lessons and Takeaways for Employers
Schuster serves as a reminder to employers to exercise caution and diligence when engaging with AI and other new technologies. Whether an employer is training an internal AI model, deploying AI to assist with employment decisions, or contracting with a company that develops AI tools, this recent litigation underscores how workers’ interactions with AI may give rise to conduct that could be construed as unlawful. Employers should continue to monitor and audit workers’ interactions with AI to ensure that AI use and AI training do not create a hostile or abusive environment or otherwise violate workplace-related laws. Further, employers may consider implementing effective technological guardrails as well as providing support and notice to employees interfacing with certain AI tools.
If the plaintiffs are successful, Schuster could introduce changes across the AI data-labeling industry—such as the promulgation of more comprehensive disclosures, mental health resources, and stronger legal protections for employees and independent contractors involved in AI training.
Employers are reminded to:
Implement robust oversight mechanisms for workers tasked with potentially coding AI models for harmful content.
Notify employees of the types of data that they are expected to code in training the AI model.
Offer the option to opt out of being exposed to disturbing content.
Develop personalized and interactive training programs that address how workers should approach traumatic content in the workplace.
Provide mental and emotional health resources, including preventative measures, medical monitoring, and treatment.
Develop a procedure to investigate employee complaints concerning exposure to harmful content.
Recognize the variety of legal claims (e.g., moral injury and institutional betrayal) that workers may assert as a result of training AI models.
New Updates to CCPA Regulations: California’s Focus on Automated Decisionmaking Technology, Cybersecurity Audits, Risk Assessments, and More
On July 24, 2025, during a public meeting following public comment, the California Privacy Protection Agency (CPPA) Board unanimously approved amendments to the California Consumer Privacy Act (CCPA) regulations. These substantial changes include new obligations for businesses subject to the CCPA. Significantly, the updates emphasize the CPPA’s new regulatory focus on AI decisionmaking and cybersecurity in addition to privacy. In addition, the CPPA opted to open the Delete Request and Opt-Out Platform (DROP) regulations for further public comment on its proposed changes. Below is a summary of the key updates:
Automated Decisionmaking Technology
ADMT Defined – The updates introduce a new regulatory focus on automated decisionmaking technology (ADMT), which is defined as “any technology that processes personal information and uses computation to replace human decisionmaking or substantially replace human decisionmaking.” This definition does not cover automated technology used to assist in, but not entirely substitute for, human decisionmaking.
Consumer Rights – Under the new ADMT provisions, businesses must inform consumers of their opt-out and access rights with respect to the business’s use of ADMT to make any significant decisions about the consumer. “Significant decisions” are defined as decisions related to financial or lending services, housing, education opportunities, employment opportunities, or healthcare services.
Pre-Use Notice – Businesses must also provide pre-use notices regarding the use of ADMT. These notices should explain what the ADMT does, describe consumers’ opt-out and access rights, and provide a detailed description of how the ADMT works to make a significant decision about the consumer.
Annual Cybersecurity Audits
The CCPA final text introduces an annual cybersecurity audit requirement for businesses that meet a certain threshold. Businesses will be required to conduct annual, independent cybersecurity audits to assess how their cybersecurity program protects consumer personal information from unauthorized access and disclosure. Businesses are required to submit a certificate of completion to the CPPA annually.
Audit Components – Components of a cybersecurity program that fall into the audit’s scope include the business’s cybersecurity measures such as authentication, access controls, inventory management, secure hardware and software configurations, network monitoring, and cybersecurity education. The report must outline, in detail, gaps or weaknesses in the organization’s policies or cybersecurity program components that the auditor deemed to increase the risk of unauthorized access or activity.
Impartiality Requirement – Audits must be performed by an independent and qualified professional. If the auditor is internal to the business, the CCPA requires specific measures to be put in place to ensure the auditor’s impartiality and objectivity.
Repurposing Audits – A cybersecurity audit used for another purpose, such as an audit that uses the NIST Cybersecurity Framework 2.0, may be used for this audit purpose, provided that it meets all of the requirements outlined in the CCPA.
Compliance Timeline – The deadline for completing the initial cybersecurity audit depends on the business’s annual revenue. All covered businesses must complete the audit by April 1, 2030, but some will be required to do so as early as April 1, 2028.
Pre-Processing Risk Assessments
Under the new regulations, any business that poses a significant risk to consumers’ privacy in processing personal information must conduct a risk assessment before initiating that processing. The goal of a risk assessment is to restrict or prohibit the processing of personal information if the resulting privacy risks to the consumer outweigh the benefits to the business and other stakeholders. Risk assessments must be reviewed and updated once every three years. If there is a material change in processing activity, a business must update its risk assessment as soon as possible, but no later than 45 calendar days from the change.
Broad Definition of Significant Risk – The CCPA outlines several activities that are deemed to present significant risk, including selling or sharing personal information and processing sensitive personal information. This is an expansive definition, because most businesses share personal information with third parties.
Risk Assessment Components – Risk assessments must document a business’s purpose for processing consumer personal information and the benefits of that processing to the organization. They must also document the categories of information to be processed and consider the negative impacts of the processing on consumers’ privacy. The business must further identify the safeguards it plans to implement for the processing, such as encryption and privacy-enhancing technologies.
Compliance Timeline – For risk assessments conducted in 2026 and 2027, businesses must submit an attestation to the CPPA by April 1, 2028. The individual submitting the risk assessment attestation must be a member of the business’s executive management team who is directly responsible for, and has sufficient knowledge of, the business’s risk assessment compliance. Risk assessments must be maintained for as long as the processing continues or five years after completion, whichever is later, and available for inspection by CPPA or the Attorney General.
Insurance
The final CCPA changes also include clarification of the law’s application to insurance companies. Insurers are required to comply with the CCPA for personal information collected outside of an insurance transaction. The final text provides an example: if an insurance company collects the personal information of website visitors who have not applied for any insurance product or service in order to tailor personalized advertisements to those users, the insurer must comply with the CCPA with respect to that information. Since most websites use tracking technologies, insurance companies should assess their compliance with the CCPA promptly.
Recommended Next Steps
The California Office of Administrative Law (OAL) still needs to review and approve these changes, and it has 30 business days after receiving the final text from the CPPA to do so. However, many industry experts expect that the OAL will make only minor changes, if any, and businesses should expect it to approve most of the final text. The regulations take effect in 2027, so preparation for these new compliance obligations should be a top priority. The CPPA’s next meeting is September 26, 2025, where it is expected to present its annual enforcement report and priorities.
The Rise of “Acquihiring” in a Post-Layoff Tech Sector

As a practicing M&A attorney representing both strategic acquirers and venture-backed targets, I have had a front-row seat to the fundamental transformation of Silicon Valley’s exit landscape. The rise of acquihiring represents more than a market trend; it is an evolutionary response to capital scarcity, talent wars, and regulatory uncertainty that demands sophisticated legal frameworks and innovation-friendly policy approaches.
As we come to the midpoint of Summer 2025, the tech industry continues to lay off staff, as big tech companies implement strategic workforce reductions. These layoffs are the result of the quick and efficient integration of artificial intelligence (AI), as firms restructure, phasing out roles in the process. Additional factors include cost-cutting initiatives, operational streamlining, and performance-based cuts targeting underperforming employees.
According to Layoffs.fyi, 159 tech companies laid off approximately 80,000 workers as of July 17, 2025. This follows a 2024 trend in which at least 95,000 U.S.-based tech workers lost jobs, per Crunchbase data.
Amid frozen funding rounds and startup pivots, a familiar deal structure is gaining traction: the acquihire. For startups stuck between seed and Series A funding, an acquihire, where the company is acquired specifically for its talent rather than its products or revenue, offers an alternative exit to an IPO or scaling.
Unlike the more traditional merger or acquisition, acquihires prioritize employees, sometimes bringing on entire teams. Discreetly done, these types of hires help corporations compete without the complexities of full integrations. Additionally, an acquihire, which can be less costly than a traditional acquisition, involves structuring the deal through stock or asset sales, with most value directed to employee retention packages. The cost hinges on the perceived value of the employees to the buyer, with monetary incentives like signing bonuses, stock option conversion, and new option packages, combined with restrictive covenant agreements, designed to ensure staff remain locked-in following the transaction with the buyer.
Market Data for Acquihires
The bar chart below illustrates the rise in acquihire deals from 2020 through July 2025, with Q1 and Q2 of 2025 showing increased activity compared to previous periods.
The data shows that after four years of being frozen out of the acquihire market under former FTC chair Lina Khan, big tech companies are back with a vengeance. The following chart compares the acquihire activity of the six most prolific big tech buyers in 2024 with their activity so far in 2025, and we are barely halfway through the year:
Several factors are driving this increased activity.
Talent Availability: Layoffs have increased the pool of skilled tech workers, particularly those with niche expertise in AI, cybersecurity, and quantum computing. Upon layoff, these skilled tech workers often form or join new startups. Larger companies are targeting these startup talent pools through acquihires to secure specialized skills.
AI-Driven Focus: The race to dominate AI is accelerating acquihire activity. Companies are acquiring startups or teams with advanced AI capabilities, including generative AI, machine learning, and natural language processing expertise, to stay competitive.
Regulatory Clarity in Washington: While the new administration cannot be characterized as “tech friendly,” there seems to be a burgeoning understanding that mergers will be scrutinized for structural and vertical overlap but not shut down altogether as was the case under the prior administration.
Legal and Regulatory Considerations
Acquihires introduce legal challenges, requiring a team of attorneys who can advise on a wide variety of issues beyond the typical flurry of M&A contract issues, such as employment law, intellectual property (IP), compensation structures, and compliance.
Retention and Equity Challenges
Retention agreements are an important part of this process, with equity grants, signing bonuses, or extended vesting schedules used to ensure talent retention. Legal counsel should review these incentives to be sure they comply with prior obligations, such as stock option agreements, Simple Agreements for Future Equity (SAFEs), or investor rights from the startup’s cap table. Missteps can land the buyer in court and lead to disputes with former investors or employees.
The IP Minefield
IP rights are always a significant hurdle to a business transaction. In an acquihire situation, acquiring companies need to verify that incoming teams possess clean, transferable IP. Complications arise when code was developed pre-incorporation, uses improperly licensed open-source software, or involves co-ownership with universities, former employers, or contractors.
Compliance in a Regulated Landscape
Acquihires are becoming increasingly visible to regulators. Cross-border deals require navigating local labor laws, tax structures, and other requirements, such as visa sponsorship. Antitrust scrutiny may apply if the acquired startup posed a competitive threat. Compliance with the GDPR and other applicable data privacy regulations must also be a consideration.
Strategic Implications
For corporate attorneys, legal advisors, and other deal professionals, proactive planning is essential to the successful execution of an acquihire. Acquihires are driven by talent retention as opposed to revenue or product synergies in traditional acquisitions. Advisors play a key role in structuring agreements that protect the acquiring company while ensuring key team members are incentivized to stay post-transaction. When done the right way, acquihiring provides a strategic edge. When mismanaged, it can lead to legal and financial liabilities.
As we progress into Fall 2025 and AI continues to shape the tech industry, acquihires remain a strategic lever for companies. Approached with diligence, they are a powerful way to grow and stay competitive. This model offers honorable (if not always lucrative) exits for founders and investors. Will it risk consolidating AI innovation within a few dominant players, sidelining employees, and narrowing the competitive landscape? This administration, like the last one, seems to be watching carefully to safeguard the competitive landscape while tamping down any new regulation.
Policymakers and regulators may be weighing whether acquihiring fuels progress or stifles broader innovation, while in truth, Silicon Valley needs the surplus of AI companies to have soft landings (or big exits) to recycle deployed capital back into the startup ecosystem, which has been thirsty for exits since the end of the zero interest rate policy (ZIRP) era in 2022. The answer could redefine the very structure of the AI economy for decades.
Sources and data references
The author leveraged numerous publicly available data sources for the above-referenced statistics and charts:
Layoffs.fyi – Tech layoffs tracker and database
Crunchbase – Startup funding and acquisition data
TechCrunch – Technology industry news and analysis
McKinsey & Company – Technology trends outlook 2025
Deloitte Insights – 2025 technology industry outlook
Visual Capitalist – Big Tech hiring trends analysis
Robert Half – Technology hiring trends report 2025
CB Insights – Acquihire market analysis and trends
Tracxn – Corporate acquisition tracking (Google, Apple, Microsoft)
CompTIA – State of the tech workforce 2024
Korn Ferry – Talent acquisition trends 2025
AI for Judges Is Both Inevitable and a Good Thing, Done Correctly
Recently, two federal judges — Henry Wingate in Mississippi and Julien Neals in New Jersey — quietly withdrew major rulings. Counsel had flagged glaring factual errors: misnamed parties, phantom declarations, fabricated quotations.
The courts offered no explanation. But the fingerprints of generative AI are hard to miss.
For the past two years, headlines have focused on lawyers who copy-pasted ChatGPT outputs into their filings, often with comical or catastrophic results.
But this time, the hallucinations didn’t come from a brief. They came from the bench. Which means the integrity of the judicial record is now at risk not just from what lawyers submit, but from what judges sign.
So what should the courts do?
There are only three options: abstinence, bureaucracy, and better tools.
The first two are tempting, but dangerous. The only viable path is building better tools.
First, abstinence: prohibit AI entirely. Critics warn that AI risks corrupting judicial legitimacy. Setting aside that “just say no” rarely works, these critics make a more fundamental error: they’ve mistaken the symptom for the cause.
Hallucinated citations aren’t the crisis — they’re evidence of one. Judges aren’t turning to ChatGPT out of laziness. They’re turning to it because they’re drowning: 66 million filings a year — that’s 120 cases a minute, around the clock — shrinking staff, unrelenting deadlines, and dockets that demand expertise and speed.
In this system, backlogs grow, hearings are delayed, and litigants lose faith that anyone is truly listening.
That breakdown — not AI — is what’s partially driving the collapse in public trust. Since 2020, confidence in the courts has dropped from 59% to 35% — one of the steepest declines Gallup has ever measured, steeper even than in some authoritarian states.
The scandal isn’t a fake quote. It’s the system that made a judge rely on a chatbot in the first place.
If we care about legitimacy, we must care about capacity. And if we care about capacity, abstaining from the technology that gives judges their best chance of catching their breath is no option at all.
The second option is bureaucracy, which offers the comforting illusion of control. Policies, guidelines, and disclaimers satisfy a familiar instinct: if the tool is risky, regulate the user.
But this approach rests on the incorrect assumption that governance can compensate for unfit tools. It can’t.
Consumer chatbots like ChatGPT are not just error-prone. They’re deceptively error-prone. They don’t spew nonsense; they generate citations that sound plausible, quotes that feel familiar, authorities that glide seamlessly into the legal argument — until they collapse under scrutiny.
That’s not a misuse problem. That’s a design problem. Bureaucracy offers false reassurance, papering over errors until it’s too late.
Worse, bureaucracy shifts the burden to judges to verify quotes, trace sources, and double-check the law clerks. But for a system already underwater, more paperwork doesn’t mean safer use of AI. It means more pressure at a moment when judges can least absorb it.
Bans and rules do not solve the underlying problem, and may even exacerbate it. The only solution is building better tools. The good news is they already exist — and they’re spreading.
Across courts nationwide, judges and clerks are quietly adopting systems that function less like chatbots and more like junior clerks, mapping claims to legal elements, linking elements to facts, and grounding every inference in controlling law.
What sets these tools apart is discipline. They reflect the key aspects of the rule of law in their software code. They are designed to be neutral, reasoning like a judge rather than an advocate; correct, with no hallucinated cases or doctrinal misstatements; faithful, surfacing the right law and framing the real issue; and transparent, with every step traceable and open to challenge. A tool meant to serve the rule of law should reflect it.
Recent AI failures in the courtroom shouldn’t trigger retreat, but reform. The problem isn’t that judges used AI; it’s that they used the wrong kind. We don’t need to ban machines from the courthouse. We need to incorporate machines that belong there. The sooner courts get on board with testing and welcoming AI for judges, the sooner the AI will be up to the tasks required of it.
If we get legal AI right, the payoff is profound: faster triage, earlier error detection, less delay, more human attention on what machines can’t do: scrutinize testimony, weigh equities, render judgment.
The rule of law does not forbid the use of AI. It constrains it. And it is only by submitting our tools to those constraints that we can justify their presence in the courtroom. The goal is not to automate judgment, but to protect it.
All of the views and opinions expressed in this article are those of the author and not necessarily those of The National Law Review.
Balancing Innovation and Oversight: Federal AI Policy in Transition
Consistent with their broader policy differences, the Trump and Biden administrations can be characterized by their notably divergent approaches to regulating artificial intelligence (AI). Indeed, a recent White House publication underscores how the Trump administration has reoriented AI policy, highlighting a significant shift in priorities and deregulation compared to the Biden era. On July 23, 2025, the White House published “Winning the Race: America’s AI Action Plan” (the AI Action Plan), which centers on three key pillars: innovation, infrastructure, and international diplomacy and security.
Federal AI Policy Continues to Develop
The AI Action Plan was published exactly six months after the issuance of Executive Order No. 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” an initiative to maintain global leadership in AI. The release of the AI Action Plan comes on the heels of the recent passage of the One Big Beautiful Bill Act (OBBBA), a sweeping legislative package addressing a wide range of policy areas. Signed into law on July 4, 2025, the OBBBA notably excluded a heavily debated provision that would have imposed a 10-year federal moratorium on state-level AI regulation.
Though the AI Action Plan makes no specific mention of this moratorium, it implicitly revives the concept in spirit, noting that “AI-related Federal funding” should not be directed toward “states with burdensome AI regulations that waste these funds.”[1] Though the AI Action Plan further caveats that the federal government should not “interfere with states’ rights to pass prudent laws that are not unduly restrictive to innovation[,]” its emphasis on deregulation echoes throughout the 28-page document.[2] Its message is clear: “[t]o maintain global leadership in AI, America’s private sector must be unencumbered by bureaucratic red tape.”[3]
The Trump administration has taken a significantly different approach to AI governance compared to the Biden administration. Whereas the current administration emphasizes a pro-industry, deregulatory framework, the Biden administration pursued a robust regulatory strategy to govern AI. With Executive Order No. 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” the prior administration sought to meet its goal of making AI “safe and secure” by implementing “policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from [AI] systems before they are put to use.”[4] The current administration’s aforementioned Executive Order No. 14179 signaled a marked shift in policy, “revok[ing] certain existing AI policies and directives that act as barriers to American AI innovation, clearing a path for the United States to act decisively to retain global leadership in artificial intelligence.”[5] Stating that the prior Executive Order No. 14110 “foreshadowed an onerous regulatory regime,” the AI Action Plan recommends a number of policy actions, one of which requires the Federal Communications Commission (FCC) to “evaluate whether state AI regulations interfere with the agency’s ability to carry out its obligations and authorities under the Communications Act of 1934.”[6]
State-Level AI Regulatory Trends
The AI Action Plan may signal that states need to reevaluate their regulatory regimes; however, despite the recent shift toward deregulation of AI at the federal level, state governments have maintained and intensified their regulatory activities around AI, continuing trends established under the Biden administration. So far in the 2025 legislative session, all 50 states, Puerto Rico, and Washington, DC, have introduced legislation related to AI, and at least 38 states have adopted or enacted AI measures.[7] In 2024, state legislatures introduced nearly 700 AI-related bills, with 31 states successfully enacting laws or formal resolutions concerning AI regulation.[8]
These state-level initiatives cover a wide range of issues and industries, which reflect substantial and diverse approaches to AI governance. For example, Colorado’s enactment of a comprehensive AI law mandated that developers of “high-risk” AI systems exercise reasonable care to prevent algorithmic bias and disclose the use of AI technologies to consumers.[9] Similarly, California, a state that has been aggressive in its AI regulation, approved a package of bills addressing transparency in generative AI services and deepfake technologies, though some of those bills were vetoed by Governor Newsom.[10] And just a few weeks ago, Texas Governor Greg Abbott signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA), which will take effect on January 1, 2026.[11] Other states, including New York, Illinois, Maryland, Tennessee and New Hampshire, have also implemented varied regulations aimed at addressing transparency, privacy, fairness and accountability in AI applications.[12] Currently, approximately 1,000 AI-related bills are pending across the country, spanning numerous sectors and regulatory approaches.[13]
Looking Ahead: Balancing Innovation and Oversight
The AI Action Plan reflects a decisive federal preference for deregulation, encouraging private sector leadership and discouraging restrictive state policies through indirect means, such as funding disincentives. However, the ongoing surge of legislative activity at the state level highlights a growing divergence in regulatory philosophy. Whether this divide results in a patchwork of inconsistent AI standards, increased federal preemption efforts, or a new cooperative governance model will be a defining issue for policymakers and stakeholders in the coming year. Questions of jurisdiction, and the broader challenge of balancing innovation with oversight, are becoming increasingly urgent.
Notable Dates
October 30, 2023: Executive Order No. 14110 issued (“Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”)
January 23, 2025: Executive Order No. 14179 issued (“Removing Barriers to American Leadership in Artificial Intelligence”)
July 4, 2025: OBBBA signed (without moratorium on state AI regulation)
July 23, 2025: The AI Action Plan published
[1] AI Action Plan at p. 3.
[2] Id.
[3] Id.
[4] 88 Fed. Reg. 75191 (Nov. 1, 2023).
[5] 90 Fed. Reg. 8741 (Jan. 31, 2025).
[6] AI Action Plan at p. 3.
[7] “Artificial Intelligence 2025 Legislation,” National Conference of State Legislatures (NCSL) (updated July 10, 2025).
[8] Josh Hansen and Camila Tobon, “Year in Review: Privacy Compliance and Artificial Intelligence Developments,” JD Supra (Dec. 18, 2024).
[9] Id.
[10] Id.
[11] Tex. H.B. 149 (NS) (June 22, 2025).
[12] Josh Hansen and Camila Tobon, “Year in Review: Privacy Compliance and Artificial Intelligence Developments,” JD Supra (Dec. 18, 2024).
[13] “Artificial Intelligence 2025 Legislation,” National Conference of State Legislatures (NCSL) (updated July 10, 2025).
New White House AI Action Plan Aims to Remove Barriers and Shift Regulatory Focus
The Trump administration recently unveiled a new action plan relating to artificial intelligence (AI) technology that focuses on removing regulations and other barriers, building upon President Donald Trump’s prior executive orders (EOs) related to AI. The plan seeks to pressure states to deregulate, even as many states and local jurisdictions are expected to step in to address regulatory gaps left by the federal government’s hands-off approach.
Quick Hits
The Trump administration released a new AI action plan that presents a broad roadmap for AI development in the United States.
The plan aims to eliminate regulatory barriers and promote innovation in AI, promote the adoption of AI across the federal government, and increase workforce development in the private sector.
The plan further seeks to discourage states from imposing their own regulations by recommending that funding for AI projects be sent to states with favorable regulatory climates.
On July 23, 2025, the White House released a policy document titled “Winning the Race: America’s AI Action Plan,” setting forth a roadmap for federal AI policy structured around three pillars: (I) innovation, (II) infrastructure, and (III) international diplomacy and security. The plan could have significant implications for employers adopting and utilizing AI and similar technologies amid a patchwork of regulations across states and even cities.
Among its wide-ranging set of goals, the plan focuses on “remov[ing] red tape and onerous regulation” concerning AI, accelerating the adoption of AI across the federal government, and promoting AI education and workforce development. In particular, the plan directs certain federal funding decisions for AI-related projects to be guided by states’ regulatory climates, potentially pressuring states and local jurisdictions to avoid implementing new AI laws or regulations.
Shifting Policy and Regulatory Landscape
Although AI promises productivity and efficiency, potential risks remain—particularly in the area of automated decisionmaking tools. While the Biden administration previously sought to balance AI innovation with regulatory safeguards to protect employees and consumers (addressing concerns such as employment discrimination, privacy, and job displacement), President Trump has reversed course, rescinding previous executive orders designed to impose stronger oversight and controls.
The Trump administration’s new AI action plan signals a regulatory rollback, instructing the Office of Science and Technology Policy (OSTP) to solicit information from businesses and the public on federal regulations that “hinder AI innovation and adoption.” Further, the Office of Management and Budget (OMB) is directed to identify and revise or repeal regulations, guidance, administrative orders, and other federal policies that “unnecessarily hinder AI development or deployment.”
New Limits on State Regulation?
Crucially, the action plan addresses the interplay between federal and state regulations. Several states and local jurisdictions—including California, Colorado, Illinois, New York City, Texas, and Utah—have implemented AI laws or regulations. Onlookers have anticipated that state and local jurisdictions will continue to implement new employee and consumer protections in the absence of federal action.
However, the administration’s new action plan recommends that OMB work with federal agencies with “AI-related discretionary funding programs to ensure, consistent with applicable law, that they consider a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” (Emphasis added.)
That recommendation comes after congressional lawmakers proposed a 10-year moratorium on state AI regulations in the recently passed federal spending bill, only to drop it at the last minute before passage. These efforts could influence future state laws and regulations.
Further, the plan recommends that Federal Trade Commission (FTC) investigations initiated under the prior administration “ensure that they do not advance theories of liability that unduly burden AI innovation.” The plan also suggests that FTC final orders, consent decrees, and injunctions be reviewed and modified or set aside to the extent that they “unduly burden AI innovation.”
Addressing Workforce and Labor Market Implications
In addition, the action plan focuses on AI’s workforce implications, emphasizing the need for AI literacy and skills development to ensure that American workers are equipped to thrive in an AI-driven economy. The plan follows President Trump’s EO 14277 to promote education on and integration of AI in K-12, higher education, and workplace settings through public-private partnerships with industry leaders and academic institutions.
Specifically, the AI action plan recommends that the U.S. Department of the Treasury issue guidance clarifying that AI literacy and skills development programs may qualify for eligible educational assistance as a tax-free working condition fringe benefit under Section 132 of the Internal Revenue Code. The plan also recommends that the U.S. Department of Labor (DOL) establish an “AI Workforce Research Hub” to evaluate and mitigate the impact of AI on the labor market, including funding retraining for individuals impacted by AI-related job displacement.
Next Steps
Employers may want to review their use of AI in the workplace and consider the potential benefits. They may also want to invest in training programs and collaborate with educational institutions to prepare their workforce for the technological advancements brought about by AI.
At the same time, potential AI risks remain, implicating existing federal, state, and local laws on antidiscrimination, privacy, and automated decisionmaking tools. It is unclear whether states and local jurisdictions will limit enforcement of existing AI regulations and/or delay implementing new laws and regulations. Regardless of state regulatory approaches, employers may want to continue implementing policies and procedures that set forth reasonable guardrails that allow for innovation and expanded AI use while limiting potential risks associated with the use of automated decisionmaking tools in the workplace.
Moreover, employers may want to audit the results of AI when used to make employment decisions to promote fairness, accuracy, and appropriate human oversight, as well as evaluate whether it would be appropriate to provide affected employees with the opportunity to appeal or request review of decisions made or materially influenced by AI.
New York Enacts Legislation Regulating Algorithmic Pricing and AI Companions
New York recently passed a bill that amends the New York General Business Law (the “Bill”) to impose new requirements on (1) the use of artificial intelligence (“AI”) for algorithmic pricing and (2) the operation of AI systems that simulate social human interactions (i.e., AI companions).
The requirements related to algorithmic pricing went into effect on July 8, 2025; those related to the operation of AI companions will go into effect on November 5, 2025.
Algorithmic Pricing Requirements
The Bill requires entities engaging in personalized algorithmic pricing using a consumer’s personal data to include a clear and conspicuous disclosure alongside the price that states “THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA”. “Entity” is broadly defined as “any natural person, firm, organization, partnership, association, corporation, or any other entity domiciled or doing business in New York state.” “Personal data” also is broadly defined as “any data that identifies or could reasonably be linked, directly or indirectly, with a specific consumer or device.” It does not include location data that is used by for-hire vehicles to calculate fares based on mileage and trip duration.
The disclosure requirement does not apply to prices offered to consumers who have an existing subscription-based contract for goods or services with an entity and where the price offered is less than the price for the same good or service set forth in the subscription-based contract. The Bill also does not apply to entities that are subject to New York’s insurance law or regulations, financial institutions or their affiliates subject to the Gramm Leach Bliley Act, or financial institutions as defined in New York’s financial services law.
The New York Attorney General may issue a cease and desist letter for violations of the disclosure requirement. If an entity fails to cure the violation, the Attorney General may seek a court injunction and civil penalties of up to $1,000 per violation. The Bill does not provide a private right of action.
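For engineering teams asked to operationalize the disclosure requirement described above, the following is a minimal sketch in Python. The PriceQuote fields and the subscription-exemption check are simplified assumptions for illustration; actual compliance determinations, including the other statutory exemptions, belong with counsel.

```python
# Minimal sketch of a disclosure check for personalized algorithmic pricing under the Bill.
# The exemption logic is deliberately simplified and is an illustrative assumption only.
from dataclasses import dataclass

DISCLOSURE = "THIS PRICE WAS SET BY AN ALGORITHM USING YOUR PERSONAL DATA"

@dataclass
class PriceQuote:
    amount: float
    used_personal_data: bool           # price personalized using the consumer's personal data
    below_existing_subscription: bool  # lower than the consumer's existing subscription price

def render_price(quote: PriceQuote) -> str:
    """Return the price text, appending the statutory disclosure when required."""
    needs_disclosure = quote.used_personal_data and not quote.below_existing_subscription
    label = f"${quote.amount:.2f}"
    return f"{label} - {DISCLOSURE}" if needs_disclosure else label

print(render_price(PriceQuote(19.99, used_personal_data=True, below_existing_subscription=False)))
```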
AI Companion Requirements
The Bill imposes specific requirements on operators of AI companions. The Bill defines an “AI companion” as a system using AI, generative AI and/or emotional recognition algorithms designed to simulate a sustained human or human-like relationship with a user. The Bill prohibits AI companion operators from providing an AI companion unless the AI companion contains a protocol to take reasonable efforts to detect and address suicidal ideation or expressions of self-harm expressed by a user to the AI companion. Such protocols include detection of user expressions of suicidal ideation or self-harm and a notification to the user that refers them to crisis service providers.
In addition, AI companion operators must notify users at the beginning of any AI companion interaction that they are not communicating with a human, and must repeat that notification at least every three hours during continued interactions.
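Operationally, this timing rule reduces to a check at session start and on a rolling three-hour clock. The sketch below is a hypothetical illustration under that reading; the class and method names are our own assumptions, and the Bill does not prescribe any particular technical implementation.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the disclosure timing rule as summarized above;
# names and structure are assumptions, not statutory requirements.
DISCLOSURE_INTERVAL = timedelta(hours=3)

class CompanionSession:
    """Tracks when the 'not communicating with a human' notice is due."""

    def __init__(self) -> None:
        self.last_disclosure: datetime | None = None

    def disclosure_due(self, now: datetime) -> bool:
        # Due at the start of the interaction, and again once three hours
        # have elapsed since the most recent notice.
        if self.last_disclosure is None:
            return True
        return now - self.last_disclosure >= DISCLOSURE_INTERVAL

    def record_disclosure(self, now: datetime) -> None:
        self.last_disclosure = now
```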
With respect to enforcement, where the New York Attorney General believes an AI companion operator has violated these requirements, it may bring an action to enjoin the unlawful conduct and may seek civil penalties of up to $15,000 per day. The Bill does not provide a private right of action.
White House Aims to Accelerate Environmental Permitting for Data Centers
On July 23, 2025, the White House issued an Executive Order titled “Accelerating Federal Permitting of Data Center Infrastructure.” Released alongside “America’s Artificial Intelligence (AI) Action Plan,” the Order reflects a broader federal goal to reduce permitting delays for large-scale data center projects supporting AI workloads and national infrastructure. The reforms focus on streamlining federal environmental review and permitting processes, aiming to address longstanding regulatory hurdles that have historically contributed to project delays and cost overruns.
“Qualifying Projects” Expressly Defined
The Order defines “Data Center Project” as a facility requiring more than 100 megawatts (MW) of new load “dedicated to AI inference, training, simulation, or synthetic data generation.” It also defines “Covered Components” as the “materials, products, and infrastructure” necessary to build Data Center Projects or upon which they depend.
Data Center Projects and Covered Component Projects are deemed “Qualifying Projects” under the Order so long as they involve a total capital investment exceeding $500 million, require at least 100 megawatts of new electrical load, protect national security, or are otherwise designated by the Secretary of Defense, the Secretary of the Interior, the Secretary of Commerce, or the Secretary of Energy. The scope of the environmental permitting reform efforts is limited to these categories, and early-stage screening will be necessary to confirm eligibility.
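Because these eligibility criteria are disjunctive, early-stage screening can begin as a simple checklist. The sketch below is purely illustrative; the field names and structure are our own assumptions based on the summary above, and the Order’s actual definitions control.

```python
from dataclasses import dataclass

# Hypothetical screening sketch based on the criteria summarized above;
# field names and structure are assumptions, not the Order's text.
@dataclass
class ProjectProfile:
    capital_investment_usd: float      # total anticipated capital investment
    new_electrical_load_mw: float      # new electrical load required, in MW
    protects_national_security: bool
    designated_by_secretary: bool      # Defense, Interior, Commerce, or Energy

def appears_qualifying(project: ProjectProfile) -> bool:
    """Rough first-pass check against the 'Qualifying Project' criteria."""
    return (
        project.capital_investment_usd > 500_000_000
        or project.new_electrical_load_mw >= 100
        or project.protects_national_security
        or project.designated_by_secretary
    )
```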
Siting Data Centers on Brownfields and Federal Land May Facilitate Faster Development
The Order emphasizes the siting of data center development on previously disturbed land, particularly brownfield and Superfund sites, as well as on federally controlled land. Specifically, it directs the U.S. Environmental Protection Agency (EPA) to identify eligible sites and to develop guidance to expedite environmental reviews, enabling project sponsors to leverage existing cleanup certifications and site infrastructure to streamline permitting and minimize potential controversy.
In addition, the Order directs the Departments of the Interior, Energy, and Defense to authorize data center construction on suitable federal lands. Projects on federal land typically require only federal permits and approvals, which avoids the need for additional lengthy state and local review processes.
New Federal Efforts to Expedite Environmental Review and Permitting
National Environmental Policy Act (NEPA) Review: To reduce statutory NEPA review timelines, the Order directs the Council on Environmental Quality (CEQ) to coordinate with agencies in identifying or establishing new categorical exclusions for data center-related construction activities that customarily result in less-than-significant environmental effects. Categorical exclusions are a procedural mechanism that, when applicable, can exempt a project from preparing lengthy environmental assessments or environmental impact statements, thereby facilitating a quicker approval process.
In addition, in a potentially significant move for project structuring, the Executive Order establishes a presumption that projects receiving less than 50% of their total capital costs from federal financial support do not constitute “major Federal actions” under NEPA. As a result, many data center projects with limited federal funding may be exempt from NEPA’s environmental review requirements. This federal-funding threshold could materially reduce the number of projects subject to NEPA altogether, though its ultimate impact will depend on how federal agencies interpret and implement the Order’s guidance.
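The funding-share presumption itself is simple arithmetic. The sketch below illustrates it under the reading described above; the function is a hypothetical construct, and agency guidance on how to count “federal financial support” will determine how the test applies in practice.

```python
# Hypothetical illustration of the Order's funding-share presumption as
# summarized above; not an official test.
def presumed_not_major_federal_action(federal_support_usd: float,
                                      total_capital_cost_usd: float) -> bool:
    """True if federal financial support is less than 50% of total capital
    costs, triggering the presumption that the project is not a 'major
    Federal action' under NEPA."""
    if total_capital_cost_usd <= 0:
        raise ValueError("total capital cost must be positive")
    return federal_support_usd / total_capital_cost_usd < 0.5
```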
Clean Water Act (CWA) Permitting: With respect to waters and wetlands permitting, the Order directs the U.S. Army Corps of Engineers to identify opportunities to apply or develop a data center-specific nationwide permit under Section 404 of the CWA and Section 10 of the Rivers and Harbors Appropriation Act of 1899. If finalized, such a permit could streamline the approval process for routine site preparation activities that currently require individual permits and interagency consultation.
Clean Air Act (CAA) Permitting: The Order directs the EPA to develop or modify regulations under the CAA to expedite permitting for large-scale data center infrastructure. Data centers frequently utilize emergency backup generators to maintain uninterrupted operations. These generators are typically subject to air permitting requirements under the CAA, which can contribute to regulatory delays. The Order instructs the EPA to pursue streamlined permitting processes to shorten review periods for these facilities where possible, while ensuring compliance with the Act.
Endangered Species Act (ESA) & Programmatic Consultation: The Order mandates programmatic consultation under Section 7 of the ESA, involving the Secretary of the Interior, the Secretary of Commerce, or both, for common construction activities associated with Qualifying Projects expected to occur over the next 10 years. Programmatic consultations may help avoid repetitive project-specific reviews or the need for project-specific biological opinions.
State and Local Environmental Review Still Applies
The Order and Action Plan do not preempt state environmental laws and permitting requirements. Many states maintain parallel statutes that mirror key aspects of NEPA, the CWA, the CAA, and the ESA. For instance, states such as California (through the California Environmental Quality Act (CEQA)) may require independent, project-specific environmental review for discretionary projects even when a federal NEPA categorical exclusion applies. Likewise, state air and water boards and natural resource agencies retain independent authority to evaluate emissions, discharges, and biological and wetlands impacts. Additionally, local jurisdictions often impose land use regulatory requirements as part of the project approval process. As a result, state and local permitting requirements may still extend project permitting timelines despite the Order’s federal streamlining directives.
Looking Ahead
The July 2025 Executive Order and AI Action Plan signal a potentially meaningful shift in federal environmental permitting policy for qualifying data center projects. Nevertheless, most data center projects will continue to require a mix of federal, state, and local permits. The practical impact of these new federal streamlining measures will depend on a range of factors, including project location, ownership structure, environmental conditions, and the extent to which states choose to align their own permitting processes with federal reforms.