‘One Big Beautiful Bill Act’: Senate Version Caps Section 899 ‘Revenge Tax’ at 15% and Carves Out ‘Portfolio Interest’
On June 16, 2025, the Senate Finance Committee released its own version (Senate Version) of the tax provisions of H.R. 1, entitled the “One Big Beautiful Bill Act,” which the U.S. House of Representatives passed on May 22, 2025.
The Senate Version introduces several important changes and clarifications to proposed new Section 899 of the Internal Revenue Code (Code), “Enforcement of Remedies Against Unfair Foreign Taxes” (commonly referred to as the Revenge Tax), which was included in the bill the House passed. If enacted as proposed in the Senate Version, Section 899 would increase a range of U.S. federal tax rates by up to 15 percentage points, including both U.S. withholding tax rates (such as those on interest, dividends, and other fixed or determinable annual or periodical income paid to foreign persons) and certain other U.S. income tax rates (such as the regular tax rates applicable to nonresident individuals and foreign corporations, the branch profits tax, and the tax on private foundations), on income derived by taxpayers who are resident in, or otherwise connected to, jurisdictions the U.S. government designates as “offending foreign countries.”
The Senate Version includes a specific exception from the Section 899 tax increase for “portfolio interest,” as well as a 15% cap on the additional tax under Section 899 (as opposed to the 20% cap the House proposed, in each case determined without regard to any rate applicable in lieu of the statutory rate). The Senate Version also delays implementation of proposed Section 899, generally until Jan. 1, 2027 (compared to the Jan. 1, 2026, effective date under the House version).
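To make the cap arithmetic concrete, here is a minimal sketch under stated assumptions: the 5-percentage-point-per-year escalation is an assumption about the proposal's general design (this alert states only the caps), and the function name and inputs are purely illustrative, not tax advice.

```python
# Illustrative sketch only. Models the Senate Version's 15-point cap against
# the House's 20-point cap; the 5-point annual escalation is an assumed
# simplification of the proposal's design, not a figure stated in this alert.

def section_899_rate(statutory_rate: float, years_designated: int,
                     cap_points: float = 15.0) -> float:
    """Adjusted U.S. tax rate for a taxpayer connected to a designated
    'offending foreign country'. The cap is measured against the statutory
    rate, without regard to any rate applicable in lieu of it."""
    increase = min(5.0 * years_designated, cap_points)
    return statutory_rate + increase

# Example: 30% statutory withholding on U.S.-source dividends, country
# designated for four years.
print(section_899_rate(30.0, 4, cap_points=15.0))  # Senate cap: 45.0
print(section_899_rate(30.0, 4, cap_points=20.0))  # House cap:  50.0
```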
This GT Alert summarizes the most significant substantive changes and explains their potential impact.
Congressional Budget Proposal Includes Adjustments to Dual-Eligible Enrollment Pathways and Medicare Savings Program Rules
In June 2025, the U.S. House of Representatives introduced a budget reconciliation bill titled the One Big Beautiful Bill Act (OBBBA). The legislation proposes a number of administrative changes to existing federal health programs, including modifications to automatic enrollment procedures affecting individuals who qualify for both Medicare and Medicaid. The bill does not repeal current benefit programs but includes provisions that would revise the process through which certain low-income individuals access premium and cost-sharing assistance programs.
Individuals who are eligible for both Medicare and Medicaid, commonly referred to as dual-eligible beneficiaries, accounted for approximately 14 percent of the Medicare population in 2021 and represented about 30 percent of Medicare fee-for-service spending, according to the Medicare Payment Advisory Commission. One program that serves this population is the Low-Income Subsidy (LIS), also known as “Extra Help,” which assists with Medicare Part D prescription drug premiums and other related out-of-pocket costs. The Centers for Medicare & Medicaid Services has estimated that the average annual value of this benefit is approximately $6,200 per beneficiary.
Under current law, individuals who are enrolled in Medicaid and subsequently become eligible for Medicare are automatically enrolled in LIS. The OBBBA proposes to eliminate automatic LIS enrollment for individuals who lose Medicaid eligibility. According to the Congressional Budget Office, which is a nonpartisan legislative agency, approximately 1.38 million individuals who are dually eligible for Medicare and Medicaid may lose their Medicaid coverage between 2025 and 2034. As a result, those individuals would no longer be automatically enrolled in LIS and would instead need to apply for the benefit directly through the Social Security Administration.
The OBBBA also includes provisions to delay implementation of certain federal regulations related to the Medicare Savings Programs, which help low-income individuals pay for Medicare Part B premiums and, in some cases, additional cost-sharing obligations. These regulations were finalized by the Centers for Medicare & Medicaid Services in 2023 and 2024 and were designed to streamline enrollment processes by reducing paperwork and simplifying eligibility verification. CMS previously estimated that these regulatory changes would result in approximately 860,000 new enrollees in the Medicare Savings Programs. The legislation proposes to delay the implementation of these provisions from 2027 to 2035.
The Congressional Budget Office projects that this delay would result in a reduction of federal Medicaid expenditures by approximately $162 billion over ten years. It also estimates that the change would lead to approximately 2.3 million fewer Medicaid enrollees during that period, of whom approximately 60 percent would be dual-eligible beneficiaries.
For individuals affected by these changes, the loss of Medicaid coverage would require separate applications to maintain access to both LIS and Medicare Savings Programs. LIS applications must be submitted to the Social Security Administration, while applications for Medicare Savings Programs are processed by individual state Medicaid agencies. These processes generally require income and, in some cases, asset verification. In addition to overseeing eligibility determinations, state Medicaid agencies would remain responsible for ensuring compliance with federal due process requirements, including adequate notice and appeal rights. Agencies would also need to confirm that enrollment procedures align with applicable civil rights and nondiscrimination laws.
The proposed legislation is currently under congressional consideration and may be amended prior to enactment. Stakeholders such as state Medicaid agencies, Medicare Advantage plans, healthcare providers, and beneficiary support organizations may wish to monitor further developments to assess potential operational and compliance implications. The changes outlined in the bill focus on administrative processes and eligibility pathways and do not modify the statutory structure of Medicare or Medicaid benefit categories.
UK Data Use and Access Act Now in Force
On June 19, 2025, the UK Data Use and Access Bill (DUA Bill) finally received Royal Assent and passed into law as the Data Use and Access Act 2025 (DUA Act). The DUA Act amends the UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018, and the Privacy and Electronic Communication (EC Directive) Regulations 2003 (PECR).
Key Changes Under the DUA Act
International Data Transfers
The DUA Act introduces a new data protection standard for international data transfers from the UK to other countries: the destination country’s data protection measures must be “not materially lower” than the standard in the UK, replacing the current requirement that protections be “essentially equivalent.” This may impact the UK’s adequacy status with the EU. The current EU-UK adequacy decision is valid until December 27, 2025. We will monitor how the European Commission responds to the DUA Act’s new standard.
A New Legal Basis
The DUA Act introduces “Recognized Legitimate Interests” as a new legal basis for data processing. This new legal basis will permit certain security-related activities such as fraud prevention, public safety, and national security. For these data processing activities, a controller will likely not be required to conduct a legitimate interest balancing test. The DUA Act also provides further guidance on what may constitute a legitimate interest, such as direct marketing, intra-group data transfers for internal administration, and processing necessary to ensure the security of network and information systems.
Data Subject Requests
The DUA Act modifies Data Subject Access Requests (DSARs) by requiring only “reasonable and proportionate” searches when controllers respond to DSARs, codifying existing ICO guidance. Organizations must now explain when they withhold information due to legal privilege.
Automated Decision Making
Article 22 of the UK GDPR restricts solely automated decision-making (ADM) that has a significant legal effect on individuals, requiring meaningful human oversight for all such processes. The DUA Act clarifies that “meaningful human intervention” requires that a competent person review automated decisions. Organizations conducting ADM must also offer appropriate safeguards, and those using AI-driven processes must uphold transparency and accountability in decision-making. Organizations are also required to inform affected individuals and to comply with non-discrimination laws such as the Equality Act 2010.
Cookies
The DUA Act provides new exemptions to the requirement for consent to set cookies for:
Collecting statistical information about how an organization’s service or website is being used with a view to making improvements (such as analytics purposes);
Optimization of content display or to reflect user preferences about content display (such as saving user preferences in relation to font or adapting the display to the size of the user’s device); or
Where the sole purpose is to enable the geographical position of a user to be ascertained in response to an emergency communication.
Even with these exemptions, organizations must clearly inform users about the purpose of the cookies and provide a simple and effective opt-out mechanism.
Digital ID Trust Framework
The DUA Act establishes a Digital ID Trust Framework setting rules for digital verification services in the UK. The framework is aimed at fostering innovation while increasing oversight and consultation, and its key provisions include simplifying regulations to make digital verification services more efficient and accessible.
Children’s Data
The DUA Act introduces several provisions aimed at strengthening the protection of children’s personal data. It identifies children’s “higher protection matters” as considerations for how best to safeguard and support children when they use services.
Complaints
The DUA Act outlines new rules requiring controllers to respond to complaints within 30 days before a complaint may be escalated to the Information Commissioner’s Office (ICO).
Role of the ICO
The ICO will now see increased oversight by the Secretary of State, potentially leading to shifts in enforcement priorities. The ICO will also transition to a corporate body, formally established as the Information Commission, led by a Chair and supported by a non-executive board.
Next Steps for Organizations
The DUA Act will enter into phased implementation from now through June 2026. Organizations should:
Review and update their data maps and inventories globally.
Assess and audit any ADM and AI related activities.
Review DSAR processes and procedures.
Identify and update how cookies are being used.
Update and/or create a complaints-handling procedure.
Pared Back Version of the Texas Responsible Artificial Intelligence Governance Act Signed Into Law
On 22 June 2025, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) (HB 149) was signed into law by Governor Greg Abbott.1 TRAIGA takes effect on 1 January 2026.
Originally introduced in late 2024 as HB 1709 and touted as the most comprehensive piece of artificial intelligence (AI) legislation in the country (the Original Bill),2 TRAIGA, in its final form, significantly reduces the law’s regulatory scheme—eliminating most of the private-sector obligations contained in the Original Bill and focusing on government agencies’ use of AI systems and on certain limited uses of AI, such as social scoring and manipulating human behavior to incite violence, self-harm, or criminal activity.
TRAIGA regulates those who (1) deploy or develop “artificial intelligence systems” in Texas; (2) produce a product or service used by Texas residents; or (3) promote, advertise, or conduct business in the state.
Under TRAIGA, “artificial intelligence system” means “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
Although this definition is broad, the obligations TRAIGA imposes on private employers are much more limited than in the Original Bill.
Key Provisions for Private Employers3
Eliminates Disclosure Obligations
Under TRAIGA, covered private employers are not required to disclose their use of AI, including to job applicants or employees, as they were under the Original Bill. Instead, only state agencies are required to disclose to “consumers”4 that they are interacting with AI, and health care service providers are required to disclose to patients when they are using AI systems in treatment.
Only Prohibits Intentional Unlawful Discrimination
Consistent with Executive Order 14281, TRAIGA only prohibits the use of AI systems that are developed or deployed “with the intent to unlawfully discriminate against a protected class” (emphasis added). Disparate impact alone cannot show an intent to discriminate.
Relatedly, unlike in the Original Bill, employers are no longer required to conduct impact assessments, which were aimed at identifying and mitigating algorithmic bias.
Focuses on Specific, Harmful AI Uses
Instead of broadly regulating the use of AI in Texas, TRAIGA focuses on specific, harmful uses of AI, prohibiting:
The development or deployment of AI systems that are intentionally aimed at inciting or encouraging self-harm or criminal activity;
The development or distribution of AI systems to produce child sexual abuse imagery or deep fake pornography, or that engage in text-based conversations that simulate or describe sexual content while impersonating or imitating a child; and
Government entities’ use of “social scoring,” which means evaluating or classifying people based on social behavior or personal characteristics and assigning them a social score or similar valuation that may result in detrimental or unfavorable treatment of a person or group or that may infringe on someone’s federal or state rights.
Eliminates Risk Mitigation Policy Requirement
AI developers and deployers, such as employers, are not required to implement a risk management AI policy, as they were in the Original Bill.
However, discovering violations by complying with a risk management framework for AI systems, such as the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” can help entities avoid liability if charges for violating TRAIGA are brought against them by the Texas Attorney General’s Office (Texas AG).
Keeps AI Regulatory Sandbox Program
Consistent with the Original Bill, TRAIGA provides that the Texas Department of Information Resources will create a “regulatory sandbox program.” Entities that apply for and are accepted into this program can test AI systems without a license, registration, or other regulatory authorization. The program is designed to promote the safe and innovative use of AI systems, encourage responsible deployment of such systems, provide clear guidelines for developing AI systems while certain laws and regulations related to the testing are waived or suspended, and allow entities to research, train, and test AI systems.
Limits Enforcement and Penalties
The Texas AG has exclusive authority to investigate and enforce TRAIGA violations, and there is no private right of action. However, consumers may submit complaints to the Texas AG through an online portal.
The Texas AG can bring an action in the name of the state to enforce TRAIGA and seek civil penalties, injunctive relief, attorney’s fees, and reasonable court costs. If a curable violation is found, between US$10,000 and US$12,000 in civil penalties can be imposed. Remedies for uncurable violations can range between US$80,000 and US$200,000. Continuing violations can result in between US$2,000 and US$40,000 in penalties each day the violation continues.
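As a rough illustration of how these ranges could stack, the sketch below computes a hypothetical exposure band; how the Texas AG would actually count violations or continuing days is an assumption here, not something this summary specifies.

```python
# Hypothetical exposure calculator for the TRAIGA civil penalty ranges
# described above. The per-violation and per-day aggregation is an assumed
# simplification of how penalties might be counted in practice.

def traiga_penalty_range(curable: bool, continuing_days: int = 0) -> tuple:
    """Return the (low, high) civil penalty range in US dollars."""
    low, high = (10_000, 12_000) if curable else (80_000, 200_000)
    # Continuing violations accrue US$2,000-US$40,000 for each day
    # the violation continues.
    return low + 2_000 * continuing_days, high + 40_000 * continuing_days

# An uncurable violation that continues for 10 days:
print(traiga_penalty_range(curable=False, continuing_days=10))
# -> (100000, 600000)
```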
Recommendations for Texas Employers’ Use of AI
Despite TRAIGA’s limited applicability to private entities, covered private employers should take the following steps to prepare for the law’s 1 January 2026 effective date:
Implement AI policies, including an AI risk management policy, and trainings to ensure adequate oversight and understanding of AI use, to mitigate the risk of intentional discrimination through the use of AI systems, and to be able to use the risk management policy as a defense to charges of violations brought by the Texas AG. Employers can use Section 551.008 of the Original Bill as a guide when developing their risk management policy.
Audit AI systems and the use of those systems to make decisions to ensure that employers do not intentionally discriminate against job candidates or employees. For example, employers should ask their AI vendors to confirm that the tools do not intentionally discriminate. In addition, employers should include in AI policies and AI-related trainings information about intentional discrimination and the proper use of the AI tools to avoid such discrimination.
Ensure employers have information about the AI decision-making processes so employers can support their nondiscriminatory and otherwise proper use of the AI tools if challenged.
Conclusion
Although TRAIGA does not contain many of its original regulatory burdens, particularly for covered private employers, the law remains focused on preventing intentional discrimination and ensuring government agencies’ transparent use of AI. TRAIGA is now the latest in the growing body of law governing how employers use AI to interact with prospective and current employees.
Footnotes
1 The final bill is available here: https://capitol.texas.gov/BillLookup/History.aspx?LegSess=89R&Bill=HB149.
2 K&L Gates’ 13 January 2025 alert on the original version, HB 1709, is available here: https://natlawreview.com/article/texas-responsible-ai-governance-act-and-its-potential-impact-employers.
3 TRAIGA also contains several provisions that govern state agencies’ use of AI. This alert does not discuss those provisions in depth.
4 TRAIGA, H.B. 149, § 551.001(2), 89th Tex. Leg., Reg. Sess. (2025). “Consumer” means an individual who is a resident of this state acting only in an individual or household context. The term does not include an individual acting in a commercial or employment context.
New York Assembly Passes Bill to Fill Void as NLRB Lacks Quorum, Raising Preemption Concerns
As we previously reported here, since May 22, 2025, the National Labor Relations Board (“NLRB” or “Board”) has lacked a quorum of at least three members after the U.S. Supreme Court stayed the reinstatement of former Board Member Gwynne A. Wilcox following her firing by President Trump. As a result, the NLRB cannot issue decisions in representation and unfair labor practice cases. The Board has also been faced with constitutional challenges to its regime. (See here and here.)
To attempt to fill this void, on June 17, 2025, the New York State Assembly overwhelmingly passed legislation—referred to by the co-sponsors as the “NLRB Trigger Bill”—that would amend the New York Labor Relations Act to expand the jurisdiction of the Public Employment Relations Board (“PERB”) to essentially step into the role of the NLRB for private-sector employers.
Currently, PERB oversees only public-sector employees and those private-sector employees that the NLRA (or the federal Railway Labor Act) does not cover, such as agricultural workers.
If signed into law by Governor Hochul, the NLRB Trigger Bill would permit PERB to act in the following ways if the NLRB does not “successfully assert” jurisdiction:
Representation Elections: To certify—upon application and verification—“any bargaining unit previously certified by another state or federal agency.” The plain text of the bill appears to indicate that PERB’s authority would apply only to bargaining units “previously certified” by the NLRB or another state—meaning it could not process representation petitions for new units that have not been certified.
Adjudicating Unfair Labor Practices / Improper Practices: To exercise jurisdiction over previously negotiated collective bargaining agreements and ensure those employment terms remain in full force and effect. Similar to the NLRB, PERB adjudicates unfair labor practices by investigating charges and prosecuting those charges before administrative law judges, and then before PERB itself.
Under this bill, PERB’s jurisdiction would not apply where the NLRB “successfully asserts jurisdiction” over employees or employers pursuant to an order issued by an Article III federal court. In other words, if and when the NLRB regains a quorum and asserts jurisdiction, PERB’s jurisdiction would cease.
Potential Preemption Challenges
If this bill becomes law, employers likely will challenge it as preempted under the NLRA based on the Supreme Court’s landmark decision in San Diego Bldg. Trades Council v. Garmon, 359 U.S. 236 (1959). In Garmon, the Court held that where there is even the potential for conflict between the NLRA and state or local law, the state or local law is preempted. The Court reasoned that the purpose of a broad preemption doctrine was to ensure a uniform national labor relations policy overseen by the NLRB—not a patchwork of state and local laws.
California and Massachusetts are considering analogous legislation, and we will closely monitor the progress of these bills, as well as any potential challenges that surface.
Best Practices When Taking Voluntary Compliance Steps Using Workforce Analytics
The Trump administration has decisively shifted its approach to enforcing employment discrimination laws, leaving employers grappling for clarity and stability in their efforts to prevent and manage legal risks stemming from harassment and discrimination. Workforce analytics, accompanied by privileged legal advice tethered to risk tolerance, can help employers identify and address potential workplace discrimination issues while minimizing legal risk amid the administration’s shifting enforcement priorities.
Quick Hits
The Trump administration has sought both to end federal enforcement of antidiscrimination laws based on disparate impact theories and to eliminate employer DEI programs.
Even with these shifting priorities, it remains critically important for employers to collect and study applicant and employee demographic data to maintain compliance with equal opportunity and antidiscrimination laws, as well as to be prepared for scrutiny under the Trump administration’s shifting policies.
Employers may want to consider proactive collection and analysis of workforce demographic data, barrier analyses, and enhanced training programs to ensure compliance with equal employment opportunity and antidiscrimination laws.
The administration—largely through the issuance of executive orders (EOs)—has prioritized merit-based opportunity, sought to end the use of disparate impact theories of discrimination, rescinded federal contractor obligations to provide affirmative action and discrimination protections for women and minorities, sought to eliminate “illegal” diversity, equity, and inclusion (DEI) initiatives, and focused on stopping anti-American and anti-Christian bias and combating antisemitism. The Equal Employment Opportunity Commission (EEOC), the U.S. Department of Labor (DOL), and the U.S. Department of Justice (DOJ) have all taken actions to advance the Trump administration’s policy objectives, but questions remain.
In particular, the Trump administration’s focus on discouraging the collection of applicant data related to race, ethnicity, and sex, coupled with its messaging on unlawful race and sex discrimination in DEI programs, has many employers hesitant to collect, maintain, and analyze demographic information from their applicants and employees.
This legal landscape is especially confusing for federal contractors given the wind down of EO 11246 obligations, but the administration’s new focus impacts all employers. As a result, employers face challenges complying with legal obligations and effectively managing risks associated with workplace discrimination and harassment.
However, a close review of the EEOC’s Fiscal Year 2026 Congressional Budget Justification, submitted to Congress in May 2025, reveals that EEOC investigations will continue to focus on employer data. According to the budget justification, the EEOC is committed to educating and informing its own staff to “combat systemic harassment, eliminate barriers in hiring and recruitment, recognize potential patterns of discrimination, and examine and analyze these often large or complex investigations effectively.” The agency said that in fiscal year (FY) 2026, it plans to “conduct mid and advanced level training for field staff and assist with the development of class investigations, data requests, and data analysis for pattern and practice disparate treatment cases.” (emphasis added).
The EEOC’s characterization of budget funds sought for its litigation program is also instructive. As of March 31, 2025, 46 percent of the EEOC’s litigation docket involved systemic discrimination or class lawsuits. Citing efforts to enforce EO 14173, the Commission contemplates involving “expert witnesses” and “the discovery of large-scale selection data to prove the existence and extent of a pattern or practice of discrimination.” The Commission justifies its resource request “to remedy discrimination on prioritized issues,” and argues aggressive enforcement will result in “a strong incentive for voluntary compliance” by employers.
Shifting Enforcement Targets
Employers may see an increase in EEOC charges from charging parties and Commissioner charges, as well as other enforcement activities that align with the current administration’s priorities, including enforcement regarding DEI programs, so-called anti-American bias, national origin discrimination, and antisemitism. As just one example, the EEOC recently settled a systemic investigation into national origin and anti-American bias for $1.4 million.
EEOC Acting Chair Andrea Lucas has repeatedly warned employers that EEOC focus will be on intentional disparate treatment cases where there has been a “pattern or practice” of discrimination. Like disparate impact, “pattern or practice” claims are rooted in systemic issues and typically involve the use of statistical evidence related to allegedly aggrieved individuals.
The 2024 Supreme Court decision in Muldrow v. City of St. Louis (rejecting a heightened bar for alleging that an employment decision or policy adversely affected terms and conditions of employment) and the 2025 decision in Ames v. Ohio Department of Youth Services (rejecting a higher evidentiary standard for employees from majority groups to prove employment discrimination) have made it easier for plaintiffs to plead and prove employment discrimination claims under Title VII. The decisions seem to have widened the doorway for more claims from individuals from majority groups (so-called reverse discrimination claims) and potentially made it easier to evade summary judgment and reach a jury trial if litigation ensues.
Moreover, federal contractors, institutions relying on federal contracts or grants, and other federal money recipients face additional concerns with False Claims Act (FCA) liability. President Trump’s EO 14173 seeks to require entities to certify, for purposes of the FCA, that they do not maintain unlawful discriminatory policies, namely illegal DEI policies. The DOJ has launched an initiative to use the FCA to investigate civil rights violations committed by federal fund recipients, expanding legal exposure for such employers.
Proactive Steps
Given the current legal landscape, employers may want to take proactive steps to ensure compliance with equal employment opportunity and antidiscrimination laws. These steps may include:
Collect and Analyze Demographic Data: Collecting and analyzing demographic data can be crucial for identifying and addressing disparities within the workplace and for documenting and demonstrating reasons for employment decisions or policies. While there may be concerns about collecting demographic data, such concerns may be alleviated by keeping data confidential and analyzing it under attorney-client privilege.
Barrier Analysis: Barrier analysis involves identifying and addressing obstacles that may prevent equal employment opportunities. This can include reviewing hiring practices, promotion policies, and other employment decisions that cover all aspects of the employment life cycle to ensure they do not disproportionately impact certain groups. By conducting a thorough barrier analysis, employers can proactively address potential issues before they become legal problems and remove barriers.
Review and Update Policies: Regular reviews of and updates to employers’ antidiscrimination and harassment policies can help ensure they align with current laws and the administration’s priorities, as well as employers’ values, goals, and objectives. Such reviews may include policies related to DEI, national origin discrimination, and anti-Semitism.
Provide Training: Implementing regular training programs for company leaders, managers, and employees on new antidiscrimination enforcement developments can help prevent discriminatory behavior and ensure that all employees understand their rights and responsibilities. Updating modules and examples to reflect changing priorities may help employers remain compliant. Likewise, covering a wide variety of scenarios and examples, including majority characteristics, can be important to review and include.
Next Steps
The shifting landscape of employment law presents both challenges and opportunities for employers. To be prepared, employers can stay informed on the latest actions and consider which proactive steps may be best to avoid potential liability and achieve their goals and objectives.
Big Sky State Makes Big Privacy Updates
Montana’s privacy law has received a refresh, and the updates will go into effect October 1, 2025 – exactly one year after the law took effect. The law was modified by SB 297, and the changes include who is covered, the approach to minors’ data, and more:
Broadening who is covered. Previously, Montana’s privacy law applied only to those controlling or processing the personal data of at least 50,000 Montanans. SB 297 cuts those numbers in half, bringing in any business handling the data of just 25,000 state residents or making substantial revenues off the personal data of at least 15,000 people (see the sketch after this list).
Minors. As amended, businesses offering online services, products, or features to those under 18 must use reasonable care to avoid heightened risks of harm to minors. Data protection impact assessments will also be needed for activities that might create a risk of harm to minors. As revised, companies will need to get consent from minors aged 13 to 18, or from their parents if the minor is under 13, to process minors’ information for targeted advertising, certain profiling activities, or sales of personal data. There are also restrictions on collecting geolocation information and on using “system design feature[s]” to increase use of online services.
Narrowed exemptions. Montana has removed the broad GLBA entity-level exemption that exists in most states (joining California, Minnesota, and Oregon). There will still be an exemption for GLBA-covered information, but the only types of financial institutions that receive the entity-level exception are banks and credit unions. Montana’s law also previously exempted non-profits, but now narrows this to only non-profits that detect and prevent fraudulent acts in connection with insurance. Delaware and Oregon’s privacy laws contain similar carveouts for non-profits.
Privacy policy updates. Under the law’s revisions, privacy policies will need to explain what rights consumers have (not just that the consumer has rights) and the types of data and third parties to whom data is shared or sold. As in California, Colorado, Minnesota, and New Jersey, businesses must also state the date the privacy notice was “last updated.” Privacy notice content will need to be accessible to individuals with disabilities and available in each language in which the business provides a product or service. Links to the notices must be conspicuously posted. Material changes must be communicated to consumers with respect to prospective data collection, and consumers must be allowed to opt out of such changes.
AG and right to cure. Finally, as amended, businesses will no longer have a statutory cure period. Previously, when the AG issued a notice of violation, businesses were given 60 days to cure.
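To make the revised applicability thresholds above concrete, here is a minimal sketch of the coverage test. The function name and the boolean revenue prong are illustrative assumptions; this summary does not quantify what counts as “substantial revenues.”

```python
# Hypothetical applicability check based on the amended thresholds summarized
# above. The revenue prong is simplified: "substantial revenue" from personal
# data is an assumption standing in for the statute's actual revenue test.

def montana_privacy_law_applies(mt_residents_processed: int,
                                substantial_data_revenue: bool) -> bool:
    # General threshold: halved from 50,000 to 25,000 residents by SB 297.
    if mt_residents_processed >= 25_000:
        return True
    # Lower 15,000-resident threshold for businesses making substantial
    # revenue off personal data.
    return substantial_data_revenue and mt_residents_processed >= 15_000

print(montana_privacy_law_applies(30_000, False))  # True  (over 25,000)
print(montana_privacy_law_applies(16_000, True))   # True  (revenue prong)
print(montana_privacy_law_applies(16_000, False))  # False (below 25,000)
```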
Putting it into Practice: Montana joins California, Colorado, and Virginia in amending its comprehensive privacy law. Provisions to keep in mind include privacy policy content, the approach to minors’ information, and who and what is covered under the law.
Texas Enacts “Responsible AI Governance Act”
The Texas Responsible AI Governance Act (TRAIGA), signed into law on June 22, 2025, represents a significant evolution in state-level AI regulation. Originally conceived as a comprehensive risk-based framework modeled after the Colorado AI Act and EU AI Act, TRAIGA underwent substantial modifications during the legislative process, ultimately emerging as a more targeted approach that prioritizes specific prohibited uses while fostering innovation through regulatory flexibility. Set to take effect on January 1, 2026, TRAIGA marks Texas as a key player in the developing landscape of AI governance in the United States. TRAIGA’s evolution from comprehensive risk assessment to targeted prohibition also reflects deeper challenges in AI governance: how do we regulate technologies that can modify their own behavior faster than traditional oversight mechanisms can adapt?
From Sweeping Framework to Targeted Regulation. The original draft, introduced in December 2024, proposed an expansive regulatory scheme that would have imposed extensive obligations on developers and deployers of “high-risk” AI systems, including mandatory impact assessments, detailed documentation requirements, and comprehensive consumer disclosures. The final version, following stakeholder feedback and legislative debate, represents a significant shift from process-heavy compliance requirements to outcome-focused restrictions. Rather than creating broad categories of regulated AI systems, the enacted version attempts to strike a balance of preventing specific harms while maintaining Texas’s business-friendly environment.
Core Prohibitions. TRAIGA enacts a prohibited uses framework that prohibits AI systems designed or deployed for:
1. Behavioral Manipulation: Systems that intentionally encourage physical harm or criminal activity.
2. Constitutional Infringement: AI developed with the sole intent of restricting federal Constitutional rights.
3. Unlawful Discrimination: Systems intentionally designed to discriminate against protected classes.
4. Harmful Content Creation: AI for producing child pornography, unlawful deepfakes, or impersonating minors in explicit conversations.
Notably, the Act requires intent as a key element for liability. This intent-based standard provides important protection for developers whose systems might be misused by third parties, while still holding bad actors accountable. The provision clarifying that “disparate impact” alone is insufficient to demonstrate discriminatory intent aligns with recent federal policy directions and provides clarity for businesses navigating compliance. While this intent-based framework addresses obvious harmful uses, it leaves open more complex questions about AI systems that influence human decision-making through design choices that fall below the threshold of conscious intent — systems that shape choice environments without explicitly intending to manipulate. Companies should consider how their AI systems structure user choice environments, even when not explicitly designed to influence behavior.
The Sandbox Program. TRAIGA implements a regulatory sandbox program administered by the Department of Information Resources. This 36-month testing environment allows AI developers to experiment with innovative applications while temporarily exempt from certain regulatory requirements. Participants must submit quarterly reports detailing system performance, risk mitigation measures, and stakeholder feedback.
Enforcement. TRAIGA vests enforcement authority exclusively with the Texas Attorney General, avoiding the complexity of multiple enforcement bodies or private rights of action. The enforcement framework includes several features designed to incentivize proactive compliance and self-correction while providing meaningful deterrents for intentional bad actors:
60-day cure period for violations before enforcement actions can proceed
Affirmative defenses for companies that discover violations through internal processes, testing, or compliance with recognized frameworks like NIST’s AI Risk Management Framework (RMF)
Scaled penalties ranging from $10,000 – $12,000 for curable violations to $80,000 – $200,000 for uncurable ones
The Texas AI Council. TRAIGA establishes the Texas Artificial Intelligence Advisory Council, which is explicitly prohibited from promulgating binding rules, and instead focuses on:
Conducting AI training for government entities
Issuing advisory reports on AI ethics, privacy, and compliance
Making recommendations for future legislation
Overseeing the sandbox program
Implications for Businesses. For companies operating in Texas, TRAIGA offers both clarity and flexibility. Its focus on intentional harmful uses, rather than broad categories of “high-risk” systems, reduces compliance uncertainty. Key considerations for businesses include:
1. Intent Documentation: Companies should maintain clear records of their AI systems’ intended purposes and uses
2. Testing Protocols: Implementing robust testing procedures can provide affirmative defenses
3. Framework Adoption: Compliance with recognized frameworks like NIST’s AI RMF offers legal protection
4. Sandbox Opportunities: Innovative applications can benefit from the regulatory flexibility offered by the sandbox program
National and Policy Implications. TRAIGA’s passage positions Texas as a significant voice in the national AI governance conversation. Its pragmatic approach contrasts with more prescriptive frameworks proposed elsewhere, potentially offering a model for AI regulation that prioritizes innovation while addressing concrete harms. However, TRAIGA also contributes to the growing patchwork of state AI laws, raising questions about regulatory fragmentation. With Colorado, California, Utah, and now Texas each taking different approaches to more comprehensive AI regulation, businesses face an increasingly complex compliance landscape. This fragmentation may accelerate calls for federal preemption or a unified national framework.
Conclusion
The Texas Responsible AI Governance Act charts a distinctive course in AI governance by focusing on specific prohibited uses rather than comprehensive risk assessments. However, TRAIGA’s effectiveness will ultimately depend on how well traditional regulatory frameworks can adapt to technologies that operate at machine speed while making decisions that fundamentally affect human agency and choice. While the act addresses intentional harmful uses, the more challenging questions may involve AI systems that influence decision-making environments in ways that fall outside traditional regulatory categories. As other states and the federal government continue to grapple with AI regulation, the Texas model offers valuable lessons — and reveals limitations — of applying legal frameworks to technologies that challenge basic assumptions about agency, intent, and choice.
The POWER Act: Strengthening Worker Protections
On May 27, 2025, Philadelphia enacted the Protect Our Workers, Enforce Rights Act (“POWER Act”), amending Title 9 of The Philadelphia Code as it pertains to the following sections: “Promoting Healthy Families and Workplaces,” “Wage Theft Complaints,” “Protections for Domestic Workers,” “Protecting Victims of Retaliation,” and “Enforcement of Worker Protection Ordinances.”
Amendments to Chapter 9-4100 Promoting Healthy Families and Workplaces
The POWER Act changes the calculation of paid sick time (PSL) for tipped employees (i.e., employees who customarily and regularly receive more than fifty dollars ($50) a month in tips from the same employment). Paid sick time is time, provided by an employer to an employee, that is compensated at the same hourly rate and with the same benefits, including health care benefits, as the employee normally earns at the time the employee uses it. Under the Act’s new calculation method for tipped employees, the hourly rate of pay shall be the numerical average of the hourly wages for “Bartenders,” “Waiters & Waitresses,” and “Dining Room & Cafeteria Attendants & Bartender Helpers,” as published by the Pennsylvania Department of Labor and Industry.
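A quick sketch of that averaging follows; the dollar figures are placeholders, not actual Pennsylvania Department of Labor and Industry data.

```python
# Sketch of the new tipped-employee sick-time rate: the numerical average of
# three occupational hourly wages published by the Pennsylvania Department
# of Labor and Industry. The wage figures below are placeholders only.

published_hourly_wages = {
    "Bartenders": 15.00,                                              # placeholder
    "Waiters & Waitresses": 14.00,                                    # placeholder
    "Dining Room & Cafeteria Attendants & Bartender Helpers": 13.00,  # placeholder
}

psl_hourly_rate = sum(published_hourly_wages.values()) / len(published_hourly_wages)
print(round(psl_hourly_rate, 2))  # -> 14.0, the hourly rate for paid sick time
```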
Amendments to Chapter 9-4300 Wage Theft Complaints
The definition of who may file a wage theft complaint has been broadened. Now, any “employee” (including independent contractors misclassified as such) who performs work in Philadelphia is explicitly authorized to file a complaint for unpaid wages, regardless of immigration status. Additionally, the Office of Worker Protections (OWP), as opposed to just the offices the Mayor designates, may now initiate investigations based on information, even if a formal complaint has not yet been filed—allowing the City to proactively enforce the law in high-risk industries.
Amendments to Chapter 9-4500 Protections for Domestic Workers
The original chapter—enacted in 2020—established protections for domestic workers, including mandatory contracts, rest breaks, and anti-retaliation provisions. The POWER Act strengthens those rights by incorporating them into the City’s broader labor enforcement framework. As with the amendments to the wage theft portions of the law, the Act empowers the OWP to actively investigate complaints and impose penalties against employers who violate domestic workers’ rights.
The Act also aligns domestic workers’ sick leave rights with the city’s paid sick leave ordinance, ensuring they accrue and can use paid time off regardless of how many employers they work for, with a centralized portable benefits system to be developed. The Act further clarifies that live-in domestic workers are fully entitled to these PSL benefits, including protections against retaliation, wage theft, and coercion. Finally, employers must provide written contracts outlining leave time.
Enhanced Anti-Retaliation Provisions
The POWER Act reinforces protections against retaliation for workers who assert their rights under Title 9. Additionally, the Act prohibits employers from retaliating against employees for exercising their rights to use sick time and specifies that employers may not consider paid sick leave covered absences as part of any absence control or disciplinary action. It also places a rebuttable presumption of unlawful retaliation on any employer in certain circumstances.
Notice & Retention of Employer Records Obligations:
Employers are required to provide employees a written notice of rights, including leave entitlements. Employers must also create and maintain, for a period of three years, contemporaneous records of the hours worked by an employee (including dates), the hours of sick time taken, and the payments made to the employee for that sick time.
Penalties:
If the OWP determines that an employer has violated the Act, the agency can seek civil penalties. The Act also provides for the recovery of liquidated damages and other consequences for repeated violations.
Employers are reminded to review their policies for compliance with these latest legislative updates.
Minnesota Expands and Strengthens Meal and Rest Break Rules
New law effective in 2026 imposes minimum break times, expands eligibility and introduces penalties for noncompliance
New standards go into effect on January 1, 2026, for Minnesota employers.
Requires minimum 15-minute rest breaks and 30-minute meal breaks
Adds penalties for missed or insufficient breaks
Minnesota has enacted new requirements for employee meals and rest breaks, expanding existing protections and imposing penalties for noncompliance. The amendments to Minnesota Statutes §177.253 and §177.254 will take effect on January 1, 2026.
Under the current law, employers are required to provide “adequate time” for a restroom break at least once every four-hour shift. The new law clarifies that “adequate time” cannot be less than 15 minutes. Beginning in 2026, employers must offer either a 15-minute rest break or enough time to use the restroom — whichever is longer.
Similarly, the law will change the unpaid meal break requirement from “sufficient time to eat a meal” to a minimum length of 30 minutes. In addition, employers will be required to provide a meal break for every employee working a shift of six hours or more, as opposed to the current law requiring meal breaks for shifts of more than eight hours.
For both paid rest breaks and unpaid meal breaks, the new law adds a remedy for violations. Employees who do not receive the required breaks will be eligible to receive damages equal to two times the amount earned during the missed break at the employee’s regular rate of pay.
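As a simple illustration of that remedy (the wage and break lengths below are hypothetical examples, not figures from the statute):

```python
# Illustrative damages math under the amended statutes: two times the pay the
# employee would have earned during the missed break, at the regular rate.
# The $20.00/hour wage is a hypothetical example.

def missed_break_damages(hourly_rate: float, break_minutes: int) -> float:
    return 2 * hourly_rate * (break_minutes / 60)

print(missed_break_damages(20.00, 15))  # missed 15-minute rest break -> 10.0
print(missed_break_damages(20.00, 30))  # missed 30-minute meal break -> 20.0
```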
The practical impact of these provisions will vary by workplace, but all Minnesota employers should assess whether their current practices align with the new standards and make adjustments as needed before the law takes effect.
Trump Administration SEC Withdraws Proposed Anti-Greenwashing Rule
On June 17, the SEC officially withdrew a proposed rulemaking undertaken by the Biden Administration that sought to combat greenwashing in ESG (or similarly labeled) funds. Specifically, the proposed rule would have “facilitate[d] enhanced disclosure of ESG issues to clients and shareholders” (according to the SEC) by mandating “more specific disclosures . . . based on the ESG strategies [that funds and advisers] pursue,” including that “[f]unds focus[ing] on environmental factors generally would be required to disclose the greenhouse gas emissions associated with their portfolio investments.” In essence, the SEC would have required funds to provide disclosures demonstrating that they actually conformed to their proclaimed ESG strategies. This rule had been proposed in tandem with the SEC “Names Rule” (which was finalized and entered into effect); both sought to address perceived problems with greenwashing in the investment space.
The policy decision by the Trump Administration’s SEC to withdraw this proposed rule is unsurprising and is in accordance with a number of other recent initiatives by the Trump Administration that effectively rolled back various Biden Administration initiatives focused on climate, including the mandatory climate disclosure rule promulgated by the SEC (that the Trump Administration is no longer defending in the courts). Still, even if this particular move was expected, it nonetheless demonstrates further the U-turn that the Trump Administration has effectively tried to implement with respect to climate policy, especially concerning the various efforts to promote climate disclosures by companies and business organizations.
Enhanced Disclosures by Certain Investment Advisers and Investment Companies About Environmental, Social, and Governance Investment Practices
The Securities and Exchange Commission (“Commission”) is formally withdrawing certain notices of proposed rulemaking issued between March 2022 and November 2023. The Commission does not intend to issue final rules with respect to these proposals. If the Commission decides to pursue future regulatory action in any of these areas, it will issue a new proposed rule.
www.sec.gov/…
TRAIGA: Key Provisions of Texas’ New Artificial Intelligence Governance Act
On May 31, 2025, the Texas Legislature passed House Bill 149, the Texas Responsible Artificial Intelligence Governance Act (TRAIGA). TRAIGA sets forth disclosure requirements for government-entity AI developers and deployers, outlines prohibited uses of AI, and establishes civil penalties for violations. On June 2, 2025, the bill was sent to the governor of Texas for review, and it was signed into law on June 22. TRAIGA takes effect on Jan. 1, 2026, making Texas the latest in a string of states, including California, Colorado, and Utah, to pass AI legislation.
To Whom Does TRAIGA Apply? Key Definitions
TRAIGA applies to two “groups”: (1) covered persons and entities,1 which include developers and deployers,2 and (2) government entities.3
Covered Persons and Entities
Covered persons and entities, each a “person,” are defined as any person who (1) promotes, advertises, or conducts business in Texas; (2) produces a product or service Texas residents use; or (3) develops or deploys an artificial intelligence system in Texas.4
Developers and Deployers
A “developer” is a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in Texas, and a “deployer” is a person who deploys an artificial intelligence system for use in Texas.5
Government Entities
A “governmental entity” is any department, commission, board, office, authority, or other administrative unit of Texas or of any political subdivision of Texas that exercises governmental functions under the authority of the laws of Texas.6 The definition specifically excludes hospital districts and institutions of higher education.7
Consumer
“Consumer” means an individual who is a resident of Texas “acting only in an individual or household context.”8 Accordingly, employment or commercial uses are not subject to TRAIGA.
Artificial Intelligence System
TRAIGA broadly defines an “artificial intelligence system” as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”9
How Would TRAIGA Be Enforced?
The Texas attorney general (AG) has exclusive authority to enforce the law (with rare exceptions where licensing state agencies have limited enforcement power, discussed further below).10 TRAIGA does not, however, provide for a private right of action.11
Notice and Opportunity to Cure
Before the AG can bring an action, the AG must send a written notice of violation to the alleged violator.12 The alleged violator then has 60 days to:
cure the alleged violation;
provide supporting documentation showing the cure; and
update or revise internal policies to prevent further violation.13
Civil Penalties
TRAIGA also sets forth civil penalties, which include the following categories:
Curable violations: $10,000 – $12,000 per violation;14
Uncurable violations: $80,000 – $200,000 per violation;15
Ongoing violations: $2,000 – $40,000 per day.16
Additionally, the AG may seek injunctive relief, attorneys’ fees, and investigative costs.17
Safe Harbors
TRAIGA provides for safe harbors and affirmative defenses. A person is not liable under TRAIGA if:
a third party misuses the AI in a manner TRAIGA prohibits;
such person discovers a violation through testing or good faith audits; or
such person substantially complies with NIST’s AI Risk Management Framework or similar, recognized standards.18
State Agency Enforcement Actions
If the AG finds that a person licensed, registered, or certified by a state agency has violated TRAIGA and recommends additional enforcement by the applicable agency, such agency may impose other sanctions such as:
suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and/or
fines up to $100,000.19
How Would TRAIGA Work?
The sections on disclosures to consumers and the prohibited uses of AI may have implications for businesses.
Disclosure to Consumers
Government agencies are required to disclose to each consumer, before or at the time of interaction, that the consumer is interacting with AI (even if such disclosure would be obvious to a reasonable consumer).20 The disclosure must be clear and conspicuous, written in plain language, and not use a dark pattern.21
Prohibited Uses
TRAIGA specifically prohibits a government entity from using AI to:
assign a social score;22
uniquely identify a specific individual using biometric data, without the individual’s consent;23
Under the law, “biometric data” is defined as “data generated by automatic measurements of an individual’s biological characteristics.”24 The term includes a fingerprint, voiceprint, eye retina, or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.25 The term does not include a physical or digital photograph or data generated from a physical or digital photograph; a video or audio recording or data generated from a video or audio recording; or information collected, used, or stored for health care treatment, payment, or operations under HIPAA.26
TRAIGA specifically prohibits a person from using AI to:
incite or encourage self-harm, crime, or violence;27
infringe, restrict, or otherwise impair an individual’s rights guaranteed under the U.S. Constitution;28
unlawfully discriminate against a protected class in violation of state or federal law.29 Note that the law explicitly does not recognize “disparate impact” alone as sufficient to demonstrate an intent to discriminate.30
“Protected Class” is defined as “a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.”31
produce or distribute certain sexually explicit content or child pornography, including deep fakes.32
TRAIGA also establishes a sandbox program to allow companies to test AI in a controlled setting without full regulatory compliance33 and creates the Texas Artificial Intelligence Council to provide guidance and review ethical and legal issues related to AI.34
TRAIGA Compliance Considerations
Applicability assessment. Companies should inventory all AI systems developed or deployed in Texas to determine whether such AI meets TRAIGA’s definition of “any machine-based system that infers from inputs to generate outputs, which can influence physical or virtual environments.”35 Assessments should include third-party AI tools used, such as chatbots.
Use case analysis. Companies should consider whether their AI systems: (1) interact with consumers, (2) potentially infringe on rights under the Constitution, (3) affect protected classes, or (4) may be perceived to manipulate behavior (i.e., encouraging self-harm, crime, or violence).
Notice requirement review. Governmental agencies should implement clear and conspicuous disclosures (which can be hyperlinked36) wherever AI interacts with Texas consumers and ensure such disclosures are written in plain language and contain no dark patterns.
Risk framework alignment. Companies and government entities may wish to align current AI programs with nationally/internationally recognized AI risk frameworks such as NIST’s AI Risk Management Framework. TRAIGA specifically offers a safe harbor for “substantial compliance” with these frameworks.37
Sandbox program participation. Companies developing a novel AI product should consider joining the sandbox program. Participants may obtain legal protection and limited access to the Texas market to test innovative AI systems in a compliance-friendly environment.38
Federal AI moratorium. On May 22, 2025, the House of Representatives passed a proposal to impose a 10-year moratorium (ban) on state-level laws regulating AI. The proposal was included in the “One Big Beautiful Bill.” After Senate deliberation, the moratorium remains in the bill as of June 21, 2025. If the AI moratorium passes, it would preempt TRAIGA, along with other active state AI-related bills and enacted state AI laws.
1 HB00149F, Sec. 551.002. Covered persons and entities are not defined in the law like developers, deployers, and government entities. Covered persons and entities are listed in the Applicability Section.
2 HB00149F, Sec. 552.001.
3 HB00149F, Sec. 552.001(3).
4 HB00149F, Sec. 551.002(1-3).
5 HB00149F, Sec. 552.001(1-2).
6 HB00149F, Sec. 552.001(3).
7 Id.
8 HB00149F, Sec. 552.001(2).
9 HB00149F, Sec. 551.001(1).
10 HB00149F, Sec. 552.101(a).
11 HB00149F, Sec. 552.101(b).
12 HB00149F, Sec. 552.104(a).
13 HB00149F, Sec. 552.101(b).
14 HB00149F, Sec. 552.104(a)(1).
15 HB00149F, Sec. 552.104(a)(2).
16 HB00149F, Sec. 552.104(a)(3).
17 HB00149F, Sec. 552.104(b)(2-3).
18 HB00149F, Sec. 552.105(e)(1-2).
19 HB00149F, Sec. 552.106(a-b).
20 HB00149F, Sec. 552.051(b-c).
21 HB00149F, Sec. 552.051(d)(1-3).
22 HB00149F, Sec. 552.053.
23 HB00149F, Sec. 552.054.
24 HB00149F, Sec. 552.054(a).
25 Id.
26 Id.
27 HB00149F, Sec. 552.052.
28 HB00149F, Sec. 552.055.
29 HB00149F, Sec. 552.056.
30 Id. at 3(c).
31 HB00149F, Sec. 552.056(a)(3).
32 HB00149F, Sec. 552.057.
33 HB00149F, Sec. 553.
34 HB00149F, Sec. 554.
35 HB00149F, Sec. 551.001(1).
36 HB00149F, Sec. 552.051(e).
37 HB00149F, Sec. 552.105(e)(2)(D).
38 HB00149F, Sec. 553.051(a).