“No Robo Bosses Act” Proposed in California to Limit Use of AI Systems in Employment Decisions

A new bill in California, SB 7, proposed by State Senator Jerry McNerney, seeks to limit and regulate the use of artificial intelligence (AI) decision making in hiring, promotion, discipline, or termination decisions. Also known as the “No Robo Bosses Act,” SB 7 applies a broad definition of “automated decision system,” or “ADS,” as: any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output, including a score, classification, or recommendation, that is used to assist or replace human discretionary decision making and materially impacts natural persons. An automated decision system does not include a spam email filter, firewall, antivirus software, identity and access management tools, calculator, database, dataset, or other compilation of data.
Specifically, SB 7 would:

Require employers to provide a plain-language, standalone notice to employees, contractors, and applicants that the employer is using ADS in employment-related decisions at least 30 days before the introduction of the ADS (or by February 1, 2026, if the ADS is already in use).
Require employers to maintain a list of all ADS in use and include that list in the notice to employees, contractors, and applicants.
Prohibit employers from relying primarily on ADS for hiring, promotion, discipline, or termination decisions.
Prohibit employers from using ADS that prevents compliance with or violates the law or regulations, obtains or infers a protected status, conducts predictive behavior analysis, predicts or takes action against a worker for exercising legal rights, or uses individualized worker data to inform compensation.
Allow workers to access the data collected and correct errors.
Allow workers to appeal an employment-related decision for which ADS was used, and require an employer to have a human reviewer.
Create enforcement provisions against discharging, discriminating, or retaliating against workers for exercising their rights under SB 7.

Similar to SB 7, the California Civil Rights Council has proposed regulations that would protect employees from discrimination, harassment, and retaliation related to an employer’s use of ADS. The Civil Rights Council identifies several examples, such as predictive assessments that measure skills or personality traits and tools that screen resumes or direct advertising, that may discriminate against employees, contractors, or applicants based on a protected class. The proposed rule and SB 7 would work in tandem if both are adopted by their respective government bodies.
The bill is still in the beginning stages. It is set for its first committee hearing — before the Senate Labor, Public Employment, and Retirement Committee — on April 9, 2025. How the bill may transform before (and if) it becomes law is still unknown, but because of the potential reach of this bill and the possibility other states may emulate it, SB 7 is one to watch.

Federal Agencies Cracking Down on DEI/DEIA

In the first two months of President Trump’s second term, his administration has engaged in a full-throated repudiation of “illegal” diversity, equity, and inclusion (“DEI”) and diversity, equity, inclusion, and accessibility (“DEIA”) programs.1
The Trump Administration issued a January 21, 2025 executive order titled “Ending Illegal Discrimination and Restoring Merit-Based Opportunity” (“EO 14173” – click here to read our recent client alert on this executive order). Since then, the Attorney General issued a memo titled “Ending Illegal DEI and DEIA Discrimination and Preferences,” the Office of Personnel Management issued a memo titled “Further Guidance Regarding Ending DEIA Offices, Programs and Initiatives” (the “OPM memo”), and the Equal Employment Opportunity Commission and Department of Justice jointly issued a set of FAQs titled “What You Should Know About DEI-Related Discrimination at Work.”
Executive orders are directives to federal agencies and officials that must be followed but are not binding on those outside the government without legislative action. Inter-governmental memos and FAQs are also not binding on those outside the federal government. Nevertheless, the EOs and related documents give us insight into the direction the administration intends to take.
But what is an “illegal” DEI program? To date, this Administration has provided no guidance regarding what makes a DEI program illegal or even what constitutes a “DEI program.” Despite the lack of clarity, however, the law relating to DEI programs has not changed—if a DEI program was lawful under federal antidiscrimination laws on January 19, 2025, it remains lawful today.
Nevertheless, the lack of guidance, paired with the clear language this administration has used to vilify DEI programs in general, has caused fear, confusion, and uncertainty within organizations, leading some to eliminate DEI programs and/or scrub their websites of all references to DEI programs. Doing so, however, could subject an employer to employee backlash, including claims of discrimination, as well as public calls for boycott. Before deciding whether to eliminate, maintain, or enhance your diversity and inclusion programs, we recommend the following:

Assess your risk tolerance. 
Understand the laws in your state. Although this administration has signaled it expects compliance with its directives regardless of state law, the states may not agree.
Document the lawful purpose behind diversity and inclusion programs.
Document employment decisions carefully, setting forth the legitimate business reasons behind the decisions and showing that decisions are based on merit without regard to any protected characteristics.
Review your diversity and inclusion policies, programs, and training materials, including all public-facing DEI-related communications and disclosures. Consider whether to conduct this review under the umbrella of attorney-client privilege.
Review your investigation protocols to ensure they encompass complaints and concerns about DEI programs and “DEI-related discrimination.”
Develop internal and external communications strategies to mitigate legal risks while staying true to your culture and values.
Closely monitor legal developments.

Some DEI programs may contain elements that could be challenged under the law that existed on January 19, 2025, before President Trump’s second term began. Consider immediately eliminating those elements, which may include the following:

Employee resource groups/affinity groups that are only open or provide benefits to employees based on specific protected characteristics.
Scholarship, fellowship, internship, mentoring, and other professional development opportunities that are limited to or targeted at individuals with specific protected characteristics.
Goals, targets, or quotas based on protected characteristics.
Compensation targets based on the achievement of DEI objectives or goals.

Our team will continue to track and analyze significant directives and policy changes as they are announced. For further information, contact the authors of this alert or your WBD attorney.

1 For purposes of this Alert, the terms DEI and DEIA are used interchangeably.

What to Know About International Travel by Employees with Work Visas

We have previously written about the steps employers should take to ensure I-9 compliance and prepare for immigration site visits. In light of new immigration guidelines impacting visa holders, employers also should prepare for travel outside the U.S. (whether for personal or business reasons) by their employees with work visas.
Visa holders traveling outside of the U.S. for the first time on a new visa have to get their visa stamped at a U.S. Embassy or Consulate in order to return to the U.S. — recent immigration policy changes and changes to the visa processing procedure may cause delays in employees returning to the U.S. (and to work) from international travel.
First, in an executive order on January 20, 2025, President Trump ordered that all immigrants should be “vetted and screened to the maximum degree possible.” H-1B visa and other work visa holders traveling abroad, to get their visas stamped, will likely be subject to increased scrutiny under this directive. Employers should expect that more visas will be placed in “administrative processing,” in which the consular officer requires additional information from sources other than the visa holder to determine eligibility. Administrative processing can result in long delays, during which time visa holders cannot return to the U.S.
More recently, on February 18, 2025, the Department of State (DOS) announced changes to the Visa Interview Waiver, or “dropbox,” eligibility requirements. The dropbox process allows visa holders to get their visas stamped without attending an in-person visa interview, greatly reducing processing times for those eligible. Previously, the dropbox process was open to visa holders whose last visa expired within the prior 48 months. DOS has now reverted to pre-COVID guidelines, reducing the 48-month limitation to just 12 months and further limiting eligibility to visa applicants seeking approval in the same category as their prior visa. In other words, an H-1B holder can only use the dropbox process if they have a prior H-1B visa that expired within the last 12 months. An H-1B holder who previously held an F-1 (student) visa or whose prior visa expired more than 12 months ago is not eligible for the dropbox process. As a result, employers can expect that more employees will be required to attend visa interviews in person.
The visa stamping process is already fraught with long wait times, especially in countries where U.S. consulates process large numbers of visas, like India. With these changes, employees with work visas — and their employers — should be prepared for extended wait times for visa appointments, as more visa holders are required to attend in-person interviews. Employers also should be prepared for the risk that employees will “get stuck” abroad for weeks, or even months, if their visa is placed in administrative processing.
Here are some steps employers can take to prepare for the risks of international travel by employees with work visas:

Remind employees to notify the appropriate employer representative well in advance of international travel. Employers should ensure that employees who are not eligible for the dropbox process timely schedule a visa interview that coincides with their travel.
Confirm that the employee’s current job details match their latest visa filing to avoid any delays in processing. Material changes in the employee’s job, location, or pay may require an updated filing.
Consider how to respond if an employee “gets stuck” while awaiting administrative processing or delays in visa interviews. Employers may decide to require these employees to use paid time off or unpaid leave to account for the additional delays. However, employees who “get stuck” may ask to work remotely from their home country while awaiting a decision. Employers should consult with counsel before agreeing to allow employees to work remotely from a foreign country, as such extraterritorial work typically raises tax and other employment law compliance implications.
Stay on top of developments in immigration law, including travel bans, that may impact international travel by employees.

Navigating Employee Grief: Bereavement Law in California

In 2022, California passed Assembly Bill (AB) 1949, which amended the California Family Rights Act (CFRA) to provide for bereavement leave. The law took effect in January 2023, but here are some reminders for employers about bereavement leave requirements.
Under the law, employers with five or more employees must allow eligible employees to take up to five unpaid days of bereavement leave for certain family members. Consistent with the CFRA’s broad definition, a “family member” means a spouse, child, parent, sibling, grandparent, grandchild, domestic partner, or parent-in-law. Employers may voluntarily allow bereavement leave for a person not defined as a family member under the law. Although bereavement leave is unpaid, employers must allow employees to use any accrued paid sick days or personal days to receive pay during their bereavement leave.
Employees are required to follow the employer’s bereavement leave policy pertaining to notice. Employees are not required to take the five days consecutively but must complete all leave during the three months after the death of the family member. And, although the CFRA provides for bereavement leave, leave taken for bereavement does not affect the amount of time available for CFRA leave.
Employers may require documentation of the death of a family member. This may include a death certificate, obituary, or written verification of death, burial, or memorial service from a mortuary, funeral home, burial society, crematorium, religious institution, or government agency.

Privacy and Data Security in Community Associations: Navigating Risks and Compliance

Privacy and data security laws govern how organizations collect, handle, and protect personally identifiable information (PII) to ensure it is properly processed and protected.
For community associations, this is especially important as these organizations often manage large amounts of PII of homeowners and residents (e.g., name, address, phone number, etc.), including certain categories of sensitive PII, such as financial details. With identity theft and various cyber scams on the rise, cybercriminals frequently target this type of data. Once this data is accessed, a threat actor can do anything it wants with the data. For instance: the threat actor can sell the PII to the highest bidder; encrypt the data and hold it for ransom, meaning that a community association can no longer access the information and potentially must pay large sums in order to get it back; or make a copy of the PII and then extort the community association to return or delete the data instead of releasing it publicly, among other malicious acts. 
With these risks in mind, data security breaches have become a widespread concern, prompting legislative action. All fifty states now have laws requiring organizations to notify individuals if unauthorized access to PII occurs. These laws apply to community associations in North Carolina under North Carolina General Statute § 75-65. To reduce the risk of a data security breach, North Carolina community associations should prioritize taking steps to protect the PII of their residents and homeowners.
While North Carolina does not offer specific statutory guidance for community associations regarding personal data handling, federal frameworks can help. The National Institute of Standards and Technology (NIST) has developed comprehensive privacy and cybersecurity guidelines. To view their resource and overview guide, visit this link. The NIST’s frameworks assist organizations in identifying the data they possess, protecting it, managing and governing it with clear internal rules, and responding to and recovering from data security incidents. To summarize some of the key steps necessary for a community association to protect its data, please see the list below.
Key Steps for Strengthening Privacy and Data Security

Keep Technology Updated. Community associations should prioritize keeping their systems, networks, and software up to date. Oftentimes, software updates include patches for security vulnerabilities that threat actors can exploit. As technology evolves, new threats emerge, and these software updates are designed to address these risks by closing security gaps. In addition, community associations should change passwords periodically and be sure that passwords are not universal among all systems and websites. If presented with the option, it is recommended to use multi-factor authentication on various log-in platforms. By using multi-factor authentication, there is an extra layer of security beyond a password that can be guessed, stolen, or compromised.
Manage Access. Ensure that only necessary employees have access to residents’ and homeowners’ PII. For those who have access, be sure to adequately train those employees to confirm they are apprised of the community association’s cybersecurity policies and procedures. Additionally, be sure these employees can recognize common attack methods of threat actors and are able to avoid and report any suspicious activity. One of the basic ways to manage access is to ensure the community association is only collecting information that it absolutely needs to carry out its operations. If less data is in the possession of the community association, less data can be accessed by a threat actor.
Regularly Review Vendor Contracts. It’s crucial for community associations to audit contracts with vendors, at least annually, to ensure they align with the association’s risk tolerance. Many breaches stem from third-party service providers who have access to PII and sensitive PII. Without clear contractual safeguards, a breach could result in significant remediation costs, with limited legal recourse against the responsible vendor. Always be sure that your contracts address data protection and breach response obligations.
Consider Cyber Insurance. Cyber insurance has become an essential risk management tool for community associations. However, it’s important to understand that cyber insurance is not a catch-all solution. Insurers are increasingly raising premiums and limiting coverage for organizations that fail to implement strong data protection practices. Cyber insurance should be seen as a safety net, not a substitute for a comprehensive privacy and security strategy. Community associations should also periodically review their cyber insurance policies to confirm they are providing coverage for any new or emerging threats that may arise.
Engage the Community. Transparency, especially regarding the categories of data collected and how they are used, is key in building trust with residents and homeowners. Community associations should seek input from their stakeholders on privacy and data security policies. While legal obligations will not change based on community sentiment, understanding residents’ concerns can help guide decision-making and foster a sense of accountability. Discussing data security efforts and proactively addressing cybersecurity challenges at an annual meeting provides an opportunity to clarify expectations and show the association’s commitment to protecting personal information.

For guidance on strengthening a community association’s privacy and data security efforts, contact us to learn more about best practices and compliance strategies.

California Bill Proposes Expanding False Claims Act to Include Tax-Related Claims

California lawmakers are considering Senate Bill 799 (SB 799), introduced by Sen. Ben Allen, which proposes amending the California False Claims Act (CFCA) to encompass tax-related claims under the Revenue and Taxation Code.
The CFCA currently encourages employees, contractors, or agents to report false or fraudulent claims made to the state or political subdivisions, offering protection against retaliation. Under the CFCA, civil actions may be initiated by the attorney general, local prosecuting authorities, or qui tam plaintiffs on behalf of the state or political subdivisions. The statute also permits treble damages and civil penalties.
At present, tax claims are excluded from the scope of the CFCA. SB 799 aims to amend the law by explicitly allowing tax-related false claims actions under the Revenue and Taxation Code, subject to the following conditions: 
1. The damages pleaded in the action exceed $200,000.
2. The taxable income, gross receipts, or total sales of the individual or entity against whom the action is brought exceed $500,000 per taxable year.
Further, SB 799 would authorize the attorney general and prosecuting authorities to access confidential tax-related records necessary to investigate or prosecute suspected violations. This information would remain confidential, and unauthorized disclosure would be subject to existing legal penalties. The bill also seeks to broaden the definition of “prosecuting authority” to include counsel retained by a political subdivision to act on its behalf.
Historically, the federal government and most states have excluded tax claims from their False Claims Act statutes due to the complexity and ambiguity of tax laws, which can result in increased litigation and strain judicial resources. Experiences in states like New York and Illinois illustrate challenges associated with expanding false claims statutes to include tax claims. For instance, a telecommunications company settled a New York False Claims Act case involving alleged undercollection of sales tax for over $300 million, with the whistleblower receiving more than $60 million. Such substantial incentives have led to the rise of specialized law firms targeting ambiguous sales tax collection obligations, contributing to heightened litigation.
If enacted, SB 799 would require California taxpayers to evaluate their exposure under the CFCA for any positions or claims taken on tax returns. Importantly, the CFCA has a statute of limitations of up to 10 years from the date of violation, significantly longer than the typical three- or four-year limitations period applicable to California tax matters. Taxpayers may also need to reassess past tax positions to address potential risks stemming from this extended limitations period.

US State AI Legislation: Virginia Vetoes, Colorado (Re)Considers, and Texas Transforms

Virginia’s Governor, Glenn Youngkin, vetoed a bill this week that would have regulated “high-risk” artificial intelligence systems. HB 2094, which narrowly passed the state legislature, aimed to implement regulatory measures akin to those established by last year’s Colorado AI Act. At the same time, Colorado’s AI Impact Task Force issued concerns about the Colorado law, which may thus undergo modifications before its February 2026 effective date. And in Texas, a proposed Texas Responsible AI Governance Act was recently modified.
The Virginia law, like the Colorado Act, would have imposed various obligations on companies involved in the creation or deployment of high-risk AI systems that influence significant decisions about individuals in areas such as employment, lending, health care, housing, and insurance. These obligations included conducting impact assessments, keeping detailed technical documentation, adopting risk management protocols, and offering individuals the chance to review negative decisions made by AI systems. Companies would have also needed to implement safeguards against algorithmic discrimination. Youngkin, like Colorado’s Governor Polis, worried that HB 2094 would stifle the AI industry and Virginia’s economic growth. He also noted that existing laws related to discrimination, privacy, data usage, and defamation could be used to protect the public from potential AI-related harms. Whereas Polis ultimately signed the Colorado law, Youngkin did not.
However, even though Polis signed the Colorado law last year, he urged legislators in his signing statement to assess the law and provide additional clarity and revisions. And, last month, the AI Impact Task Force issued a report containing its recommendations. The task force identified potential areas where the law could be clarified or improved. It divided them into four categories: (1) where consensus exists about changes to be made; (2) where consensus needs additional time and stakeholder engagement; (3) where consensus depends on resolving multiple interconnected issues; and (4) where there is “firm disagreement.” The first category contains only a handful of relatively minor changes. The second includes, for example, clarifying the definition of “consequential decisions” – important because the AI tools used to make them are the ones subject to the law. The third includes, for example, defining “algorithmic discrimination” and the obligations developers and deployers should have in preventing it. And the fourth includes, by way of example, whether or not to provide an opportunity to cure incidents of non-compliance.
Texas, like Colorado and Virginia, has been considering legislation that addresses high-risk AI systems that are a “substantial factor” in consequential decisions about people’s lives. That bill was recently modified to remove the concept of algorithmic discrimination and, as currently drafted, prohibits AI systems that are developed or deployed with the “intent to discriminate.” It has also been modified to expressly state that disparate impact alone is not sufficient to prove intent to discriminate. The proposed Texas law is similar to Utah’s AI legislation (which went into effect on May 1, 2024), insofar as it would require notice when individuals are interacting with AI (though this obligation applies only to government agencies). Lastly, the law would also prohibit the intentional development of AI systems to “incite harm or criminality.” The bill was filed on March 14 and, as of this writing, was pending in the House Committee.
Putting it into Practice: The veto of HB 2094 underscores the complex journey toward comprehensive AI regulation at the state level. We anticipate ongoing legislative action at the state level, and some time before we see a consensus approach to AI governance. As a reminder, there are currently AI laws in effect focusing on various aspects of AI in New York (likenesses and employment), California (several different topics), Illinois (employment), and Tennessee (likenesses); passed AI legislation set to go into effect at various times from 2024 through 2026; and bills sitting in committee in at least 17 states.

GSA Expansion under Executive Order “Eliminating Waste and Saving Taxpayer Dollars by Consolidating Procurement”

On March 20, 2025, President Trump issued an Executive Order (the “Order”) targeted at consolidating domestic government procurement processes. Titled “Eliminating Waste and Saving Taxpayer Dollars by Consolidating Procurement,” this Order aims to streamline Federal procurement by consolidating it under the General Services Administration (GSA) rather than continuing the current practice of allowing all executive agencies and their subcomponents to manage much of their own procurement processes. As the Federal government is the largest buyer of goods and services globally, this Order seeks to enhance efficiency and effectiveness in procurement by better aligning with the GSA’s original purpose established in 1949—to consolidate the Federal government’s resources in order to streamline administrative work. The Order proposes that by centralizing procurement functions, the Federal government can better eliminate waste and duplication, allowing for the efficient use of taxpayer dollars and allowing agencies to focus on their core missions.
To ensure consistency across Federal procurement activities, the Order defines several key terms. The term “Administrator” shall refer to the GSA Administrator—not any agencies’ independent administrators—and “Agency” shall retain its definition as per Section 3502 of Title 44, but with an emphasis on the exclusion of the Executive Office of the President from this definition. “Common goods and services” are to be those defined by the Office of Management and Budget’s (OMB) Category Management Leadership Council, while an “indefinite delivery contract vehicle” refers to agreements that allow flexible ordering over time. The intention behind laying out these definitions is to further ensure clarity and consistency across the to-be-consolidated Federal procurement activities.
Next Steps for Implementation
The Order outlines a clear timeline for procurement consolidation. By April 19, 2025, the GSA Administrator will be designated as the executive agent for government-wide acquisition contracts (GWACs) for information technology. The GSA Administrator, in consultation with the Director of OMB, will also defer or decline the executive agent designation for GWACs for information technology, when necessary, to ensure continuity of service. Further, the GSA Administrator must, on an ongoing basis, “rationalize” Government-wide indefinite delivery contract vehicles for information technology for agencies across the Government to reduce contract duplication, redundancy, and other inefficiencies. By April 3, 2025, the OMB must issue a memorandum to agencies implementing the aforementioned requirements. By May 19, 2025, Federal agency heads must submit proposals for the GSA to handle the procurement of common goods and services. The GSA Administrator is then tasked with submitting a comprehensive plan to the OMB by June 18, 2025.
Potential Implications for Government Contractors
The Order emphasizes that so as not to impair existing legal authorities or budgetary functions, its decree must be implemented in accordance with applicable laws and available appropriations. Perhaps most importantly, this Order does not create any new enforceable rights or benefits against the U.S. government, suggesting a potentially limited ability of government contractors to protest or dispute the allocation of Federal awards.
On the same day of the Order’s release, the GSA held an all-hands meeting where the head of GSA’s Federal Acquisition Service is reported as stating, “[o]ver the coming months, we are going to ingest all domestic, commercial goods and services inside the GSA. We’re not going to do all $900 billion, but we will do about $400 billion, so we’re going to quadruple our size.”1

[1] https://www.nextgov.com/acquisition/2025/03/gsa-quadruple-size-centralize-procurement-across-government/403935/.

Virginia’s Governor Vetoes AI Bill

On March 24, 2025, Virginia’s Governor vetoed House Bill (HB) 2094, known as the High-Risk Artificial Intelligence Developer and Deployer Act. This bill aimed to establish a regulatory framework for businesses developing or using “high-risk” AI systems.
The Governor’s veto message emphasized concerns that HB 2094’s stringent requirements would stifle innovation and economic growth, particularly for startups and small businesses. The bill would have imposed nearly $30 million in compliance costs on AI developers, a burden that could deter new businesses from investing in Virginia. The Governor argued that the bill’s rigid framework failed to account for the rapidly evolving nature of the AI industry and placed an onerous burden on smaller firms lacking large legal compliance departments.
The veto of HB 2094 in Virginia reflects a broader debate in AI legislation across the United States. As AI technology continues to advance, both federal and state governments are grappling with how to regulate its use effectively.
At the federal level, AI legislation has been marked by contrasting approaches between administrations. Former President Biden’s Executive Orders focused on ethical AI use and risk management, but many of these efforts were revoked by President Trump this year. Trump’s new Executive Order, titled “Removing Barriers to American Leadership in Artificial Intelligence,” aims to foster AI innovation by reducing regulatory constraints.
State governments are increasingly taking the lead in AI regulation. States like Colorado, Illinois, and California have introduced comprehensive AI governance laws. The Colorado AI Act of 2024, for example, uses a risk-based approach to regulate high-risk AI systems, emphasizing transparency and risk mitigation. While changes to the Colorado law are expected before its 2026 effective date, it may emerge as a prototype for other states to follow.
Takeaways for Business Owners

Stay Informed: Keep abreast of both federal and state-level AI legislation. Understanding the regulatory landscape will help businesses anticipate and adapt to new requirements.
Proactive Compliance: Develop robust AI governance frameworks to ensure compliance with existing and future regulations. This includes conducting risk assessments, implementing transparency measures, and maintaining proper documentation.
Innovate Responsibly: While fostering innovation is crucial, businesses must also prioritize ethical AI practices. This includes preventing algorithmic discrimination and ensuring the responsible use of AI in decision-making processes.

Virginia Enacts Law Protecting Reproductive and Sexual Health Data

On March 24, 2025, Virginia Governor Youngkin signed into law S.B. 754, which amends the Virginia Consumer Data Protection Act (“VCDPA”) to prohibit the collection, disclosure, sale, or dissemination of consumers’ reproductive or sexual health data without consent.
The law defines “reproductive or sexual health information” as “information relating to the past, present, or future reproductive or sexual health” of a Virginia consumer, including:

Efforts to research or obtain reproductive or sexual health information services or supplies, including location information that may indicate an attempt to acquire such services or supplies;
Reproductive or sexual health conditions, status, diseases, or diagnoses, including pregnancy, menstruation, ovulation, ability to conceive a pregnancy, whether an individual is sexually active, and whether an individual is engaging in unprotected sex;
Reproductive and sexual health-related surgeries and procedures, including termination of a pregnancy;
Use or purchase of contraceptives, birth control, or other medication related to reproductive health, including abortifacients;
Bodily functions, vital signs, measurements, or symptoms related to menstruation or pregnancy, including basal temperature, cramps, bodily discharge, or hormone levels;
Any information about diagnoses or diagnostic testing, treatment, or medications, or the use of any product or service relating to the matters described above; and
Any information described above that is derived or extrapolated from non-health-related information such as proxy, derivative, inferred, emergent, or algorithmic data.

“Reproductive or sexual health information” does not include protected health information under HIPAA, health records for the purposes of Title 32.1, or patient-identifying records for the purposes of 42 U.S.C. § 290dd-2.
These amendments to the VCDPA will take effect on July 1, 2025.

Opinion: DEI & Bullying – Where Law, Politics and Business Need to Align

The recent news that Trump rescinded the executive order issued six days earlier against law firm Paul Weiss is a striking example of the intersections between politics, law, and business. According to Business Insider, “… since Trump’s earlier order to revoke its security clearances, the law firm has lost clients,” and its government contracts were put at risk. The law firm’s security clearance has been reinstated after an agreement was reached with the Trump Administration.
For the legal profession, it seems to be the tip of the iceberg. Covington & Burling and Perkins Coie were issued separate executive orders on February 25 and March 6. And, on March 25 and 27, two more executive orders targeting prestigious firms were issued – Addressing Risks from Jenner & Block and Addressing Risks from WilmerHale. 
Who’s next? Is the White House bullying law firms over DEI practices?
These incidents are part of a broader narrative unfolding across America, where political shifts are influencing business practices, and law firms, corporations, and elected officials find themselves at the crossroads of political ideologies and corporate responsibilities. In light of these developments, there is a growing concern about the broader implications for the workplace and business ethics.
For decades, DEI initiatives have functioned as guardrails in the corporate world, ensuring fair treatment in hiring, promoting inclusivity, and fostering environments where bias is actively mitigated. These practices, designed to level the playing field, were never about special advantages. Instead, they emphasized fairness—hiring and promoting the right person for the right job, regardless of their background.
Corporate America in the pre-DEI era was a very different place, often characterized by unchecked biases, discrimination, and exclusion. Although we’ve made significant progress, there are individuals and organizations that still foster a negative professional culture.  
Pressure on law firms to drop DEI practices comes amid broader efforts to scale back DEI across corporate America, including sectors that have seen significant benefits from inclusive hiring. According to Business Insider, “… companies like Walmart, Meta, and Lowe’s have all rolled back their DEI programs.” 
The erosion of protections that have improved workplaces over the past two decades will reverse much of that progress – but will not erase it. There are too many people—managers, employees, customers, investors—who believe in the practice, whether supported by policy or not.
These protections have not only created safer, more inclusive environments but have also contributed to better business outcomes. Diverse teams, after all, produce more innovative solutions, offer broader perspectives, and serve diverse client bases more effectively. Even with DEI in place, many corporations misbehave, particularly under the guise of “business as usual.”
Employees are the lifeblood of any organization. Creating a happy, productive, and safe work environment is essential to the success of any company. DEI initiatives—and a movement toward a safe, fair workplace—have been an essential part of fostering these environments, ensuring that all employees have the opportunity to succeed, regardless of their background or company culture. 
However, if companies are allowed to abandon these initiatives, I fear that we will see a return to the corporate cultures of the past, rife with discrimination, exclusion, bias and bullying. 
“In the words of W. Edwards Deming, ‘a bad system will always beat a good person’,” said Sharon Mahn, Esq., a leading legal recruiter and workplace expert. “Equality in the workplace means ensuring that everyone, regardless of their background or characteristics, has the same opportunity and is treated fairly.”
Even with DEI and legal protections in place, some corporations seem to behave badly. Without these checks and balances, this type of behavior will become more widespread.
The intersection of politics and law in business is unavoidable. Political decisions, such as Trump’s executive orders, can have wide-reaching effects on corporate practices, and law firms are forced to make very difficult decisions for the sake of multiple stakeholders. Legal structures, on the other hand, provide the mechanisms for enforcing fairness. 
When politics and law fail to align with business ethics, the consequences for employees and organizations alike can be catastrophic. 

The opinions expressed in this article are those of the author and do not necessarily represent those of The National Law Review.

Virginia Governor Vetoes Rate Cap and AI Regulation Bills

On March 25, Virginia Governor Glenn Youngkin vetoed two bills that sought to impose new restrictions on “high-risk” artificial intelligence (AI) systems and fintech lending partnerships. The vetoes reflect the Governor’s continued emphasis on fostering innovation and economic growth over introducing new regulatory burdens.
AI Bias Bill (HB 2094)
The High-Risk Artificial Intelligence Developer and Deployer Act would have made Virginia the second state, after Colorado, to enact a comprehensive framework governing AI systems used in consequential decision-making. The proposed law would have applied to “high-risk” AI systems used in employment, lending, and housing, among other fields, requiring developers and deployers of such systems to implement safeguards to prevent algorithmic discrimination and provide transparency around how automated decisions were made.
The law also imposed specific obligations related to impact assessments, data governance, and public disclosures. In vetoing the bill, Governor Youngkin argued that its compliance demands would disproportionately burden smaller companies and startups and could slow AI-driven economic growth in the state.
Fintech Lending Bill (SB 1252)
Senate Bill 1252 targeted rate exportation practices by applying Virginia’s 12% usury cap to certain fintech-bank partnerships. Specifically, the bill sought to prohibit entities from structuring transactions in a way that evades state interest rate limits, including through “rent-a-bank” models, personal property sale-leaseback arrangements, and cash rebate financing schemes.
Additionally, the bill proposed broad definitions for “loan” and “making a loan” that could have reached a wide array of service providers. A “loan” was defined to include any recourse or nonrecourse extension of money or credit, whether open-end or closed-end. “Making a loan” encompassed advancing, offering, or committing to advance funds to a borrower. In vetoing the measure, Governor Youngkin similarly emphasized its potential to discourage innovation and investment across Virginia’s consumer credit markets.
Putting It Into Practice: The vetoes of the High-Risk Artificial Intelligence Developer and Deployer Act (previously discussed here) and the Fintech Lending Bill signal Virginia’s preference for more flexible, innovation-friendly oversight. This development aligns with a broader pullback from federal agencies with respect to oversight of fintech and related emerging technologies (previously discussed here and here). Fintechs and consumer finance companies leveraging AI should continue to monitor what has become a rapidly evolving regulatory landscape.