All American AI: New OMB Memos Set Priorities for Federal AI Use and Acquisition

On April 3, 2025, OMB released two new memorandums on artificial intelligence (“AI”) as directed by Executive Order 14179, Removing Barriers to American Leadership in Artificial Intelligence. (As a reminder, President Trump issued Executive Order (EO) 14179 on January 23, 2025, after rescinding President Biden’s AI Executive Order (EO 14110).)
The first memo (M-25-21) provides guidance to agencies on federal AI use while the second memo (M-25-22) focuses on agency acquisition of AI. In a nutshell, these memos signal that the federal government is embracing AI and plans to maximize its AI use. Federal agencies can leverage AI to enhance operational efficiencies, improve decision-making, automate routine tasks, and analyze large datasets for insights that could inform policy and regulatory compliance. Contractors can expect to see a flurry of guidance and agency adoption of AI technologies.
OMB Memo M-25-21
M-25-21, Accelerating Federal Use of AI through Innovation, Governance, and Public Trust – This memo focuses on responsible federal agency AI use. We note below key points and timelines:

Guidance. Agencies must issue AI strategies within 180 days and post those strategies publicly on their websites. In addition, agencies must designate a Chief AI Officer within 60 days (if they have not already done so). Agencies will also develop internal policies and generative AI policies within 270 days.
AI Boost. Agencies must identify and remove barriers to AI adoption and application. Contractors should see a boost in federal interest in AI products, especially American-made AI.
High-Impact AI. The memo introduces the concept of “high-impact AI,” which is AI with output that “serves as a principal basis for decisions that have a legal, material, binding or significant effect on rights or safety.” M-25-21 at 14. This replaces the earlier concept in materials prepared by the Biden administration of safety-impacting and rights-impacting AI. There are particular considerations and expected requirements for use and implementation of high-impact AI set forth in the memo.
Code Sharing. Agencies are required to share any custom-developed federal AI code in active use, including models, among the federal government, with limited exceptions. Contractors should consider this when providing custom-developed code and the potential impacts on proprietary information.
Public Input. It is recommended that agencies solicit public input on AI policies. Contractors should be on the lookout for any rulemaking, public comment periods, or hearings to provide feedback.

OMB Memo M-25-22
M-25-22, Driving Efficient Acquisition of Artificial Intelligence in Government – This memo provides guidance on federal AI acquisition. Note that it will apply to any solicitations issued 180 days after the memo, including any option periods on existing contracts. Commercial products with embedded AI (e.g., a word processor or navigation system) are not within the scope of this guidance. Below are notable points and timelines:

American-Made AI Boost. Acquisition will be focused on AI developed and produced in the U.S.
Policy and Acquisition Guides. Within 270 days, agencies must update internal acquisition procedures. Within 100 days, GSA and OMB will publicly release guides to assist the federal acquisition workforce with AI procurement. Contractors should be on the lookout for these guides as they should provide valuable insight for doing business with the government in the AI space.
Federal Information Sharing. Within 200 days, GSA and OMB will develop an internal best practices repository for AI acquisition. While not expected to be publicly available, this will likely be the internal resource for standard contract clauses and prices. This signals that the government will strive to maximize uniformity in AI acquisition practices across agencies.
AI Use by Contractors. The memo directs agencies to consider AI use by vendors and contractors in contract performance that may occur outside of deliberate acquisition of AI. While there are already avenues for vendor and contractor disclosure of AI use, the memo explicitly cautions agencies about unsolicited AI use that may pose risks, especially in performance situations where the government may not anticipate it. Contractors should be on the lookout for any solicitation or contract provisions that require more detailed reporting on AI use.

Moving Forward
The OMB memos provide important insight into how the federal government will handle AI and acquisition of AI technologies moving forward. Due to the nature of AI technology, contractors can expect this field to be ever-evolving. However, it is clear AI is taking a strong foothold in the federal government. We anticipate more federal guidance and proposed rulemaking resulting from these memos and will continue to provide updates.

Choose Your GenAI Model Providers, Models, and Use Cases Wisely

Generative AI (GenAI) vendors, models, and use cases are not created equal. Model providers must be trusted to handle sensitive data. Models, like tools in a toolbox, may be better suited for some jobs than others. Use cases vary widely in risk.
When it comes to selection of GenAI model providers (e.g., tech companies and others offering models) and their models, due diligence is wise. For example, DeepSeek dominated headlines in early 2025 as a trendy pick for high performance and lower cost GenAI models. But not everyone is sold. A number of U.S. states and the federal government are reportedly implementing or considering bans because the models allegedly transfer user data to China, among other concerns. 
Before selecting a provider and model, it is important to learn where the provider is located; where data is transferred and stored; where and how the training data was sourced; compliance with the NIST AI risk management framework, ISO/IEC 42001:2023, and other voluntary standards; impact or risk assessments under the EU AI Act, Colorado AI Act, and other laws; guardrails and other safety features built into the model; and performance metrics of the model relative to planned use cases. This information may be learned from the “model card” and other documentation for each model, conversations with the provider, and other research. And, of course, the contractual terms governing the provider relationship and model usage are critical. Key issues include IP ownership, confidentiality and data protection, cybersecurity, liability, reps and warranties, and indemnification.
Once the appropriate provider and model are selected, the job is not done. Use cases must also be scrutinized. Even if a particular GenAI model is approved for use generally, what it is used for still matters (a lot). It may be relatively low risk to use an AI model for one purpose (e.g., summarizing documents), but the risk may increase for another purpose (e.g., autonomous resume screening). Companies should calibrate their risk tolerance for AI use cases, leaning on a cross-functional AI advisory committee. Use cases should be vetted to mitigate risks including loss of IP ownership, loss of confidentiality, hallucination and inaccuracies in outputs, IP infringement, non-unique outputs, and biased and discriminatory outputs and outcomes. If GenAI is already being used by employees in an ad hoc manner before a formal governance framework is implemented, identify such use through the advisory committee and other outreach, and prioritize higher-risk use cases for review and potential action.
Once enterprise risk tolerance is calibrated, AI usage policies and employee training should be rolled out. The policy and training should articulate which models and use cases are (and are not) permitted and explain the “why” behind the decisions to help contextualize important risks for employees. Policies should consider both existing laws and voluntary frameworks like NIST and ISO/IEC and should remain living documents subject to regular review and revision as the legal and technological landscape continues to evolve rapidly. Employee training is not only a good idea, but may also be a legal mandate, e.g., under the “AI literacy” requirement of the EU AI Act for companies doing business in the EU.
Bottom line: all businesses and their employees will soon be using GenAI in day-to-day operations—if they are not already. To mitigate risk, carefully select your vendors, models, and use cases, and implement policies and training reflecting enterprise risk tolerance.

U.S. House of Representatives Passes the Take It Down Act

On April 28, 2025, the U.S. House of Representatives voted 409-2 to pass S.146, the Take It Down Act. The bill aims to stop the misuse of artificial intelligence (AI) to create illicit imagery and deepfake abuse. The bill will be enforced by the Federal Trade Commission (FTC).
The bill requires online platforms to remove nonconsensual intimate imagery (NCII) within 48 hours of a request. The bill also makes it illegal for a person to “knowingly publish” authentic or synthetic NCII and outlines separate penalties for when the image depicts an adult or a minor.
Free speech advocates and digital rights groups say the bill is too broad and could lead to censorship of legitimate images. Other critics, such as the Cyber Civil Rights Initiative, an organization dedicated to protecting victims of online sexual abuse, are concerned that the bill “is an alarming expansion of the FTC’s enforcement authority.”
As the regulatory framework around AI and digital privacy evolves, companies developing or deploying AI, or engaging in content moderation, will need to be alert to shifting expectations around accountability and enforcement priorities.

Illinois Anti-Discrimination Law to Address AI Goes Into Effect on 1 January 2026

Effective 1 January 2026, Illinois House Bill 3773 (HB 3773) amends the Illinois Human Rights Act (IHRA) to expressly prohibit employers from using artificial intelligence (AI) that “has the effect of subjecting employees to discrimination on the basis of protected classes.” Specifically, Illinois employers cannot use AI that has a discriminatory effect on employees, “[w]ith respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.”
Employers are increasingly using AI during the employment life cycle, including resume scanners, chatbots, and AI-powered performance management software. While AI tools can streamline processes and increase data-based decision making, they also carry risks, such as perpetuating bias and discrimination. In light of HB 3773, Illinois employers should be mindful of these risks and carefully select and regularly audit their AI applications to ensure that the applications do not have a discriminatory effect on applicants1 and employees.
HB 3773 also requires employers to notify employees and applicants when using AI during recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or when the use could affect the terms, privileges, or conditions of employment. To comply with this mandate, Illinois employers must understand how the AI-powered employment tools they are using work and impact employment decisions and explain their use of AI tools to employees and applicants in a way that is easily understandable. Illinois employers should consider working with counsel to prepare these mandatory disclosures now to ensure compliance with HB 3773 by 1 January 2026.
1Although HB 3773 does not explicitly refer to “applicants,” the IHRA extends to job applicants and HB 3773’s requirements apply to “recruitment” and “hiring,” which suggests that Illinois employers should comply with the new AI requirements with respect to applicants.

Workado Settles With FTC Over Alleged Misrepresentations of AI Accuracy

The Federal Trade Commission (FTC) issued a press release this week announcing that it settled with Workado over alleged misrepresentations of its ability to detect whether content was generated by artificial intelligence (AI) or humans.
Workado’s AI Content Detector was marketed to consumers as a tool to decipher whether online content was generated by AI or a human being. Workado marketed the product as being “98 percent” accurate, but the FTC found that “independent testing showed the accuracy rate on general-purpose content was just 53 percent.” The FTC alleged that “the product did no better than a coin toss.” The claim of 98 percent accuracy was misleading, false, and not substantiated, according to the FTC.
The consent order prohibits Workado “from making any representations about the effectiveness of any covered product unless it is not misleading, and the company has competent and reliable evidence to support the claim at the time it is made”; requires Workado “to retain any evidence it uses to support such efficacy claims; email eligible consumers about the consent order and settlement with the Commission; and submit compliance reports to the FTC one year after the order is issued and every year for the following three years.”
The FTC continues to concentrate on companies’ misrepresentations about their products and services, which is a potent reminder not to overstate capabilities in advertising.

New Jersey’s Attorney General and Division on Civil Rights Starts 2025 With Guidance on AI Use in Hiring

On 9 January 2025, New Jersey Attorney General Matthew J. Platkin and the New Jersey Division on Civil Rights (DCR) announced the launch of a Civil Rights and Technology Initiative (the Initiative) “to address the risks of discrimination and bias-based harassment stemming from the use of artificial intelligence (AI) and other advanced technologies.” As part of the Initiative, the DCR issued guidance on how the New Jersey Law Against Discrimination (LAD) applies to discrimination resulting from the use of artificial intelligence (the Guidance).1 The Guidance addresses the use of AI in several contexts but is particularly relevant for employers who use AI to help screen applicants and assess employee performance.
Overview
Algorithmic Discrimination
The Guidance explains that New Jersey’s long-standing LAD applies to “algorithmic discrimination,” meaning discrimination resulting from an employer’s use of AI or other automated decision-making tools, in the same way it applies to other discriminatory conduct. Indeed, even if the employer did not develop the AI tool and is not aware of the tool’s algorithmic discrimination, the employer can still be liable for the discrimination that results from the employer’s use of the tool under the LAD. Therefore, employers must carefully consider how they use AI to avoid potential liability for algorithmic discrimination.
Disparate Treatment and Disparate Impact Discrimination
The Guidance gives several examples of algorithmic discrimination. It notes that AI tools can engage in disparate treatment discrimination if they are designed or used to treat members of a protected class differently. Relatedly, an entity could be liable for disparate treatment discrimination if it selectively uses AI only to assess members of a particular class, such as screening only Black prospective applicants with AI but not applicants of other races. Moreover, even if an AI tool is not used selectively and does not directly consider a protected characteristic, it may impermissibly “make recommendations based on a close proxy for a protected characteristic,” such as race or sex. 
AI tools can also engage in disparate impact discrimination in violation of the LAD if their facially nondiscriminatory criteria have a disproportionate negative effect on members of a protected class. The Guidance gives the example of a company using AI to assess contract bids that disproportionately screens out bids from women-owned businesses. 
Reasonable Accommodations
The Guidance also cautions that an employer’s use of AI tools may violate the LAD if they “preclude or impede the provision of reasonable accommodations.” For example, when used in hiring, AI “may disproportionately exclude applicants who could perform the job with a reasonable accommodation.” And if an employer uses AI to track its employees’ productivity and break time, it may “disproportionately flag for discipline employees who are allowed additional break time to accommodate a disability.” 
Liability
Notably, the Guidance takes a broad view of who can be held liable for algorithmic discrimination. Like other AI-related guidance and laws, under the LAD, employers cannot shift liability to their AI vendors or external developers. This is the case even if the employer does not know or understand the tool’s inner workings.
Best Practices
To decrease the risk of liability under the LAD, employers should take certain steps to ensure the AI tools they are using to make or inform employment decisions are not engaging in algorithmic bias or otherwise violating the LAD. These steps include:

Creating an AI group responsible for overseeing the implementation of the AI tools, comprising a cross-section of the organization, such as members of the legal, human resources, privacy, communications, and IT departments.
Implementing AI-related policies and procedures.
Conducting training on the AI tools and algorithmic bias and only allowing employees who have completed the training to use the AI tools.
Thoroughly vetting AI vendors and tools.
Securing appropriate contract provisions from AI vendors providing that (A) the vendor’s tools comply with all applicable laws, including, without limitation, all labor and employment laws; (B) the vendor will provide the employer any and all information reasonably requested to understand the algorithms behind the tool and how the tool complies with all applicable laws, ensuring the tool is not a “black box”; (C) where possible, a third party acceptable to the employer will audit the tool for such compliance on an annual basis, with costs shared in an agreeable way; and (D) the vendor will fully indemnify the employer for any breach of these provisions, backed by required liability insurance policies.
Swiftly addressing any issues identified in the audits or tests.
Reviewing any employment practices liability insurance or other applicable insurance policies to see if coverage is available.
Ensuring there is still a human element to any decision-making involving an AI tool.

Our Labor, Employment, and Workplace Safety lawyers regularly counsel clients on a wide variety of concerns related to emerging issues in labor, employment, and workplace safety law and are well-positioned to provide guidance and assistance to clients on AI developments.

Footnotes

1 Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination, https://www.nj.gov/oag/newsreleases25/2025-0108_DCR-Guidance-on-Algorithmic-Discrimination.pdf. 

China Launches Special Campaign to “Clear Up and Rectify the Abuse of AI”

On April 30, 2025, China’s Cyberspace Administration (CAC) launched a three-month campaign to “clear up and rectify the abuse of AI technology,” including the use of information that infringes on others’ intellectual property rights, privacy rights, and other rights. Per the CAC, “the first phase will strengthen the source governance of AI technology, clean up and rectify illegal AI applications, strengthen AI generation and synthesis technology and content identification management, and promote website platforms to improve their detection and identification capabilities. The second phase will focus on the abuse of AI technology to create and publish rumors, false information, pornographic and vulgar content, impersonate others, engage in online water army [paid posters] activities and other prominent issues, and concentrate on cleaning up related illegal and negative information, and deal with and punish illegal accounts, multi-channel networks (MCNs) and website platforms.”
Per the CAC, in the first phase, the focus is on rectifying six prominent problems:

First, illegal AI products: offering AI products without completing the required large-model filing or registration procedures; providing “one-click undressing” and other functions that violate law and ethics; and cloning or editing others’ voices, faces, and other biometric information without authorization and consent, infringing on their privacy.
Second, teaching and selling illegal AI products and tutorials: publishing tutorials on using illegal AI products to forge face-swapped videos, voice-cloned audio, and the like; selling illegal “speech synthesizers,” “face-changing tools,” and similar products; and marketing, hyping, and promoting illegal AI products.
Third, lax management of training corpora: using information that infringes on others’ intellectual property rights, privacy rights, and other rights; using false, invalid, or untrue content crawled from the Internet; using data from illegal sources; and failing to establish a training-corpus management mechanism or to regularly check and clean up illegal corpora.
Fourth, weak security management measures: failing to establish content review, intent recognition, and other security measures commensurate with the scale of the business; failing to establish an effective mechanism for managing illegal accounts; failing to conduct regular security self-assessments; and social platforms failing to identify and strictly control AI auto-reply and other services accessed through API interfaces.
Fifth, failure to implement content-labeling requirements: service providers failing to add implicit or explicit labels to deeply synthesized content or to provide or prompt explicit labeling functions for users; and content-dissemination platforms failing to monitor and identify generated synthetic content, allowing false information to mislead the public.
Sixth, security risks in key areas: registered AI products providing question-and-answer services in key areas such as medical care, finance, and services for minors without targeted industry security audits and control measures, resulting in problems such as “AI prescribing,” “inducing investment,” and “AI hallucinations” that mislead students and patients and disrupt the order of the financial markets.

The second phase focuses on rectifying seven prominent problems:

First, using AI to create and publish rumors: fabricating rumors and information involving current politics, public policy, social livelihood, international relations, emergencies, and the like, or making arbitrary guesses about and malicious interpretations of major policies; exploiting emergencies and disasters to fabricate causes, developments, and details; impersonating official press conferences or news reports to publish rumors; and using AI-generated content to maliciously exploit cognitive bias.
Second, using AI to create and publish false information: splicing and editing unrelated images, text, and video to generate mixed, half-true information; blurring or altering the time, place, and people involved in an incident and rehashing old news; creating and publishing exaggerated, pseudo-scientific, and other false content in professional fields such as finance, education, justice, and medical care; and using AI fortune-telling and AI divination to mislead and deceive netizens and spread superstition.
Third, using AI to create and publish pornographic and vulgar content: using AI “undressing,” AI drawing, and similar functions to generate synthetic pornography or indecent images and videos of others, soft-pornographic or borderline anime-style images (such as revealing clothing and suggestive poses), or other degrading content; producing and publishing bloody and violent scenes, distorted human bodies, surreal monsters, and other terrifying and bizarre imagery; and generating synthetic “pornographic texts,” “dirty jokes,” and other novels, posts, and notes with obvious sexual implications.
Fourth, using AI to impersonate others and commit infringing or illegal acts: using deepfake technologies such as AI face-swapping and voice cloning to impersonate experts, entrepreneurs, celebrities, and other public figures to deceive netizens or even market for profit; using AI to spoof, smear, distort, or misrepresent public figures or historical figures; using AI to impersonate relatives and friends in online fraud and other illegal activities; and improperly using AI to “resurrect the dead” or abuse the information of deceased persons.
Fifth, using AI to engage in online water army [paid-posting] activities: using AI to “farm” accounts by registering and operating social accounts in batches that simulate real people; using AI content farms or AI article “spinning” to mass-produce and publish low-quality, homogeneous writing for traffic; and using AI group-control software and social bots to like, post, and comment in batches, manipulate engagement and comment volumes, and manufacture trending topics.
Sixth, AI products, services, and applications that violate regulations: creating and disseminating counterfeit and shell AI websites and applications; AI applications providing illegal functions, such as creative tools that “expand trending searches and lists into articles” or AI social and chat software offering vulgar, soft-pornographic dialogue services; and providing illegal AI applications or synthetic-generation services, selling related courses, or promoting and funneling traffic to them.
Seventh, infringing on the rights and interests of minors: AI applications that induce addiction in minors or that include content harmful to minors’ physical and mental health, including within minor mode.

The original text is available here (Chinese only).

FTC Regulators Remark on Agency’s Priorities Under Trump Administration

As reported by Bloomberg Law, last week Federal Trade Commission Commissioners made statements indicating a shift in the agency’s priorities under the Trump Administration: focusing enforcement efforts on existing federal privacy laws, forgoing broader definitions of consumer harm, and fostering AI innovation.
At the International Association of Privacy Professionals’ Annual Global Privacy Summit, FTC Commissioner Melissa Holyoak stated, “The Commission is committed to protecting consumers’ privacy and security interests while promoting competition and innovation.” Holyoak remarked, “We’ll do that by enforcing the laws we have—and not by stretching our legal authorities.” Specifically, Holyoak indicated that the agency should focus its enforcement on three laws under its jurisdiction: the Children’s Online Privacy Protection Act (COPPA), the Fair Credit Reporting Act (FCRA), and the Gramm-Leach-Bliley Act (GLBA). Holyoak also noted a focus on data brokers and other businesses that sell Americans’ sensitive data in bulk to foreign adversaries and bad actors.
Christopher Mufarrige, newly appointed director of the FTC’s Bureau of Consumer Protection, also made statements during the Interactive Advertising Bureau’s Public Policy and Legal Summit reinforcing the agency’s priorities. Mufarrige stated that the FTC will now be “much more in favor of innovation” and will focus on “actual, concrete harms” to consumers, and remarked that “the Commission’s role is to reinforce market practices, not replace them.” With respect to AI, Mufarrige said that the agency will focus on “how AI is used to facilitate frauds and scams,” instead of challenging the technology in and of itself.

President Trump Issues Executive Order to Support AI Education and Workforce Development

On April 23, 2025, President Donald Trump signed an executive order (EO) to promote education on and integration of artificial intelligence (AI) in K-12, higher education, and workplace settings through public-private partnerships with industry leaders and academic institutions. The order continues the Trump administration’s efforts to expand and promote this emerging technology.

Quick Hits

President Trump’s executive order aims to enhance AI education and workforce development in the United States.
The order establishes a cross-agency task force to implement the policy and prioritizes public-private partnerships with industry leaders and academic institutions to provide resources for AI education and workforce development.
The order further directs the secretary of labor to promote registered apprenticeships in the AI industry.

The EO, “Advancing Artificial Intelligence Education For American Youth,” seeks to create a framework for expanding AI education in K-12 and higher education and expanding AI workforce development. The EO outlines a strategy to integrate AI into education, promote early exposure to AI concepts, and develop an AI-ready workforce.
Central to the EO are the establishment of public-private partnerships to provide resources for teaching AI literacy in K-12 education and a U.S. Department of Labor (DOL)-led initiative to establish registered AI apprenticeships.
The EO comes after President Trump, in his first days in office, issued a separate order, EO 14179, “Removing Barriers to American Leadership in Artificial Intelligence,” which seeks to “enhance America’s global AI dominance” and rescinded a Biden-era EO that had sought to balance promoting development of AI with safeguarding workers and consumers from potential negative impacts of the technology.
Promoting AI Education
President Trump’s latest EO establishes the “Artificial Intelligence Education Task Force,” which brings together the heads of a range of federal government agencies and entities, including the secretary of labor, the secretary of education, and the special advisor for AI and crypto. The task force is directed to create the “Presidential Artificial Intelligence Challenge,” a competition across multiple age categories and regions. Additionally, the EO seeks to prioritize and establish resources to support training teachers on the use of AI.
A key component of the EO is to “establish public-private partnerships with leading AI industry organizations, academic institutions, nonprofit entities, and other organizations with expertise in AI and computer science education” to provide resources and support for AI in K-12 education. The order further directs the task force to develop industry commitments and identify federal funding mechanisms, including discretionary grants, to support K-12 AI education.
Registered Apprenticeships
The EO directs the secretary of labor to “increase participation in AI-related Registered Apprenticeships” by engaging “industry organizations and employers” and supporting “the creation of industry-developed program standards to be registered on a nationwide basis.” The EO directs the secretary of labor to encourage states and federal grantees to use funding provided by the Workforce Innovation and Opportunity Act (WIOA) “to develop AI skills and support work-based learning opportunities within occupations utilizing AI,” including encouraging states to use set-asides to integrate AI learning opportunities in youth programs. The EO further directs the secretary of education and the director of the National Science Foundation (NSF) to create more opportunities for high school students to take coursework on AI and expand such coursework or certification programs.
Next Steps
The EO advances the Trump administration’s promotion of AI technology and development and supports broader application of AI in various contexts through AI literacy. In particular, the EO further focuses on involving AI technology developers and industry leaders through public-private partnerships to provide financial support and resources to expand AI education and provide workforce development opportunities.
Thus far, the Trump administration’s policies regarding AI differ from those of the Biden administration. While the Biden administration likewise promoted AI development, it was simultaneously cautious of the potential negative impacts of the technology. For example, many regulatory agencies during the Biden administration issued nonbinding guidance regarding the risks of AI and safeguards to protect against those risks, and the administration encouraged private organizations to self-regulate. One key element of the Biden administration’s approach was its “Blueprint for an AI Bill of Rights,” which outlined nonbinding recommendations for the design, use, and deployment of AI and automated systems when such tools are used in ways that affect an individual’s rights, opportunities, or access to critical resources or services.
Without a similar federal focus on guardrails for AI, it is anticipated that states will continue to fill in the gaps. Many states and jurisdictions, including California, Colorado, Illinois, and New York City, have already passed, or are considering, laws and regulations that restrict the use of AI without human oversight and clarify that applying such technology to employment-related decisions may result in substantive violations of federal and state antidiscrimination laws.

Employment Law This Week Episode – 100 Days In – What Employers Need to Know [Video, Podcast]

This week, we’re bringing you a special episode on the first 100 days of the Trump administration, in which we highlight sweeping policy shifts; battles at the National Labor Relations Board (NLRB); revisions to diversity, equity, and inclusion (DEI) programs; the rapid evolution of artificial intelligence (AI) in the workplace; and more.
100 Days In: What Employers Need to Know
The current administration has reached the 100-day mark, and employers have faced sweeping changes and major policy shifts—but not everything has moved at the same pace. While DEI programs and workplace AI have faced significant revisions, other areas, such as the NLRB, have been marked by board member disputes and ongoing court battles, adding layers of uncertainty.
This week’s key topics include:

DEI program scrutiny,
independent agency challenges,
rescinded policies from past administrations, and
AI workplace guidance.

In this special episode, Epstein Becker Green attorneys unpack these significant changes and provide actionable insights for navigating the regulatory and compliance chaos.

CNIL Publishes 2024 Annual Activity Report

On April 29, 2025, the French Data Protection Authority (the “CNIL”) published its Annual Activity Report for 2024 (the “Report”). The Report provides an overview of the CNIL’s activities in 2024, including enforcement activities and other new developments.
In particular, the Report revealed that:

In 2024, the CNIL conducted numerous inspections of private and public companies. These investigations were initiated following complaints or reports linked to current events or as part of the CNIL’s identified priority areas. The CNIL focused on a wide range of issues, including cookie compliance, cybersecurity, and the use of CCTV. In total, the CNIL adopted 303 corrective measures in 2024, including 87 sanctions, resulting in more than €55 million in fines. At the EU level, the CNIL led 12 cross-border sanction projects under the cooperation and consistency mechanisms.
The CNIL received a high volume of complaints, with a total of 17,772 complaints filed. Data breaches, particularly those that could result in the theft of banking information or identity theft, remained a major source of concern for the CNIL. More broadly, issues related to telecommunications, websites and social media generated the highest number of complaints (49% of the total complaints). These were followed by issues related to the retail sector (19%) and employment (13%).
The CNIL was notified of 5,629 personal data breaches – a 20% increase compared to 2023. Beyond this increase, according to the CNIL, large-scale incidents surged, with the number of breaches affecting over one million people doubling. These attacks targeted key sectors such as Internet service providers, e-commerce, public services and healthcare platforms. One-third of the CNIL’s sanctions concerned security failings, highlighting an ongoing compliance gap. To respond to this threat, the CNIL furthered its collaborations with the French Agency for Information Security (Agence nationale de la sécurité des systèmes d’information), the Paris cyber prosecutor (J3) and Cybermalveillance.gouv.fr in an effort to contain the impact of breaches.
Building on its 2023 action plan, the CNIL published its first AI recommendations, including 12 practical guidance sheets (nine of which were finalized), with the aim of supporting privacy-friendly innovation.
As part of the CNIL’s broader commitment to sector and technological alignment, the CNIL launched a regulatory sandbox focused on the senior economy, selecting four companies for tailored support. Additional initiatives included the publication of a recommendation on designing privacy-friendly mobile apps, regional outreach to discuss GDPR implementation, and thematic webinars.
The CNIL also increased its outreach on youth data protection, focusing on issues such as access to harmful content, parental monitoring, cyberbullying and media literacy. It conducted 84 in-person actions targeting minors. Workshops were held in schools, public events and fairs, alongside the release of youth-friendly resources like the “Your Data, Your Rights” campaign, created in collaboration with the data protection authority of South Korea. The CNIL also expanded partnerships with television broadcast channels and the Ministry of Education, reinforcing its role in digital citizenship education. Beyond youth, the CNIL organized 173 awareness-raising activities nationwide targeting intergenerational audiences, persons with disabilities and families. Among other initiatives, the CNIL published a guide on cyber threats for families and organized workshops for seniors.

Download a copy of the Report (only available in French).

Delete All IP Law? Really?

By now, everyone should know about the X post heard around the intellectual property world. On April 11, Jack Dorsey, co-founder of Twitter and Block, posted these four words: “delete all IP law.” A few hours later, Elon Musk chimed in with “I agree.” Over the next few days, the media was full of reactions from tech and legal luminaries. Some tech celebrities agreed. “Jack has a point,” posted Chris Messina, ironically on X rival Bluesky. Members of the IP legal community who were quoted in the media objected strongly, as one might expect.
When I asked several members of the IP legal community if they wanted to comment, the response I got from some was cool.
“These offhand tweets have received far more media attention than they deserve,” said Prof. Edward Lee of Santa Clara University School of Law, adding, “Let’s talk when Block and Tesla allow competitors to freely use their trademarks and trade secrets.”
The professor has a point. Perhaps X posts, shot from the hip, aren’t worth the paper they aren’t written on. But, as documented recently in the Washington Post, “[s]ome X posts appear to have influenced the White House.” TechCrunch’s Anthony Ha observed, “the line between a random conversation on Twitter/X and actual government policy is thinner than it used to be.” And there is strong pressure from Silicon Valley to weaken copyright protection to allow unfettered training of the large language models that are the backbone of artificial intelligence (AI). As glib as Dorsey and Musk’s X posts might be, they need to be addressed seriously.
To begin with, my colleague Jim Ko explains in his article, “‘Delete All IP Law’? Why the Tech Titans Want to Pull Up the Ladder Behind Them,” that intellectual property rights were considered so important to the development of America that they were enshrined in the Constitution by the Founding Fathers. Article I, Section 8, Clause 8 gives Congress the power “[t]o promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”
Building on that, Robert Sterne of Washington, D.C.’s Sterne, Kessler, Goldstein & Fox, PLLC, states, “having represented literally hundreds of startups, emerging companies, and universities – the lifeblood of the USA innovation ecosystem – I can say with absolute certainty that a strong, robust, and predictable intellectual property regime is essential for their commercial success. Any weakening of the USA IP system would disadvantage the future of the nation as China and the European Union continue to evolve and strengthen their IP systems to generate jobs, wealth, and security.”
Scott Kelly, a partner at Banner Witcoff in Washington, zeroes in on patent law, which he says “isn’t perfect, but it solves a critical problem. Scientific progress does not benefit from innovators keeping their inventions secret when it is trivial to copy someone else’s ingenuity. Patents incentivize a proactive approach to problem solving, and disclosure of those solutions, rather than a world where the best strategy is to let others make the investment and figure it out for you.”
The economic consequences of deleting all IP law would be severe. Russell Beck of Beck Reed Riden, LLP, of Boston notes that “IP makes up 90% of the value of S&P 500 companies,” citing Ocean Tomo’s Intangible Asset Market Value Study. He says that “eliminating IP rights would erase that value, stifle innovation, strip the US of its global competitiveness, and cripple the US economy. ‘Deleting’ IP doesn’t level the playing field for AI – it removes the field entirely. It replaces commercial ethics and rights with corporate espionage and IP theft.”
Having recently returned from The Sedona Conference’s Global IP Litigation Conference in The Hague, I am particularly sensitive to the international ramifications of ill-considered attacks on intellectual property. We are already making it difficult for foreign students, artists, and researchers to work in the United States through visa restrictions and academic funding cuts. We would exacerbate the problem if inventors and creators – and the capital backing them – decided that to protect their rights, they need to relocate to a jurisdiction with stronger patent, copyright, and trade secret laws. Many commentators have noted that a weakening of trademark protections would cause a flood of knockoff products into the domestic market, eroding consumer confidence.
Observers are correct to note that intellectual property law isn’t perfect, but we can’t afford to throw it all out. There are tensions in patent law between high tech, pharma, and manufacturing sectors. Trade secrets are becoming more important to startups, but restrict labor mobility. As a retired librarian, I find modern digital copyright restrictions difficult to square with traditional concepts of access to knowledge, but I am also anxious about the future for the musicians and videographers in my family. 
At The Sedona Conference, we both stand for the “rule of law” and strive to move the law forward “in a reasoned and just way” through dialogue and consensus. It’s not easy, and it requires a lot of listening to understand different viewpoints and find common ground. It requires more than a four-word post with potentially dire consequences.