Federal Take It Down Act Targeting Revenge Porn Becomes Law

On May 19, 2025, President Donald Trump signed into law the Take It Down Act (S. 146). The federal legislation criminalizes the publication of non-consensual intimate imagery, including AI-generated pornography depicting real people. It follows roughly forty states that have already enacted their own legislation targeting such online abuse.
What are the Take It Down Act’s Requirements?
The federal Take It Down Act creates civil and criminal penalties for knowingly publishing or threatening to share non-consensual intimate imagery, including computer-generated intimate images that depict real, identifiable individuals. If the victim is an adult, violators face up to two years in prison; if the victim is a minor, up to three years.
Social media platforms, online forums, hosting services and other tech companies that facilitate user-generated content are required to remove covered content within forty-eight hours of request and implement reasonable measures to ensure that the unlawful content cannot be posted again.
Consent to the creation of an image is not a defense to its non-consensual publication.
Exempt from prosecution are good faith disclosures or those made for lawful purposes, such as legal proceedings, reporting unlawful conduct, law enforcement investigations and medical treatment.
What Online Platforms are Covered Under the Take It Down Act?
Covered Platforms include any website, online service, application, or mobile app that serves the public and either: (i) provides a forum for user-generated content (e.g., videos, images, messages, games, or audio), or (ii) in the ordinary course of business, regularly publishes, curates, hosts, or makes available non-consensual intimate visual depictions.
Covered Platforms do not include broadband Internet access providers, email services, or online services or websites consisting primarily of preselected content that is not user-generated but is curated by the provider, where interactive features are merely incidental to, or directly related to, the preselected content.
What are the Legal Obligations for Covered Online Platforms?
The Take It Down Act requires covered platforms to ensure compliance by, without limitation: (i) providing a clear and accessible complaint and removal process; (ii) providing a secure method for identity verification; and (iii) removing unlawful content and copies thereof within forty-eight hours of receipt of a verified complaint.
The new law also contains recordkeeping and reporting requirements.
While automated filtering is not expressly required, platforms are well advised to address content moderation and filtration policies: the Act does require reasonable efforts to identify and remove any known identical copies of non-consensual intimate imagery.
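For platforms weighing how to operationalize that "identical copies" requirement, one common engineering approach is content hashing: computing a digest of each file removed after a verified complaint and blocking future uploads that match. The sketch below is illustrative only; it assumes a simple exact-match (SHA-256) design, and the class and method names are hypothetical. Real deployments typically add perceptual hashing to catch resized or re-encoded copies and log every takedown action to support the good faith safe harbor discussed below.

```python
import hashlib

class TakedownHashRegistry:
    """Illustrative sketch: exact-match blocklist of removed imagery.

    Assumes files are compared byte-for-byte via SHA-256; a production
    system would add perceptual hashing to catch resized or re-encoded
    copies, plus durable audit logging of each takedown action.
    """

    def __init__(self):
        self._blocked_digests = set()

    @staticmethod
    def _digest(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def register_takedown(self, content: bytes) -> str:
        """Record the digest of content removed after a verified complaint."""
        digest = self._digest(content)
        self._blocked_digests.add(digest)
        return digest

    def is_blocked(self, content: bytes) -> bool:
        """Check a new upload against previously removed content."""
        return self._digest(content) in self._blocked_digests

# Example usage with hypothetical byte strings:
registry = TakedownHashRegistry()
registry.register_takedown(b"<bytes of removed image>")
print(registry.is_blocked(b"<bytes of removed image>"))    # True
print(registry.is_blocked(b"<bytes of unrelated image>"))  # False
```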
Website agreements, as well as reporting and removal processes, are among the legal, regulatory, and operational compliance areas that warrant consideration and attention.
Who is Empowered to Enforce the Take It Down Act?
The Federal Trade Commission is authorized to enforce the Take It Down Act's notice-and-takedown requirements against technology platforms that fail to comply. Violations are treated as deceptive or unfair acts or practices.
Good faith, prompt compliance efforts may be considered a safe harbor and a mitigating factor for platforms in the context of regulatory enforcement. Internal processes that document good faith compliance efforts, including documentation of all takedown actions, should be implemented so that platforms can avail themselves of the safe harbor.
Removal and appeals processes must be implemented on or before May 19, 2026.
Takeaway: Covered online platforms, including but not limited to those that host images, videos, or other user-generated content, should consult FTC and State Attorneys General defense and investigations counsel to discuss compliance with the Act's strict takedown obligations, and should do so in advance of the effective date in order to minimize potential liability exposure.

AI Service Provider Faces Class Actions Over Catholic Health Data Breach

AI service provider Serviceaide Inc. faces two proposed class action lawsuits from a data breach tied to Catholic Health System Inc., a nonprofit hospital network in Buffalo, New York. The breach reportedly exposed the personal information of over 480,000 individuals, including patients and employees.
Filed in the U.S. District Court for the Northern District of California, the lawsuits allege that Serviceaide acted negligently and failed to protect sensitive data in an Elasticsearch database that was allegedly left publicly accessible for months before the breach was disclosed.
Serviceaide, which provides AI-driven chatbots and IT support solutions, was contracted by Catholic Health and entrusted with managing protected health information and employment records. Plaintiffs allege that the company waited seven months after the incident to notify affected individuals. The affected data included patient records and other personal information.
The lawsuits allege claims of negligence, breach of implied contract, unjust enrichment, invasion of privacy, and violations of California’s Unfair Competition Law.
Both plaintiffs seek to represent a nationwide class of individuals whose data was compromised and are seeking injunctive relief, damages, and attorneys’ fees.
These lawsuits highlight growing legal exposure for tech firms that handle protected health information, especially as more hospitals and healthcare systems outsource services to AI and cloud vendors. The healthcare sector remains one of the most targeted industries for cyber threats, and breaches involving third-party vendors are drawing increasing legal scrutiny.

Bipartisan Take It Down Act Becomes Law

On Monday, May 19, 2025, President Donald Trump signed the “Take It Down Act” into law. The Act, which unanimously passed the Senate and cleared the House in a 409-2 vote, criminalizes the distribution of intimate images of someone without their consent. Lawmakers from both parties have commented that the law is long overdue to protect individuals from online abuse. It is disheartening that a law must be passed (almost unanimously) to require people and social media companies to do the right thing.
There has been growing concern about AI’s ability to create and distribute deepfake pictures and videos of individuals. Deepfake images are developed by combining benign images (primarily of women and celebrities) with other fabricated content to create explicit photos used for sextortion, revenge porn, and deepfake image abuse.
The Take It Down Act requires social media platforms to remove non-consensual intimate images within 48 hours of a victim’s request. The Act requires “websites and online or mobile applications” to “implement a ‘notice-and-removal’ process to remove such images at the depicted individual’s request.” It provides for seven separate criminal offenses chargeable under the law. The criminal prohibitions take effect immediately, but social media platforms have until May 19, 2026, to establish the notice-and-removal process for compliance.
The Take It Down Act is a late response to a growing problem of sexually explicit deepfakes used primarily against women. Victims must still proactively reach out to social media companies to take down non-consensual images, which in the past has been difficult. Requiring the companies to take down the offensive content within 48 hours is a big step forward in giving individuals the right to protect their privacy and self-determination.

Copyright, AI, and Politics

In early 2023, the US Copyright Office (CO) initiated an examination of copyright law and policy issues raised by artificial intelligence (AI), including the scope of copyright in AI-generated works and the use of copyrighted materials in AI training. Since then, the CO has issued the first two installments of a three-part report: part one on digital replicas, and part two on copyrightability.
On May 9, 2025, the CO released a pre-publication version of the third and final part of its report on Generative AI (GenAI) training. The report addresses stakeholder concerns and offers the CO’s interpretation of copyright’s fair use doctrine in the context of GenAI.
GenAI training involves using algorithms to train models on large datasets to generate new content. This process allows models to learn patterns and structures from existing data and then create new text, images, audio, or other forms of content. The use of copyrighted materials to train GenAI models raises complex copyright issues, particularly issues arising under the “fair use” doctrine. The key question is whether using copyrighted works to train AI without explicit permission from the rights holders is fair use and therefore not an infringement or whether such use violates copyright.
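To make that training-and-generation loop concrete, the toy sketch below (our illustration, not the Copyright Office’s) builds a character-level bigram model: it "learns" statistical patterns from a training corpus and then samples new text from those patterns. Production GenAI systems use neural networks at vastly larger scale, but the copyright-relevant mechanics are analogous: the model’s parameters are derived from the ingested works.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """'Training': record which character tends to follow which in the corpus."""
    counts = defaultdict(list)
    for current, nxt in zip(corpus, corpus[1:]):
        counts[current].append(nxt)
    return counts

def generate(model: dict, seed: str, length: int = 40) -> str:
    """'Inference': sample new text from the learned statistics (seed = one char)."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return "".join(out)

# A tiny stand-in for the "large dataset" the article describes.
corpus = "the quick brown fox jumps over the lazy dog. the dog naps."
model = train_bigram_model(corpus)
print(generate(model, seed="t"))  # new text sampled from learned patterns
```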
The 107-page report provides a thorough technical and legal overview and takes a carefully calibrated approach to the legal issues underlying fair use in GenAI training. The report suggests that each case is context-specific and requires a thorough evaluation of the four factors outlined in Section 107 of the Copyright Act:

The purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes
The nature of the copyrighted work
The amount and substantiality of the portion used in relation to the copyrighted work as a whole
The effect of the use upon the potential market for or value of the copyrighted work.

With regard to the first factor, the report concludes that GenAI training on large, diverse datasets “will often be transformative.” However, the use of copyright-protected materials for AI model training is not, by itself, sufficient to establish fair use. The report states that “transformativeness is a matter of degree of the model and how it is deployed.”
The report notes that training a model is most transformative where “the purpose is to deploy it for research, or in a closed system that constrains it to a non-substitutive task,” as opposed to instances where the AI output closely tracks the creative intent of the input (e.g., generating art, music, or writing in a similar style or substance to the original source materials).
Turning to the commercial nature of the use (an aspect of the first factor), the report notes that a GenAI model is often the product of efforts undertaken by multiple, distinct actors, some of which are commercial entities and some of which are not, and that it is typically difficult to attribute the model definitively to a commercial or a noncommercial actor.
Even if an entity is for-profit, that does not necessarily mean the modeling use will be considered “commercial.” The work of researchers developing a model for purposes of publishing an academic research paper, for example, would not be deemed commercial. Similarly, a nonprofit could very well develop a GenAI model to license for commercial purposes.
With regard to the third factor (the amount of the copyrighted work used), the report acknowledges that machine learning processes often require ingestion of entire works and notes that the wholesale taking of entire works “ordinarily weighs against fair use.” However, in evaluating the use of entire works in GenAI models, the report offers two questions for analysis:

Is there a transformative purpose?
How much of the work is made publicly available?

Fair use is much more likely in instances where a GenAI model employs methods to prevent infringing outputs.
Finally, addressing the fourth factor (market harm), the report acknowledges that the analysis of fair use in GenAI training places the CO in “uncharted territory.” However, the CO suggests that assessment of market harm should address broad market “effects” and not merely the market harm for a specific copyrighted work. The report explains that the potential for AI-generated outputs to displace, dilute, and erode the markets for copyrighted works should be considered because such effects are likely to result in “fewer human-authored” works being sold. This reflects concerns raised by artists, musicians, authors, and publishers about declining demand for original works as AI-generated imitations proliferate. Where GenAI systems compete with or diminish licensing opportunities for original human creators – especially in fields such as illustration, voice acting, or journalism – the fourth factor is likely to weigh strongly against fair use.
Practice Note: Companies developing GenAI systems for text, image, music, or video generation should proceed cautiously when incorporating copyrighted material into training datasets. The CO report casts doubt on assumptions that current training practices are broadly protected under fair use. GenAI developers should consider initiatives such as proactively licensing the content used to train their models. As this fair use issue remains an evolving area of copyright law, companies should be prepared to adjust business models in response to judicial or legislative developments.
On May 10, 2025, the day after the report was issued, the White House terminated Register of Copyrights Shira Perlmutter “effective immediately.” On May 12, 2025, the White House appointed Deputy Attorney General Todd Blanche, who represented Donald Trump during his 2024 criminal trial, as acting Register. The CO has raised questions about the appointment on the basis that the Register of Copyrights is appointed by the Librarian of Congress, an official of the legislative branch, not by the president.

Top 10 Labor, Employment, and OSHA Trends for 2025

As we approach midyear, the ArentFox Schiff Labor, Employment & OSHA team highlights some of the most pressing legal issues facing employers this year, including artificial intelligence (AI) regulation at the state level, reshaping of the National Labor Relations Board (NLRB), continuing expansion of state paid family and medical leave laws, challenges to diversity, equity, and inclusion (DEI) in the workplace, and changes to US Equal Employment Opportunity Commission (EEOC) guidance and enforcement.
1. History in the Making: The State of the NLRB Under the New Administration
Like most government agencies, the NLRB has not escaped the Trump Administration’s efforts to reshape the federal government and replace officials in positions of power. Since assuming office, President Trump has discharged then-NLRB General Counsel Jennifer Abruzzo and removed Board member Gwynne Wilcox.
While replacement of Abruzzo was expected, the president’s decision to remove Wilcox surprised many and introduced legal challenges, both at the NLRB and in court:

Removing Wilcox on January 27 left the NLRB with only two of its typical five members and without a quorum to decide the cases pending before it.
On March 6, a DC District Court ordered Wilcox’s temporary reinstatement based upon a 90-year-old US Supreme Court precedent, Humphrey’s Executor v. US — a case that prohibited then-president Franklin D. Roosevelt from removing members of independent agency panels.
Upon appeal, on March 28, a three-judge panel of the DC Circuit stayed Wilcox’s reinstatement, removing her once more. The panel majority agreed that the 2020 Supreme Court decision, Seila Law v. CFPB, narrowed the precedent set in Humphrey’s Executor and empowered the president to fire members of independent, multi-member agency panels that wield “substantial executive power.”
On April 7, the full DC Circuit, sitting en banc, reversed the panel decision by a 7-4 margin and ordered Wilcox back to work.
On April 9, Chief Justice Roberts, as circuit justice for the DC Circuit, stayed the April 7 decision and reinstated Wilcox’s removal pending full resolution of the appeal by the DC Circuit. Oral argument on the merits was scheduled for May 16.

The question at present is not if the Wilcox case will go to the Supreme Court, but when. The presumptive legal protections restricting President Trump’s removal powers are about to be put to the test. When Wilcox’s case makes its way to the Supreme Court, Humphrey’s Executor will be pitted against recent decisions, like Seila Law, which may result in a more expansive view of the president’s power. In the short term, Chief Justice Roberts has once more neutered the NLRB; in the long term, the order implicates the scope of a president’s ability to control agency leadership as a matter of law.
As to the General Counsel (GC) position, on March 25, President Trump nominated Crystal Carey, a partner at management-side law firm Morgan Lewis & Bockius LLP, to serve as the new NLRB GC. Carey’s nomination follows Acting GC William Cowen’s rescission of more than 25 “guidance” memoranda previously issued by Abruzzo, suggesting an NLRB less likely to issue decisions unfavorable to employers and more permissive toward the use of non-competition, confidentiality, and non-disparagement provisions in agreements with nonsupervisory employees.
2. Noncompete Landscape
In recent years, noncompetes have been the subject of significant attention at both the state and federal level. Perhaps most notably, in April 2024, the Federal Trade Commission (FTC) voted to adopt a final rule that would have essentially banned noncompete agreements for workers in the United States by prohibiting employers from entering into noncompete agreements with workers and rendering prior noncompetes unenforceable, except for a narrowly defined category of “senior executives.” The final rule was immediately challenged in multiple lawsuits. On August 20, 2024, on the eve of the effective date of the ban, the US District Court for the Northern District of Texas entered an injunction in one such lawsuit, holding that the FTC lacked statutory authority to create the rule and setting it aside on a nationwide basis.
With the new Administration, Andrew Ferguson replaced Lina Khan as FTC chair. Ferguson voted against the noncompete ban in April 2024 and has opted not to pursue the appeal of the Texas injunction. However, while he indicated that a ban was not the correct approach, Ferguson has also stated that the FTC will continue to exercise its enforcement power against employers who attempt to deploy overbroad noncompetes, particularly for low-wage workers. To that end, Ferguson recently announced the formation of a Joint Labor Task Force, which will “scrutinize non-compete agreements, deceptive job advertisements, wage-fixing schemes, unlawful coordination on DEI employment metrics, and much more.”
As it stands now, noncompetes continue to be governed by a patchwork of state legislation ranging from bans with very limited exceptions (in California, Oklahoma, North Dakota, and, most recently, Minnesota), to restrictions on use with low-wage workers (in, for example, Massachusetts, New Hampshire, and Illinois), to restrictions on how and when they may be presented to employees (in, for example, Colorado, Maine, Oregon, and Washington). This approach presents challenges to employers who are dealing with an increasingly mobile workforce.
Employers should revisit their agreements to ensure maximum enforceability. In many instances, specific forms or addenda will be required to comply with the various state requirements. Employers should also consider either relying on other types of restrictive covenants or, at a minimum, using other restrictive covenants simultaneously with noncompetes, including non-disclosure, non-solicitation, or no hire provisions, as appropriate. Finally, to create a secondary guardrail for the protection of trade secrets and confidential information, employers should create effective trade secret protection protocols and engage in regular monitoring and auditing of their application.
3. AI and Employment Laws: What Employers Should Know
For employers, the AI landscape continues to evolve on both the federal and state level. On inauguration day, President Trump immediately rescinded President Biden’s 2023 Executive Order (EO) No. 14110 on AI, which had directed federal agencies to use regulatory and enforcement tools to address safety, privacy, and discrimination concerns related to AI. After Commissioner Lucas became acting chair of the US Equal Employment Opportunity Commission (EEOC), two AI-related documents were removed from the EEOC’s website: (1) the May 2023 technical assistance document on AI compliance issues under Title VII, which cautioned employers to assess AI tools for potential adverse impacts on any group protected by Title VII and (2) the May 2022 technical assistance document that warned of potential violations of the Americans with Disabilities Act through AI tools that impermissibly consider or screen for disabilities of applicants. Similarly, the US Department of Labor (DOL) noted on its website that its October 2024 Artificial Intelligence Best Practices guidance might be outdated or not reflective of current policies.
Despite these changes, employers may still be held liable for their use of AI tools in hiring or workplace decision-making when such use violates federal anti-discrimination laws. This is true even when a third-party vendor created the AI tool. As such, employers should monitor and audit their use of AI tools and review their agreements with vendors of AI tools to ensure issues of transparency and liability are addressed.
In contrast to the activity at the federal level, the states have begun to regulate AI in the employment context. Colorado, Illinois, and New York City have laws on the books that offer varying levels of protections against AI-related discrimination to applicants and/or employees. As we approach mid-year, at least 25 other states have already introduced legislation that would regulate the use of AI in the employment setting.
In these changing times, employers should remain vigilant and current on their compliance with applicable and emerging state laws regarding the use of AI. AI policies and use of AI tools should be routinely monitored and audited, with particular focus on transparency, privacy, and discrimination concerns. Human resources personnel and leadership should be trained on appropriate use of AI technologies in the workplace to avoid misuse and mitigate risk.
4. Continued Expansion of State Paid Family and Medical Leave Laws
The landscape of paid family and medical leave laws in the United States is rapidly evolving, with states increasingly adopting comprehensive benefits for employees. As these laws expand, they reflect a growing recognition of the importance of supporting workers during critical life events, such as personal medical issues, the birth or adoption of a child, and caring for family members. This shift not only enhances employee well-being but also promotes a more inclusive and supportive work environment. Legislation for paid family and medical leave has been proposed in multiple states, including Arizona, Iowa, Oklahoma, Tennessee, Pennsylvania, West Virginia, and North Carolina.
In alignment with the federal unpaid leave program, most states prioritize personal medical leave, followed by leave to care for a new child or family member. While the Family and Medical Leave Act offers unpaid leave, many states are now mandating paid options, funded through taxes collected from employees and employers.
Recent State Developments

Alaska, Missouri, and Nebraska: Beginning this year, Missouri (May 1), Alaska (July 1), and Nebraska (October 1) will require paid sick leave accrual of one hour for every 30 hours worked, with annual use caps of 40 hours for small employers and 56 hours for larger employers (a sketch of this accrual arithmetic follows this list). All three states permit employers to satisfy obligations through compliant paid time off policies, and each contains industry- or size-based nuances that warrant close review and continued monitoring for legislative amendments before the effective dates.
Georgia: Effective July 1, 2024, Georgia doubled paid parental leave for educators and state employees to six weeks, extending eligibility to charter school employees.
New York: Effective January 1, 2025, New York requires all private-sector employers to provide employees 20 hours of paid prenatal leave each year, in addition to existing sick leave requirements.
Washington: Effective January 1, 2025, Washington expanded the circumstances under which employees can take paid sick leave and broadened the definition of family members for sick leave purposes.
Minnesota: Minnesota’s paid family and medical leave programs are scheduled to launch on January 1, 2026.
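To illustrate the accrual arithmetic in the Alaska/Missouri/Nebraska item above, the hedged sketch below applies the one-hour-per-30-hours formula with the 40- and 56-hour annual use caps. It is a simplification: it ignores carryover rules, frontloading options, and each state’s particular employer-size definitions.

```python
def sick_leave_accrued(hours_worked: float, large_employer: bool) -> float:
    """Simplified accrual under the Alaska/Missouri/Nebraska model:
    1 hour of paid sick leave per 30 hours worked, capped at 40 hours/year
    for small employers and 56 hours/year for larger ones. Check each
    state's statute for thresholds, carryover, and effective dates."""
    annual_cap = 56.0 if large_employer else 40.0
    return min(hours_worked / 30.0, annual_cap)

# A full-time employee (2,080 hours/year) at a large employer:
print(sick_leave_accrued(2080, large_employer=True))   # 56.0 (capped from ~69.3)
# A part-time employee (900 hours/year) at a small employer:
print(sick_leave_accrued(900, large_employer=False))   # 30.0
```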

Employers, particularly those operating in multiple states, must stay informed about these evolving laws. It is crucial to review specific state requirements and monitor potential legislative amendments before implementation to ensure compliance.
5. Beat the Heat – An Employer’s Duty to Ensure a Workplace Safe From Heat-Related Hazards
As we approach the summer months, employers with employees who work outside or in higher temperatures should be aware of the Occupational Safety and Health Administration’s (OSHA) increasing focus over the past several years on heat-related injuries and illnesses.
Since 2022, OSHA has had a national emphasis program (NEP) in place under which the agency has prioritized enforcement activities focused on indoor and outdoor heat-related hazards. Although that NEP expires this year, OSHA in 2024 introduced a proposed rule that would establish a nationwide standard for addressing the hazards of excessive heat in the workplace. Specifically, that rule would require employers to develop a Heat Injury and Illness Prevention Plan to address heat-related hazards. The rule also sets an “initial heat trigger” at a heat index of 80ºF, at which threshold employers must provide employees with water and break areas, and a “high heat trigger” at 90ºF, which would require employers to monitor for signs of heat illness and provide mandatory 15-minute breaks every two hours. Although the fate of OSHA’s proposed rule is unclear following the change in Administration, OSHA can continue to issue citations for heat-related hazards under the general duty clause of the Occupational Safety and Health Act.
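As a rough illustration of how the proposed rule’s tiered thresholds would operate, the sketch below maps a heat index reading to the corresponding required measures. The thresholds and measures are simplified from the 2024 proposal, which remains non-final and could change.

```python
def heat_rule_measures(heat_index_f: float) -> list[str]:
    """Hedged sketch of OSHA's proposed (not final) 2024 heat rule triggers:
    an 80ºF initial trigger and a 90ºF high-heat trigger. Measures are
    simplified from the proposal and may change before any final rule."""
    measures = []
    if heat_index_f >= 80.0:  # initial heat trigger
        measures += ["provide drinking water", "provide break areas"]
    if heat_index_f >= 90.0:  # high heat trigger
        measures += ["monitor for signs of heat illness",
                     "mandatory 15-minute breaks every two hours"]
    return measures

print(heat_rule_measures(85.0))  # initial-trigger measures only
print(heat_rule_measures(93.0))  # both tiers apply
```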
OSHA state plans have also taken steps to address heat-related hazards. In 2024, California OSHA issued a new final rule addressing both indoor and outdoor workplaces with heat-related hazards that imposes safety requirements when employees are exposed to temperatures of 82ºF or higher, with additional elevated requirements where employees are exposed to temperatures of 87ºF or higher. Similarly, earlier this month, New Mexico OSHA issued a notice of proposed rulemaking for its own heat illness prevention rule.
Employers whose employees may be exposed to high temperatures this summer should take steps to ensure that they have measures in place to address the risks associated with heat.
6. Pay Transparency Momentum Continues
Momentum for pay transparency legislation continues to build across the United States. As more states enact these laws, with additional legislation anticipated this year, multistate employers face the complex challenge of aligning their job postings and promotional practices with a patchwork of state-specific requirements. Washington, DC, and Hawaii passed pay transparency legislation that went into effect in 2024. Additionally, Massachusetts, New Jersey, and Vermont passed legislation in 2024 that is expected to go into effect this year.
Pay transparency laws typically require employers to disclose the wages or wage range for a particular position to prospective and/or current employees. Challenges arise when a multistate employer must comply with varying state pay transparency laws. Differences in each state’s legislation include who is covered, the timing and circumstances in which pay information must be disclosed, and the specific parties with whom this information must be shared. Because the application of these laws can differ greatly, states like Colorado have provided explicit guidance to address these ambiguities. For example, Colorado’s guidance clarifies that postings for remote positions that can be performed anywhere are subject to the requirements of Colorado’s pay transparency law, the Equal Pay for Equal Work Act, even if the posting explicitly excludes Colorado applicants.
Additionally, some states’ legislation may include wage reporting obligations for covered employers. For example, California and Massachusetts’ pay transparency laws include reporting requirements for certain employers with over 100 employees. Notably, in Massachusetts, the first round of EEO-1, EEO-3, and EEO-5 reporting was due on February 1.
The extent of liability an employer may face for non-compliance varies by jurisdiction. While some states, like Massachusetts, impose incremental fines on liable employers, California provides a private right of action for aggrieved parties. Employers should review both their external- and internal-facing job postings to ensure compliance with these varying state and local laws. For additional details, please refer to our recent alert.
7. The Current State of DEI
Diversity, equity, and inclusion (DEI) initiatives are under significant scrutiny under the new Administration. Shortly after taking office, President Trump signed EO 14173, aimed at ending what the EO describes as “dangerous, demeaning, and immoral race- and sex-based preferences” under the guise of DEI. EO 14173 creates liability for DEI programs in several notable ways:

It directs the Attorney General to produce a report recommending enforcement strategies to end illegal discrimination and preferences, including DEI, in the private sector, including identifying potential civil investigation targets among publicly traded corporations, nonprofits, and other entities.
It also sets up federal contractors for potential liability under the False Claims Act by requiring them to certify that they do not “operate any programs promoting DEI” that violate federal anti-discrimination laws. A contractor that falsely certifies compliance can face significant penalties under an action brought by the government or a private individual in a qui tam lawsuit.

While there has been significant litigation challenging the EO, contractors with current, pending, or future contracts should expect to receive anti-DEI certifications in their federal contracts soon.
Beyond the EO, the Administration has made clear its intent to investigate and pursue enforcement against DEI in the private sector. The Attorney General issued a memorandum instructing the US Department of Justice (DOJ) to investigate, eliminate, and penalize illegal DEI programs. The EEOC’s Acting Chair also sent investigation letters to law firms seeking information about their DEI activities.
The Administration has produced some guidance to help employers identify what it considers unlawful DEI practices. The EEOC and DOJ released joint guidance, and the EEOC released its own technical assistance document, stating that under Title VII, DEI programs may be unlawful if they involve employment actions motivated by a protected characteristic, and that customer preferences or the perceived operational benefits of a diverse workforce are not a defense to race- or sex-motivated decision making. The documents also assert that practices like limiting access to employee clubs or resource groups, or certain workplace programming and trainings, can run afoul of federal anti-discrimination laws.
Employers may be motivated to eliminate DEI-type practices altogether, but many remain lawful and important for ensuring equal employment opportunity. Instead of painting with a broad brush, all employers (and particularly federal contractors) should review their DEI programs and initiatives with counsel for compliance with anti-discrimination laws.
8. Immigration Policy Developments
Consistent with his campaign agenda, President Trump has significantly increased immigration investigation efforts since taking over the White House. Given the president’s explicit focus on immigration compliance and enforcement, employers across all industries should expect increased workplace enforcement actions, including US Immigration and Customs Enforcement (ICE) raids and unannounced immigration workplace investigations, and more frequent governmental I-9 audits. In addition, employers should be aware that the Employment Authorization Document (EAD) cards for foreign nationals present through numerous Temporary Protected Status and parole programs have been shortened or terminated, which means that employers should revise the expiration dates on their I-9s and obtain new work authorization documents in a timely manner in order to continue to employ these workers. Further, by increasing scrutiny of those seeking to enter the United States, President Trump has made international travel by foreign nationals riskier and paved the way for travel bans. In addition, he has started revoking visas of nationals from select countries. Thus, employers should consult their immigration counsel to determine whether their employees’ EAD cards and/or work or travel authorization are impacted.
Unannounced Workplace Investigations and Raids
Employers should be aware that government officials may appear without notice at a workplace and demand access to personnel and business documents, including conducting private discussions with employees. Employers should prepare for such visits and equip the first point-of-contact at each entry with information about what to do.
Federal Form I-9
In response to President Trump’s heightened focus on immigration investigations, employers should organize their I-9 records by conducting a proactive internal I-9 audit to correct any and all deficiencies, to the extent feasible. Notably, employers who choose to proactively correct their I-9 records may take advantage of the “good faith compliance” defense under the Immigration Reform and Control Act of 1986. Such remedial efforts can be taken into consideration during a governmental audit or inspection, and the employer may receive credit for those corrections, thereby mitigating potential penalties.
9. Independent Contractor/Joint Employer Rules Under the New Administration
On May 1, the DOL announced through a Field Assistance Bulletin that it will no longer enforce and may rescind President Biden’s 2024 independent contractor rule. Instead, the DOL will evaluate whether an individual is an independent contractor or an employee under the Fair Labor Standards Act using the traditional “economic realities” test, with emphasis on the following significant factors:

The extent to which the services rendered are an integral part of the principal’s business.
The permanency of the relationship.
The amount of the alleged contractor’s investment in facilities and equipment.
The nature and degree of control by the principal.
The alleged contractor’s opportunities for profit and loss.
The amount of initiative, judgment, or foresight in open market competition with others required for the success of the claimed independent contractor.

The 2024 Rule rescinded the first Trump Administration’s more lenient rules, which made it easier for businesses to determine joint employer status and classify workers as independent contractors. For the purpose of private litigation, the Bulletin emphasized that the 2024 Rule remains in effect. However, while the DOL has not yet attempted to return to the first Trump Administration’s rules, it indicated that it is “currently reviewing and developing the appropriate standard.”
There are still a number of lawsuits pending in federal courts challenging the 2024 Rule, but the DOL has sought to put the litigation on hold while it decides whether to reconsider or rescind the regulation.
While it is likely the new Trump Administration’s DOL will not defend the 2024 Rule, and considering Loper Bright Enterprises v. Raimondo (overturning the Chevron doctrine and holding that courts do not have to defer to an agency’s interpretation of the law), employers should remain cautious when analyzing joint employer status and classifying independent contractors, particularly in states which utilize the ABC Test.
10. Discrimination Trends
The discrimination landscape is undergoing significant changes as the Trump Administration issues EOs and guidance, while the Supreme Court continues its 2024-2025 term. Two key areas to watch are the evolving perspectives on gender identity discrimination and reverse discrimination.
Gender Identity
The Supreme Court has long held that discrimination based on sex stereotyping is impermissible under Title VII of the Civil Rights Act of 1964. This was established in the 1989 case of Price Waterhouse v. Hopkins, where the Court ruled that discrimination against an employee due to nonconformity to gender expectations constitutes sex discrimination. In 2020, the Court reaffirmed this stance in Bostock v. Clayton County, clarifying that discrimination against individuals for being homosexual or transgender inherently involves sex discrimination.
However, on January 20, President Trump issued an EO titled “Defending Women from Gender Ideology Extremism and Restoring Biological Truth to the Federal Government.” This EO redefined sex as strictly biological, excluding gender identity from its definition. This move starkly contrasts with the Supreme Court’s precedents.
Following this EO, EEOC Acting Chair Andrea Lucas announced a shift in the agency’s focus, aiming to protect women from sex-based discrimination by reversing policies related to gender identity. Changes include disabling the agency’s “pronoun app,” removing non-binary gender markers from discrimination charge forms, and eliminating materials promoting gender ideology from EEOC resources. The EEOC has also sought to dismiss cases involving gender identity discrimination claims.
Despite these administrative changes, the precedents set by Price Waterhouse and Bostock remain valid law. The courts will need to navigate the tension between federal guidance and established Supreme Court rulings, which could lead to significant legal challenges and interpretations.
Reverse Discrimination
In five federal circuits, reverse discrimination claims — those brought by members of majority groups — require plaintiffs to meet a heightened pleading standard, under which plaintiffs must establish “background circumstances” demonstrating a pattern of discrimination against the majority group.
The Supreme Court recently heard arguments in Ames v. Ohio Department of Youth Services, a case involving a woman who claimed she was discriminated against for being heterosexual. Marlean Ames alleged she was denied a promotion and demoted in favor of her gay colleagues. The Sixth Circuit applied the heightened standard and ruled against Ames. The Supreme Court is now considering whether plaintiffs must prove such “background circumstances” to establish a prima facie case of discrimination under Title VII. If the Court determines no such proof is required, the legal landscape of reverse discrimination will shift, making it easier for individuals to bring such claims.
In addition, the EEOC has announced policies to defend and protect members of majority groups. For instance, EEOC Acting Chair Andrea Lucas announced the EEOC will protect “American workers” from anti-American national origin discrimination. Lucas stated the EEOC “is putting employers and other covered entities on notice: … The EEOC is here to protect all workers from unlawful national origin discrimination, including American workers.” 
Saisruthi Paspulati, Trevor M. Jorgensen, Ari Asher, Kimia Pourshadi, and Roxana Bokaei also contributed to this article.

PTAB Rejects AI-Driven Medical Patent – Not for Novelty, But Eligibility

In a recent decision with important implications for artificial intelligence (AI)-driven innovation, the Patent Trial and Appeal Board (PTAB) denied a patent for an AI-based medical tool.[1] The refusal was not because the invention lacked novelty or inventiveness, but because it did not meet a fundamental requirement of U.S. patent law. In Ex parte Michalek, the PTAB specifically acknowledged that the patent claims at issue recited new information about the nexus between certain biomarkers and the development of lung cancer as facilitated by machine learning. In fact, prior to appeal, the applicant had successfully overcome all of the examiner’s novelty and obviousness rejections. That said, based on U.S. Patent Office guidance and a related example from that guidance, the PTAB still determined the claims were flawed under the legal principle of subject matter eligibility. Although the facts of this decision concern medical health innovation, the decision helps inform patent strategy for AI-enabled inventions across various disciplines and industries.
In its patent application, the applicant submitted patent claims covering a machine learning enabled capability to predict a disease state of a human based on certain biomarkers. During prosecution, the applicant had overcome all prior art rejections. Thus, the issue of novelty and nonobviousness of the claims had been specifically raised, considered, and resolved in the applicant’s favor. Patentability rested on the only remaining issue of subject matter eligibility.
Subject matter eligibility refers to whether an invention is of the required type to qualify for patent protection under U.S. patent law. Processes, machines, manufactures, and compositions of matter are patentable but natural laws, mathematical concepts, and abstract ideas, for example, are not. In practice, distinguishing between the two categories has proven difficult. Because of this difficulty and the unique complexities posed by AI driven innovation, the U.S. Patent Office has issued specific guidance on subject matter eligibility of AI-related inventions. The guidance sets forth principles that inform how patent examiners should assess whether AI-driven innovations meet subject matter eligibility requirements. To illustrate these principles, the USPTO has provided various specific “examples” demonstrating when AI-related inventions are patent-eligible and when they are not.
Although it acknowledged that the invention involved a new idea, the PTAB in Michalek found that the invention was directed to a natural law and a mathematical concept. The PTAB relied on an example from Patent Office guidance that characterizes an invention relating to determination of patient risk for a medical condition as ineligible because it involved an improvement to an abstract idea, not to the functioning of a computer or other technology. According to the cited example, reciting a treatment for the medical condition could, in theory, have helped the applicant demonstrate the subject matter eligibility of the invention. However, the PTAB did not discuss this option and, in any event, no evidence indicated that a treatment had been described in the patent application. Perhaps more relevant, the PTAB did not discuss other examples from Patent Office guidance that might have better applied to save the invention.
While this case involved medical health technology, the implicated issues inform patent strategies for AI-enabled inventions across all industries. Patent applicants working with AI should be prepared for the Patent Office to apply similar reasoning to their applications. Patent applicants should be prepared to address strained reliance on certain examples from Patent Office guidance or, better yet, highlight more analogous examples. It is critical for patent applicants at the preparation stage to proactively devise an application drafting strategy supported by the guidance and examples to invite smoother prosecution.

FOOTNOTES
[1] Ex parte Michalek, Appeal No. 2023-004204 (PTAB Dec. 27, 2024).

Guarding Against the Unknown: M&A Due Diligence of AI Companies in Data-Sensitive Sectors

M&A in the AI sector is redefining deal risk, especially when sensitive data is involved. As AI companies power breakthroughs in biotech, healthcare, defense, and critical infrastructure, the stakes for companies acquiring businesses handling proprietary data, biotech research, medical records, trade secrets, critical technology, or government intelligence have never been higher. In an era where a single data breach or compliance failure can derail innovation and shatter market trust, due diligence has evolved from a legal checkpoint to a mission-critical strategy for safeguarding value in a rapidly shifting landscape.
Government Contracts and Defense
AI companies servicing defense sector clients with government contracts must rigorously evaluate their obligations under regulations such as the International Traffic in Arms Regulations (ITAR) and the Export Administration Regulations (EAR). It is also vital to examine whether transactions affecting critical infrastructure or defense require clearance by the Committee on Foreign Investment in the United States (CFIUS). A prospective buyer’s due diligence process should include thorough analysis of customer contracts and internal compliance, especially in cross-border sales involving national security concerns, to ensure compliance with vital regulatory requirements.
Biotech Research
In biotech research, AI plays a crucial role in analyzing large datasets like genomic and clinical data, with predictive models aiding drug discovery. The data used in this research must adhere to U.S. Food and Drug Administration (FDA) guidelines on data integrity, encompassing ethical standards, study validity, and accuracy. Detailed due diligence by a prospective buyer into the target’s adherence to these guidelines is imperative, ensuring companies advance innovation responsibly in the biotech sector.
Data Protection and Privacy
A top priority when assessing AI companies dealing with sensitive information is compliance with data protection regulations for both customer data and user data. Due diligence should thoroughly examine how companies structure policies to ensure adherence to regulations such as the Health Insurance Portability and Accountability Act (HIPAA) for medical data, the General Data Protection Regulation (GDPR) for EU-based data, and the California Privacy Rights Act (CPRA) for California user data. These evaluations safeguard sensitive information while bolstering trust in partnerships and operations.
Technological and Security Assessments
Robust cybersecurity measures are essential for AI companies to safeguard sensitive data against breaches. Due diligence should entail examining security protocols, vulnerability management strategies, and incident response plans, and evaluating technologies like encryption and secure coding practices. Ensuring an organization is equipped against cyber threats protects its most valuable assets. AI companies also need to be mindful of maintaining confidentiality standards of their proprietary IP when collaborating with government bodies or during regulatory reviews relating to their industry sector. Buyers will review those arrangements to evaluate whether a potential target has allowed for unintended disclosures of its proprietary algorithms. 
Conclusion
The M&A landscape for AI companies managing sensitive data demands comprehensive due diligence across regulatory compliance, intellectual property protection, foreign investments, and cybersecurity. A thorough evaluation of these facets enables informed decision-making, securing sensitive information and aligning strategies in the rapidly evolving AI sector. An AI company undergoing a sale or seeking opportunities should act deliberately to strengthen its position in the above areas.
Tiernan Still also contributed to this article. 

Exploring California’s Proposed AI Bill

California lawmakers have proposed new legislation to reshape the growing use of artificial intelligence (AI) in the workplace. While this bill aims to protect workers, employers have expressed concerns about how it might affect business efficiency and innovation.
What Does California’s Senate Bill 7 (SB 7) Propose?
SB 7, also known as the “No Robo Bosses Act,” introduces several key requirements and provisions restricting how employers use automated decision systems (ADS) powered by AI. These systems are used in making employment-related decisions, including hiring, promotions, evaluations, and terminations. The pending bill seeks to ensure that employers use these systems responsibly and that AI only assists in decision-making rather than replacing human judgment entirely.
The bill is significant for its privacy, transparency, and workplace safety implications, areas that are fundamental as technology becomes more integrated into our daily work lives.
Privacy and Transparency Protections
SB 7 includes measures to safeguard worker privacy and ensure that personal data is not misused or mishandled. The bill prohibits the use of ADS to infer or collect sensitive personal information, such as immigration status, religious or political beliefs, health data, sexual orientation or gender identity, or other statuses protected by law. These limitations could significantly limit an employer’s ability to use ADS to streamline human resources administration, even if the ADS only assists but does not replace human decision making. Notably, the California Consumer Privacy Act, which treats applicants and employees of covered businesses as consumers, permits the collection of such information.
Additionally, if the bill is enacted, employers and vendors will have to provide written notice to workers if an ADS is used to make employment-related decisions that affect them. The notice must provide a clear explanation of the data being collected and its intended use. Affected workers also must receive a notice after an employment decision is made with ADS. This focus on transparency aims to ensure that workers are aware of how their data is being used.
Workplace Safety
Beyond privacy, SB 7 also highlights workplace safety by prohibiting the use of ADS that could violate labor laws or occupational health and safety standards. Employers would need to make certain that ADS follow existing safety regulations, and that this technology does not compromise workplace health and safety. Additionally, ADS restrictions imposed by this pending bill could affect employers’ ability to proactively address or monitor potential safety risks with the use of AI.
Oversight & Enforcement
SB 7 prohibits employers from relying primarily on an ADS for significant employment-related decisions, such as hiring and discipline, and requires human involvement in the process. The bill grants workers the right to access and correct their data used by ADS, and they can appeal ADS employment-related decisions. A human reviewer must also evaluate the appeal. Employers cannot discriminate or retaliate against a worker for exercising their rights under this law.
The Labor Commissioner would be responsible for enforcing the bill, and workers may bring civil actions for alleged violations. Employers may face civil penalties for non-compliance.
What’s Next?
While SB 7 attempts to keep pace with the evolution of AI in the workplace, there will likely be ongoing debate about these proposed standards and which provisions will ultimately become law. Jackson Lewis will continue to monitor the status of SB 7.

CAFC Ruling Questions How and When Tools Built Using Machine Learning are Patentable

CAFC affirms that applying generic machine learning to industry-specific problems is not enough for patent eligibility under §101, reinforcing the importance of how innovations are framed in patent applications — especially in emerging tech like AI.
In a decision with major implications for AI-related patent strategy, the U.S. Court of Appeals for the Federal Circuit (CAFC) held on April 18, 2025, that four patents related to machine learning in live-event scheduling and broadcasting were ineligible under 35 U.S.C. §101. The court affirmed a decision from the U.S. District Court for the District of Delaware, concluding that the innovation merely applied generic machine learning techniques to a data environment in the entertainment industry, without demonstrating any specific technological improvement or inventive concept.
This ruling has significant implications for innovators, businesses, and legal practitioners in the technology and intellectual property fields, particularly those working with machine learning and AI, because it further defines the boundaries of patent eligibility under judicially created exceptions to patent subject matter eligibility.
Background
These judicially created exceptions — categories of subject matter that include laws of nature, natural phenomena, and abstract ideas — are not eligible for patent protection. Notably, the “abstract idea” category, which encompasses mathematical formulas and business methods, is often applied against software-implemented inventions. The judiciary and the United States Patent and Trademark Office (USPTO) have reasoned that such categories constitute the basic tools of scientific and technological work — which, if patentable, would stifle innovation and restrict access to knowledge.
In considering whether claims are eligible for patent protection, the USPTO or a federal court looks to whether the claims recite more than the subject matter deemed to be within the judicial exception. Under this framework, a claim that recites a judicial exception but also includes additional elements that transform the nature of the claim into a patentable application is typically considered eligible.
Notably, although the CAFC stated that “[m]achine learning is a burgeoning and increasingly important field and may lead to patent-eligible improvements in technology,” patents that “do no more than claim the application of generic machine learning to new data environments, without disclosing improvements to the machine learning models to be applied,” are nevertheless “patent ineligible under § 101.”
Still, it is worth noting that the patent owner repeatedly conceded that it was not claiming machine learning itself. Thus, the ruling highlights that application of known machine learning techniques to new contexts will not constitute a patent-eligible invention unless the claims disclose specific improvements to the machine learning technique itself. This approach is a further extension of the Supreme Court rulings in Alice Corp. v. CLS Bank International and Mayo v. Prometheus Labs, in that it effectively limits the eligibility analysis to a determination of how machine learning itself is improved, rather than the use of machine learning to build systems and methodologies that address technological problems in particular industrial applications.
Implications and Takeaways
The USPTO’s most recent patent eligibility guidance examples are fairly aligned with the CAFC’s recent decision. However, it remains to be seen whether such an approach effectively serves the underlying goal of enabling free use of known “basic tools” such as machine learning technology.
As a result, patent applicants should understand and anticipate that patent eligibility will hinge on both how an innovation is framed in the application text and the underlying technology used to generate that innovation.

Employment Law This Week: New Executive Order Targets Disparate Impact Claims Nationwide [Video, Podcast]

This week, we explore how key changes introduced by President Trump’s Executive Order 14281, “Restoring Equality of Opportunity and Meritocracy” (“EO 14281”), raise important questions for employers navigating compliance with varying federal, state, and local laws.
New Executive Order Targets Disparate Impact Claims Nationwide
EO 14281 poses significant challenges for employers because it seeks to limit disparate impact liability but clashes with established state and local regulations and laws, such as New York City’s law regarding the use of automated employment decision tools. This tension underscores the increasing complexity of managing artificial intelligence (AI)-driven decision-making in the workplace amid shifting legal standards.
This week’s key topics include:

the scope of EO 14281;
conflicts between EO 14281 and existing federal, state, and local laws; and
best practices to mitigate risks in AI employment decisions.

Epstein Becker Green attorneys Marc A. Mandelman and Nathaniel M. Glasser unpack these developments and provide employers with practical strategies to stay compliant and address critical workforce challenges.

AI in Job Postings: What Employers in Canada Need to Know

Artificial intelligence (AI) is rapidly changing the hiring landscape. Whether scanning resumes with machine learning tools or ranking candidates based on predictive models, employers in Canada may now want to ensure transparency when using AI during recruitment. This is no longer just a best practice—it is increasingly being reflected in legislative requirements.

Quick Hits

In Ontario, if AI is used to screen, assess, or select applicants, a disclosure may be required directly in the job posting.
Employers with fewer than twenty-five employees are exempt from Ontario’s requirement.
In Quebec, if a decision is made exclusively through automated processing (such as AI), employers need to inform the individual and offer a mechanism for human review.
Across Canada, privacy laws (in Quebec, Alberta, and British Columbia, and federally under PIPEDA) require organizations to inform individuals of the purposes for which their personal data is collected and to meet openness requirements, which together point toward disclosing AI use.
In Quebec, individuals also have the right to know what personal data was used, the key factors behind an automated decision, and to request corrections.

With the coming into force of Ontario’s Working for Workers Four Act, 2024 (Bill 149), and with Quebec’s Act to Modernize Legislative Provisions as Regards the Protection of Personal Information (Law 25) having entered into force in phases beginning in 2022, alongside longstanding privacy obligations in Alberta, British Columbia, and under the federal Personal Information Protection and Electronic Documents Act, S.C. 2000, c. 5 (PIPEDA), employers may want to carefully review how AI is used in job postings and the broader hiring process.
Quebec—Regulatory Context
Quebec’s Act Respecting the Protection of Personal Information in the Private Sector, CQLR c. P-39.1, was significantly amended and is now in force. These provisions apply to all private sector organizations collecting, using, or disclosing personal information in Quebec. This includes employers hiring employees located in Quebec, regardless of where the employer is based, as long as they are considered to be doing business in Quebec.
Section 12.1 provides that any individual whose personal information is used to render a decision based exclusively on automated processing must be informed of the decision, of the main factors and parameters that led to it, and of their right to have the decision reviewed by a person. Employers may therefore want to ensure that any systems used for automated decision-making in Quebec are explainable, so that they can respond to a request about the factors and parameters that led to a given decision.
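By way of illustration only, the following Python sketch shows one way a screening tool might record the main factors and parameters behind each automated decision so that a disclosure or human-review request can be answered. All factor names, weights, and thresholds are hypothetical; nothing here is prescribed by Law 25 itself.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch: capturing the "main factors and parameters" of an
# automated screening decision so they can be disclosed on request.
# The factor names, weights, and scoring rule are illustrative only.

FACTOR_WEIGHTS = {"years_experience": 0.5, "certification_match": 0.3, "language_fit": 0.2}
PASS_THRESHOLD = 0.6  # illustrative cutoff, not a legal standard

@dataclass
class DecisionRecord:
    candidate_id: str
    factor_contributions: dict  # per-factor weighted scores behind the outcome
    parameters: dict            # weights and threshold in force at decision time
    outcome: str
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def screen(candidate_id: str, factor_scores: dict) -> DecisionRecord:
    """Score a candidate and keep a record of the factors behind the decision."""
    contributions = {
        name: FACTOR_WEIGHTS[name] * factor_scores.get(name, 0.0)
        for name in FACTOR_WEIGHTS
    }
    total = sum(contributions.values())
    outcome = "advance" if total >= PASS_THRESHOLD else "decline"
    return DecisionRecord(
        candidate_id=candidate_id,
        factor_contributions=contributions,
        parameters={"weights": FACTOR_WEIGHTS, "threshold": PASS_THRESHOLD},
        outcome=outcome,
    )

# A stored DecisionRecord lets an employer explain, within the access-request
# window, which factors and parameters led to the decision, and gives a human
# reviewer something concrete to re-examine.
record = screen("applicant-042", {"years_experience": 0.8, "certification_match": 0.5, "language_fit": 0.9})
print(record.outcome, record.factor_contributions)
```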
Although the law is not specific about how such a request must be made, we assume that the section on access rights will apply. This means that employers would need to respond to a written request for information within thirty days.
While the statute does not define “automated decision-making technology,” its language lends itself to broad interpretation and may apply to a wide range of systems, including algorithmic and AI tools used in hiring. A broad reading is consistent with the approach of Quebec’s data protection regulator, the Commission d’accès à l’information (CAI), which has recently taken the position that privacy laws in Quebec are quasi-constitutional in nature. (For a discussion of Quebec’s restrictive approach to data privacy, see our article, “Québec’s Restrictive Approach to Biometric Data Poses Challenges for Businesses Working on Security Projects.”)
The CAI has broad investigative and enforcement powers. These powers include conducting audits, issuing binding orders, and imposing administrative monetary penalties. Employers may want to monitor guidance from the CAI as the authority’s enforcement evolves.
Ontario—Regulatory Context
On March 21, 2024, the Working for Workers Four Act, 2024 (Bill 149) received Royal Assent in Ontario. Among other amendments, the act introduced a new provision under the Employment Standards Act, 2000, S.O. 2000, c. 41 (ESA) regarding employer disclosure of artificial intelligence use in hiring when a job is publicly advertised.
The implementing regulation, O. Reg. 476/24: Job Posting Requirements, defines artificial intelligence as:
“A machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”

The same regulation defines a “publicly advertised job posting” as:
“An external job posting that an employer or a person acting on behalf of an employer advertises to the general public in any manner.”

This requirement is set to take effect on January 1, 2026. It will not apply to general recruitment campaigns, internal hiring efforts, or employers with fewer than twenty-five employees at the time of the posting.
Employers may find it useful to assess what tools qualify as “artificial intelligence” or what constitutes “screening,” “assessing,” or “selecting” a candidate. The broad definition may include simple keyword filters or more complex machine learning systems, raising the potential for over- or under-disclosure.
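To see how easily a simple tool could fall within the quoted definition, consider the following deliberately minimal, hypothetical keyword filter sketched in Python. It takes applicant-supplied input and generates a recommendation that can influence a hiring decision, which is arguably enough to trigger the disclosure requirement; the keywords and scoring rule are invented for illustration.

```python
# Hypothetical sketch: even a rudimentary keyword screen arguably "infers from
# the input it receives in order to generate outputs such as ...
# recommendations," bringing it within the regulation's broad definition.

REQUIRED_KEYWORDS = {"python", "sql", "bilingual"}

def recommend(resume_text: str) -> str:
    """Return a shortlisting recommendation from raw resume text."""
    words = set(resume_text.lower().split())
    hits = REQUIRED_KEYWORDS & words
    # The tool takes applicant-supplied input and produces a recommendation
    # that can influence the hiring decision -- the hallmark of the definition.
    return "shortlist" if len(hits) >= 2 else "reject"

print(recommend("Bilingual analyst with Python and SQL experience"))  # shortlist
```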
Considerations for Employers: Human Rights and AI Bias
Employers in Canada may also want to consider their use of AI tools in conjunction with human rights legislation to ensure that their recruitment practices comply with legal standards. These laws prohibit discrimination based on grounds such as race, gender, age, disability, and other protected characteristics.
When implementing AI in hiring, employers may want to assess whether any of the tools used unintentionally promote bias or perpetuate discriminatory outcomes. AI systems, if not properly designed or monitored, can inadvertently reinforce bias by relying on historical data that may reflect past inequalities. For example, predictive models may favor certain demographic groups over others, which could lead to unintentional bias in hiring decisions.
Employers can play a key role in minimizing these risks by considering the following:

engaging in discussions about how AI tools work and ensuring transparency about the data being used and the potential for bias in decision-making;
choosing AI tools that are explainable, meaning the algorithms and their decision-making processes are understandable to humans, which can help employers detect and correct biases before they impact hiring decisions;
regularly auditing AI tools to identify and address any unintentional bias, ensuring that these tools comply with both privacy and human rights obligations (one simple audit approach is sketched after this list); and
for employers subject to PIPEDA or provincial privacy laws, providing clear, accessible notices explaining how personal information is collected, why it is collected, and who to contact with questions.
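As a purely illustrative example, an audit of the kind described above might start with a periodic comparison of selection rates across demographic groups. The Python sketch below uses invented counts, and its 0.8 flag threshold borrows the US “four-fifths” rule of thumb rather than any Canadian legal standard.

```python
# Hypothetical sketch of a basic adverse-impact check: comparing the rate at
# which an AI screening tool advances candidates across demographic groups.
# Group labels, counts, and the 0.8 flag threshold are illustrative only.

outcomes = {
    # group: (candidates advanced by the tool, total candidates screened)
    "group_a": (45, 100),
    "group_b": (28, 100),
}

rates = {group: advanced / total for group, (advanced, total) in outcomes.items()}
benchmark = max(rates.values())  # highest group selection rate

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "review for bias" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

A flagged disparity is a prompt for closer review of the tool and its training data, not a legal conclusion in itself.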

AI is increasingly common in recruitment, and with this advancement comes increased scrutiny. The new laws are intended to support transparency, fairness, and human oversight. By using explainable AI, having strong internal mechanisms, and communicating across departments, employers may not only reduce legal risks but also foster trust in the hiring process, ensuring that all candidates are treated fairly and equitably.
Next Steps
Tips and considerations for responsible AI use include the following:

understanding the AI technology, verifying that it complies with transparency requirements under data privacy law, and determining what the tool is doing in order to decide whether this information needs to be indicated in a job posting;
communicating across the organization by having company-wide discussions about the implementation of AI tools to avoid the risk of a tool being used without being advertised in a job posting or privacy notice;
revising job posting templates to include AI-use disclosures where applicable;
creating plain-language descriptions of AI tools used in hiring, especially those that may lead to automated decisions;
implementing procedures that enable human review of AI decisions, as reflected under Quebec’s Law 25;
maintaining up-to-date privacy policies that explain AI usage, list contact information for privacy inquiries, and detail individual rights;
training hiring personnel on how AI tools function and how to respond to applicant questions related to privacy and automation;
limiting data collection to what is necessary and reasonable for recruitment purposes, in line with privacy obligations under applicable laws; and
verifying the applicability of exemptions. For example, Ontario’s AI disclosure requirement may not apply to employers with fewer than twenty-five employees, though privacy obligations may still be relevant.

AI is transforming the hiring process—and the legal landscape is evolving just as fast. Employers across Canada may want to proactively review their recruitment practices through the lens of employment standards, privacy laws, and human rights obligations. Embracing transparency doesn’t just reduce legal risk—it can build trust with candidates and unlock the full potential of AI while respecting individual rights.

Harnessing AI in Litigation: Techniques, Opportunities, and Risks [Video, Podcast]

What if the key to navigating your most complex legal challenges lies in the capabilities of artificial intelligence (AI)?
Join Epstein Becker Green attorneys Alkida Kacani and Christopher Farella as they sit down with Jonathan Murphy, Senior Manager of Forensics at BDO, to examine how AI is revolutionizing the practice of law.
Discover how advanced technologies are refining e-discovery, optimizing predictive analytics, and transforming document review processes. The discussion also takes a deep look into the ethical considerations of integrating AI into legal work, from safeguarding sensitive information to maintaining professional standards in a highly dynamic field.