Foley Automotive Update – November 2025
Key Developments
Foley & Lardner provided an overview of supply chain cyber threats and best practices for mitigating cyber risks.
The Michigan Supreme Court intends to rule in the months ahead on a dispute over the “legitimacy of Stellantis’ supplier contracts,” according to an update from Crain’s Detroit.
U.S. new light-vehicle sales in October 2025 fell by over 4% year-over-year to a SAAR of 15.4 million units, according to preliminary analysis from Haver Analytics.
In the SupplyChainDive article “What shippers need to know about potential tariff refunds,” Foley & Lardner partner Gregory Husisian shared insight on how a possible tariff refund process could play out. The U.S. Supreme Court will hear oral arguments on November 5, 2025, regarding the legality of the Trump administration’s tariffs as imposed under the International Emergency Economic Powers Act (IEEPA).
China’s Commerce Ministry suggested it will offer exemptions to the recently imposed export restrictions on semiconductors made by Chinese-owned, Netherlands-based Nexperia, which supplies an estimated 40% of certain chips critical to automakers. The company has various issues to resolve with the Dutch government, and uncertainty remains over when the shipments will resume.
A new trade and economic deal between the U.S. and China includes a reprieve on certain rare-earth export controls recently imposed by China. Despite the two nations’ recently announced trade agreement, U.S. Trade Representative Jamieson Greer plans to continue an investigation into China’s compliance with a limited trade agreement reached during President Trump’s first term. The Section 301 probe could lead to new tariffs or provide leverage in subsequent trade negotiations.
S&P Global Mobility assessed the impact of the Section 232 tariffs on truck and bus imports imposed by the Trump administration on November 1. The analysis notes the October 17 executive order announcing the levies also expanded the list of tariffed auto parts, with “more varieties of drive axles, wider application of engine components, and adds items including touch screen displays, certain engine control units and speakers.”
President Trump extended a November 1, 2025 deadline to reach a trade deal with Mexico for an unspecified number of weeks, resulting in a delay of higher tariffs on Mexican goods that do not meet the content rules of the U.S.-Mexico-Canada trade agreement.
President Trump intends to impose an additional 10% tariff on Canadian imports, and said he does not plan to resume trade negotiations with Canada due to an anti-tariff advertisement aired by the Ontario government. The Trump administration did not provide details on the implementation of the new tariffs or whether USMCA-compliant goods would be exempt.
The U.S. and South Korea are reported to have finalized a trade deal that is expected to establish a 15% cap on U.S. tariffs on Korean goods. This follows a framework agreement the nations announced in July.
Last month the U.S. Senate narrowly passed three measures opposing President Trump’s global “reciprocal” tariffs, as well as the emergency authorities underpinning the tariffs on Canada and Brazil. The U.S. House is not expected to vote on the measures in the near future, and Congress would require a two-thirds majority to overcome a presidential veto.
OEMs/Suppliers
Revised projections for tariff-related costs in 2025 are between $3.5 billion and $4.5 billion for GM, up to $1.2 billion for Stellantis, and up to $1 billion for Ford.
Canada intends to reduce the number of vehicles GM and Stellantis can import tariff-free into the country in response to the automakers’ plans to reduce vehicle production in the nation.
A number of major automakers submitted filings to urge the U.S. Trade Representative’s Office to extend the USMCA, as it accounts “for tens of billions of dollars in annual savings.” The USMCA is scheduled for formal review in 2026.
Ford estimated the recent fire at a significant aluminum supplier will reduce its 2025 profitability by $1.5 billion to $2 billion, while noting mitigation efforts are expected to offset half of the cost.
American Axle plans to invest $133 million to increase production and upgrade its plant in Three Rivers, Michigan.
Nissan reduced its U.S. output by approximately 7,400 vehicles in October due to parts shortages.
Geely will acquire a 26.4% stake in Renault do Brasil, and the Chinese automaker expects the partnership will accelerate its plans to expand sales in the market.
Multiple European automakers have replaced CEOs this year, as ongoing economic and market challenges impact European vehicle sales.
Market Trends and Regulatory
According to the NADA Data 2025: Midyear Report released in October, there were 16,972 new-car dealerships in the U.S. as of June 2025, and the top five states with the most dealerships were California, Texas, Florida, Pennsylvania, and Ohio. In addition, 90.7% of all new light-vehicle dealers owned one to five stores, down from 95% in 2014.
WardsAuto provided a list of the top automotive conferences to consider in 2026.
Vulcan Elements and ReElement Technologies secured $1.4 billion in combined funding from the U.S. government and private investors to establish a domestic rare-earth magnet supply chain.
Autonomous Technologies and Vehicle Software
Waymo began testing its autonomous vehicles in Detroit. The company currently offers robotaxi service in parts of San Francisco, Los Angeles, Phoenix, Atlanta, and Austin, and it plans to expand services to cities including San Diego and Las Vegas next year.
Uber announced plans to launch autonomous taxi service in the San Francisco Bay Area in 2026 with vehicles developed in partnership with EV maker Lucid and self-driving technology company Nuro Inc. Uber has also set a goal of operating a fleet of 100,000 autonomous vehicles powered by Nvidia technology beginning in 2027.
Bloomberg provided an overview of Chinese companies’ robotaxi deployment plans within multiple regions.
Hybrid and Electric Vehicles
J.D. Power predicted U.S. EV market share in October 2025 will decline 3.4 percentage points to 5.2%, as the market recalibrates following the expiration of federal tax credits.
On November 4, Automotive News updated its list of U.S. EV models that have been delayed or canceled.
GM plans to lay off over 3,000 hourly workers across its EV and EV battery plants in Michigan, Ohio, and Tennessee in the coming months. Over half of the layoffs are expected to be indefinite. The automaker also laid off over 200 salaried workers at its Warren Tech Center in Michigan as part of a restructuring of its design-engineering team, and over 300 employees as part of the closure of its Georgia IT center.
BYD reported its third-quarter 2025 net profit declined 33% year-over-year, while revenue fell 3%.
LG Energy Solution – Stellantis joint venture NextStar Energy will shift to producing batteries for energy storage systems at its Windsor, Ontario factory instead of EV batteries.
Volkswagen subsidiary PowerCo started construction on a $7 billion EV battery plant in St. Thomas, Ontario.
Pipeliners Podcast – PHMSA Rulemaking Update with Jim Curry [Podcast]
In this episode of the Pipeliners Podcast, Jim Curry returns for an in-depth discussion on current and upcoming PHMSA rulemakings.
The conversation covers the agency’s evolving focus on risk-based, technology-driven regulation — including the growing role of AI and advanced analytical tools — and what these changes mean for operators. Curry also provides insights into regulatory reform, repair criteria modernization, and how industry can engage proactively to shape durable, data-driven rules that improve safety and efficiency.
Creative Industry AI Fightback Sinks Into ‘Murky Waters’
The UK creative industry’s attempt to prevent tech giants from exploiting copyrighted work to train AI models has been largely thrown out by the High Court, in a move that one legal expert says leaves “the legal waters of copyright and AI training as murky as before”.
The case dates back to January 2023 when Getty Images started legal proceedings against Stability AI in both the UK and the US, filing a broad claim encompassing copyright, database right, trademark infringement and passing off allegations relating to the Stable Diffusion model, its training and development process and images generated by it.
During the trial, Getty Images withdrew its core claims for primary copyright and database infringement after acknowledging a lack of evidence that the training activities took place within the UK jurisdiction, a requirement under UK law.
While this meant the court could not rule on the core issue of whether using copyrighted material for AI training is lawful in the UK, it also exposed how hard it could be to stop overseas developers from doing the same.
The decision left two claims. The first was whether Stability’s making the model weights for certain versions of Stable Diffusion available for download was a secondary infringement of Getty’s copyright. The second was whether Stability had infringed either the Getty or iStock trademarks due to the presence of watermark-like features in certain Stable Diffusion outputs identified by Getty.
In response, Stability argued that the development and training of Stable Diffusion took place outside the UK. As Getty’s claim was limited to its UK copyright and database right, Stability argued that those rights had not been engaged by the process of making Stable Diffusion.
Stability also challenged Getty’s assertion that outputs from Stable Diffusion could infringe Getty’s copyright, database right and/or trade marks, or that Stability had committed any act of secondary copyright infringement by dealing with the models in the UK.
In the judgment, published earlier this week, the court dismissed the secondary copyright infringement, ruling that the Stable Diffusion model itself was not an “infringing copy” because it learns patterns from data rather than storing or reproducing the original copyrighted works.
Even so, Getty Images did succeed in part on the trademark infringement claim. The court found that Stability AI had infringed Getty’s trademarks by producing AI-generated images that contained its watermarks. However, the judge noted these findings were “both historic and extremely limited in scope”, applying mainly to older models and isolated instances.
Meanwhile, the court did not provide a definitive ruling on the passing off claim.
James Clark, data protection, AI and digital regulation partner at law firm Spencer West LLP, told Decision Marketing: “At the heart of the judgment is the finding that the training of Stable Diffusion’s AI model using copyright work did not result in the production of an infringing copy of that work.
“It is the finding that the AI model did not store any copy of the protected works, and the model itself was not an infringing copy of such work, that will cause concern for the creative industry, while giving encouragement to AI developers.”
Clark added that the judgment highlights the problem that the creative industry has in bringing a successful copyright infringement claim in relation to the training of large language models. “During the training process, the model is not making a copy of the work used to train it, and it does not reproduce that work when prompted for an output by its user. Rather, the model ‘learns’ from the work, in a similar way to the way a human might.”
Meanwhile, Nathan Smith, IP partner at Katten Muchin Rosenman LLP, added: “While Getty succeeded in part on limited trademark infringement claims relating to the unauthorised outputs of ‘iStock’ and ‘Getty Images’ watermarks by earlier models, these findings were based on specific examples and offer minimal practical impact.
“By dismissing Getty’s secondary infringement claims, the judgment appears to present a win for the AI community, but arguably leaves the legal waters of copyright and AI training as murky as before.”
The US case continues.
Chris Bryant, the minister of state for media, tourism and creative industries, has long insisted that the government will consider changes to the law to protect the UK creative industry from such issues, but the results of its copyright and AI consultation – which closed in February this year – have yet to be released.
Is the Getty vs. Stability AI Ruling the Final Blow to Artists?
Many artists see the training of AI image generators on unlicensed work as theft, but the UK High Court has just decided otherwise, and the case could influence future rulings and public policy.
Getty Images sued the AI developer Stability AI, claiming that it had committed secondary copyright infringement by training the Stable Diffusion AI image model on millions of images taken from Getty’s stock photo library. However, Judge Joanna Smith has mainly ruled in favour of Stability AI.
Although there was evidence that photos from Getty were used to train Stability’s model (Getty watermarks turned up on AI-generated images), the judge ruled that training an AI model on copyrighted works, without storing or reproducing those works in the model itself, does not amount to secondary copyright infringement under UK law.
The judge ruled: “An AI model such as Stable Diffusion which does not store or reproduce any copyright works (and has never done so) is not an ‘infringing copy’.”
She did, however, take Getty’s side in certain claims about trademark infringement related to its watermarks, raising the possibility that AI developers could be sued if models clearly reproduce trademarks.
Simon Barker, Partner and Head of Intellectual Property at law firm Freeths, said the judgment strikes a balance between protecting the interests of creative industries and enabling technological innovation.
“AI developers can take some comfort from the case that the mere act of training on large datasets will not of itself expose them to liability for copyright infringement in the UK,” he said. “However, the judgment also serves as a warning that if AI-generated outputs reproduce protected trade marks, for example where they appear as watermarks, in a way that could confuse people then they will risk infringing those trade marks. Each case will turn on its own facts and rights holders will need to evidence a likelihood of confusion or association with the relevant trade mark to succeed.”
That doesn’t seem to leave a lot of hope for artists. Rebecca Newman, a legal director at Addleshaw Goddard, has warned that the ruling is a blow to copyright owners’ exclusive right to profit from their work and means “the UK’s secondary copyright regime is not strong enough to protect its creators”.
For James Clark, Data Protection, AI and Digital Regulation partner at law firm Spencer West LLP, the ruling will cause concern for the creative industry. “The judgment usefully highlights the problem that the creative industry has in bringing a successful copyright infringement claim in relation to the training of large language models,” he says.
“During the training process, the model is not making a copy of the work used to train it, and it does not reproduce that work when prompted for an output by its user. Rather, the model ‘learns’ from the work, in a similar way to the way that you or I might do so. As an expert report quoted in the judgment explains: ‘Rather than storing their training data, diffusion models learn the statistics of patterns which are associated with certain concepts found in the text labels applied to their training data, i.e. they learn a probability distribution associated with certain concepts’.”
Some questions remain unaddressed, though. Getty withdrew part of its lawsuit relating to primary copyright infringement because Stability AI argued that its AI training had not happened in Britain. The company still has another case against Stability AI pending in the US.
Nathan Smith, IP partner at Katten Muchin Rosenman LLP, says that while the long-awaited ruling may appear on the surface to have provided some clarity, significant uncertainty remains.
“The Court’s findings on the more important questions regarding copyright infringement were constrained by jurisdictional limitations, offering little insight on whether training AI models on copyrighted works infringes intellectual property rights,” he says.
He adds: “On the face of it, the judgment appears to present a win for the AI community, but arguably leaves the legal waters of copyright and AI training as murky as before.”
There are other copyright issues around AI imagery too. So far, courts have ruled that AI-generated images cannot themselves be copyrighted because they lack human authorship.
In a statement, Getty Images said: “We remain deeply concerned that even well-resourced companies such as Getty Images face significant challenges in protecting their creative works given the lack of transparency requirements. We invested millions of pounds to reach this point with only one provider that we need to continue to pursue in another venue.
“We urge governments, including the UK, to establish stronger transparency rules, which are essential to prevent costly legal battles and to allow creators to protect their rights.”
Legal Tech Event Feature – Relativity Fest 2025 Award Winners
On October 9th, legal technology company Relativity concluded Relativity Fest 2025, its three-day annual user conference, with an awards banquet. The Innovation Awards served as a celebration of the Relativity community, honoring partners and individuals who leveraged the platform (and their expertise) to demonstrate excellence in the field of legal data intelligence. The winners are listed below:
Best Innovation Winners
Beyond – CDS Vision Financial Analysis (CDS)
Transforms structured financial data into analyzable components within Relativity.
Uses generative AI to perform entity extraction and summarization.
Organize – Split! (Troutman eMerge)
AI‑assisted tool for document unitization in Relativity.
Achieves over 94% accuracy in boundary detection.
Discover – HSF Kramer Snap (Herbert Smith Freehills Kramer)
AI for image classification and identity document recognition.
Enables automatic categorization and summarization of images within RelativityOne.
Act – Chronology Plus (Allens)
Integrates with Relativity to draft event summaries automatically via generative AI.
Synchronizes with document chronologies.
Workflows – Hatch Waxman Pipeline Protection & Redaction Workflow (IntrepidX)
Combines an AI‑driven triage step (aiR) with continuous active learning (CAL).
Reduces review workload while maintaining a defensible audit trail.
Best Innovator Winners
Access to Justice – Melissa Weberman (Arnold & Porter Kaye Scholer LLP)
Leads the largest global pro bono deployment of RelativityOne, supporting 86 pro bono matters (2.5 TB of data) in the past year.
Oversaw two of the first pro bono applications of aiR for Review and aiR for Case Strategy.
Customer Experience – Daniel Smith (A&O Shearman)
e‑Discovery consultant with approximately 25 years of experience.
Drove generative AI adoption and tailored training in his firm’s disputes practice.
Inclusion – Nicole Allen (Kilpatrick Townsend & Stockton LLP)
Advocate for mentorship, inclusion, and under‑represented communities.
Advances justice through tech innovation and pro bono work.
Legal Education – Andrew Pardieck (Southern Illinois University Simmons Law School)
Law professor teaching e‑discovery; directs the school’s e‑discovery pro bono project.
Stellar Women – Sarah Cole (Cimplifi)
Director of Client Engagements.
Mentors others toward technical confidence, career clarity, and authentic leadership while driving equity in tech.
Cybersecurity Awareness Month in Focus, Part III – The EU AI Act Is Here—What It Means for U.S. Employers
The European Union’s Artificial Intelligence (AI) Act, effective from August 1, 2024, imposes a risk-based framework on AI systems used within the EU, affecting U.S. employers that use AI for HR functions involving EU candidates or employees. With significant penalties for noncompliance, the AI Act categorizes many workplace AI uses as “high risk,” requiring immediate adherence to specific obligations and full compliance by August 2027.
Quick Hits
The EU AI Act took effect on August 1, 2024, and carries extraterritorial reach. U.S. employers can be covered even without a physical EU presence if AI outputs are intended to be used in the EU—e.g., recruiting EU candidates, evaluating EU-based workers or contractors, or deploying global HR tools used by EU teams.
Several obligations have begun to take effect, with more phasing in through 2026–27. The European Commission also released a voluntary General-Purpose AI (GPAI) Code of Practice to streamline compliance for model providers.
Employers’ use of AI in the workplace will be treated as potentially “high risk,” triggering duties such as worker notice, human oversight, monitoring for discrimination, logging, and adherence to applicable privacy laws once the core high-risk system requirements begin taking effect in August 2026.
Now is the time to inventory HR and workforce AI, align contracts and governance, and operationalize notice, oversight, and recordkeeping to meet EU requirements alongside evolving U.S. federal and state AI rules.
This is the third article in a four-part series aligned with Cybersecurity Awareness Month, which occurs annually in October. Part 1 discusses compliance tips for U.S. privacy leaders handling practical data rights requests, Part 2 addresses data rights requests in Canada, and Part 4 covers the considerations for responsible use of artificial intelligence (AI) and automated decision-making tools (ADMTs).
Why the EU AI Act Can Apply to U.S. Employers
The EU AI Act adopts a risk-based framework and applies extraterritorially where AI systems or their outputs are intended to be used in the EU. In practice, this means a U.S. employer may have EU AI Act obligations if it:
uses AI-enabled recruiting or screening tools for roles open to EU candidates, even if the hiring team and systems are U.S.-based;
applies AI to evaluate performance, promotion, or termination decisions for EU-based employees, contractors, or contingent workers; or
operates global HR or IT platforms that incorporate AI functionalities accessible to EU establishments.
For instance, a U.S. company using an AI-powered résumé screener for a global applicant pool could be covered if that system ranks or filters EU-based candidates. In practice, if any AI output influences employment outcomes within the EU, even indirectly, the law can potentially apply.
The law treats many workplace AI uses as “high risk.” While the most prescriptive requirements fall on “providers” that build high-risk AI, “deployers” (i.e., employers that implement those tools) also have direct obligations. As with the EU’s General Data Protection Regulation (GDPR), penalties can be significant—rising to the greater of multimillion-euro fines or a percentage of worldwide annual turnover, depending on the breach category.
Where Implementation Stands Today and What to Watch Next
The AI Act entered into force on August 1, 2024, following adoption by EU institutions earlier that year. For employers, the bottom line is twofold: some obligations already apply, and EU institutions are pressing forward on timelines and supporting guidance rather than pausing implementation:
Prohibited AI practices and AI literacy obligations took effect first, on February 2, 2025, requiring discontinuation of certain banned uses, particularly in HR contexts, such as emotion recognition in the workplace and biometric categorization.
Codes of practice have begun to be published. For example, the Commission has released the voluntary General-Purpose AI (GPAI) Code of Practice and guidance in the form of frequently asked questions (FAQs) to support transparency and model documentation.
Governance, supervision, and penalty frameworks have begun to apply before the high-risk system obligations fully phase in, signaling that enforcement infrastructure is underway.
February 2, 2026: Guidance expected on compliance for high-risk AI systems and illustrative examples clarifying which workplace and HR uses of AI (e.g., recruiting, promotion, performance evaluation) qualify as high-risk systems, helping employers determine which tools must meet the AI Act’s documentation, oversight, and logging requirements.
August 2026–August 2027: High-risk system obligations fully apply, with a narrow subset delayed until August 2, 2027. Employers will be required to ensure human oversight, worker notice, and logging processes are operational by this point.
Employers will want to monitor additional Commission guidance and national implementation activities by EU member states, which may introduce supplemental detail or supervisory expectations impacting HR deployments.
What Counts as ‘High Risk’ in the Workplace—And What Employers Must Do
The AI Act’s risk tiers run from “unacceptable” (banned) to “high,” “limited,” and “minimal.” In the employment context, AI used for recruiting, screening, selection, performance evaluation, or other employment-related decision-making is explicitly listed as high risk.
In practical terms, this means that many AI tools already used in HR, such as chatbots that screen candidates, résumé-ranking software, or productivity analytics used in performance reviews, may fall within the AI Act’s “high-risk” category. Employers may also want to be careful about relying upon vendor assurances of compliance without independent validation.
In summary, employer obligations are as follows:
Worker notification is required before implementing high-risk AI in the workplace, including informing workers’ representatives where applicable.
Human oversight must be established by individuals with appropriate competence, training, authority, and support. Oversight should be meaningful, with the ability to intervene and override outputs where necessary.
Monitoring is required to detect issues such as discrimination or adverse impacts, with prompt suspension of use and notification obligations where issues arise.
Logs automatically generated by an AI system must be maintained for an appropriate period, with at least a six-month minimum retention baseline (a minimal sketch of a retention check follows this list).
Data privacy compliance remains essential, including alignment with EU privacy laws that may apply to HR data processing and cross-border transfers.
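For teams operationalizing the logging duty, the retention baseline reduces to simple date arithmetic. Below is a minimal sketch, assuming a hypothetical internal policy of twelve months layered on the Act’s six-month floor; the function and policy values are illustrative, not a statement of any requirement beyond that baseline:

```python
from datetime import datetime, timedelta, timezone

# The EU AI Act sets a six-month minimum baseline for logs generated by
# high-risk systems; an employer's own policy (here, a hypothetical
# twelve months) may exceed that floor.
AI_ACT_MINIMUM = timedelta(days=180)   # roughly six months
INTERNAL_POLICY = timedelta(days=365)  # hypothetical employer policy

def retention_compliant(created: datetime, purged: datetime) -> bool:
    """True if a log was kept at least as long as the stricter of the
    AI Act floor and the employer's own retention policy."""
    return (purged - created) >= max(AI_ACT_MINIMUM, INTERNAL_POLICY)

# A log kept for roughly six months clears the Act's floor but fails
# the stricter hypothetical internal policy.
created = datetime(2026, 11, 1, tzinfo=timezone.utc)
purged = datetime(2027, 5, 1, tzinfo=timezone.utc)
print(retention_compliant(created, purged))  # False
```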
Planning for Compliance
Mapping the corporate AI footprint in HR and beyond. This includes inventorying where AI or algorithmic logic influences employment decisions, candidate sourcing, screening, assessments, performance management, scheduling, or compensation. Mapping also includes identifying whether outputs are used for or affect EU candidates, employees, or contingent workers. Employers may also want to classify use cases against the AI Act’s risk tiers and flag any functionality that could edge toward prohibited categories.
Assigning internal roles and accountability. Employers may want to clarify who acts as a “deployer” within the organization for each tool, who owns worker notices, who is responsible for human oversight design and day-to-day review, and who tracks compliance and metrics. Employers may also want to establish escalation paths if potentially discriminatory or inaccurate outputs are detected.
Operationalizing worker notice and human oversight. Building template notices for workers and, where needed, workers’ representatives that explain the AI system’s purpose and use is an important compliance tool. Template notices typically include defined oversight procedures specifying when humans must review, override, or decline to rely on AI outputs, and how those interventions are documented. Employers may want to ensure that overseers are trained on the technology, its limitations, and AI bias risks.
Strengthening vendor diligence and contracts. Requiring providers to supply “instructions of use,” model documentation, and transparency information aligned to the AI Act is another key compliance tool. Employers may want to embed cooperation commitments for discrimination monitoring, prompt remediation, logging, incident reporting, and audit support. Employers may also want to ensure deletion, correction, and security obligations are extended through sub-processors.
Implementing logging and recordkeeping. Employers may want to confirm that high-risk systems automatically generate adequate logs and that the retention schedule meets or exceeds the AI Act’s minimums. Ensuring that logs can support investigations, fairness reviews, and regulator inquiries is a step included in this task.
Monitoring for discriminatory or adverse impacts. Establishing metrics, thresholds, and cadence for fairness and accuracy reviews will help employers maintain compliance with the AI Act. If issues arise, consider suspending use, notifying as required, and coordinating with providers to address root causes, retraining, or configuration changes.
Aligning with privacy and data governance. Employers may want to cross-check AI deployments against EU and U.S. privacy regimes, including transparency, purpose limitation, data minimization, access controls, and security requirements. Harmonizing HR data handling across jurisdictions will help avoid fragmented practices and inconsistent risk controls.
Calibrating globally to avoid conflicts. The U.S. landscape is evolving—federal agencies have issued AI-related guidance, and states and cities continue to explore rules affecting employment decision tools. Designing controls that satisfy EU requirements while anticipating U.S. developments will reduce rework and promote consistency across the company’s global HR stack.
How the EU AI Act Fits Within a Broader Compliance Posture
For many organizations, the AI Act will layer onto existing data subject access request (DSAR) and privacy workflows, ethics reviews, and vendor management practices established under U.S. state privacy laws and the GDPR. Lessons learned from data privacy compliance requirements—such as notification and recordkeeping requirements—translate directly to AI governance.
For in-house teams already managing privacy impact assessments, bias audits, or vendor risk reviews, integrating AI Act compliance into those existing workflows is often the most efficient approach. EU authorities, including the new AI Office, are continuing to develop sector guidance and templates for compliance documentation.
AI in Recruiting and Employment Decision-Making – New California AI Regulations Strike a Balance Between Efficiency and Algorithmic Accountability
Introduction
The use of artificial intelligence (AI) in employment decision-making is no longer a theoretical, future-tense possibility. It is here and is reshaping how employers find, assess, and promote talent. As employers’ use of AI has increased, so has the development of AI regulation at the state and local level, including in California. As discussed in K&L Gates’ 29 May 2025 alert, California took a number of steps in 2025 to regulate the development and use of AI in employment to ensure that California employers’ use of AI tools is free of discrimination and bias.1 This alert takes a closer look at one of those recently implemented regulatory actions. On 1 October 2025, the California Civil Rights Council’s (CRC) March 2025 “Employment Regulations Regarding Automated-Decision Systems” took effect (CRC Regulations) under the Fair Employment and Housing Act (FEHA). Now, every California employer covered by the FEHA must practice algorithmic accountability when using Automated Decision Systems (ADS) and AI in employment decisions.2
The intent of the CRC Regulations is clear: innovation must serve fairness and equity, not undermine it. An AI tool’s efficiency, while powerful, cannot replace human oversight, judgment, and analysis. Under the CRC Regulations, human participation is required not only to understand how the tool impacts a candidate or employee’s opportunities but also to determine when and how to intervene when an ADS is used.
Defining Automated Decision System
Under the CRC Regulations, an ADS is defined as:
“A computational process that makes a decision or facilitates human decision making regarding an employment benefit…derived from [or] using artificial intelligence, machine learning, algorithms, statistics, or other data processing techniques.”3
The CRC Regulation’s definition of an “Artificial Intelligence System” is similarly broad—any “machine-based system that infers, from the input it receives, how to generate outputs,” whether those outputs are predictions, recommendations, or decisions.4
In practice, that scope captures most of the AI-based technology now shaping employment decisions, such as:
Resume filters that rank or score candidates;
Online assessments measuring aptitude, personality, or “fit;”
Algorithms targeting specific audiences for job postings;
Video-interview analytics evaluating tone, word choice, or expression; and
Predictive tools drawing on third-party data.
If a tool influences an employment outcome, directly or indirectly, it likely qualifies as an ADS under the CRC Regulations.
Key Compliance Duties and Risks
The CRC Regulations establish a framework that blends civil rights principles with technical oversight. Employers must now take the following steps when implementing ADS and Artificial Intelligence Systems:
Prevent Discrimination (Direct and Indirect)
It is unlawful to use any ADS or selection criterion that creates a disparate impact against a protected class under FEHA. The liability analysis does not stop with the question of intent; impact must also be considered.
Conduct Bias Testing and Audits
ADS tools must undergo anti-bias testing or independent audits that are timely, repeatable, and transparent. A single validation at launch is not enough and will not demonstrate sufficient reasonable measures. Fairness checks must be integrated as regular and systemized maintenance practices.
Provide Notice and Transparency
Applicants and employees must receive pre-use and post-use notices explaining when and how ADS tools are used, what rights they have to opt out, and how to appeal or request human review.
Assert an Affirmative Defense Through Good-Faith Efforts
Employers facing claims under FEHA may defend themselves by showing reasonable, well-documented anti-bias measures including but not limited to: audits, corrective actions, and continuous oversight. But that defense is only as strong as the evidence supporting it.
Assume Responsibility for Vendors and Agents
Employers cannot outsource accountability. Bias introduced by a vendor or third-party platform remains the employer’s legal and ethical burden.
Retain Records for Four Years
FEHA now requires retention of ADS-related documentation for at least four years. This retention requirement includes but is not limited to: data inputs, outputs, decision criteria, audit results, and correspondence.
Through these requirements, the CRC makes it clear that, while automation in decision-making is not prohibited, employers must be responsible stewards when implementing such tools.
Practicing Algorithmic Accountability
At the crux of its framework, the CRC Regulations reflect a push toward algorithmic accountability, which requires that technology partner with human judgment. Employers cannot claim ignorance of how an algorithm operates or what data an AI tool uses. To the contrary, under the CRC Regulations, an employer that uses AI without understanding its foundation and logic exposes itself to claims of negligence and potential liability.
The CRC highlights the importance of retaining human input in decision-making despite the use of AI tools. At a minimum, employers must incorporate a human element at some point in the lifecycle of an employment decision to avoid running afoul of the CRC Regulations. Accountability means transparency in process, traceability in data, and intervention when fairness is jeopardized. It means partnering with AI and leveraging its strengths without surrendering ethical, legal, and managerial responsibilities.
Best Practices
To comply with the CRC Regulations, facilitate a culture of algorithmic accountability, and reduce risk, employers should consider the following practices:
Invest in Education and Awareness
Empower Human Resources and leadership teams with foundational understanding of ADS, its potential, its blind spots, and the social dynamics it can amplify. Oversight begins with literacy.
Engage Independent Auditors
External bias audits and model validations provide both credibility and objectivity. They also strengthen an employer’s affirmative defense by demonstrating due diligence.
Adopt Continuous Review and Monitoring
Bias is not a static risk; it can shift as data, users, and markets evolve. Regular audits, outcome monitoring, and feedback loops should become part of daily governance. Consult with outside counsel to build an appropriate cadence of audit-related protocols.
Institutionalize Documentation
Establish systems that capture, retain, and preserve ADS-related records including but not limited to: inputs, model parameters, audit logs, and decisions. These records must be maintained for at least the required four years.
Preserve Human Oversight
Employers should design decision flows that invite human touch, review, challenge, correction, and intervention.
The Bottom Line: Partner with AI, Do Not Defer to It
Ignorance of the law has never been a defense. Now, neither is efficiency. The CRC Regulations make clear that progress in automation must be matched by equal progress in accountability and must not replace human oversight.
1 See K&L Gates Legal Alert, 2025 Year-To-Date Review of AI and Employment Law in California, May 29, 2025, https://www.klgates.com/2025-Review-of-AI-and-Employment-Law-in-California-5-29-2025.
2 Employer is defined as “[a]ny person or individual engaged in any business or enterprise regularly employing five or more individuals, including individuals performing any service under any appointment, contract of hire or apprenticeship, express or implied, oral or written… Employees located inside and outside of California are counted in determining whether employers are covered under the Act. However, employees located outside of California are not themselves covered by the protections of the [California Fair Employment and Housing Act] if the allegedly unlawful conduct did not occur in California or the allegedly unlawful conduct was not ratified by decision makers or participants in unlawful conduct located in California.” Cal. Code Regs. tit. 2, § 11008.1(e).
3 Cal. Code Regs. tit. 2, § 11008.1(a).
4 Cal. Code Regs. tit. 2, § 11008.1(c).
US and Japan Agree to Trade Framework on Energy Infrastructure and Critical Mineral Investments
What Happened?
On October 28, 2025, the governments of the United States and Japan signed the United States-Japan Framework (US-Japan Framework) to coordinate on securing and refining critical minerals. The US-Japan Framework is part of a bilateral strategic trade and investment agreement between the two countries and includes provisions to encourage Japanese investment in US critical energy infrastructure and critical minerals, and to outline the US commitment to reduced tariffs on Japanese goods. The Framework follows Executive Order 14345 and the September announcements outlining the trade deal.
The Bottom Line
The US-Japan Framework announces investment in key US sectors for energy, critical minerals, AI and electronics, and signals an easing of trade tensions between the two countries. Companies operating in these sectors and with cross-border relationships between the US and Japan should note these developments. The US-Japan Framework also signals US trade priorities and indicates how similar trade agreements between the United States and other countries may take shape in the future.
The Full Story
In July 2025, the Trump administration announced that the United States reached a trade deal with Japan that included a flat 15 percent tariff on nearly all Japanese goods and commitments from Japan to reduce non-tariff barriers and invest billions of dollars in the United States. In Executive Order 14345 of September 4, 2025, President Trump directed the Secretary of Commerce, in consultation with the United States Trade Representative, the Secretary of Homeland Security and the Chair of the United States International Trade Commission, to reduce stacked tariffs and impose a flat 15 percent duty on almost all Japanese imports. The United States and Japan signed a Memorandum of Understanding (MOU) in September outlining strategic investments in the United States. The US-Japan Framework memorializes trade commitments between the two countries and investments under the MOU in the United States.
Tariffs on Japanese Goods
Under Executive Order 14345, the 15 percent rate for Japanese imports will not “stack” on top of existing tariff rates. The 15 percent rate is inclusive of any most-favored nation (MFN) tariff rate above zero applied by the United States to goods not traded under a free trade agreement. If the MFN rate is above 15 percent, no additional tariff pursuant to the International Emergency Economic Powers Act (IEEPA) will be applied to Japanese goods. The new 15 percent tariff rate is retroactive to August 7, 2025, subject to an exception for Japanese automotive products, which face a 15 percent tariff, inclusive of MFN rates, effective September 16, 2025. Japanese products under the World Trade Organization Agreement on Trade in Civil Aircraft (except for unmanned aircraft) will be exempt from additional tariffs under IEEPA and Section 232 of the Trade Expansion Act on steel, aluminum, and copper. Executive Order 14345 leaves open the possibility that other Japanese products, including natural resources and generic pharmaceuticals, could be granted duty-free treatment following an assessment of the Secretary of Commerce regarding US national interests and Japan’s actions to carry out its commitments, including those under the US-Japan Framework.
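Put arithmetically, the order works as a 15 percent ceiling rather than a surcharge. The following minimal sketch illustrates that reading; the function name and sample rates are illustrative only, not customs guidance:

```python
def total_rate(mfn_rate: float, ceiling: float = 0.15) -> float:
    """Illustrative reading of the no-stacking rule: the IEEPA duty tops
    the MFN rate up to the 15% ceiling, and goods whose MFN rate already
    meets or exceeds 15% take no additional IEEPA duty."""
    ieepa_top_up = max(0.0, ceiling - mfn_rate)
    return mfn_rate + ieepa_top_up

# A 2.5% MFN good is topped up to 15%; a 20% MFN good takes no top-up.
print(f"{total_rate(0.025):.2%}")  # 15.00%
print(f"{total_rate(0.20):.2%}")   # 20.00%
```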
Investment Commitments
According to a Fact Sheet published to announce the US-Japan Framework, Japan and various Japanese companies have committed to significant investments and collaborations in energy, infrastructure, and industrial projects in the United States, including:
Critical Energy Infrastructure Investments:
Up to $332 billion to support critical energy infrastructure in the United States, including the construction of AP1000 nuclear power plants and small modular reactors (SMRs); the supply of large-scale baseload power infrastructure; engineering, procurement, and other services to build critical power plants, substations, and transmission systems; design, procurement, and maintenance services for large-scale power infrastructure; and natural gas transmission and power infrastructure services.
Up to $25 billion to supply large-scale power equipment such as gas turbines, steam turbines and generators for grid electrification and stabilization systems, including high-voltage direct current and substation solutions for mission-critical facilities.
Up to $25 billion to supply electrical power modules, transformers and other power-generation substation equipment.
Up to $20 billion to supply thermal cooling systems and solutions, including chillers, air handling systems and coolant distribution units essential for power infrastructure.
AI Infrastructure Investments:
Up to $30 billion to supply power station systems and equipment for data centers.
Up to $25 billion for advanced electronic components and power modules.
Up to $20 billion to supply fiber optic cables.
Electronics and Supply Chain Investments:
Up to $15 billion to produce advanced electronic components, including multilayer ceramic capacitors, inductors and electromagnetic interference suppression filters.
Up to $15 billion to supply energy storage systems and electronic devices and components.
Critical Minerals Investments:
Up to $3 billion to construct an ammonia and urea fertilizer facility in the United States.
Up to $2 billion to construct a copper smelting and refining facility in the western United States.
Manufacturing and Logistics Investments:
$600 million to upgrade ports and waterways across the southern United States to facilitate the export of US crude oil.
$500 million to establish a high-pressure, high-temperature diamond grit manufacturing facility in the United States.
$350 million to construct a lithium-iron-phosphate production facility in the United States.
Trade and Defense Commitments
According to the Fact Sheet, Japan has also committed to further expand opportunities for US exports to Japan, including exports of some US-made vehicles, and to enhance distribution platforms in Japan that help US automakers sell US-manufactured and US safety-certified vehicles without additional testing in Japan. According to a Joint Statement issued in July, Japan has already agreed to expedite increased purchases of US rice and to make significant annual purchases of US agricultural goods and US energy. Japan has also committed to implement its competition law applicable to smartphone software in a way that does not discriminate against US companies, balances the need for fair and free competition with user safety and convenience, and respects the legitimate exercise of intellectual property rights. The Fact Sheet also notes US commitments to Japan’s defense sector, AI collaboration, and intelligence sharing.
The New Rules of AI-Driven Search Visibility for Law Firms
For more than two decades, law firms have relied on Google as a primary channel for attracting new clients. But that reality is shifting. Generative AI platforms such as ChatGPT, Google AI Overviews, and Perplexity are redefining how people find and evaluate professional services, including legal counsel.
The traditional experience of typing a few words into a search bar and scanning a page of results is giving way to direct, conversational searches and answers drawn from trusted sources, client reviews, and credible mentions. Instead of lists of firms, consumers are now being presented with just a few recommendations.
For law firms, appearing in traditional Google search results is no longer enough. If a firm is not visible to AI-powered discovery tools, it risks being invisible and losing potential clients.
What Has Changed with AI and Search Visibility
Generative AI does not rank websites the way Google’s search algorithm does. Instead, it looks for signals of authority, trust, and corroboration across the web. A first-page Google ranking does not guarantee inclusion in an AI-generated answer.
This has created what can be called the AI visibility gap. Many firms are absent from AI-generated results because their digital credibility signals—such as reviews, citations, structured data, and mentions—are not strong enough.
AI-driven traffic also behaves differently. These potential clients may click less, but when they do, they are often further along in their decision process. The implication is clear: firms must adapt how they are discovered and how they convert that visibility into consultations and cases.
How Law Firms Can Stay Visible
The fundamentals of visibility are evolving, but the roadmap is coming into focus. To remain relevant in an AI-first world, firms should focus on three core areas:
Strengthen authority and relevance signals. Publish content that answers client questions directly and in plain language. FAQs, explainer articles, and resource pages written in a conversational tone help AI models understand your expertise.
Build external credibility. Encourage client reviews on trusted platforms. Seek mentions in directories, legal associations, and reputable media outlets. Reviews and citations are key trust indicators for AI systems determining which firms to recommend.
Optimize for AI-driven leads. Even if the way people find your firm changes, pay attention to which sources bring them in and what they do once they arrive. Ensure the calls-to-action (whether calls, forms, or chat) are clear, accessible, and easy to complete.
A firm’s local presence still matters. If your Google Business Profile is not accurate or consistent, AI will not trust it. Visibility starts with the basics.
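One concrete way to strengthen those machine-readable credibility signals is schema.org structured data embedded in key pages. Below is a minimal sketch; the schema.org types are real, but the firm details and Q&A text are hypothetical placeholders, and markup alone does not guarantee inclusion in AI-generated answers:

```python
import json

# Hypothetical schema.org JSON-LD a firm might place inside a page's
# <script type="application/ld+json"> tag: a LegalService profile plus
# an FAQPage entry matching a common conversational query.
legal_service = {
    "@context": "https://schema.org",
    "@type": "LegalService",
    "name": "Example Family Law Group",           # placeholder
    "url": "https://www.example-family-law.com",  # placeholder
    "areaServed": "Austin, TX",
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.8",
        "reviewCount": "112",
    },
}

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How much does it cost to get a divorce?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": "Costs vary with complexity; an uncontested filing is "
                    "typically far less expensive than contested litigation.",
        },
    }],
}

# Emit the markup a site template would embed in the page head.
print(json.dumps(legal_service, indent=2))
print(json.dumps(faq_page, indent=2))
```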
Why a Clear Visibility Strategy Matters
Ranking for keywords is no longer the goal. Firms should align their optimization and content efforts with measurable business outcomes such as qualified leads, consultations booked, and new client retainers.
It is important to recognize the different types of visibility and how they play distinct roles for potential clients:
Google Maps for location-based searches
AI search results for general questions, such as “how much does it cost to get a divorce?”
Voice assistants for people searching verbally while driving or multitasking
Directories and review platforms for building and verifying trust
Each of these requires a slightly different approach. Firms cannot afford to chase every channel. The key is prioritizing the tactics and platforms that best align with your goals, resources, and competitive landscape.
Feeling Overwhelmed? You’re Not Alone
Keeping up with these changes can feel daunting. Most firms do not have the time or resources to track AI search trends, manage structured data, or stay on top of reviews and directory updates.
That is where the right marketing partner makes a difference. Experienced partners bring the tools, scale, and expertise to help firms test new strategies, whether optimizing for AI discovery, refining website content, or integrating new communication tools, without distracting from the work of serving clients.
The payoff is real: law firms that strengthen their online authority and maintain a consistent digital presence are more likely to appear in AI-generated results and be recommended by modern search tools.
Measuring Success in the AI Era
Traditional SEO metrics still matter, but they no longer tell the whole story. Firms should also track:
Conversions from calls, chats, or form fills tied to AI-originated traffic
On-site engagement signals, such as time on page and bounce rate
Growth in client reviews and reputation strength
Combining traditional SEO data with new AI-specific insights provides a more complete view of how visibility translates to client growth.
Adapting to Win in an AI-Driven World
The way people search for legal help is evolving rapidly. What has not changed is their need for fast, trusted answers. Increasingly, those answers are being delivered by AI platforms that surface only a select few firms.
The takeaway for law firms is clear: Visibility now depends on authority, consistency, and credibility across the web. Firms that adapt early, strengthening digital signals, refining strategy, and aligning marketing with business outcomes, will be best positioned to stand out and win more clients.
The rules are changing. The opportunity is real. For firms ready to lead, now is the time to prepare for the future of search, because the future is already here.
Legal Tech Event Feature: Relativity Fest 2025
The e-discovery platform Relativity brought around 2,000 legal and technology professionals to the Hyatt Regency Chicago earlier this month (October 7–9) for Relativity Fest 2025. The legal data community gathered with Relativity and its partners to discuss how AI will reshape discovery and case strategy through a series of keynotes, judicial panels, breakout sessions, and learning labs.
In the opening keynote, CEO Phil Saunders and the executive team highlighted exciting new developments at the company including:
Generative AI becomes standard: Relativity announced that aiR for Review and aiR for Privilege, previously paid options which enabled generative-AI-assisted relevance and privilege review respectively, will be integrated into the flagship RelativityOne subscription. Relativity framed this move as a means of democratizing access to powerful AI tools.
aiR Assist for early case assessment: Saunders introduced aiR Assist, a tool designed to move generative AI “to the left” in the Electronic Discovery Reference Model by unpacking, summarizing, and triangulating documents during early case assessment.
Rel Labs investment arm: In partnership with the LegalTech Fund, Relativity launched Rel Labs—a clever reference to Nokia’s Bell Labs—to accelerate legal tech development, both within Relativity’s own product suite and at promising legal tech startups.
In addition to the principal announcements, speakers also shared a number of important updates to existing Relativity products, including:
Draft with AI: Relativity developed the new Draft with AI feature within aiR for Review, which empowers lawyers to kick-start aiR projects with immediate case context, ultimately reducing friction around initial prompt iteration.
New analysis types and sensitive business information analysis in aiR for Review: Relativity is adding new analysis types to help users protect confidential business and personal information, empowering users to quickly detect sensitive content and establish appropriate safeguards from the start.
Generative AI for data breach response: Relativity Data Breach Response leverages AI to speed personal information extraction, cutting review time by 50–70% based on company data.
aiR for Case Strategy: The enhanced platform supports conversational search, enabling attorneys to ask questions and receive AI-supported answers while the system maintains context. It provides knowledge mapping by extracting key people, terms and relationships, uses AI agents to detect collection gaps, and streamlines deposition preparation by linking facts to witnesses.
While the keynote was primarily forward-facing, it also offered a valuable opportunity to look back as Relativity celebrated the five-year anniversary of its Justice for Change initiative. The program, which provides free RelativityOne access to social-justice organizations, has supported more than 250 matters and processed over 16.2 million documents. Speakers reaffirmed that Justice for Change will continue to provide groundbreaking tools and aid the mission of those who provide legal support to marginalized communities.
The three-day e-discovery extravaganza concluded with the Innovation Awards, a banquet where Relativity honored the partners and individuals who best exemplified the company’s dedication to excellence, progress, and community.
Privacy Tip #465 – Privacy Risks Associated with AI
The use of AI tools is revolutionizing our society. The efficiency it presents is like nothing we have ever experienced. That said, there are risks worth considering.
“AI poses risks including job loss, deepfakes, biased algorithms, privacy violations, weapons automation and social manipulation. Some experts and leaders are calling for stronger regulation and ethical oversight as AI grows more powerful and integrated into daily life.”
The risks are not theoretical—they are real. Individuals who have devoted their lives to developing AI tools are warning society about its dangers.
The linked article above provides an excellent summary of 15 risks posed by AI and is well worth the read.
Inside the Exclusive – AI-Driven Hiring and Recruitment—Key Compliance Considerations for In-House Counsel [Podcast]
In this podcast recorded at Ogletree’s recent Corporate Labor and Employment Counsel Exclusive® seminar, Kristin Higgins (office managing shareholder, Dallas) and Jenn Betts (office managing shareholder, Pittsburgh) discuss the use of artificial intelligence (AI) by employers, including in hiring and recruiting. Jenn, who is co-chair of Ogletree Deakins’ Technology Practice Group, and Kristin provide an overview of California’s newly effective regulations prohibiting employers from using an “automated decision system” to discriminate against applicants or employees on a basis protected by the California Fair Employment and Housing Act. Kristin offers an overview of the consumer-focused Texas Responsible Artificial Intelligence Governance Act, which goes into effect in January. They conclude the discussion with pointers for employers, such as forming workgroups to evaluate new AI tools before deploying them in the workplace.