LinkedIn Sued for Using Private DMs to Train AI

A class-action lawsuit has been filed against LinkedIn, accusing the social networking giant of using private direct messages (DMs), beginning in August 2024, to train its artificial intelligence (AI) models without obtaining explicit consent from its users. The legal action, filed Tuesday in the U.S. District […]

New York’s Impending WARN Notice Requirement for Artificial Intelligence Related Layoffs Highlights Proliferating Nationwide Requirements

During her 2025 State of the State Address on January 14, 2025, New York Governor Kathy Hochul announced a plan to support workers displaced by Artificial Intelligence (AI) by requiring employers who engage in mass layoffs or closings subject to New York’s state Worker Adjustment and Retraining Notification law (“NY WARN”) to disclose whether AI automation played a role in the layoffs. Governor Hochul stated that the goal of these disclosures is to understand “the potential impact of new technologies through real data.” 
The Governor’s announcement states that she is directing the New York Department of Labor to impose this requirement, so presumably the change will be imposed without the need for legislative action. Specific details about the scope of the new disclosure requirement are not yet available.
The rise of AI in the workplace has been a matter of concern to many state lawmakers across the nation, as well as federal regulators. In New York, for example, New York City’s 2021 Local Law 144 placed guardrails on employers utilizing AI and other Automated Employment Decision Tools (“AEDTs”) in employment-related decisions by requiring bias audits of AEDTs and employer notice to employees and candidates of their use. Similarly, California’s SB 1047, a broad AI safety bill, passed the legislature in 2024 but was vetoed at the end of the session; California is expected to propose more AI safety legislation in 2025. Colorado will also impose a new requirement in 2026 for developers and deployers of employment-related AI to “use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination in the high-risk system.”
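As a rough illustration of the arithmetic a Local Law 144-style bias audit involves, the sketch below computes selection rates and impact ratios across demographic categories. The group names and counts are hypothetical, and a real audit would follow the specific methodology in the law’s implementing rules rather than this simplified pass.
```python
# Hypothetical applicant outcomes for an automated screening tool,
# grouped by a demographic category (illustrative numbers only).
outcomes = {
    "group_a": {"selected": 40, "applicants": 100},
    "group_b": {"selected": 25, "applicants": 100},
    "group_c": {"selected": 35, "applicants": 80},
}

# Selection rate per group, then impact ratio relative to the
# highest-rate group (the metric a bias audit typically reports).
rates = {g: v["selected"] / v["applicants"] for g, v in outcomes.items()}
top_rate = max(rates.values())
impact_ratios = {g: rate / top_rate for g, rate in rates.items()}

for group, ratio in impact_ratios.items():
    flag = "review" if ratio < 0.8 else "ok"   # 4/5 benchmark used only as a rough screen
    print(f"{group}: selection rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```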
At the federal level, the Equal Employment Opportunity Commission (EEOC) issued two guidance documents in 2023 concerning the issues of adverse impact and disability accommodations in the use of AI and machine learning tools in making workplace decisions.
These proliferating laws show the need for employers to be intentional about their use of AI tools in making employment decisions. Legal and human resources leaders should familiarize themselves with how their organizations are using AI tools in the employment context, and design policies to ensure that the rapidly proliferating state and local requirements around AI usage are met. 

5 Trends to Watch: 2025 Financial Services Litigation

Increasing Focus on Payments — Payments litigation will likely continue and increase in 2025 in the United States and globally, along with increased use of Automated Clearing House (ACH) transfers and wires, bank and non-bank competition, state regulation, and more sophisticated fraud schemes. This trend should continue regardless of the incoming administration’s enforcement priorities. Borrowing from Europe, the United States could see increasing pressure for a Payment Services Regulation or other laws to shift more risk of payment fraud to financial institutions. State-based efforts to regulate interchange fees may create additional risk.
Increasing Use of Mass Arbitration and Rise of International Arbitration — Mass arbitration in the United States is likely to continue and increase, particularly as plaintiffs’ counsels become more equipped, efficient, and coordinated at lodging these attacks. International arbitration also is likely to increase, given globalization and diversification, driven by the growing complexity of cross-border issues. The strategic advantage of leveraging global litigation offices in regions like Latin America, Europe, and the Middle East will be crucial, as these areas continue to be hot spots for international business activities and disputes. Reliance on local knowledge will become increasingly important as parties seek more efficient and culturally sensitive resolutions.
Anti-Money Laundering (AML), Know Your Customer (KYC), and Compliance-Related Issues — There was increased activity over the past year on AML-related matters globally, and this trend appears likely to continue. This increase also is likely to carry over to civil litigation, including complex fraud and Ponzi schemes and allegations relating to improper asset management or trust disputes, where financial institutions are being more heavily scrutinized over actions taken by their customers, and the plaintiffs’ bar is expected to try to create more hospitable case law and jurisdictions. As regulatory scrutiny intensifies globally, financial institutions will continue to find themselves at the intersection of civil litigation and concurrent regulatory/criminal investigations, creating additional risks. The growing complexity of these cases underscores the need for banks to maintain vigilance and adaptability.
Changing Enforcement and Regulatory Risks — A slowdown of Consumer Financial Protection Bureau (CFPB)-related activity, including a relative slowdown of crypto enforcement, could take place over the course of the year due to the change of administration and agency leadership, but there could be an increase in certain states’ attorneys general activity. State-based regulation and legislation would pose additional risks, creating jurisdictional and other challenges. State regulatory agencies may continue enforcement efforts related to consumer protections in the financial services space. There also may be continued focus on fair lending practices, with potential litigation concerning artificial intelligence’s (AI) role in lending or other decisions. The rise of digital currencies also has introduced new legal challenges. Cryptocurrency exchanges are being held accountable for frauds occurring on their platforms and ongoing uncertainties in digital asset regulations are resulting in compliance challenges and related litigation.
Information Use and Security — The increasing use of new technologies and AI likely will result in increased risks and a rise in civil litigation. Litigation may emerge over AI tools allegedly infringing on copyrights. AI-based pricing algorithms may also face scrutiny for potential collusion and antitrust violations, or for discrimination and bias. More U.S. states are proposing and passing comprehensive AI and other laws that do not have broad financial institution or Gramm-Leach-Bliley Act-type exemptions, so there could be additional regulation. States also could continue efforts to pass new privacy laws addressing areas not currently regulated under federal law.

CNIL Publishes 2025-2028 Strategic Plan

On January 16, 2025, the French Data Protection Authority (“CNIL”) unveiled its strategic plan for 2025-2028, highlighting its priorities for the coming years. Summarized below are the four key focus areas outlined in the CNIL’s strategic plan:

Artificial Intelligence (“AI”): With respect to AI, the CNIL commits to: (1) collaborating with European and international partners to promote harmonized AI governance; (2) providing guidance to stakeholders, clarifying applicable rules and implementing effective and balanced regulation of AI; (3) raising public awareness of the challenges raised by AI and the importance of exercising individuals’ rights; and (4) ensuring AI systems comply with applicable rules, including by creating a methodology and tools allowing such monitoring throughout the lifecycle of an AI system, and collaborating with other data protection authorities on EU-wide monitoring actions.
Protection of Minors: Recognizing the vulnerabilities of children in digital environments, the CNIL will prioritize safeguarding their personal data. Key actions include: (1) strengthening requirements for online platforms to ensure age-appropriate protections; (2) promoting tools and resources to enhance children’s understanding of their digital rights; (3) allowing minors to effectively exercise their rights; and (4) engaging with educators, parents, and industry stakeholders to create safer digital spaces for minors.
Cybersecurity and Resilience: With increasing cyber threats targeting organizations and individuals, the CNIL will focus on: (1) strengthening cooperation with all cybersecurity stakeholders; (2) supporting businesses and individuals in enhancing their data security practices and in facing cyber risks; (3) advocating for privacy-by-design approaches to mitigate cybersecurity risks; and (4) conducting investigations and enforcing sanctions to reinforce compliance with data breach notification requirements under the EU General Data Protection Regulation.
Everyday Digital Life: Apps and Online Identity: To address the pervasive role of technology in daily life, the CNIL commits to: (1) continuing the implementation of its apps strategy to protect individuals’ privacy, including by raising public awareness of the importance of privacy, monitoring the compliance of apps with applicable rules, and updating its guidelines for professionals working with apps; and (2) monitoring the development of apps and encouraging companies to adopt user-centric approaches that respect privacy.

Read the CNIL’s press release and strategic plan (in French).

Mr. Robot Goes To Washington: The Shifting Federal AI Landscape Under the Second Trump Administration

President Trump’s inauguration on January 20, 2025, has already resulted in significant changes to federal artificial intelligence (AI) policy, marking a departure from the regulatory frameworks established during the Biden administration. This shift promises to reshape how businesses approach AI development, deployment, governance, and compliance in the United States.
Historical Context and Initial Actions
The first Trump administration (2017–2021) prioritized maintaining US leadership in AI through executive actions, including the 2019 Executive Order (EO) on Maintaining American Leadership in Artificial Intelligence and the establishment of the National Artificial Intelligence Initiative Office. This approach emphasized US technological preeminence, particularly in relation to global competition.
For its part, the Biden administration’s approach to AI development emphasized “responsible diffusion” — allowing AI advances and deployment while maintaining strategic control over frontier capabilities.
In a swift and significant move, President Trump revoked President Biden’s October 2023 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence on his first day in office. This action signals a clear pivot toward prioritizing innovation and private sector growth and development over regulatory oversight and AI safety (or at least a move away from government mandates and toward market-driven safety measures).
Emerging Policy Priorities
Several key priorities have emerged that will likely shape AI development under the second Trump administration:

Focus on National Security: The Trump administration’s EO framed AI development as a matter of national security, particularly with respect to competition with China. This is an area where the administration is likely to enjoy bipartisan support.
Energy Infrastructure: President Trump’s declaration of a national energy emergency on his first day in office highlights the administration’s recognition of AI’s substantial computational and energy demands. On his second day in office, President Trump followed the declaration with the announcement of a private-sector $500 billion investment in AI infrastructure assets code-named “Project Stargate,” with the first of the project’s data centers already under construction in Texas.
Defense Integration: Increased military spending on AI capabilities and the administration’s focus on military strength indicate an emphasis on accelerated development of defense-related AI applications.

Regulatory Shifts and Business Impacts
The new administration’s approach signals several potential changes to the AI regulatory landscape:

Federal Agency Realignment: Key agencies like the Federal Trade Commission may relax their focus on consumer protection to allow more free market competition and innovation.
Preemption Considerations: The administration might pursue federal legislation to create uniform standards that preempt the current patchwork of state and local AI laws and regulations.
International Engagement: Restrictions on international AI collaboration and technology sharing, particularly regarding semiconductor exports used for AI (which had already been tightened under the Biden administration), are likely to be enhanced.

Strategic Planning Considerations
The AI policy shift creates new imperatives for business leaders, including:

Multi-jurisdictional Compliance: Despite potential reduced federal oversight, businesses must maintain compliance with any applicable federal, state, and local regulations and international requirements, including the EU AI Act for those organizations doing business in EU countries.
Investment Strategy: Changes in federal policy and potential international trade restrictions could transform AI development costs, investment patterns, and technology budgets.
Risk Management: Businesses should maintain robust internal governance frameworks regardless of regulatory requirements, particularly considering the ongoing operational and reputational risks.

Looking Ahead
While specific policy developments remain in flux, the Trump administration’s emphasis on technological leadership and reduced regulatory oversight suggests a significant departure from previous approaches. The continued integration of AI into critical business functions, however, necessitates continued attention to responsible development and deployment practices, even as the regulatory landscape evolves.
Businesses should stay informed of policy developments while maintaining robust AI governance and compliance frameworks that can adapt to changing federal priorities and satisfy any applicable legal and regulatory obligations and standards.

Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity and Potential Implications Under the Trump Administration

On January 16, 2025, President Joe Biden signed the “Executive Order on Strengthening and Promoting Innovation in the Nation’s Cybersecurity.” This directive seeks to tackle the increasingly complex and evolving cybersecurity threats confronting the United States. From nation-state actors to sophisticated cybercriminal organizations, the U.S. faces unprecedented challenges to its critical infrastructure, government systems, and private sector networks. The executive order outlines a multifaceted strategy aimed at safeguarding the nation’s digital landscape while encouraging innovation and collaboration in cybersecurity technologies.
However, the future of this order has come into question following President Donald Trump’s inauguration on January 20, 2025. President Trump has shown a readiness to reassess policies set by his predecessor, including the potential revocation of previous executive orders. This client alert offers a summary of President Biden’s cybersecurity order, explores potential implications under the Trump administration, and provides guidance for businesses navigating this uncertain regulatory landscape.
Overview of President Biden’s Executive Order
President Biden’s executive order is a comprehensive initiative aimed at addressing the most pressing challenges in cybersecurity. The directive outlines crucial measures that federal agencies, contractors, and private sector partners are required to adopt to enhance their resilience against cyber threats. Key components of the order include:
Development of Minimum Cybersecurity Standards
The order requires the development of baseline cybersecurity standards for federal contractors and suppliers. These standards encompass requirements for multi-factor authentication (MFA), endpoint detection and response (EDR) systems, and the encryption of sensitive data both in transit and at rest. Contractors must demonstrate compliance to secure or maintain government contracts.
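As a purely illustrative example of one of these controls, the following sketch shows symmetric encryption of sensitive data at rest using the widely used Python cryptography library. The sample data and key handling are hypothetical; the executive order sets standards at the policy level, and a production system would manage keys through a dedicated secrets manager or hardware security module.
```python
from cryptography.fernet import Fernet

# Generate a symmetric key (in practice, store and rotate this via a secrets
# manager or HSM rather than generating it alongside the data).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a hypothetical sensitive record before writing it to disk...
ciphertext = fernet.encrypt(b"contractor SSN: 000-00-0000")

# ...and decrypt it only when an authorized process needs the plaintext.
plaintext = fernet.decrypt(ciphertext)
assert plaintext == b"contractor SSN: 000-00-0000"
```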
Enhanced Public-Private Collaboration
Acknowledging the interconnected nature of the public and private sectors, the order establishes a framework for improved information sharing. Federal agencies are directed to share threat intelligence and vulnerability information with private entities to enable faster responses to emerging threats.
Sanctions on Foreign Cyber Actors
To deter nation-state-sponsored cyberattacks, the executive order allows for sanctions against foreign actors targeting U.S. entities, including critical infrastructure such as health care facilities and energy systems. This provision underscores the administration’s commitment to holding adversaries accountable for malicious cyber activities.
Quantum-Resistant Cryptography
The order prioritizes transitioning federal systems to quantum-resistant cryptographic algorithms to safeguard sensitive data from future quantum computing threats. Agencies are required to develop implementation plans and timelines for this transition.
Artificial Intelligence in Cybersecurity
The executive order calls for pilot programs to investigate the use of artificial intelligence (AI) in cybersecurity applications, particularly in the energy sector. These programs seek to leverage AI for real-time threat detection, automated responses, and enhanced incident recovery.
Potential Impacts Under the Trump Administration
The Trump administration’s approach to cybersecurity remains uncertain, but early signs indicate possible adjustments to Biden’s executive order. In its first term, the administration focused on minimizing regulatory burdens and encouraging industry-led solutions, an orientation that may influence the implementation of this directive.
Adjustments to Cybersecurity Standards
The administration may choose to implement less prescriptive cybersecurity requirements, encouraging businesses to adopt voluntary best practices rather than enforceable mandates for federal contractors. This could lead to greater flexibility but might also introduce variability in security practices.
Reevaluation of Quantum-Resistant Cryptography
While quantum-resistant cryptography addresses long-term risks, the administration might prioritize immediate cybersecurity challenges, potentially delaying the transition to quantum-resistant algorithms.
Focus on Targeted Sanctions
The Trump administration may refine its sanctions policy to focus on specific high-impact cases rather than broad deterrence, which could influence the overall effectiveness of this measure.
Shifts in Public-Private Collaboration
Efforts to enhance public-private collaboration may evolve, with businesses potentially taking on a larger role in independently managing cybersecurity risks. This could lessen the emphasis on centralized federal support for information sharing.
Guidance for Companies
In light of these developments, businesses must proactively adapt to an evolving cybersecurity landscape. Regardless of whether the executive order remains in effect, organizations should prioritize cybersecurity to mitigate risks and uphold resilience. Below are suggested actions for companies:
Strengthen Internal Cybersecurity Measures

Conduct a thorough assessment of existing cybersecurity protocols to identify vulnerabilities and opportunities for enhancement.
Implement multi-factor authentication (MFA), endpoint detection and response (EDR) tools, and robust encryption practices to protect sensitive data.
Develop and test incident response plans to ensure rapid recovery from cyber incidents.

Monitor Regulatory Changes

Stay updated on possible changes to the executive order and associated cybersecurity policies from the Trump administration.
Engage with legal and compliance teams to assess the effects of regulatory changes on business operations.
Monitor state and international regulations to ensure compliance with relevant standards.

Invest in Cybersecurity Innovation

Investigate emerging technologies, such as AI-driven cybersecurity tools, to enhance threat detection and response capabilities.
Evaluate the feasibility of transitioning to quantum-resistant cryptographic algorithms, even in the absence of federal mandates.
Collaborate with industry partners to embrace innovative solutions and exchange best practices.

Foster Public-Private Partnerships

Engage in information-sharing initiatives like the Cybersecurity and Infrastructure Security Agency’s (CISA) programs to remain informed about threat intelligence.
Promote policies that encourage collaboration between the public and private sectors to strengthen collective security.

Prepare for Geopolitical Risks

Monitor geopolitical developments and their potential impact on cyber threats, particularly those originating from nation-states.
Strengthen supply chain security to reduce risks associated with foreign adversaries.
Conduct tabletop exercises to simulate responses to nation-state cyberattacks.

Implications for the Private Sector
The uncertainty surrounding the executive order underscores the necessity for businesses to adopt a proactive and flexible approach to cybersecurity. Key implications include:
Increased Responsibility on Businesses
With potential adjustments to federal oversight, companies may need to be more proactive in managing their cybersecurity risks. Implementing strong internal policies and investing in advanced security technologies will be crucial.
Fragmented Regulatory Environment
If federal mandates are modified, businesses may face a patchwork of state and international regulations. Navigating this fragmented landscape will demand considerable resources and expertise.
Heightened Cyber Threats
The evolving threat landscape, along with potential policy changes, could make critical infrastructure and private networks more vulnerable to sophisticated attacks. Companies must remain vigilant and prepared to respond to emerging threats.
Competitive Differentiation
Organizations that prioritize cybersecurity and demonstrate a commitment to protecting customer data may gain a competitive advantage in the market. Establishing trust with stakeholders through transparency and robust security measures will be crucial.
Final Thoughts
President Biden’s executive order marks a significant advancement in addressing the nation’s cybersecurity challenges. However, its future under the Trump administration remains uncertain, with the potential for policy adjustments. Businesses must navigate this evolving landscape by bolstering internal measures, staying updated on regulatory shifts, and investing in innovation.
While the federal government’s role in cybersecurity may evolve, the responsibility for safeguarding critical systems and data ultimately rests with the private sector. By implementing proactive strategies and encouraging collaboration, companies can enhance their resilience against cyber threats and contribute to a more secure digital ecosystem.
For additional information about President Biden’s executive order, check out President Biden Issues Second Cybersecurity Executive Order.

Key Developments in German Labor and Employment Law for 2025

Labor and employment law in Germany will see a number of important developments in 2025. The Fourth Bureaucracy Reduction Act (BEG IV) took effect on January 1; the EU AI Act’s initial provisions on unauthorized AI take effect on February 2; and the Self-Determination Act took effect late last year.
Important developments are also on the near horizon regarding pay equity and professional validation for experienced employees who lack degrees. The following is a summary of some of the key developments for employers to put on their radars.
Quick Hits

The Fourth Bureaucracy Reduction Act, effective January 1, 2025, simplifies the requirements for documenting essential terms of employment, now allowing digital agreements for open-ended contracts while maintaining the written-form requirement for fixed-term contracts.
The EU’s AI Act, effective from August 2024, introduces regulations on AI systems, with initial provisions on unauthorized AI use starting February 2, 2025, and further regulations on general-purpose AI models and sanctions taking effect on August 2, 2025.
The Self-Determination Act, effective late 2024, mandates that employers update relevant documents for transgender, intersex, and nonbinary employees upon request, with fines of up to EUR 10,000 for noncompliance.

Bureaucracy Reduction Act IV
The Fourth Bureaucracy Reduction Act (BEG IV) took effect on January 1, 2025. The aim is to reduce bureaucratic hurdles and relieve the burden on employers. Here is an overview of the most important points:
Simplification of the Formal Requirement of the Evidence Act
The formal requirements of the Evidence Act introduced in 2022 will be partially simplified by the BEG IV. Significant terms and conditions of employment and changes to them will no longer have to be made in writing (i.e., signed by hand) but can be drawn up and transmitted in text form. This means that permanent employment contracts can be concluded completely digitally if the employment contract is agreed in text form. In the future, an email with a scanned signature may be sufficient for an open-ended employment contract. For text form to suffice, the essential terms and conditions of employment must be capable of being stored, accessed, and printed by the employee. In addition, the employer must ask the employee to confirm receipt of the transmission. However, these changes do not apply to certain sectors—listed in § 2a of the Act to Combat Clandestine Employment (“SchwarzArbG”)—for which a handwritten signature is still required.
Fixed-Term Contracts Remain Strictly Regulated
While there are simplifications for open-ended contracts, the written form is still required for fixed-term contracts. Such agreements must still be set out in writing to ensure legal certainty. A purely electronic fixed-term agreement therefore remains invalid and open to challenge in 2025. The exception here is the fixed term for reaching retirement age, for which the text form will also be sufficient. In this respect, the legislator has taken on board the criticisms of the draft law. If the written form had also been maintained for the contractual clause on termination of the contract upon reaching retirement age, no employer would have been able to use the bureaucratic relief without concern. As a precautionary measure, almost all open-ended contracts contain a termination clause for reaching retirement age, as the employment relationship does not automatically end when employees (can) draw a pension.
Job References in Electronic Form Permitted
A new development is that employers may now issue employment references electronically, provided that a qualified electronic signature is used and the employee agrees. To check the validity of the e-signature on a PDF, the EU Commission has created a general verification tool. Whether electronic employment references will be accepted in practice remains to be seen, and not only because of the more involved procedure. If an employment reference is signed electronically, the time of the electronic signature is permanently recorded, so the customary backdating to the leaving date is not possible. A discrepancy between the date of issue and the date of signature, and the conclusions it might invite about a supposedly nonconsensual separation, could then not be concealed. Employees can, however, continue to request the traditional paper form if they prefer it.
No Changes to Termination Notices and Termination Agreements
The written form (wet ink signature) continues to apply to notices of termination and termination agreements. These must be signed by hand—electronic formats are expressly excluded here.
First Provisions of EU AI Act Take Effect
The European Union’s AI Act came into force at the beginning of August 2024 as the world’s first comprehensive law on the regulation of artificial intelligence. The regulation classifies AI systems according to their risk and sets standards and requirements for them accordingly. Most of the obligations are aimed at the developers of the systems, although users are also subject to obligations. The EU regulation does not need to be transposed into national law, and its individual provisions will come into force in stages over the next few years.
The provisions on the unauthorized use of artificial intelligence apply first, beginning on February 2, 2025. Art. 5 of the AI Act lists various types of AI-based practices whose use is generally prohibited. Examples include systems for social scoring or for monitoring emotions in the workplace. In the view of the EU legislator, these practices violate central European values, above all fundamental rights, and pose an unacceptable risk.
One year after the regulation entered into force, on August 2, 2025, the rules on AI models with a general purpose take effect. Such models are not limited to a single application purpose but are capable of competently performing a wide range of different tasks. The providers of such models must ensure that copyrights are observed, keep detailed technical records of the development and testing of their AI, and make these available to other companies that wish to use their model. For providers of AI models that are open source and freely available to the public and do not pose a systemic risk, reduced obligations apply: they must comply with copyright law and publish a summary of the training data.
In addition, as of August 2, 2025, the sanctions provisions of the AI Act will apply, apart from fines for providers of AI models with a general purpose.
Self-Determination Act
Shortly before the turn of the year, the Self-Determination Act (SBGG) came into force. It makes it easier for transgender, intersex, and nonbinary people to change their gender entry and their first names in the civil status register. This also has implications for the employment relationship. Employers are obliged to amend all relevant documents at the request of the employees concerned. This includes, for example, employment contracts, certificates, performance records, and pay statements.
If the gender entry or first name has been changed, previous details may not be disclosed or researched without the consent of the employees concerned. A violation of this prohibition of disclosure can result in a fine of up to EUR 10,000.
New Attempt to Amend the Pay Transparency Act
The EU Directive on pay transparency (Directive (EU) 2023/970) aims to reduce gender pay gaps and promote equal pay. It has been in force since May 17, 2023, and must be transposed into national law by June 7, 2026. A draft bill had been announced for summer 2024 but had not been published by the end of the year. In view of the necessary lead time for a legislative procedure and the preparation time to be granted to companies for far-reaching changes, it is to be expected that a new federal government will address the matter promptly. Germany will probably closely follow the directive when implementing it.
The directive will require employers to provide applicants with information on starting salaries or salary ranges—either directly in job advertisements or at the latest before an interview. Employers, however, will no longer be permitted to ask applicants about their salary history. Employees are also entitled to comprehensive information on pay criteria, individual remuneration, and average salaries, broken down by gender and employee group. This information must be provided in writing within two months, regardless of the size of the company.
The reporting obligations will also be expanded: Companies with at least one hundred employees will have to prepare regular reports on pay-related indicators such as the gender pay gap. If such reports identify an unjustifiable gender pay gap of more than 5 percent, a pay assessment is required, usually carried out together with the works council or an alternative body.
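To make the reporting threshold concrete, the sketch below computes an unadjusted gender pay gap per employee group and flags any group above the 5 percent mark. The group names and salary figures are hypothetical, and the directive’s actual indicators and methodology will be defined in the implementing legislation.
```python
from statistics import mean

# Hypothetical salary data per employee group (illustrative only).
salaries = {
    "engineering": {"women": [58_000, 61_000, 60_000], "men": [64_000, 66_000, 63_000]},
    "sales": {"women": [52_000, 54_000], "men": [53_000, 55_000]},
}

for group, data in salaries.items():
    women_avg, men_avg = mean(data["women"]), mean(data["men"])
    gap = (men_avg - women_avg) / men_avg            # unadjusted gender pay gap
    status = "pay assessment indicated" if gap > 0.05 else "within threshold"
    print(f"{group}: gap {gap:.1%} ({status})")
```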
The directive stipulates serious consequences for violations: Those affected can claim damages or compensation, and employers bear the burden of proof that there has been no discrimination. In addition, fines may be imposed. Trade unions and anti-discrimination bodies are given the right to actively support those affected or to sue on their behalf.
Employers may want to start preparing early to comply with these upcoming requirements.
Professional Validation: New Opportunities for Experienced Employees Without a Degree
The Vocational Training Validation and Digitization Act (BVaDiG), which took effect on August 1, 2024, creates the possibility of officially recognizing the professional skills of people without a formal qualification as of January 2025. The aim is to assess skills against the training regulations for a reference occupation and to certify, in a Chamber of Industry and Commerce (IHK) certificate, the extent to which they are comparable with completed vocational training. Important: Professional validation is not available for advanced training qualifications such as the master craftsman.
The procedure is aimed at adults who:

have several years of professional experience;
do not have a formal vocational qualification in their profession;
are seeking proof of their professional competencies; and
for whom an external examination is currently not an option.

The BVaDiG offers employers a valuable opportunity to better utilize the potential of their workforce. The validation process gives long-serving employees who do not have a formal qualification the chance to have their skills officially recognized. This not only boosts employees’ motivation, but also increases their opportunities for deployment in the company.
In addition, employers can use professional validation to secure existing know-how within the company and close potential gaps in the supply of skilled workers.
Additional Employment Law–Related Developments
As has become customary in recent years, the minimum wage will be raised on January 1, 2025. It will rise from EUR 12.41 per hour to EUR 12.82 gross. At the same time, the mini-job threshold will increase from EUR 538 to EUR 556 gross. The proposals for the further development of the statutory minimum wage from the independent Minimum Wage Commission are expected in June 2025.
The Growth Opportunities Act amends Section 34 of the Income Tax Act, making it easier for employers to account for severance payments. Previously, the tax benefit of severance pay as a large one-off payment was only partially considered by the employer in payroll accounting. As of the beginning of the new year, employers no longer have to apply the so-called fifth rule to severance payments in payroll and can process them without special treatment. However, employees can still claim the privileged treatment of severance pay in their income tax assessment.
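For readers unfamiliar with the mechanics, the sketch below illustrates the fifth rule’s arithmetic: the severance is notionally spread over five years, and the resulting marginal tax is multiplied by five. The tax function here is a made-up two-bracket placeholder, not the actual German income tax tariff, so the figures are illustrative only.
```python
def income_tax(taxable_income: float) -> float:
    """Placeholder progressive tariff (NOT the real Section 32a tariff)."""
    if taxable_income <= 50_000:
        return taxable_income * 0.25
    return 50_000 * 0.25 + (taxable_income - 50_000) * 0.42

def tax_with_fifth_rule(regular_income: float, severance: float) -> float:
    # Tax on regular income, plus five times the extra tax triggered by
    # one fifth of the severance (the "Fuenftelregelung" mechanic).
    base = income_tax(regular_income)
    extra = income_tax(regular_income + severance / 5) - base
    return base + 5 * extra

regular, severance = 45_000, 100_000
print(tax_with_fifth_rule(regular, severance))   # severance taxed under the fifth rule
print(income_tax(regular + severance))           # severance taxed in full, for comparison
```
Under this toy tariff the fifth rule yields a lower total than taxing the severance in full, which is the relief the employee can still claim in the income tax assessment.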
For births from April 1, 2025, the income limit above which parents are no longer entitled to parental allowance will fall from EUR 200,000 to EUR 175,000. This limit applies equally to couples and single parents. Furthermore, parents may receive basic parental allowance simultaneously for a maximum of one month at a time, and only within the first twelve months of the child’s life.
The new version of the Postal Act gives the postal service more time to deliver letters. Previously, 95 percent of letters had to arrive two working days after posting and 80 percent on the following working day. Section 18 I PostG now stipulates that only 95 percent of all letters must be delivered on the third working day after posting and 99 percent on the fourth. According to Deutsche Post AG, the standard for ordinary letters will shift so that they will generally be delivered on the working day after next. At the same time, Deutsche Post AG increased postage prices on January 1.
The rates for the minimum training allowance will also be increased at the turn of the year. In the first year of training, apprentices will receive EUR 682 per month (2024: EUR 649), and in the second year EUR 805 instead of the previous EUR 766. In the third year of training, this will be at least EUR 921 (2024: EUR 876), and in the fourth year, the prospective skilled workers can expect to receive at least EUR 955 (2024: EUR 909).
For various products placed on the market after June 28, 2025, as well as for various services provided to consumers after June 28, 2025, the provisions of the Accessibility Act (“BFSG”) will then apply. Among other things, online commerce, e-commerce services, and electronic communication services must be more accessible. Small companies are exempt from the obligations.
The German government has decided to double the maximum period of entitlement to short-time working allowance to twenty-four months. This regulation came into force on January 1 and is limited until December 31, 2025. From 2026, the regular maximum entitlement period of twelve months will apply again. Entitlements that extend beyond this period will expire at the end of December 31, 2025. With this measure, the German government is responding to the increase in short time working in Germany.

Potential Changes in the Regulation of Artificial Intelligence in 2025

On January 20, 2025, within hours of taking the oath of office for the second time, President Donald Trump issued an executive order that revoked an executive order from the prior administration regarding the use of artificial intelligence (AI). During the first term of President Trump from 2017 until 2021, the use of AI, in particular generative AI, was not as prevalent as it is today. As such, the number of regulations of AI at the federal, state, or local level was more limited as well. The significant technology advance and adoption over the past four years has impacted government regulation and will likely continue to do so.
On the first day of his second term in office, President Trump issued an executive order titled “Initial Rescissions of Harmful Executive Orders and Actions.” It reads that to “commence the policies that will make our Nation united, fair, safe, and prosperous again, it is the policy of the United States to restore common sense to the federal government and unleash the potential of the American citizen.” In connection with that, the executive order revokes more than 50 prior executive orders, including Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence).
Executive Order 14110 outlined its purpose as “governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government–wide approach to doing so.” The executive order also outlined eight “guiding principles and priorities”: 

Making AI safe and secure
Promoting “responsible innovation, competition, and collaboration” 
Committing to supporting American workers in the development and use of AI
Advancing equity and civil rights with AI
Protecting the interest of Americans using AI and AI-enabled products in their daily lives
Protecting Americans’ privacy and civil liberties
Managing the risks from the federal government’s own use of AI
Engaging with international allies and partners in developing a framework to manage AI’s risks, unlock AI’s potential for good, and promote common approaches to shared challenges.

Executive Order 14110 was one of many executive orders issued in the past four years. Other prior executive actions on AI include the October 2024 “Framework to Advance AI Governance and Risk Management in National Security”; the U.S. Office of Special Counsel’s principles and policies for the use of AI; and, most recently, a January 2025 executive order on federal support for AI data centers.
Chain of Executive Orders on AI
In his first term in office, President Trump issued Executive Order 13859 in February 2019, “Maintaining American Leadership in Artificial Intelligence,” under which agencies were encouraged to continue to use AI, when appropriate, to benefit the American people. A follow-on order from December 2020, Executive Order 13960 on “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government,” added that “Agencies must therefore design, develop, acquire, and use AI in a manner that fosters public trust and confidence while protecting privacy, civil rights, civil liberties, and American values, consistent with applicable law and the goals of Executive Order 13859.” That order outlined nine principles for the use of AI in government: 

Lawful and respectful of the Nation’s values
Performance driven
Accurate
Safe and secure
Understandable
Responsible and traceable
Regularly monitored
Transparent
Accountable. 

Many of these same principles can be found in the now-revoked October 2023 Executive Order 14110, the main difference being that the latter addressed the implementation of AI in both the private sector and the government, while the first Trump administration’s orders focused only on the government’s implementation of artificial intelligence.
The recent executive order revoking the 2023 Executive Order 14110 appears to reflect this difference, as other executive orders on AI that focused on the government’s use of AI were not revoked. This concern about regulation of the private sector can be found in the 2024 Republican Party platform on AI, which noted that certain executive orders hindered AI innovation and that they “support AI Development rooted in Free Speech and Human Flourishing.” Potentially as a sign of things to come, CEOs of large technology companies such as Amazon, Meta, Google, and Tesla were in attendance at the recent inauguration. 
Conclusion
It is worth noting that the recent executive order states that the “revocations within this order will be the first of many steps the United States federal government will take to repair our institutions and our economy.” Thus, it remains to be seen how future government action at the federal level in 2025 will impact the regulation of artificial intelligence going forward.

5 Trends to Watch: 2025 Trade Secrets

Large Damage Awards May Face Scrutiny. In 2024, courts issued several significant decisions concerning damages awards in trade secret misappropriation cases. Three recent federal court decisions overturned exceptionally large damages awards on the ground that the plaintiff failed to prove causation between the proven misappropriation and the claimed damages. These cases illustrate that plaintiffs seeking recovery for misappropriation still have powerful tools at their disposal, including the potential for large damages awards and injunctive relief. But plaintiffs should take care to show the causal nexus between the claimed damages and the proven misappropriation at trial.
Noncompete Enforceability Limited. The Federal Trade Commission (FTC) under President Trump’s nominee, Andrew Ferguson, may well rescind the FTC ban on noncompete agreements and withdraw appeals in the Fifth and Eleventh Circuits. This action may prompt states that have allowed their own proposed noncompete legislation to languish (e.g. New York, Illinois) to refocus on narrowing the ability of employers to impose such restrictions. The health care space is expected to see more states narrowing the enforceability of noncompetes as well (such as Rhode Island, Pennsylvania, Maryland, Iowa). Within this unpredictable and inconsistent landscape concerning the enforcement of noncompetes, it is critical for companies to protect and defend their trade secrets.
Plaintiffs Prevail in Trade Secret Trials. A recent analysis of federal trade secret cases that go to trial may be either alarming or heart-warming, depending on which side clients find themselves on (see Stout’s “Trends in Trade Secret Litigation Report,” Nov. 4, 2024). The report’s findings include that of 271 federal trade secret cases that went to verdict since 2017, 84% were decided in favor of the claimant. This may mean that more trade secret misappropriation claims will be filed, and that settlements before verdict will become more likely. Such settlements may reach higher amounts, further ratcheting up filings. As more cases are tried to verdict, the odds may stabilize toward more even outcomes, but clients and counsel should take note of these numbers.
The AI Revolution Could Dramatically Affect Trade Secrets. AI may create innumerable systems, algorithms and other material that constitute trade secrets, raising a host of issues, like who owns them and how to protect them. AI also poses a threat to owners of trade secrets that can be reverse-engineered by AI but perhaps not nearly as easily (or at all) by a human. We expect the law to evolve to address ownership of trade secrets created by AI and to bolster protection against AI-generated reverse-engineering.
Foreign Damages Are Available for Trade Secret Misappropriation. In 2024, the Seventh Circuit held that the federal Defend Trade Secrets Act (DTSA) has extraterritorial reach. This was the first circuit court in the country to find this explicitly. In so holding, the Seventh Circuit affirmed a nine-figure compensatory damages award that consisted entirely of the defendant’s foreign sales. All that is necessary to obtain foreign damages is an “act in furtherance” of the misappropriation in the United States, such as advertising products at a trade show that make use of the misappropriated information. Importantly, proximate causation between the act in the United States and damages is not required. The Seventh Circuit’s decision is likely to lead to an uptick in discovery battles over foreign damages and has the potential to increase damages in trade secret cases.

Bryan Harrison also contributed to this article.

Managing Artificial Intelligence: The Monetary Authority of Singapore’s Recommendations on AI Model Risk Management

This publication is issued by K&L Gates Straits Law LLC, a Singapore law firm with full Singapore law and representation capacity, and to whom any Singapore law queries should be addressed. K&L Gates Straits Law is the Singapore office of K&L Gates, a fully integrated global law firm with lawyers located on five continents.
Introduction and Background
On 5 December 2024, as part of the Monetary Authority of Singapore’s (MAS) incremental efforts to ensure responsible use of artificial intelligence (AI) in Singapore’s financial sector, MAS published recommendations on AI model risk management in an information paper[1] following a review of AI-related practices of selected banks.
MAS stressed that the good practices highlighted in the information paper should also apply to other financial institutions. This alert briefly outlines the paper’s key recommendations across the three focus areas that MAS expects banks and financial institutions to keep in mind when developing and deploying AI: (1) oversight and governance of AI, (2) key risk management systems and processes for AI, and (3) development, validation and deployment of AI.
Key Focus Area 1: Oversight and Governance of AI[2]
Existing risk governance frameworks and structures (such as those related to data, technology and cyber; third-party risk management; and legal and compliance) remain relevant for AI governance and risk management. In tandem with these existing control functions, MAS deems it good practice for banks to do the following: 

Establish cross-functional oversight forums to avoid gaps in AI risk management and to ensure that the bank’s standards and processes are aligned across the bank and keep pace with the state of the bank’s AI usage.
Update control standards, policies, and procedures to keep pace with the increasing use of AI and new AI developments, including performance testing of AI for new use cases, and clearly set out roles and responsibilities for addressing AI risk.
Develop clear statements and guidelines to govern areas such as fair, ethical, accountable and transparent use of AI across the bank to prevent potential harms to consumers and other stakeholders arising from the use of AI.
Build capabilities in AI across the bank to support both innovation and risk management.

Key Focus Area 2: Key Risk Management Systems and Processes[3]
From its review, MAS also recognised the need for banks to establish or update key risk management systems and processes for AI, particularly in the following areas: 

Policies and procedures for identifying AI usage and risk across the bank, so that commensurate risk management can be applied to the respective AI model.
Systems and processes to ensure the completeness of a bank’s AI inventories, which also capture the approved scope of use for that particular AI (e.g., the purpose, use case, application, system and other relevant conditions) and provide a central view of AI usage to support oversight.
Assessment of the risk materiality of AI that covers key risk dimensions, such as AI’s impact on the customer, bank and stakeholders; the complexity of AI model or system used; and the bank’s reliance on AI, which takes into account the autonomy granted to AI and the involvement of humans, so that relevant controls can be applied proportionately. 

Key Focus Area 3: Development and Deployment of AI[4]
Most banks have established standards and processes for development, validation and deployment of AI to address key risks. MAS deems it good practice for banks and financial institutions to do the following:

In relation to the development of AI, to focus on data management, model selection, robustness and stability, explainability and fairness, as well as reproducibility and auditability. 
In relation to the validation of AI, to require independent validations or reviews of AI of higher risk materiality prior to deployment and to ensure that development and deployment standards have been adhered to. For AI of lower risk materiality, most banks conduct peer reviews that are calibrated to the risks posed by the use of AI prior to deployment. 
In relation to the deployment, monitoring and change management of AI, to perform predeployment checks, closely monitor deployed AI based on appropriate metrics, and apply the appropriate change management standards and processes to ensure that AI would behave as intended when deployed.

Generative AI and Third-Party AI[5]
MAS has noted that the use of generative AI is still in its early stages in banks and financial institutions. However, MAS suggests that banks and financial institutions should generally try to apply existing governance and risk management structures and processes where relevant and practicable. Innovation and risk management should be balanced by adopting the following: 

Strategies and approaches, in which a bank leverages on the general-purpose nature of generative AI for key enabling modules or services, but limits the current scope of generative AI to use cases for assisting or augmenting human and operational efficiencies that are not directly customer-facing. 
Process controls, such as setting up cross-functional risk control checks at key stages of the generative AI’s life cycle and requiring human oversight for generative AI decisions with attention on user education and training on the limitations of generative AI tools.
Technical controls, such as selection, testing and evaluation of generative AI models in the bank’s use cases; developing reusable modules to facilitate testing and evaluation; assessing different aspects of generative AI model performance and risks; establishing input and output filters as guardrails to address toxicity, bias and privacy issues (a simple output filter is sketched after this list); and mitigating data security risk via measures such as the use of private clouds or on-premise servers and limiting the access of generative AI to sensitive information.

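As a minimal illustration of the output-filter idea mentioned above, the sketch below redacts simple personal-data patterns from a generative AI response before it is surfaced to staff. The patterns and function names are hypothetical, and real guardrails would typically layer several controls (toxicity classifiers, allow/deny lists, and data-loss-prevention tooling) rather than a single regex pass.
```python
import re

# Hypothetical redaction patterns; a production filter would be far broader.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "nric_like_id": re.compile(r"\b[A-Z]\d{7}[A-Z]\b"),  # Singapore NRIC-style identifier
}

def filter_output(text: str) -> str:
    """Redact matching patterns from a model response before display."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

print(filter_output("Contact the client at jane.tan@example.com (NRIC S1234567D)."))
```
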
In relation to third-party AI, existing third-party risk management standards and processes continue to play an important role in banks’ efforts to mitigate risks. As far as practicable, MAS suggests that banks extend controls for internally developed AI to third-party AI. Banks should also supplement controls for third-party AI with other approaches to mitigate additional risks. These include the following:

Conducting compensatory testing to verify the third-party AI model’s robustness and stability and detect potential biases.
Developing robust contingency plans to address potential failures, unexpected behaviour of third-party AI, or the discontinuation of vendor support.
Updating legal agreements and contracts with third-party AI providers to include clauses that provide for performance guarantees, data protection, the right to audit and notification when AI is introduced in third-party providers’ solutions to the banks and financial institutions.
Improving staff training on AI literacy, risk awareness, and risk mitigation. 

Conclusion
In conclusion, MAS has highlighted that robust oversight and governance of AI, supported by comprehensive identification of AI usage, complete AI inventories, and appropriate risk materiality assessments, along with development, validation and deployment standards, are the areas that financial institutions and banks will need to focus on when using AI. Financial institutions and banks will need to keep in mind that the AI landscape will continue to evolve, and existing standards and processes will need to be reviewed and updated in consultation with MAS and in line with industry best practices to ensure proper governance and risk management of AI and generative AI.

Footnotes

[1] “Artificial Intelligence Model Risk Management: Observations from a Thematic Review,” accessible at https://www.mas.gov.sg/publications/monographs-or-information-paper/2024/artificial-intelligence-model-risk-management (the Information Paper).
[2] Information Paper paras. 4.1–4.5.
[3] Information Paper paras. 5.1–5.3.
[4] Information Paper paras. 6.1–6.5.
[5] Information Paper paras. 7.1–7.2.

AI Tools on Trial: Emerging Litigation Trends Impacting AI-Powered Technologies

With the increase in AI-related litigation and regulatory action, it is critical for companies to monitor the AI technology landscape and think proactively about how to minimize risk. To help companies navigate these increasingly choppy waters, we’re pleased to present part one of our series discussing emerging legal trends. Future alerts in the series will cover:

Deep dives into regulatory activity at both the federal and state levels.
Risk mitigation steps companies can take when vetting and adopting new AI-based technologies, including chatbots, virtual assistants, speech analytics, and predictive analytics.
Strategies for companies that find themselves in court or in front of a regulator with respect to their use of AI-based technologies.

But first, some background.
AI on Trial: How Did We Get Here?
In October 2023, we wrote about the emerging legal risks impacting businesses using new technologies, such as AI-powered website chat functions. In Chat with Caution, we discussed a new wave of privacy litigation seeking to dramatically expand state wiretapping laws to encompass new customer service technologies, and we identified measures that companies should take to avoid being targeted.
During 2024, class action plaintiffs increased their focus on new technologies and wiretapping laws, and courts began to address some of the thornier legal issues as claims proceeded past initial pleading stages. In late October 2024, we wrote about a critical decision at the Massachusetts Supreme Judicial Court rejecting plaintiffs’ theory that wiretapping laws could be extended to website tracking technologies. Nonetheless, we noted that the decision in Massachusetts doesn’t help to resolve cases in courts in other states, especially in California, where judges may reach different conclusions.
Meanwhile, the Federal Trade Commission (FTC) recently launched “Operation AI Comply,” which the agency describes as a “new law enforcement sweep” targeting the use of new AI technologies in misleading or deceptive ways.
And, while the incoming administration may have different enforcement priorities regarding AI, Operation AI Comply is rooted in concerns about Big Tech that are shared by the leadership in both political parties.
Additionally, the FTC’s actions to date under Operation AI Comply are tied to its longstanding authority over deceptive trade practices, meaning that any substantial shift in focus is unlikely in the short term. Regulatory action is not limited to the federal level; state attorneys general are taking notice as well, including in litigation hotbeds such as Massachusetts and California.
Plaintiffs Break Through in Federal Court
Though many courts have rejected the argument that wiretapping laws apply to new technologies such as chatbots, including the recent decision by the Massachusetts Supreme Judicial Court in Vita v. New England Baptist Hospital, plaintiffs have found success in the U.S. District Court for the Northern District of California.
In Yockey v. Salesforce, plaintiffs survived a motion to dismiss by plausibly alleging that an undisclosed third-party chatbot service provider violated wiretapping laws because it both intercepted the chats before they arrived at the intended recipient (pharmacies with whom the customers thought they were chatting) and had the ability to use the intercepted chats for its own purposes, such as improving its own products and services and performing analytics.
Even though the pharmacies authorized the third party to provide a chatbot service, users were not made aware of this arrangement and did not consent to that third party receiving and transmitting their communications.
In courts that adopt a broad interpretation of wiretap laws, future cases could extend to other technologies beyond chatbots, such as scribing technologies, customer service center analytics and evaluation software, and other digital customer service tools.1
By the same token, future decisions could reject the reasoning in Yockey, at least with respect to technologies that simply record basic data points about a user’s behavior on a site but do not record the contents of a communication.
Even in courts that have rejected a broad interpretation of state wiretap laws, the risk of litigation remains, as plaintiffs shift their focus to federal wiretap laws paired with other state laws. This strategy was seen most recently in the amended complaint in Doe v. Atrius Health, Inc., where, in response to the Vita decision, the plaintiff replaced a Massachusetts Wiretap Act claim with a Federal Wiretap Act claim and six state law claims; the case was subsequently removed to federal court.2 A similar strategy was also adopted in McManus v. Tufts Medical Center, Inc.3
Moving claims to federal court, however, may not be a panacea, as plaintiffs are likely to face additional hurdles. A recent example can be found in the Ninth Circuit. In Daghaly v. Bloomingdales, LLC, the plaintiff argued that Bloomingdales’ use of advertising technologies on its website violated the California Invasion of Privacy Act (CIPA).
The Ninth Circuit affirmed dismissal of the case without reaching the question of whether CIPA applies. Because the data transmitted to third parties was limited to information about the site visit and did not include meaningful communications, the court found that the plaintiff had not met the injury threshold required to access federal courts.
The law in this area is quickly evolving, and courts will likely continue to adopt differing views. We will continue to monitor the evolving case law and share additional information in future alerts in this series.
1See, e.g., Class Action Complaint and Demand for Jury Trial, Paulino v. Navy Federal Credit Union, No. 24-cv-03298 (N.D. Cal. May 31, 2024) (customer call center technology). The case has since been voluntarily dismissed and, at the time of writing, no information could be found on subsequent filings by the plaintiff.
2See Defendant’s Notice of Removal, at 3, Doe v. Atrius Health, Inc., No. 1:25-cv-10020 (D. Mass. Jan. 3, 2025).
3No. 1:25-cv-10008 (D. Mass. Jan. 2, 2025).

New Jersey Division on Civil Rights Issues New Guidance on ‘Algorithmic Discrimination’

On January 9, 2025, New Jersey Attorney General Matthew J. Platkin and the New Jersey Division on Civil Rights (DCR) issued a thirteen-page “Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination.”

Quick Hits

The New Jersey Division on Civil Rights (DCR) issued guidance that explains how an employer’s use of automated decision-making tools can lead to algorithmic discrimination that violates the New Jersey Law Against Discrimination (NJLAD).
The guidance does not impose any new obligations on employers but reinforces the importance of NJLAD compliance and instructs that the NJLAD “draws no distinctions based on the mechanism of discrimination.”
Given the increasing use of AI tools to make employment decisions, the DCR explains that all “New Jerseyans [should] understand what these tools are, how they are being used, and the risks and benefits associated with their use.”

The DCR rolled out the guidance as part of the agency’s launch of a new Civil Rights and Technology Initiative “to address the risks of discrimination and bias-based harassment stemming from the use of artificial intelligence (AI) and other advanced technologies” and provide guidance concerning how the New Jersey Law Against Discrimination (NJLAD) applies to these new technologies. The guidance does not impose any new requirements that are not already included in the NJLAD or establish any new rights or obligations. However, given the DCR’s decision to release guidance on the topic, employers doing business in New Jersey may wish to audit their existing uses of AI to ensure that their policies and practices comply with the NJLAD. While AI technology can be complex, and an employer may not fully grasp how a particular tool generates results, the guidance reinforces that employers are fully responsible for the AI technology they utilize and may not delegate their compliance responsibilities to third parties.
What Are Automated Decision-Making Tools?
The guidance defines an “automated decision-making tool” as “any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process.” An automated decision-making tool might be used to determine whether a human resources professional will review a certain resume, hire a job applicant, or terminate an employee. The DCR referenced May 18, 2023, guidance from the U.S. Equal Employment Opportunity Commission (EEOC) in providing these examples. See “Select Issues: Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.”
The DCR explained that automated decision-making tools “accomplish their aims by using algorithms, or sets of instructions that can be followed, typically by a computer, to achieve a desired outcome.” Depending upon how the algorithms are designed, the tools “can create classes of individuals who will be either advantaged or disadvantaged in ways that may exclude or burden them based on their protected characteristics.” Given the role that algorithms play in the operation of these tools, the DCR defines any discrimination resulting from them as “algorithmic discrimination.”
Citing recent studies, the guidance explains how, for example, an automated decision-making tool that ranks job applicants of a particular race or gender more favorably (or less favorably) than applicants in another group could lead to discriminatory hiring. The DCR further explained that while these tools can also be used in a positive way to reduce bias and discrimination, given the risk of achieving the wrong outcome, employers must fully understand the mechanics of any AI tool upon which they rely to make employment decisions, including the risks and benefits involved.
How Do Automated Decision-Making Tools Lead to Discriminatory Outcomes?
The DCR acknowledges that it may not be easy to detect whether a particular automated decision-making tool might lead to discriminatory outcomes because the calculations made by these tools “can be invisible and not well understood.” Nevertheless, the agency explained that when discriminatory outcomes do arise, it is generally because of the way the tools are (1) designed, (2) trained, or (3) deployed.
Design
The guidance explains that a tool’s design may be intentionally or unintentionally skewed. The tool’s developer makes decisions about “the output the tool provides, the model or algorithms the tool uses, and what inputs the tool assesses.” Each of these decisions could introduce bias into the tool, which could then generate discriminatory outcomes. Referring to an example from an EEOC enforcement action, the agency explains how a tool was programmed to reject job applicants above a certain age, with the age cutoff differing by gender. The case was resolved with the company agreeing to stop requesting age-related information from applicants.
Training
The DCR explained that before an automated decision-making tool is used in a real-world environment, the tool must be “trained.” This training “occurs by exposing the tool to training data from which the tool learns correlations or rules.” If the training data that is relied upon reflects the developer’s own biases, or otherwise reflects institutional inequities, the tool can become biased through the training process itself.
Deployment
Finally, the guidance explains that algorithmic discrimination can occur once the tool is deployed in the real world. If, for example, the employer intentionally uses the tool on members of a particular protected class, doing so can lead to purposeful discrimination. Or “[i]f a tool is used to make decisions that it was not designed to assess, its deployment may amplify any bias in the tool and systemic inequities that exist outside of the tool.” Real-world use of the tool may also surface biases that did not appear during testing. If the tool is flawed, it can contribute to discriminatory decisions that are then fed back into the tool for further training. “Each iteration of this loop exacerbates the tool’s bias.”
NJLAD ‘Draws No Distinctions’ Based on Discrimination Mechanism
The DCR concluded its guidance by reinforcing the NJLAD’s prohibitions on employment discrimination. Whether prohibited discrimination occurs through the actions of a “live” human being or through the decisions of an AI tool is immaterial; as always, the impact of the employer’s decision is the critical issue. As the DCR put it, “the LAD draws no distinctions based on the mechanism of discrimination.” If an employer uses an automated decision-making tool to discriminate against a protected class, that employer is liable for unlawful discrimination, the same as if the employer had engaged in that conduct without a tool, whether the claim is framed as intentional discrimination or disparate impact.
If use of an automated decision-making tool generates decisions that disproportionately impact members of a protected class, the employer that used the tool may be liable for disparate impact discrimination. Under well-established disparate impact principles, even if the tool serves a “substantial, legitimate nondiscriminatory interest,” its use may still be unlawful if a “less discriminatory alternative” exists. The guidance offers an example of a company that uses an automated decision-making tool to assess contract bids: if that tool disproportionately excludes women-owned businesses, its use may result in disparate impact discrimination. Similarly, if a store uses facial recognition software to flag shoplifters, and the software disproportionately generates false positives for customers who wear certain religious headwear, the tool’s design is flawed, and the employer may be liable for disparate impact discrimination.
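While neither the DCR guidance nor the NJLAD prescribes a particular statistical test, a disparate impact analysis of the kind described above typically begins with a comparison of selection rates across groups. The short Python sketch below is purely illustrative: the group names and counts are invented, and the EEOC’s familiar “four-fifths” rule of thumb is used only as a common screening benchmark, not as an NJLAD standard.

def selection_rate(selected: int, total: int) -> float:
    # Share of applicants in a group that the tool advanced.
    return selected / total

# Invented figures: of 200 applicants in Group A, the tool advanced 120;
# of 150 applicants in Group B, it advanced 45.
rate_a = selection_rate(120, 200)  # 0.60
rate_b = selection_rate(45, 150)   # 0.30

# Impact ratio: the lower selection rate divided by the higher one.
# Under the EEOC's "four-fifths" rule of thumb, a ratio below 0.80 is
# often treated as a flag for potential adverse impact.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)  # 0.50
if impact_ratio < 0.8:
    print(f"Impact ratio {impact_ratio:.2f}: review the tool's design, training data, and deployment.")

A flagged ratio of this kind would not itself establish liability; it is the type of result that a bias audit might surface and that would prompt further review of the tool and consideration of less discriminatory alternatives.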
Use of Automated Decision-Making Tools and Reasonable Accommodations
The guidance also provides examples of how AI tools can affect applicants or employees who require reasonable accommodations. If, for example, the employer relies upon an AI tool to test an applicant’s typing speed, and the tool cannot assess the speed of an applicant who utilizes a nontraditional keyboard due to a disability, the employer’s use of the tool may cause discrimination against a disabled applicant.
In another context, if an AI tool is not “trained” (see above) on data that includes individuals who require accommodations, the tool may unintentionally penalize those individuals. Similarly, an AI screening tool used in the hiring process may screen out applicants who state in their applications that they require an accommodation to perform the job. Another example is an AI tool that tracks employee productivity by the number of breaks an employee takes; such a tool may disproportionately flag for discipline employees who require additional break time to accommodate a disability or to express breast milk. If an employer relies upon such tools to discipline employees, the employer could violate the NJLAD.
Next Steps
While the guidance creates no new obligations for employers, its issuance strongly suggests that the DCR, like the EEOC, the Office of Federal Contract Compliance Programs (OFCCP), and the U.S. Department of Labor (DOL), may focus increased attention on employers’ use of automated decision-making tools. New Jersey employers may want to consider reviewing and evaluating their use of these tools and subjecting them to bias audits. Additionally, because employers can be liable for unlawful algorithmic discrimination even if they relied on a vendor’s representation that its tool is sound and will not lead to discriminatory outcomes, employers may want to evaluate their vendor contracts and work closely with their vendors to determine how these potential risks and liabilities are allocated.
Employers may want to stay tuned for new developments on the legislative front involving the use of AI. The New Jersey Legislature introduced two bills early last year (A. 3854 and A. 3911) that seek to regulate employers’ use of this technology in the hiring process. Among other provisions, A. 3854 would require companies that sell automated decision-making tools to conduct an annual bias audit, and would require employers relying on such tools to notify job candidates that the technology was used in the hiring process and to provide a summary of the tool’s most recent bias audit. The proposed legislation would also impose monetary penalties of $500 for a first violation and $500 to $1,500 for each subsequent violation. A. 3911 addresses the use of AI-enabled video interviews and, among other provisions, would require employers to obtain a candidate’s consent to use the technology. If these bills are enacted, New Jersey would join other jurisdictions, including Colorado, Illinois, and New York City, that have taken steps to regulate the use of AI in employment decision-making.