Legal and Practical Issues Every Employer Should Know
Artificial intelligence (AI) has quickly moved from a futuristic concept into a practical tool used in everyday business. In the human resources (HR) world, AI now drafts job descriptions, scans résumés, conducts video interviews, and even generates performance reviews.
While these tools promise efficiency and cost savings, they also create new risks. Discrimination claims, privacy issues, and liability disputes are all part of the emerging landscape. For employers, the key is to balance efficiency with compliance and to ensure that technology doesn’t undermine fairness or expose the company to avoidable lawsuits.
Increasingly, regulators, courts, and lawmakers are paying attention. This means that employers who rush into AI adoption without a thoughtful compliance strategy may find themselves facing costly litigation or government investigations. The conversation is not only about what AI can do, but what AI should do in the sensitive context of HR.
What Counts as AI in HR Today
AI is being implemented across several HR processes, including recruiting, performance management, and compensation.
Recruiting platforms use algorithms to scan résumés, chatbots field applicant questions, and some systems help companies manage high-volume hiring. According to the Society for Human Resource Management’s “2025 Talent Trends: AI in HR” survey, just over 50% of employers are using AI in recruiting. In performance management, AI can now track productivity, analyze communication styles, and even suggest employee development programs. Some employers are also beginning to use AI-driven pay equity audits to identify disparities across departments and levels of seniority. While this can be a step toward compliance with equal pay laws, it only works if the algorithms themselves are transparent and designed with fairness in mind.
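To make that caveat concrete, consider what even a first-pass pay equity check involves. The Python sketch below compares each group’s median salary to the overall median within a job level; the column names (“job_level,” “group,” “salary”) and the 5% review threshold are illustrative assumptions for this example, not any vendor’s actual methodology or a legal standard.

```python
import pandas as pd

# Illustrative sketch only: column names and the 5% threshold are
# assumptions for this example, not a legal standard.
GAP_THRESHOLD = 0.05  # flag gaps above 5% for human review

def flag_pay_gaps(df: pd.DataFrame) -> pd.DataFrame:
    """Compare each group's median salary to its job level's overall
    median and flag disparities that exceed the threshold."""
    by_group = df.groupby(["job_level", "group"])["salary"].median().reset_index()
    level_median = df.groupby("job_level")["salary"].median().rename("level_median")
    by_group = by_group.join(level_median, on="job_level")
    by_group["gap"] = (by_group["level_median"] - by_group["salary"]) / by_group["level_median"]
    by_group["needs_review"] = by_group["gap"] > GAP_THRESHOLD
    return by_group

if __name__ == "__main__":
    # Hypothetical salaries for two groups at the same job level
    data = pd.DataFrame({
        "job_level": ["L1"] * 6,
        "group":     ["A", "A", "A", "B", "B", "B"],
        "salary":    [95_000, 100_000, 105_000, 80_000, 82_000, 84_000],
    })
    print(flag_pay_gaps(data))  # group B's gap exceeds 5% and is flagged
```

A real audit would go further, controlling for legitimate pay factors such as tenure, geography, and performance, typically through regression analysis and often under counsel’s direction so the results may be privileged.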
While AI can generate information quickly, it can also produce false or misleading results, sometimes referred to as “hallucinations.” Employers who assume that AI’s output is automatically correct risk significant liability. Human oversight is not just best practice; it’s a necessity. The more decisions are automated, the higher the stakes for ensuring a human layer of review.
Employers should be mindful that even routine tasks like job postings or performance evaluations can carry legal implications when AI is involved. As Helen Bloch of the Law Offices of Helen Bloch, P.C. cautions, “If someone is going to use AI, which is inevitable, you have to keep in mind the various laws that apply to each situation.”
Examples of AI-assisted HR tasks to be particularly mindful of include résumé-screening algorithms that prioritize keywords, automated assessments that claim to measure cognitive ability, and chatbots that handle initial applicant inquiries.
Legal Risks and Considerations
Disparate Impact and Discrimination
One of the biggest legal risks of using AI in recruitment is disparate impact, which occurs when a policy or practice that seems neutral on its face ends up disadvantaging a protected group. A current example is the class action lawsuit Mobley v. Workday, in which plaintiffs allege that Workday’s AI-powered applicant-screening software discriminated against job applicants over age 40 in violation of the Age Discrimination in Employment Act (ADEA).
According to Charles Krugel of the Law Offices of Charles Krugel, Mobley is the case to watch as it pertains to AI and discrimination. This type of litigation underscores the need for employers to do their due diligence on AI tools and conduct bias audits before relying on algorithms to make employment decisions.
Disparate impact claims are particularly dangerous because an employer may not even realize its practices are discriminatory until litigation begins. The Equal Employment Opportunity Commission (EEOC) has already issued guidance warning that automated decision-making tools fall under the same anti-discrimination laws as traditional practices. Employers should be aware that claims may be brought not only by rejected applicants but also by government agencies seeking to enforce civil rights laws.
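One concrete screen regulators and practitioners have long used is the EEOC’s “four-fifths rule”: if any group’s selection rate falls below 80% of the highest group’s rate, the practice warrants closer scrutiny. The short Python sketch below applies that screen to hypothetical screening outcomes; the data, function names, and age bands are illustrative, and the rule is a rough guideline rather than a legal safe harbor.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group from (group, was_selected) records."""
    applied, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        applied[group] += 1
        selected[group] += was_selected
    return {g: selected[g] / applied[g] for g in applied}

def four_fifths_flags(rates: dict[str, float]) -> dict[str, bool]:
    """Flag groups whose rate is below 80% of the highest group's rate."""
    top = max(rates.values())
    return {g: (rate / top) < 0.8 for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical outcomes of an AI résumé screen: (age_band, advanced)
    outcomes = [("under_40", True)] * 60 + [("under_40", False)] * 40 \
             + [("40_plus", True)] * 35 + [("40_plus", False)] * 65
    rates = selection_rates(outcomes)
    print(rates)                     # {'under_40': 0.6, '40_plus': 0.35}
    print(four_fifths_flags(rates))  # {'under_40': False, '40_plus': True}
```

A check like this is only a starting point: statistical significance and the job-relatedness of the selection criteria also matter in litigation, which is why bias audits should involve both data analysis and legal review.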
Who’s Responsible When AI Gets It Wrong?
Some employers assume that outsourcing HR to third-party vendors will insulate them from liability. This is a dangerous misconception. Employers remain responsible for compliance with anti-discrimination and privacy laws, regardless of whether the error originated in-house or through an external AI service.
“You’re still going to face consequences if you break the law, whether somebody does it as an authorized agent or that authorized agent is a computer,” notes Max Barack of Garfinkel Group, LLC.
Contracts with vendors should be explicit about risk allocation, and employers should review their insurance coverage as well; employment practices liability insurance (EPLI) may not cover certain AI-related claims unless additional riders are purchased. Employers should also remember that joint liability principles mean both the vendor and the company can be held accountable. For example, if a recruiter’s AI-powered screening tool is found to discriminate against applicants with disabilities, both the recruiter and the hiring company may be liable. This makes vendor due diligence and contractual protections more important than ever.
AI Regulation
States are starting to regulate how employers can use AI in hiring. For instance, Illinois’ Artificial Intelligence Video Interview Act requires employers to disclose when AI is being used to analyze video interviews and to obtain applicant consent. In New York, recent laws require permission before a company can use AI-generated likenesses of employees. These rules reflect a broader trend toward transparency and informed consent, and employers must keep up with evolving disclosure requirements, especially when using AI in recruitment, evaluations, or public-facing content.

Beyond Illinois and New York, states such as Maryland and California are also experimenting with legislation aimed at regulating AI in hiring. Internationally, the European Union’s AI Act classifies certain uses of AI in employment as high-risk and subjects them to strict transparency and audit requirements. These developments suggest the regulatory trend is accelerating, and employers who fail to prepare now may find themselves scrambling to catch up.
The Importance of Vigilance and Adaptability
The use of AI in HR processes is becoming a standard feature of recruitment, evaluation, and workforce management, but the technology brings significant risks if used without proper oversight. AI can be a powerful tool for improving efficiency and fairness, but only if employers use it responsibly and remain vigilant about compliance, transparency, and human judgment. Companies that treat AI as a compliance blind spot risk litigation, regulatory penalties, and reputational harm. Those who take a proactive approach will not only reduce legal risk but also build trust with employees and applicants.
Employers using AI in HR workflows should remember to:
- Conduct regular bias audits of AI tools
- Require human review of AI-generated outputs
- Stay current with federal and state AI-related employment laws
- Review and update contracts with vendors for liability protections
- Ensure employment practices liability insurance covers AI-related risks
- Train HR professionals to identify and respond to AI red flags
- Maintain transparency with employees and applicants about AI use
This article was originally published on October 9, 2025.