Recently, the intersection of artificial intelligence (AI) and employment law has become a focus for legislators, regulators, and employers. As AI technologies continue to transform employment practices and workplace management, employers need to stay informed of the latest legal developments and best practices. Recent developments highlight important legal considerations around AI in workplace management, including the use of AI in employment decisions, co-employment scenarios, and the ethical implications of AI deployment.
The Pros and Cons of AI in HR
While the use of AI offers significant benefits in screening applicants, improving workplace retention, and reducing turnover, it also raises legal challenges that employers must navigate carefully.
Pros:
Increased efficiency and speed: AI can streamline recruiting processes such as resume screening, candidate sourcing, and initial assessments, significantly reducing the time recruiters spend on these tasks. This allows HR professionals to focus on the more strategic aspects of their roles.
Increased accuracy of candidate screening: Advanced AI algorithms can more effectively match job requirements with candidate qualifications, skills, and experience, improving candidate-job fit and increasing the chances of hiring the right candidate.
Potential reduction in bias: By focusing on objective rather than subjective factors, AI can help minimize unconscious and confirmation bias in hiring decisions, potentially leading to fairer and more inclusive hiring practices.
Cost optimization: Generative AI can contribute to cost reduction by automating work activities, optimizing research and development processes, and improving customer service.
Improved candidate experience: AI-driven tools such as chatbots can respond instantly to candidate questions, providing a positive candidate experience and engaging applicants throughout the recruiting process.
Cons:
Privacy and data security concerns: AI systems handle large volumes of applicant data, raising concerns about potential misuse or mishandling that could have legal and ethical implications.
Potential for algorithmic bias: AI systems may inherit biases present in historical data, which could result in biased applicant-selection decisions.
Limitations in assessing human attributes: AI may struggle to accurately assess applicants’ decision-making skills, cultural fit, and other interpersonal qualities that are essential for success in the workplace.
Difficulty differentiating applicants: As generative AI makes it easier for applicants to create polished resumes and cover letters, recruiters may find it harder to gauge applicants’ qualifications from these materials alone.
Overreliance on AI: Employers risk leaning on AI systems to screen and filter applicants without fully understanding the potential biases and limitations of these tools.
The Department of Labor takes the lead
Recognizing the potential legal risks associated with the implementation of AI in employment-related matters, the U.S. Department of Labor (DOL) issued comprehensive guidance on April 24, 2024, addressing how to ensure AI compliance with the Fair Labor Standards Act (FLSA) and the Family and Medical Leave Act (FMLA). The guidance serves as a roadmap for employers leveraging AI in their employment practices, cautioning that removing human judgment and oversight from these processes may violate federal employment law.
Key points from the DOL guidance include:
Potential Compliance Risks: The DOL warns that removing human oversight from processes such as timekeeping, productivity monitoring, and wage calculations could result in violations of federal employment law.
FAQs and Best Practices: The guidance provides detailed FAQs and “promising practices” to help employers mitigate the risks associated with the use of AI in the workplace.
Emphasis on Human Oversight: The DOL emphasizes the importance of human oversight in AI-driven processes to ensure compliance with wage and leave laws.
State-level efforts
As federal guidance evolves, states are also taking action. The International Association of Privacy Professionals has introduced a new interactive tool to help employers monitor state-specific laws related to algorithmic bias, discrimination, and automated employment decision tools: the US State AI Governance Legislation Tracker (iapp.org). The tool is particularly useful for employers who operate in multiple jurisdictions.
The White House intervenes
The federal government's focus on AI extends to the executive branch as well. Executive Order 14110, issued on October 30, 2023, calls for a collaborative approach to the responsible development and use of AI. Recent DOL guidance for federal contractors draws on this order, requiring non-discrimination in AI-based hiring systems.
Best Practices for Employers
Generative AI has great potential to transform and streamline HR processes, but employers must carefully balance these benefits against legal and ethical risks. To mitigate those risks and maximize the benefits of AI in recruitment and retention, we recommend that employers:
Proactively inform employees and applicants about the use of AI in hiring practices;
Ensure transparency of AI-driven processes;
Regularly monitor and test AI systems to ensure compliance with legal requirements (one illustrative way to spot-check outcomes is sketched after this list);
Conduct thorough due diligence when selecting AI vendors; and
Maintain human oversight for all AI-assisted hiring decisions.
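To make the monitoring recommendation concrete, below is a minimal, purely illustrative sketch of one way an HR team might periodically spot-check an AI screening tool's outcomes using the EEOC's long-standing "four-fifths rule" for adverse impact, under which a group's selection rate below 80% of the highest group's rate is generally treated as a flag for further review. The group names and figures here are hypothetical, and any real audit program should be designed with counsel and with applicable state or local audit requirements in mind.

```python
# Hypothetical spot-check of an AI screening tool's outcomes using the
# EEOC's four-fifths rule: flag any group whose selection rate falls
# below 80% of the highest group's selection rate.

def impact_ratios(outcomes):
    """outcomes maps group name -> (applicants screened, applicants advanced)."""
    rates = {group: advanced / screened
             for group, (screened, advanced) in outcomes.items()}
    highest = max(rates.values())
    return {group: rate / highest for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical numbers for illustration only.
    outcomes = {
        "Group A": (200, 60),  # 30% advanced
        "Group B": (180, 40),  # ~22% advanced
    }
    for group, ratio in impact_ratios(outcomes).items():
        status = "flag for review" if ratio < 0.8 else "within four-fifths threshold"
        print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

A check like this does not establish or rule out legal liability; it simply surfaces disparities early so that employers and counsel can investigate the underlying tool and data.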
Co-authored with Kaylin Chatman – 2024 Smith Debnam Summer Associate
Kaylin Chatman, Smith Debnam Summer Associate Class of 2024, joined Smith Debnam after recently completing her second year of law school at North Carolina Central University. She earned her bachelor’s degree in Criminal Justice from Livingston College and her master’s degree in Human Services Consulting from Liberty University. Prior to attending law school, Kaylin served as a police officer for 10 years.