Prepare your HR policies now for when AI meets HR

AI tools like ChatGPT can do remarkable things, from composing headlines to rendering famous personalities as movie villains. Beyond the entertainment value, they are genuinely practical, and your business may already be using AI for work-related tasks, especially if employees face no restrictions on using ChatGPT. And why not? If it saves time, it’s worth exploring.

However, there are caveats. Using AI to draft headlines is one thing; leaning on it for hiring decisions is another, and data privacy is a crucial concern either way. ChatGPT is not genuinely intelligent; it recombines existing material and “learns” by collecting new data, including everything users type into it, and that creates risk. Craig Balding, author of a cybersecurity and AI newsletter and blog at ThreatPrompt.com, warns that anything employees submit could surface in someone else’s answer, including confidential information such as passwords, company names, or project names.

This data privacy concern applies to every area of the business, including hiring. For example, if a recruiter wants to give hiring managers a brief overview of candidates, ChatGPT can summarize resumes effectively. But doing so puts a candidate’s entire work history, address, and phone number into the dataset, raising the question of whether the candidate ever consented to sharing that information.
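For teams that still want to experiment with AI summarization, one precaution is to scrub obvious personal identifiers before any text leaves company systems. The Python sketch below is illustrative only: the regex patterns are simplistic assumptions, they will not catch names, addresses, or employer-specific details, and the actual call to an external AI service is deliberately omitted.

```python
import re

# Minimal sketch: redact obvious personal identifiers from resume text
# before it would be sent to any third-party AI service. The patterns are
# illustrative, not exhaustive -- names, addresses, and company-specific
# details would still need human review or a dedicated PII-detection tool.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"(\+?\d{1,2}[\s.-]?)?(\(?\d{3}\)?[\s.-]?)\d{3}[\s.-]?\d{4}")

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL REDACTED]", text)
    text = PHONE_RE.sub("[PHONE REDACTED]", text)
    return text

resume = """Jane Example
jane.example@mail.com | (555) 123-4567
Senior Recruiter, 8 years of full-cycle hiring experience."""

# Only the redacted version would be passed to an external summarization
# service; the original stays inside the company's own systems.
print(redact(resume))
```

Redaction like this reduces what leaks into a vendor’s training data, but it does not answer the consent question, so it is a supplement to policy, not a substitute for it.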

Artificial intelligence tools, including ChatGPT, are inherently biased because they rely on flawed datasets and programming. Amazon, for instance, built an AI hiring system that had to be scrapped over bias issues. Similarly, HireVue, a video interviewing program that assesses candidates’ facial geometry, faces a bias lawsuit in Illinois. Moreover, ChatGPT may simply make things up when it doesn’t have the answer, as in one case where a question about abortion rights in the workplace came back with an answer about gun rights.

Companies cannot shift the blame for illegal bias in hiring or retention onto an AI program like ChatGPT; they remain legally responsible for every decision their organization makes, whether or not an AI tool was involved. Employment attorney and HR consultant Kate Bischoff advises companies to stay current on evolving laws and regulations around AI and hiring, establish clear policies for AI usage and data management, and ensure they are not engaging in illegal discrimination, regardless of what an AI program suggests. Employees will use tools like ChatGPT either way, so companies need policies in place to manage that use.
