Should HR Teams Use ChatGPT?

One of the most important but often overlooked questions when dealing with scientific or technological innovation is, “Just because we can do this, should we?” As businesses start to embrace AI language models such as ChatGPT to augment their work, HR faces an ethical dilemma: should HR, a fundamentally people-centered function, use artificial intelligence in its delivery model?

What’s the Status Quo?

The reality is that HR is already using AI in its work. However, a recent People Management poll of 1,000 HR professionals suggests that while the proverbial stable door is ajar, the AI horse has not yet bolted: only 17% use AI for HR tasks. There is plenty of curiosity, though, with 21% using AI for non-HR work and 33% not using it yet but wanting to. Just 30% neither use it nor plan to.

Can HR Teams Uphold Privacy When Using AI for HR?

The most obvious area of concern is privacy. For example, what are the implications of putting sensitive employee data into a public AI model that learns from user input? Does this constitute sharing confidential information with a third party?

Probably, yes. ChatGPT stores the HR conversations you have with it as chat history, and this data can be used to improve its language model. OpenAI’s human trainers working behind the scenes may also see these HR conversations and queries.

OpenAI explains in its FAQs that “for non-API consumer products like ChatGPT and DALL-E, we may use content such as prompts, responses, uploaded images, and generated images to improve our services.”

They also state that a “limited number of authorized OpenAI personnel, as well as trusted service providers subject to confidentiality and security obligations, may access user content only as needed for these reasons.”

Data protection policies often allow for this kind of third-party personal data processing, provided it meets strict processing requirements. However, the free version of ChatGPT (whether running on GPT-3.5 or GPT-4) comes with no non-disclosure agreement (NDA) or data processing agreement (the contract that defines a provider’s legal commitments concerning data), which means using it with employee data is likely to contravene many national data protection statutes and corporate data security policies. It may therefore not be ethically or legally appropriate for HR staff to use the free version of ChatGPT.

Even the professional version of ChatGPT has its privacy challenges. As a machine learning model, ChatGPT learns from every conversation and data input, and it relays that learning back to other users in an indirect way. However, because this learning relates to depersonalized patterns that are not identifiable to any one particular person, it may not be deemed personal data sharing.

However, ChatGPT’s machine learning capability introduces broader confidentiality concerns for HR teams. By developing HR processes and strategies with the model, you are effectively distributing your organization’s people management best practices (potential trade secrets), making them available to anyone, including your competitors, who asks the right question.

Using ChatGPT Without Breaching Privacy Laws

Given these risks, HR managers may need to prohibit their teams from using the free version of ChatGPT, as it may breach the data protection regulations that many companies are held to. The premium version of ChatGPT operates under a more stringent privacy regime.

At a minimum, HR privacy policies should be updated to prohibit staff from putting sensitive employee data into AI chatbots or conversational AI models. Guidance could also be provided on how to depersonalize data before it is entered into an AI tool, minimizing the risk of a privacy breach; a sketch of this follows.
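
As a minimal illustration of what such guidance might look like in practice, the hypothetical Python sketch below strips obvious identifiers (names, email addresses, phone numbers) from text before it is pasted into an AI tool. The patterns and the depersonalize helper are our own illustrative assumptions, not a complete anonymization solution; production redaction should rely on a vetted tool and be approved by your data protection officer.

```python
import re

# Illustrative redaction patterns only. Real anonymization needs a
# vetted tool and legal review, not a handful of regexes.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),       # US-style SSNs
    (re.compile(r"\+?\d[\d\s()-]{7,}\d"), "[PHONE]"),      # phone numbers
]

def depersonalize(text: str, known_names: list[str]) -> str:
    """Replace known employee names and common identifier patterns
    with placeholder tokens before text is sent to an AI model."""
    for name in known_names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# Example: a grievance summary is depersonalized before being pasted
# into a chatbot prompt.
raw = "Jane Doe (jane.doe@acme.com, 555-867-5309) raised a grievance."
print(depersonalize(raw, known_names=["Jane Doe"]))
# -> [NAME] ([EMAIL], [PHONE]) raised a grievance.
```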

If HR teams do use ChatGPT to develop HR best practices, they need to accept that others (including competitors) may benefit from this expertise once it has been incorporated into the machine learning model. By the same token, your HR team can, and may already, have benefited from knowledge that competitors have fed into the model. It’s a quid pro quo, so a degree of pragmatism may be needed here.

ChatGPT's Inherent Bias and HR

Privacy aside (and it’s a big issue to put aside), HR teams must be wary about utilizing any output from ChatGPT because of its inherent bias. OpenAI acknowledges that ChatGPT is “skewed towards Western views, performs best in English, and is not free from biases and stereotypes.” Worryingly, it also admits that the model’s “dialogue nature can reinforce a user’s biases over the course of the interaction.” Given these open admissions, ChatGPT seems a risky place from which to draw and develop HR policies, which should be built around fairness and inclusion.

AI-Generated HR Content Should Be a Starting Template, Not an End Product

If HR teams choose to use ChatGPT for policy development, there needs to be a clear process of bias mitigation. At a minimum, HR professionals should review and validate AI-generated content before it is implemented or incorporated into decision-making, which should help identify any biased or inappropriate suggestions. AI-generated HR content should be viewed as a starting template, not an end product. ChatGPT does not yet have the professional judgment of an experienced HR practitioner!

In our conversations with HR and other professionals, the more experienced individuals use ChatGPT to create a first version that will need revision, rather than a final draft that merely needs proofreading. We recommend this approach, and a minimal sketch of the workflow follows.
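
To make the draft-then-revise workflow concrete, here is a hedged sketch using OpenAI’s Python SDK. The model name, prompt, and draft_policy_section helper are illustrative assumptions; the key design choice is the explicit pending-review label, which stops the output being mistaken for approved policy.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_policy_section(topic: str) -> str:
    """Generate a FIRST DRAFT of an HR policy section for human revision."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name; use what your plan offers
        messages=[
            {"role": "system",
             "content": "You draft HR policy text for later human revision."},
            {"role": "user",
             "content": f"Draft a short, neutral policy section on: {topic}"},
        ],
    )
    draft = response.choices[0].message.content
    # Label the output so it cannot be mistaken for approved policy.
    return "[DRAFT - PENDING HR REVIEW]\n" + draft

print(draft_policy_section("acceptable use of AI chatbots at work"))
```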

ChatGPT Can Generate Incorrect or Misleading Information

The free version of ChatGPT is trained on data up to January 2022, and the premium version up to December 2023. This means search engines remain a more reliable research source for HR teams, especially when looking for trends or time-sensitive data. External observers have pointed out (and if you ask ChatGPT directly, it admits as much) that ChatGPT can “generate incorrect or misleading information as a result of misinterpreting a query, combining data incorrectly, or drawing from inaccurate sources.”

If HR teams use ChatGPT for research, any data it produces should be double-checked against primary sources for accuracy before being used.

There are enough outstanding accuracy, privacy, and confidentiality issues that, at this time, ChatGPT usage in HR should be conservative, with AI-generated content treated as a starting template requiring expert human scrutiny, not an end product.

This caution is reflected in the polling figures above, which show that fewer than 1 in 5 HR professionals currently use AI for HR tasks. However, it’s clear that HR teams are AI-curious and in need of guidelines on how to use these tools safely, and we hope we have provided some useful pointers in this area.
