Artificial intelligence (AI) has hit the workplace like a tidal wave, and every day it seems to make its way into a different area of the business. Thus far, employers have focused their attention on how much they want to allow employees to use AI in their day-to-day work. Increasingly, businesses are being forced to consider a different question: whether to allow employees to use AI to write their communications and submissions in formal HR processes.
AI is increasingly being used to write grievances, representations in consultations and disciplinary proceedings, and appeals. Employees often think that AI will help them better articulate what they want to say in an important process. The problem is that in these scenarios AI often lists legal cases in its answers, which creates understandable concern. What businesses are now starting to grapple with is whether to allow employees to use AI if they want to, to have a policy in place that expressly forbids it, or to allow it with certain safeguards. There are merits and risks to all three options.
Removing the Human Element
AI lacks the human element. The question becomes: without common sense, empathy, and contextual judgment, how useful can AI really be in a human resources process?
When writing a grievance, AI won't be able to identify that someone might have acted in a certain way because of a disability such as ADHD or autism. Equally, there have been many recent cases on competing protected beliefs under the Equality Act 2010, and AI is unlikely to recognise that the other employee might have had a right to say what they did because their belief is protected. In both cases, AI will follow the direction it is given to write a grievance without applying any common sense or empathy for the others involved.
Does it Promote a Lack of Active Thinking?
One of the reasons for asking an employee to put their grievance in writing, to make written representations on a redundancy proposal, or to make written representations during a disciplinary process is to get the employee to turn their mind to the issue and actively think about it. All of that can be lost where AI is used to write it. If the employee forwards the AI output straight to HR after a quick read-through, how much have they really engaged with and thought about what it is they are saying?
Equally, AI could represent something in a way that the employee doesn't necessarily mean, or it could frame events in a way that is completely different to how the employee actually felt at the time.
Confidential Information and Data Protection
Recently, the phenomenon of ‘shadow AI’ has emerged, where employees use AI tools on their own devices or without formal permission from IT departments. According to a February 2025 BBC report, this trend is growing rapidly, raising fears about data breaches and unauthorised access to sensitive company information.
Many AI tools retain the data that is put into them and use it to learn and inform their future output. This poses a particular problem given the confidential nature of HR processes. Using AI to write a grievance may involve giving it personal data about other employees. Asking AI to write a response as part of a redundancy consultation will likely involve giving it confidential information about the business. In both cases, the AI tool may retain that data and could reuse it elsewhere, causing major problems for privacy and confidentiality.
Are There Advantages?
Whilst there are plenty of downsides to allowing employees to use AI in HR processes, there are positives too.
People have different levels of communication and articulation, and most HR professionals will be familiar with reading written submissions (particularly grievances) that contain paragraph after paragraph of rambling, when what you really want is something concise that clearly sets out the complaint. AI can help the employee do that, which in the long run could make the process more efficient by helping investigators and HR consider and respond more quickly.
Comment
Businesses may want to consider having a policy in place setting out their position on whether employees are allowed to use AI in HR processes. Some have chosen to ban it entirely for the reasons outlined above, particularly the risks around confidential information and personal data. However, a complete ban comes with risks of its own. It wouldn't be advisable, for example, to refuse to hear a grievance purely because you suspect it was written by AI. Equally, with a complete ban, exceptions may need to be made as part of your duty to consider reasonable adjustments if the employee has a medical condition (for example, dyslexia or autism) and using AI could help them better articulate themselves. An outright ban also only really works in practice if you have a way of monitoring compliance. There are software packages that can check documents for the use of AI, but these come at a cost to the business.
Other businesses are taking a more practical approach. Even if you try to ban it, the reality is that employees are still going to use it, if only for a first draft that they then reword to sound more like their own and less like AI. Some businesses are therefore allowing employees to use AI in HR processes, under the strict condition that no confidential information or personal data is put into it. But again, that becomes difficult to monitor.
There is no single solution, and businesses will have to decide for themselves what approach works best for them. Whatever that approach is, it needs to be clearly set out in a policy, because employee use of AI in HR processes is only going to increase.
If you require any assistance or support in connection with the use of AI in the workplace, please contact a member of the Employment Team.