Will employers be liable for harassment by an AI tool in the workplace?
Proposed changes to the law may mean that AI tools create third-party harassment liabilities for employers.
AI tools are becoming increasingly integrated into workplace systems and interact directly with employees. While AI is not a legal person, could employers who deploy these systems still face liability where outputs amount to harassment? And does it make a difference whether the AI content was prompted or approved by a human, rather than generated without human involvement?
Legal context
Under the Equality Act 2010 (EqA), harassment occurs where a person engages in unwanted conduct related to a protected characteristic which has the purpose or effect of violating dignity or creating an intimidating, hostile, degrading, humiliating or offensive environment for another person. Employers are vicariously liable for acts of harassment by employees in the course of employment, unless they can show they took all reasonable steps to prevent it.
A separate regime applies for third-party harassment (liability for harassment by non-employees where reasonable steps are not taken). The Employment Rights Bill proposes to extend the duty on employers to prevent sexual harassment to acts by third parties, though this is not expected to come into effect until sometime between 2026 and 2027.
Who has conduct of the harassment?
Where an AI tool drafts content and a person reviews and issues it (for example, a letter containing harassing language), liability appears straightforward. The employee's decision to prompt, adopt and send the content is the unwanted conduct, and the application of the law is relatively simple: the AI is the instrument, the conduct is the individual's, and the employer can be vicariously liable for that conduct in the usual way.
The position is more complex where, for example, a chatbot or automated system communicates with an employee without human intervention. Harassment under the EqA presupposes conduct by a person acting on behalf of the employer; since AI is not a person, the employer would arguably not be vicariously liable for its output.
However, where an employer implements an AI system to replace staff, could it be argued that the employer should ultimately bear vicarious liability if the replacement AI causes harassment?
This does not seem likely under the EqA as presently worded, which requires the harassment to be attributed to a person's conduct. Again, as AI is not a person, can any harassment caused by it be attributed to the employer where the employer was not directly engaged in the conduct of the AI?
Arguably, ‘employment law’ is not the most appropriate framework through which to view liability for the effect of AI behaviour on staff. If AI is considered a workplace ‘tool’, health and safety law may be a better fit.
Can the supplier be liable as a third party?
If an employer is found liable for the actions of AI, or if employees feel it is ultimately the developer or licensor who should be responsible, could the developer or licensor itself be sued for harassment? This again seems unlikely, because the supplier's act of creating or supplying a tool is not itself the unwanted conduct. In practice, employers will be the party closest to the alleged harm and are therefore likely to remain the focal point for claims.
Employers may, in turn, seek to recover from suppliers, under the supply contract, losses arising from AI-related harassment claims brought by employees. However, it is not clear why suppliers would agree to indemnify employers against such losses.
Conclusion
Future case law may provide guidance in relation to the potential liabilities and limits of third party AI harassment claims for employers. Such cases might consider (i) whether AI outputs can be attributed to the employer as its own conduct, (ii) the reach of vicarious liability for employers in machine‑mediated interactions, and (iii) the scope for supplier liability via third‑party harassment.
The European Commission’s withdrawal in February 2025 of the proposed EU AI Liability Directive (due to a reported lack of consensus and complaints over complexity) underscores how contested the field of AI regulation remains. It is also true that AI continues to develop rapidly, and laws have a habit of quickly becoming out of date and ineffective in the face of technological advancement.
If you would like to discuss any aspect of this article further, please contact our employment team on 0113 244 6100.
You can also keep up to date by following Wrigleys Solicitors on LinkedIn.
The information in this article is necessarily of a general nature. The law stated is correct at the date (stated above) on which this article was first posted to our website.
Specific advice should be sought for specific situations. If you have any queries or need any legal advice please feel free to contact Wrigleys Solicitors.
How Wrigleys can help
The employment team at Wrigleys is expert in advising charities, third sector and education sector employers on all aspects of employee relations, policies and procedures, including advising on issues related to the employment of carers.
Importantly, we work within the wider charities, social economy, and education teams at Wrigleys and so we also have in-depth understanding of how our clients’ governance and regulatory obligations impact on employment policy and practice. Our CSE team can further help to minimise your risks by providing advice on charity law, trustee and director duties and delegation of powers, reporting to the regulator, and reputational risk.
If you or your organisation require advice on this topic, please do get in touch.