Do all law firms need an AI policy? Even if their staff are not using it (to the firm’s knowledge at least!)?

Perhaps surprisingly given our line of work, we loathe writing policies just for the sake of it. Recently, though, it feels like the AI warning signs have been mounting up for all law firms:

✳️ High Court authority that relying on AI-generated legal research without checking the results constitutes professional misconduct and risks contempt of court (Ayinde v London Borough of Haringey [2025] and Hamad Al-Haroun v Qatar National Bank QPSC and QNB Capital LLC)
✳️ SRA guidance that however good the AI system “firms will still need to check its outputs for accuracy”
✳️ Escalating warnings of negligence and confidentiality risks, for example where staff put client information into AI systems without proper checks, balances and consent.

Add to this the growing daily use of tools like ChatGPT among people of working age, and the risk of someone in the firm getting caught out begins to feel very real.

So to answer the question, yes, we would suggest all law firms adopt an AI policy regardless of whether they have an AI product approved for use in the firm.

Annnd thanks to the tireless efforts of our colleague Jessica Irwin at The Compliance Office (UK), we have a template policy available. Unlike some we’ve seen, it really gets into the weeds on this topic and covers:

– Which AI products if any are approved for use in the firm and the guidelines for using them;
– A clear position on the use of public AI tools, for example simply not using them for client work;
– Guidance on the risks involved in using AI, such as hallucinations and breaches of client confidentiality;
– Intellectual property considerations;
– Client transparency;
– Quality assurance;
– Insurance considerations;
– Staff training and awareness.

All clients will receive access to the template next week. Not a client? Drop us a line at contact@complianceoffice.co.uk to find out how to get your hands on it. Hope it helps!