OpenAI’s latest restrictions on ChatGPT’s ability to provide legal advice underscore a point that family law solicitors already know instinctively, but that is now becoming visible in everyday practice.
The model’s role is being cast more clearly as an informational and educational tool rather than a source of personalised legal advice for clients or the public. This repositioning has legal and ethical foundations and reflects concerns about accuracy, liability, and the limits of what algorithmic outputs can achieve in complex legal settings.
For those of us handling family law matters, the distinction between information and advice is not abstract.
Clients increasingly use ChatGPT and similar generative AI tools to research topics such as terminology, procedural steps or timelines in financial remedy or children proceedings. They look up what a Form A is, what the sequence of hearings might be, or general definitions of terms like ‘ancillary relief’.
On the surface, this might appear to save time during initial consultations because some of the ground has already been covered. Clients can come to meetings with a basic conceptual framework already in their heads.
The expectation gap
However, generative AI is not, and cannot be, a substitute for professional legal judgement. ChatGPT and comparable tools are not legally qualified, do not comprehend jurisdictional nuance, and, critically, lack the lived experience of applying law to real cases. They will sometimes provide plausible answers, but that very plausibility can be their greatest danger. The model may confidently assert incorrect legal principles or invent case authorities that have no basis in actual law, a phenomenon commonly referred to as ‘hallucination’.
This has already caused problems in other jurisdictions where lawyers have cited non-existent cases generated by AI, leading to warnings from judges and the possibility of sanctions.
When our clients bring AI-generated content into a consultation, we quickly see the practical challenges. Clients often treat these outputs as if they are accurate legal interpretations. They come armed with ‘facts’ that they have copied from an AI chatbot and expect them to hold water in their matter. They will quote what ChatGPT said about rights, remedies or likely outcomes, without understanding the underlying legal analysis or judicial discretions that actually shape family law outcomes. This can create an expectation gap between what the client thinks they know and the reality of their legal position.
Erosion of trust
In my own practice, I see two predictable patterns. Some clients use AI to second-guess the solicitor’s advice, often based on prompts that are not precise enough to yield jurisdiction-specific or up-to-date responses. Other clients use AI to draft initial instructions, producing polished-looking documents, or even draft letters, that in fact misstate procedural status, incorrectly frame issues, or lean on inapplicable authorities.
They have ‘saved time’ by doing this work themselves, only for us to spend an equivalent or greater amount of time revisiting and correcting misconceptions. And because the AI output looks credible, it sometimes takes longer to unpick than if the client had simply admitted they were unsure.
The effect on client expectations cannot be overstated. When a client reads an authoritative-sounding answer from an AI tool, they often conclude that the ‘law says so’.
If their solicitor’s explanation is more cautious, caveated and contextualised, clients may interpret that as a lack of competence rather than an expression of professional rigour. This dynamic can erode trust if not addressed head on.
The American Bar Association has noted that ignoring a client’s reliance on AI can lead to breakdowns in communication and unmanaged expectations, which in turn can increase dissatisfaction and even malpractice exposure if outcomes differ from what clients believed before instruction.
False economy
Another issue that has arisen with some frequency is the belief that using AI will reduce legal costs.
Clients see a free tool producing answers in seconds and naturally assume that they should be able to resolve significant parts of their dispute without professional involvement. They may ask for discounts or limited instructions because they think they are already ‘halfway there’.
What this belief overlooks is that AI cannot provide contextual legal advice tailored to a client’s specific facts, nor can it offer risk assessment, advocacy or strategy. The time saved in explaining basic concepts is often eclipsed by the time spent correcting AI-induced misconceptions and educating the client on why those AI outputs cannot be relied on.
The broader legal profession is wrestling with similar issues. Reports from commentators note that lawyers and clients alike are experimenting with AI to assist with tasks like document preparation and research, but that this experimentation is not without irritation and risk.
Some lawyers describe the effect as analogous to the ‘WebMD effect’ in medicine: patients come convinced they have a diagnosis because they read it online, despite it being simplistic or wrong. The traditional billable hour model that underpins law firm profitability is also feeling pressure as clients question how they should be charged for time that they believe might be mitigated by AI tools.
Ethical implications
The ethical and professional implications extend beyond client communication to the very nature of how we use AI internally.
Even when solicitors use AI to assist with legal tasks, we must do so with full awareness of issues such as confidentiality, data security and output reliability.
Professional obligations to protect client information do not change because the tool is digital.
Algorithms may store or process data in ways that conflict with confidentiality requirements under the GDPR and other regimes, so any use of third-party AI tools requires careful assessment of security practices and privacy implications.
Educate and reinforce
For family law practices specifically, where matters often involve deeply personal financial affairs and sensitive children issues, the cost of miscommunication or misinformation can be high. It is vital to stress to clients that AI can augment understanding of basic legal concepts, but that only a qualified solicitor can interpret how the law applies to their unique circumstances and advise on strategy, risk and process.
In responding to this evolving landscape, solicitors may need to adapt their engagement strategies. This may include setting clear boundaries around the use of AI materials from clients, educating clients early in the retainer about the limitations of AI as a source of legal advice, and reinforcing why human legal expertise remains indispensable. Those who treat AI as a complement to professional judgement, rather than a competitor to it, are likely to manage expectations and costs more effectively.
AI is here to stay and will shape how clients prepare for consultations and interact with legal services. Our task as solicitors is not to resist technological change but to harness it responsibly, ensuring that clients understand both its utility and its limitations. Nothing in current AI technology can replace the nuanced analysis, ethical judgement and strategic insight that family law practitioners bring to their work.
Author: Robert Webster is a partner at Maguire Family Law and a dual-qualified family law solicitor. He originally qualified as a barrister and solicitor in New Zealand in 2015 before qualifying in England and Wales in 2018. He advises on all aspects of family law, including divorce, financial remedy proceedings, children matters and cohabitation disputes, and has experience conducting contested hearings in both the Family Court and the High Court.