An extract from the speech delivered by the Treasurer, Richard Salter KC, at the Singapore Litigation Conference on 29 April 2025.
It is natural for lawyers – especially lawyers of my generation – to be nervous, even to be afraid of Artificial Intelligence. Some of us (though I am sure, no one in this room) are still not adept at using computers for word processing and diary keeping, much less at using them for anything more complicated.
One of my predecessors as Treasurer still kept his diary in a small book that he carried around with him: and I am currently a member of an arbitral panel where the presiding arbitrator is only just getting to grips with MS Word and has had to leave the formatting of the Award entirely to me.
The reality though is that, as a profession, we have no real choice about whether we should embrace AI. We will have to.
First, because the industrial, financial and consumer sectors that are served by lawyers and the legal system are themselves already using AI at every level, and will expect lawyers and judges both to understand AI (with all its advantages and limitations) and to use it.
Second, because there will be work to be had in future in claims relating to the non-use and the misuse of AI.
But third, and most importantly, because of all the things that AI can do even now to help lawyers and the legal system to provide a better, quicker and cheaper service to our clients and other users.
The real question for us in the legal profession worldwide is not what deleterious effects AI will have – whether it will make us all redundant – but what beneficial effect it can have on the service that we provide to our clients.
The UK was among the first jurisdictions to provide, in December 2023, formal Guidance for Judicial Office Holders on the use of AI. This was revised and re-issued on 15 April 2025, partly to take account of the fact that a closed AI system is now available to the judiciary of England and Wales.
There are three ethical and practical themes in that guidance, which apply as much to lawyers as to judges.
First, before using AI, whether as a lawyer or as a judge, you need to understand both what it does and what it does not do.
This is the first, and most important, rule. AI can be a good tool but, to use it effectively, you absolutely have to understand it, and to understand it well.
At present, most of the available Large Language Models are simply trained to predict the most likely combination of words from a mass of data. Unless you are using a dedicated and specially written AI program, the model will not check its responses by reference to an authoritative database. It will sometimes ‘hallucinate’ and invent authorities.
You will no doubt be aware of the story of Steven Schwartz, who included six fictitious cases suggested by ChatGPT in his submissions in the case of Mata v Avianca Inc [Civil Action No: 22 Civ 1461] in New York. The affidavit which he filed in order to explain his conduct to Judge P Kevin Castel included screenshots showing that he had specifically asked the AI whether one of the cases that it had given him was a ‘real’ case, and had been reassured by the AI that it was. It even provided an apparently authentic case citation. [1]
Even the best programs are also likely to have a number of inbuilt biases, which lawyers must be alert to – let us not forget that human decision-making can be biased too – and, while AI can certainly invent, you cannot expect it to develop the law. It has a natural tendency to look backwards. It may contain the words of great judicial innovators like Lord Eldon and Lord Denning, but it does not yet have anything like their vision of what the law could and should look like.
Nevertheless, as Chief Justice Menon rightly reminded the inaugural India-Singapore Conference on Technology in April last year, AI technology is changing and developing at an incredible pace. To borrow the wise words of Professor Richard Susskind, today’s AI systems are “the worst [they] will ever be”.
Tomorrow’s AI systems may have different and better abilities and different risks and limitations inherent in them. Professional ethics and competence will require us to use those systems: but, to use them, we must understand them and must keep pace with the technological changes.
The second and third themes in the UK Judicial Guidance come from the limitations on what AI can do at the moment.
The second theme is that you must avoid inputting confidential information into public LLMs, because doing so makes the information available to the world. Some AI models now claim to be limited to accredited databases and to preserve confidentiality: but the ethical principle of preserving confidentiality must always be paramount, and it remains the individual responsibility of the lawyer using the AI, not of the system.
The third theme is that it is not the AI, but the judge or lawyer using it, that is responsible for the output.
These three themes find a clear echo in the four general rules provided in the Chartered Institute of Arbitrators’ Guideline on the use of AI in arbitration, which was published as recently as 9 March 2025.
These four ethical rules are said to be applicable both to parties and to arbitrators. First, all participants should make reasonable enquiries to understand a proposed tool. Second, all participants should weigh up the risks against the benefits of use. Third, all participants should make enquiries about the applicable laws and regulations governing the use of the tool.
Fourth, unless expressly agreed in writing by the tribunal and parties (and subject to any mandatory rule), the use of an AI tool by any participant will not diminish their responsibility and accountability.
As the guidance which the English Bar Council put out in January 2024 says, “Generative AI LLMs can therefore complement and augment human processes to improve efficiency but should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise which clients, courts and society expect from barristers”.
This idea, that human oversight and responsibility are always necessary in the legal and judicial process, is built into the thinking behind the European Union’s recent AI Act (Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence). This Regulation characterises AI systems concerned with the administration of justice as “High Risk AI systems”. They are not banned, but they must be closely monitored.
The decision of the Court of Justice of the European Union in OQ v Land Hessen (SCHUFA Holding) (Case C-634/21) (a case concerning automated credit-scoring) suggests that Article 22 [2] of the EU’s General Data Protection Regulation [3] (which, in a form slightly amended to take account of Brexit, is still part of English law) may also prohibit automated decision-making more generally, even outside the legal system.
However, as the UK’s senior civil judge, Sir Geoffrey Vos MR, said in the lecture which he gave at the LawTech UK Generative AI Event in February this year, in a world in which machines are becoming so much more capable than humans, it may become simply too time-consuming and expensive for anyone to check the integrity of every decision that machines recommend humans to make.
For example, there are many small cases which it is presently simply too expensive for private litigants to take to England’s lowest civil court, the County Court. The UK’s out-of-court consumer redress systems – we call them ‘Ombudsmen’ – are also labour-intensive, and their cost is a significant drain which is generally either recovered from the relevant business or industry sector or paid out of general taxation by the state.
Might it be better, even at the cost of some imperfect decision-making and ‘rough’ justice, to automate the resolution of such claims? Might that not be better than leaving so many people without any practically effective legal remedy?
In his Blackstone lecture in November 2024, Sir Geoffrey Vos MR gave the question of whether a child should be removed from its parents as an example of a decision which it is inconceivable that society would accept being made by a machine. In his February 2025 lecture, however, he noted that some of the distinguished Oxford academics present at that lecture questioned his assumption, suggesting that emotive decisions of that kind would be just the type of decision-making that parents would really prefer to be taken out of human hands.
The legal community will need, as technology develops, to decide which kinds of advice and decision-making should, and which should not, be undertaken by a machine.
I look forward to hearing and learning from the contributions which others will make to this Workshop. AI will inevitably make profound changes to the way in which we, as lawyers and judges, go about our business. How best to respond to the ethical and practical challenges we face is a subject on which we can all learn and profit from the experience and collective wisdom of other jurisdictions.
Richard Salter KC
Treasurer 2025
References: