The Social Context of the Law: AI: Risks and Benefits
Taken from a panel discussion held on 19 February 2024 between Master Kay Firth-Butterfield (lawyer, professor, author and CEO of Good Tech Advisory) and Master Robert Buckland (Senior Fellow at Harvard Kennedy School [AI and Justice], Lord Chancellor and Justice Secretary 2019–21, Solicitor General 2014–19), moderated by Master Anneliese Day, Fountain Court Chambers.
Anneliese Day KC: We are incredibly fortunate tonight to have two really esteemed speakers. First is Kay Firth-Butterfield, CEO of Good Tech Advisory, one of just four 2024 Time Magazine Impact awardees for her work on responsible Artificial Intelligence (AI) since 2011. That is how ahead of the curve she is. Kay is the former inaugural head of AI at the World Economic Forum, and she really is one of the foremost experts in the world on the governance of AI.
Secondly, we have Sir Robert Buckland, who was called to the Bar by The Inner Temple in 1991 and spent nearly 20 years in practice, specialising in criminal litigation. He was a member of the Attorney General’s list of prosecuting counsel from 1999 to 2010 and was appointed a part-time Crown Court Circuit Judge in 2009. He started his political career in 1987 as a constituency campaign manager, and in 2010 he was elected as the MP for South Swindon. And, especially relevant tonight, since 2023 Sir Robert has been a Senior Fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School.
Kay Firth-Butterfield: AI will transform our lives, as we know, both individually and societally, in ways that we know now and in ways that we cannot imagine. But will it be for the good? I will posit the concept that actually in many countries, it will be the lawyers and judges and regulators in whose hands this decision lies. It will also be in the hands of company lawyers and ethics officers – responsible AI ethics officers. There are pivotal decisions to be made now which will shape our future and that of our children and beyond. So, put simply, we need to work out now if we humans want to fly the plane, as they do in Star Trek and Star Wars, or do we want to hand it all over to AI?
Is AI our helper, augmenting our work? Or does it take over? And how much of our work does it take over? Different societies will have different answers. For example, a society with a coffin-shaped population, such as the UK – the shoulders being the number of old people compared to the number of young people down at the foot end – may want more AI carers for old people. Or maybe they will choose to have more people immigrate into the country to provide that work. These are urgent choices which need policy decisions.
The absence of regulation specific to AI in the UK means that it will be regulators, your advocacy and the courts which will shape our relationship with AI. Despite the efforts of the Institute of Electrical and Electronics Engineers, the United Nations, the World Economic Forum, the Organisation for Economic Co-operation and Development (OECD) and many others, there is no global governance of AI in sight. However, there are existing treaties that can inform our work: various international human rights instruments and the Geneva Conventions can be used when we think about lethal autonomous weapons and whether to use them, and whether we need humans IN the loop, or ON the loop.
Various national governments and states have introduced their own legislation, creating what looks like a patchwork. The EU is about to adopt major legislation, the AI Act. Brazil is going to have legislation about AI. Many states in the United States have their own legislation about AI, but the federal government does not. The UK has the Online Safety Act, and some countries have introduced non-binding directives, yet others have mandated their existing regulators to deal with new issues.
We then have the courts, which at the moment are most notably dealing with copyright cases and the major societal impact of whether AI should be trained on data, regardless of copyright, or whether some of the richest companies in the world should pay for such a resource. Here, I think it is important to note that, for example, Spotify and Netflix pay for their copyrighted content.
The societal and legal discussion is essentially, “Should we put responsible innovation ahead of innovation without any protections?” Many are afraid of losing out economically, and that is a real societal problem when a country’s economy is growing very slowly. This brings me to the hype versus the reality of AI. I was in Davos in January at the World Economic Forum Annual Meeting, and everything was about AI, everything. There is so much hype, so many people are trying to sell you AI, but, in my experience with many companies around the world, we are generally at the ‘proof of concept stage’.
There are companies, though – notably the tech providers, social media and financial services – which are well beyond that in their use. So, let us look at some of the protections that I think the existing law already affords. I think we can deal with some of the things like accountability and fairness by good procurement. Discrimination in AI – we call it bias, but as lawyers, you know discrimination when you see it. You know it in employment law. You know it in gender and race contexts, in jobs and hiring. You can use that existing law against discrimination by algorithm.
Online harms and deepfakes: in Australia, the eSafety Commissioner has done a lot of work thinking about deepfakes in pornography and has actually successfully brought some prosecutions in that area. In the UK you have good legislation in this context, but it is never enough.
Criminal law: punishing people who steal – we understand that as lawyers. There was a recent case involving a poor employee in a bank in Hong Kong who was asked to cut a cheque for $25 million and said, “No, I can’t do that.” So, the CFO said, “Well, let’s have a video conference.” They had a video conference. Everyone but the employee who had to cut the cheque was a deepfake in that video conference. What we are seeing are new ways of committing the same crimes we have faced before.
Children and vulnerable adults: I do think this is an area where we have good law, but we need better law. The Federal Communications Commission (FCC) in America has been thinking about deepfakes as well. There was a deepfake of President Biden phoning up everybody in New Hampshire to say, “Don’t bother voting in the primaries. It’s all right, don’t bother voting.” The FCC has used the Telephone Consumer Protection Act, which dates back to 1991, to ban that behaviour. Does any legislation in the UK protect children in the context of AI? This is particularly relevant for parents buying so-called educational AI-enabled toys. The labels do not say how data from the child will be used, nor what the social impact of the toy on the child might be. They cannot do the latter because we do not know, so essentially parents are beta-testing these devices on their children.
And then we get to text-to-video, and Sora, OpenAI’s new text-to-video model, which has not been released yet; we need to continue being vigilant about how we use these tools. My favourite story at the moment on that is the ant video, which you may have seen. The ant has only got four legs – and I do not think ants have only four legs. Fortunately, there is an alliance led by Adobe, called the Content Authenticity Initiative, which focuses on systems to provide context and history to digital media. Knowing that something has been faked, that pictures have been faked, is absolutely crucial as we move forward.
Having talked about today, I just want to take us back to 1817, when Jane Austen’s protagonist, Anne Elliot, in Persuasion said to the man she was arguing with that she would not call books into aid when defending women’s emotions, because they were all written by men. Well, we had a data problem then, and I would argue that we still have the same data problem. For example, our daughter is a pilot in the US Air Force. Only about 6.5 per cent of pilots are women, and only about 3 per cent of fighter pilots are women. The bulk of data on heart attacks is from white American men over 55. In fact, just as when Anne Elliot was talking, the majority of data is from white men based in the Global North. I do not say that pejoratively; it is just that white men have had the pen longer than the rest of us, and so, of course, there is more data that has been created by them.
I hear you saying, “Well, that’s just how the world works, isn’t it?” But I feel that the potential, and peril, of AI is that it can take us to a new way of thinking about our world, a new way of including everyone. We really need to start now. Indeed, next year, it is estimated that we will produce more data every 15 minutes than has ever been created before. We really need to address this problem now. That data will be part human-created and part AI-created – what we call AI cannibalism, where AI creates data which falls back into the data lake of generative AI and is then used by AI again. If we are not careful, it will further marginalise women and minorities, and that is not the society we want.
So, what role does the law have to play? Well, in America the Federal Trade Commission is trying to prevent companies changing their terms and conditions to harvest yet more data. We have far to go. I truly hope that we can all come together and make AI a safe and equitable tool for everyone, to advance humanity as well as our economy. Maybe we can do that. But you as lawyers, judges and regulators, and as lawyers in companies, are going to be at the forefront.
Sir Robert Buckland: When Kay was talking about Sora, I kept on thinking about Aldous Huxley – the drug in Brave New World was, of course, called Soma. I am sure that is an unintentional echo by the makers of this new product, but still, for those of us who are literary-minded, it does perhaps start this lecture on a somewhat sinister note – which I do not intend it to be at all, because my basic thesis is that machine learning will be an incredible tool for us. However, it has to be a partner in our labours, a co-pilot, if you like, ultimately helping the human decision-maker, the human advisor, to come to a conclusion in an efficient, safe, ethical and just way, and to advance access to justice into corners of our society and corners of our human activity.
It is that question of trust, that human assumption that everything is going to be all right, which frankly is not going to be good enough when it comes to how we maintain, regulate and, yes, monitor the use of artificial intelligence. It is all very well to set rules and principles in year one, but unless those data sets are regularly checked and audited to deal with the sort of biases that Kay has been talking about – and believe me, they exist – the evidence is there that the use of large language models in particular will entrench bias, particularly racial bias, in the criminal justice system.
But here is some good news: I think in some activities in law and justice, we do not actually need database models. I am thinking in particular of the sort of judgements that judges and tribunal chairs have to come to every day of their working lives, particularly with the proliferation of sentencing guidelines here in England and Wales and the increasing focus, particularly in crime but also in other areas of law, on the sort of decision-tree approach to legal judgements. Decision trees do not need populating with data sets, because they should be entirely free of that and focused upon the task in hand – taking as input, perhaps, nothing more than the relevant sentencing guidelines and walking the judge through the steps that they have to remember, in many, many cases, right through the criminal calendar.
So, I think we can perhaps eliminate some of the genuine, legitimate concerns we have about the dangers of unsupervised and unaudited data sets. What I am saying is that the decision tree, the process, can be sped up. This means more cases can be dealt with in a quicker way, but at the same time it still leaves that essential element of human discretion, which is the essence of human judgement.
I think judgement falls into two main categories: the sort of practical judgement that I have just taken you through (applying sentencing guidelines, perhaps using a decision-tree model), where a judge is using a fixed framework within which they come to a defined conclusion; then the other type of judgement, which is reflective judgement (when you are listening to a witness, assessing their credibility and working out whether you can rely upon their version of events as opposed to another conflicting witness).
It is in that area of reflective judgement that I think we need to start with first principles, which is to hold on to that essential and indefinable human element that I think any form of ‘assistive’ technology will not be able to fully replicate. I think the first question is, of course, the explicability of judgements, and why it is that judges or juries come to their view. Of course, juries never give their reasons.
But then the machine is, of course, inscrutable as well, is it not? I am told that the technology is there that can allow machines to explain why it is they come to conclusions, and that may well be an answer to that conundrum. But I think, fundamentally, there still remains a question of public confidence and trust. In other words, whilst we, as members of the public, might be more than happy for PayPal to resolve our eBay disputes (which is happening in millions of transactions every year, and all being done by algorithm, by machine learning), are we going to be so happy if a machine tells us we are losing residence of our children, or contact with our children, or we are going to lose our liberty, because of a finding of guilt?
Justice is the product of the evidence, the material, that we all consider, whether we are giving advice as lawyers or making findings of fact or law as judges, from the outside world. When I started at the Bar 30 years ago, an assault case would perhaps be not much more than an inch or two of paper. Now, one is expected to be able to analyse the contents of a smartphone, which we know is the paper equivalent of the Eiffel Tower, to put it conservatively. The sheer proliferation of extrinsic factors generated in very large part by machine means that we as lawyers cannot put our heads in the sand and say, “Everything’s fine as it is, thank you very much, the world in 2024 does not affect the way that we do business in the courts.”
I am glad that the Judicial College has issued guidance to judges and tribunal chairs about the risks and benefits of various types of AI, and indeed has done a good job in defining the different types of artificial intelligence. Judges are already, with some good sense and a little style, dealing with this problem. For example, the Tax Tribunal judge who dealt very efficiently at the end of last year with Mrs Harber’s case, in which she was contending that she was not liable for capital gains tax: she was a litigant in person, and she – and the tribunal accepted this – unknowingly relied upon a number of authorities that did not exist. They were figments of the imagination of a large language model. There were cases that approximated to the names Mrs Harber was citing, but the outcomes of all those cases were entirely opposite to the ones that she had submitted to the tribunal. The tribunal judge dealt with it dispassionately and calmly, but it took a long time to deal with the fact that hallucination had been the order of the day.
Now, it would be tempting for all of us to fold our arms and say, “Hallucinations mean that we should never use large language models.” But we know that that problem will be cured, and that we will not be worrying about that issue in two years’ time. Instead, we should be lifting our eyes above the technological limitations, above this worry about whether your job is going to exist in five years’ time, and instead focus upon what makes us lawyers and judges – the ethical code that binds us together and sets us apart from just a person giving advice – and the role that we are going to continue to have, not just in deepening the quality of justice, but remembering that these tools give us an invaluable opportunity to broaden access to justice for many, many more people.
And that is why I think we should be excited about the opportunities that we have. But in that excitement, we must remain very open-eyed about the dangers of bias and the dangers of deepfakes, which are probably wafting around some of our courts now already. That is why Kay’s message about content authentication and preventative work is going to be key for all of us in the legal profession and beyond.
My final message is that all of us – judges, lawyers – have a responsibility to learn more about this area, not to be fazed by the technology, but to focus upon our ethics – what it is that we do and why – and then, with that confidence, to engage with the issues that these various types of machine learning now pose to us. If we take AI in that way, then we will find that it provides us with more opportunities than risks. At the same time, using our good judgement as lawyers, we should remember that the human element and that partnership concept should lie at the heart of the way it comes in and is used, both intrinsically in our system and extrinsically, as it inevitably influences the material that we as lawyers deal with in our daily work.
For the full video recording:
innertemple.org.uk/airisks