Archive.fm

WBCA Podcasts

Discovering The Law

Lucy Rivera speaks to attorney Jeffrey Woolf, Assistant General Counsel to the Massachusetts Board of Bar Overseers, about artificial intelligence, particularly generative AI. They discuss its uses and implications for the practice of law.

Broadcast on:
27 Aug 2024
Audio Format:
other


(upbeat music) - Welcome to Discovering the Law. My name is Lucy Rivera, and this episode can be viewed at www.discoveringthelaw.com. Today, as promised, we are bringing back Jeffrey Woolf, Assistant General Counsel to the prestigious Massachusetts Board of Bar Overseers. And by acclamation, welcome to our program. - Thank you, thank you. - Welcome back. - Attorney Woolf is here today to discuss with us an innovative new wave, artificial intelligence, and its impact on the law. Attorney Woolf, what can you tell us about artificial intelligence? What is it, and how does it affect my lawyer's representation? - Okay, so there are different kinds of artificial intelligence, or AI. One is something we're all used to: when you do a Google search, you're using AI to come up with an answer. The next step we have seen is what's called generative artificial intelligence, or generative AI. Generative AI is a kind of computer program that will actually generate content rather than find an article for you. It will create something. For lawyers, that content could be a legal brief for a court case. It could be an article. It could be a list of ideas for a client. People can even use generative AI to create music or artwork or speech. So the extent to which generative AI can benefit you as a client depends on what your lawyer is doing for you. Currently the best and safest... well, let me stop for a second. I should have done this at the start. I have to give my usual disclaimer. You and I, as government lawyers, know we have to have disclaimers. - Yes.
- So what I have to say today are my own views and don't necessarily reflect the views of the Supreme Judicial Court, the Commonwealth of Massachusetts, or the Board of Bar Overseers. - Well said. - Okay, so thank you. Sorry I didn't start off with that. But in my view, the best and safest uses of generative AI in legal matters right now are to do things like summarize documents, such as deposition transcripts, or to come up with ideas based on certain content. Generative AI can also draft simple legal documents, contracts, some motions. You can use it to draft emails or letters. You can do this in a matter of seconds. Interestingly enough, if you've got a case, it could also, for example, calculate the settlement amount or what the likely outcome of the court case would be. That's the upside of generative AI. The downside is that generative AI has no morality, no ethics built into it. If you ask it a question, it will give you an answer based on its search of the internet. Now, you know that there are all kinds of things on the internet, and not all of them are true. So what happens is that generative AI may come up with an answer for you, but that doesn't mean that what it tells you is true. And they actually have a name for this: they say that generative AI is hallucinating when it makes up an answer to a question. To give you another example, suppose you've got a small child, and you ask the child, why is the grass green, or why is the sky blue? The child will give you an answer about why the grass is green, but it may have nothing to do with the chlorophyll in the grass, which is why plants are green. It may have nothing to do with the suspension of particles in the air that are hit by the light and reflect it, which is why the sky is blue. But the kid will give you an answer. And that's a similar kind of thing with AI. It'll give you an answer, but it may have nothing to do with reality.
So that's sort of where it takes us. The problem, the downside of this, is that sometimes lawyers have asked AI to write, for example, a brief for a court case. And AI complies; it generates the content, complete with citations to court cases to support the proposition that the lawyer is advancing. But AI has made them up. It didn't actually find them on some legal research database. It just fabricated the cases. In a couple of instances, the lawyers have said to AI afterwards, because remember, with ChatGPT you can ask a follow-up question, they've said, are these cases real? And AI said, oh yes, the cases are real. And they're not. And so the lawyers get in trouble, because instead of checking this themselves, they've asked AI first to write it, and then they've asked AI if it's true, and AI said it was, when it wasn't. And by the way, there have been a couple of cases like this. The lawyers have been fined by the court for submitting briefs that have fabricated case citations. So at least for now, lawyers can't really rely on AI to do some kinds of things. When it comes to, for example, writing a brief that has to have citations to the law to support the proposition, by the time the lawyer gets done checking what AI has generated, they might as well have done the research themselves. Eventually it'll catch up, but right now, that's the problem. - So what you're saying is that AI, artificial intelligence, is devoid of common sense. - Yes. - Tell us a little bit about ChatGPT. - Okay, so ChatGPT is a kind of generative artificial intelligence. It's like a virtual assistant that can generate content similar to human speech or human writing. The difference is that it allows the user to refine the questions and steer the conversation, if you will, between the user and AI toward a desired format and content. So you get to ask successive questions, but it's not like asking a new question every time. If you do a Google search, you ask a question.
It's like a new question all over again. With ChatGPT, you've asked a question, and now you've got a context. And then the next question is going to drill down on that context, to refine or fine-tune the answers in response to the successive feedback between the user and ChatGPT. So, like other kinds of AI, it'll generate content, but you're allowed to drill down and get more specific about what you're looking for. So that's the difference between ChatGPT and other kinds of generative AI. - And in that vein, what are the consequences for those lawyers who use generative AI to draft their work and use fake citations, fake cases, fake precedent in their representation? - Okay, so I mentioned that the lawyers got fined, but let me explain why. You and I as lawyers are bound by certain ethical rules. One of them is that we have to be truthful to the court about the facts and the law that we submit to the court. And this is true whether the lawyers are using AI or not, right? They can't make up facts, they can't misstate the facts, and they can't misstate the law. Well, if they rely on AI and AI gives them false case citations, it's the same; it doesn't matter whether the AI did it, or the lawyer did it, or the lawyer's associate or paralegal, or the client gave the lawyer the wrong information. The lawyer is still responsible for what they submit to the court. So for one thing, they can get fined by the court, and this has happened in one case in Massachusetts and another in New York, where they submitted a brief written by AI and didn't check the citations that AI generated, and as I said, AI fabricated them. So that's what happens.
And the other thing is that the lawyer can have disciplinary charges filed against them, because we as lawyers are governed by the rules of professional conduct that, among other things, make us responsible for what we say to the court, what we say to our clients, what we say to opposing counsel, what we say to a third party, such as a witness. You can't lie to a witness about who you are or who you represent. So AI, generative AI, and false statements generated by AI are all in the same basket of misstatements to the tribunal, to the court, and it can be an administrative proceeding; it doesn't have to be a trial court or an appellate court. I want to add that lawyers and their clients get into trouble if the client fabricates something, whether they make something up, or they alter a document, or they destroy a document. We have an adversary system of law: there's our side and there's the other side, and there could be several sides in a civil case. And as the saying goes, the truth will out. The fact that the other side gets to poke at your case means that if a lawyer is making a misstatement, it's very likely that the other side will ferret it out. There was a case in Ohio several years ago, and this is actually interesting, where an assistant prosecutor thought that a criminal defendant's alibi witnesses were lying. And so what he did was create a fake Facebook persona and then interact with the witnesses on Facebook, and he did not tell the witnesses who he was. He told them he was somebody else. Eventually it came out that he'd done this, and the consequences for him were that he got fired and then suspended from the practice of law. It also created a lot of problems for the state of Ohio. They had to disqualify the whole district attorney's office. The case actually had to be tried by the attorney general's office. It created a lot of problems.
But as to the lawyer, which is really the question you asked me, the lawyer got suspended from the practice of law, and of course he lost his job in the prosecutor's office. So the takeaway is that not only should lawyers not fabricate stuff or fail to check on what AI submits, but clients shouldn't fabricate stuff or use AI to fabricate things either. So that's it. - I love your quote from The Merchant of Venice: the truth will out. - Yes. - And Attorney Woolf, what are some measures or steps that the courts can take to prevent lawyers from using fake precedent and fake cases or citations from generative AI? - So in theory, the courts shouldn't have to do anything, because there are already disciplinary rules in place that say lawyers aren't supposed to make false statements of fact or law to a tribunal. And the courts also have rules, both civil and criminal rules, that basically mirror this: lawyers are responsible and can't make false statements. But nevertheless, some judges have gone a step further. For example, they've had the lawyer sign a statement: I didn't use generative AI to write any part of this, or, if I used generative AI to write part of this, I personally checked everything that generative AI produced. That's in some other jurisdictions, not in Massachusetts. So they've done that, but other legal commentators have said, well, you know, that's not really necessary, because there already are rules in place. But the short answer to your question is, some courts have said to lawyers: either you don't use AI and you sign something saying, I did not use AI, or you sign something saying, if I used AI, I have personally checked everything that AI has submitted. They're all trying to avoid the problem we're talking about here, which is hallucination by AI. - However, this is not something uniform. - Correct, this is just in a couple of jurisdictions right now.
And as I said, legal scholars have written law review articles saying, well, it's not really necessary, because we already have these rules in place. But there are judges who are trying to be proactive about the problems that have surfaced with AI. - And Attorney, with all this AI, it seems to be so easy. Would lawyers still be able to bill their clients for this new, artificial way to obtain information, albeit fake? - Okay, well, all right. So this gets back to the fact that what we call generative AI is not the first use of AI. AI has been around for a long time. I mean, a search engine like Google is a form of artificial intelligence. And there are several legal research companies, legal publishing companies; Westlaw and Lexis are the two biggest ones, I think. And you can use AI to search for cases. You give them a proposition, you plug it in, either as a Boolean search or a natural-language search, and Westlaw or LexisNexis will come up with some cases. So it's already being used. And that's not generating content. That's simply searching the realm of reported case decisions and giving the lawyer cases that are responsive to the search parameters the lawyer has used. So that kind of AI is perfectly proper; it comes up with real cases and saves a lot of time and a lot of money over what lawyers had to do 50 years ago, when you had to do this research manually and go through books and look through cases. But the lawyer is still responsible for the relevance of the cases they cite and the accuracy of what the cases say. So the short answer is yes. As to your question about billing for using AI: for example, they can bill you for using Westlaw or LexisNexis if it's in the fee agreement; it has to be in the fee agreement. But there's an exception to this in Massachusetts, as there often is. - Oh? - Which is that the lawyer can use AI to do the research and charge the client for it.
But the lawyer can't charge the client for coming up to speed on a new area of law. So even if I were in private practice and went to a client and said, you want me to do this, I don't really know that area, and I'll come up to speed on it, but you're going to pay for my learning curve, the answer is that in Massachusetts the lawyer is not allowed to charge for that, even if the client agrees to it. So that's not allowed in Massachusetts. - And speaking about learning something, can artificial intelligence be used to create art? That would be something to learn. - Oh yes, it can be used to create art, it can be used to create music, and it can also be used to create speech. The problem, around the country, is, well, like I said, AI has no morality. So it just does what it wants, and if you ask it to create something, it may take something that somebody else has created and then offer it up to you as something that it has created. And there is a concept in law called fair use, and I really can't get into the whole thing; it's a copyright concept. But fair use allows you to use, for limited purposes, something that has been copyrighted, that somebody else has created, without having to get permission from them first. So if you go on the internet and you see a picture you like, you can print it out, you can hang it on your wall. But what you can't do is take that other person's artwork that they've created and then sell it. So fair use has some limits. It's used by search engines, obviously. Fair use covers parody, criticism, news reporting, research, scholarship. I mean, if you quote something that somebody else wrote, with attribution, that's fair use. So within certain limits, you can use generative AI to create something, but if it doesn't really create it, if it takes something that somebody else has created, then you've got a problem.
The other problem is that if it takes something that someone else has created and alters it a bit, how much is enough? Putting AI aside for a moment: if somebody invents something, and somebody else says, I've got an idea, I'll take what they've invented, I'm going to tweak it a little bit, and now I've got this new thing, then the first person says, no, you're infringing on my patent, and the second person says, no, I've changed it enough, I'm not infringing on your patent. It's the same thing with AI. If it changes something, does it change it enough that it's no longer the first person's creation, whether that's artwork, a patent, music, whatever? - And it has become original. - Right, so at some point it becomes original. But the other interesting thing is, AI can't hold a patent; it's not a person. You can file for a patent or a copyright on something that you come up with, but AI is not a person. How is AI going to file for it? So if you ask AI to create something for you, can you copyright it? And this is another thing. In courts around the country, there are a lot of lawsuits over several of these issues: over fair use, over what counts as changing it enough, over who's got the rights to what AI quote-unquote creates. So this is going to go on for years. But as AI progresses, we'll see more of this stuff. In terms of generative AI, we're just really starting to see what it can do. I mean, eventually it'll get better, presumably, and be of more assistance. But I've read articles in law reviews where the author has said, "Here's the paragraph that I gave AI," then told it to write an article, and then shows the article that AI created. So it's going on now. - Fascinating, and there's going to be lots of room for litigation. - There is, there is, certainly. - We have about five minutes.
I wonder if you would like to talk a little bit about ethics, maybe speaking of prosecutors and how they're using AI, and some takeaways. - All right, so one of the things there have been cases about around the country is this: now, when lawyers give closing arguments, everybody wants something visual. So they have PowerPoint presentations. People use PowerPoint presentations to present their case, and at the end of the case, when the lawyers are giving their closing arguments, they'll use PowerPoints to illustrate what they have to say. So there have been some prosecutors who have taken photographs, and you know where this is going. They've taken photographs of the defendant, and they've altered the photograph to make the defendant look more suspicious, more guilty. They've even put the word "guilty" across the screen. And this is really a kind of AI use, done with PowerPoints. So these lawyers, needless to say, have faced disciplinary charges for using this kind of technology to manipulate the visual materials, because it's not evidence; it's a closing argument, so it's not in evidence. But it's an illustration, and they've gotten in trouble for doing that, for putting "guilty" across the name, for altering the image of the defendant when they've displayed it. Or they've used AI to misstate, for example, principles of law. - Well, this sounds increasingly worrisome; interesting and worrisome both at the same time. So, we are almost out of time, with maybe a couple of minutes left. But, Attorney Woolf, what are your takeaways for us, the public, that you would like to leave us with in terms of AI, before we say goodbye?
- Okay, so, don't be so suspicious of AI that you say, I can't deal with it, we can't use it. But one should be very circumspect, particularly lawyers, about how they use AI. And if you use AI to create something for yourself, particularly if you are in the business of generating art, speech, or music, make sure that AI has not infringed on what someone else has created. That's really the concern for the consumer. Lawyers, you know, we're in a different bucket here regarding the use of AI, but the public at large has to be circumspect about the use of AI, so they don't run afoul of the limits of the fair use doctrine. And I'm talking not just about AI, but generative AI: when these generative artificial intelligence tools create something, people have to be very careful not to cross the boundaries of what they're allowed to do. Just the way they can't copy somebody else's music or art in the first place, they can't use AI to do it either. - They cannot take it as theirs, appropriate it, when it's something that was created artificially. - Right, but they have to be careful that AI hasn't done it without them knowing about it. It's one thing to go in and copy somebody else's work when you know you're doing it, but if you ask AI to do it, you may not know that AI has done this, and that's the problem people have to be careful about. - And for our audience and lawyers out there, you've heard it from the Assistant General Counsel of the Board of Bar Overseers. This is something to take heed of and be mindful about. We've come to the end of our program. Attorney Woolf, thank you so much for coming back and sharing with us your experience and knowledge about artificial intelligence, which is something we can't stop, but also something we need to check. - Yes. Well, thank you for having me. It's a pleasure to be here. - My name is Attorney Lucy Rivera, and this episode can be viewed at www.discoveringthelaw.com. Thank you for watching today.
(upbeat music)