89.9 FM Live From The University Of New Mexico

How should judges and lawyers use artificial intelligence?

LEILA FADEL, HOST:

Artificial intelligence tools like ChatGPT are a cause for both excitement and concern in all kinds of industries. That includes the courtroom. Already, there's been scandal over lawyers filing legal briefs in which ChatGPT cites fake cases. So the Florida Bar, the professional association for lawyers, set up a committee to create the nation's first guidelines for the use of generative AI in the practice of law. I spoke earlier with Duffy Myrtetus, who co-chairs that committee, and I started by asking him why guardrails around AI are needed.

DUFFY MYRTETUS: Principally, there is a concern about security and whether or not there is a risk that an ethical duty could be breached through the use of these technologies. Some of those concerns, I think, are being addressed as these technologies evolve from open systems to more closed systems that are secure. There are all sorts of anecdotal examples from around the country of experiences where attorneys have submitted something or filed a pleading with a court that includes a citation to legal authorities, for example, that don't exist. I think the first case we heard about was out of New York - the Mata v. Avianca case.

FADEL: So you are speaking with practitioners of law around the country, lawyers, and...

MYRTETUS: We've spoken to judges and lawyers, vendors, consultants, folks in academia. There's sort of a patchwork of results in both state and federal courts and in different bars around the country where they're trying to respond to the effects of generative AI. And it's complicated. It sort of ranges from the general ethical duties of lawyers to concerns about evidence, effects on the courts and clerks. So it's an evolving process. And we're trying to, you know, distill what's happening and prepare for the future.

FADEL: And how is the committee preparing for the future?

MYRTETUS: We've looked at a range of topics that include things like cybersecurity and evidentiary considerations - issues like deepfakes and manipulated material, whether that be audio, video or pictures - materials that might be submitted as evidence or filed with a court and might not be credible.

FADEL: Should there be some type of ethical requirement that - to say, well, AI was involved in drafting this legal brief? Or should there be guardrails around that?

MYRTETUS: Yeah, there tends to be a reluctance to just immediately jump toward regulation. Florida, like most other states, has rules of procedure that impose certain ethical duties upon lawyers. And the question might be whether or not those are sufficient, or whether they might be modified or strengthened.

FADEL: How can it actually be something positive for lawyers and judges in the courtroom, especially when it comes to efficiency?

MYRTETUS: Yeah, there certainly appears to be fantastic opportunities for efficiencies. Tasks that may have been performed by more junior lawyers or staff - legal research, for example - it's clear that these technologies are able to achieve an efficiency that is incredible. It's fascinating. At the same time, it's frightening. So with those efficiencies comes sort of an increased obligation to provide oversight and make sure that using these tools doesn't lead to an inadvertent violation of some rule of procedure or ethical obligation.

FADEL: That's Duffy Myrtetus, co-chair of the Florida Bar's Special Committee on Artificial Intelligence Tools and Resources. Thank you so much for your time.

MYRTETUS: Thank you for having me. Transcript provided by NPR, Copyright NPR.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.