PALO ALTO -- Some San Diego judges are using artificial intelligence tools, including ChatGPT and Claude, to check math on child support orders and conduct quick legal research, according to Superior Court Judge Yvonne Campos, who cited crushing caseloads as the driving factor.
Campos told an audience at Stanford Law School that she had recently surveyed colleagues on their use of AI in the courtroom. She estimated the response rate at 10% to 12% of a judicial bench of 135 judges, plus one commissioner.
"Now, my colleagues professionally are doing the things that you might expect. They're starting to use either (Thomson Reuters) CoCounsel or (Microsoft) Copilot because we have the MS, Microsoft Suite, of items, they're writing orders, they're summarizing testimony, they're doing legal research," Campos said.
In addition to CoCounsel, a tool developed for legal applications, judges were using common AI tools available to the general public.
"Some tell me that they are using Claude for math to help them check the math on child support orders because they think that it's really good at math and they want something quick and dirty. Some of them are telling me that they're still using Gemini or GPT for quick legal research because the 10 minutes that it's taking (CoCounsel) to give them an answer, they don't have that kind of time."
Campos, who described state trial courts as "the ERs of the legal profession," said her docket can run anywhere from one to 130 cases in a single day. Her colleagues were dealing with similar pressures, meaning they prioritized technology that could improve efficiency.
Campos said some colleagues had expressed concerns about AI products not protecting their data or getting things wrong.
"People have told me they don't want the algorithms leaking stuff about them for free. They very much want to own their own prompts and whatever it is they're doing," she said.
Campos was speaking as part of a panel titled "Beyond Efficiency: Building Reliability into Courtroom Use of Artificial Intelligence" at Stanford Law School's CodeX FutureLaw Conference on Thursday. Other panelists included retired Superior Court judge Erica R. Yew and Shlomo Klapper, founder and CEO of legal AI company Learned Hand, which recently inked a pilot agreement with Los Angeles County Superior Court.
Yew, who served on the Santa Clara County Superior Court until January and is now CEO of the American Leadership Forum Silicon Valley, struck an optimistic tone on the issue of hallucinated case citations, arguing the problem would diminish as tools become more practice-specific.
Deepfakes, however, were a different matter. "When we talk about deepfakes, the human in charge, or human in the loop, is almost going to be worthless because the technology is going to be so good that the evidence is going to be so difficult to parse out."
She argued that the more interesting question around deepfake evidence is who can challenge it. Attorneys with well-heeled clients can hire experts to examine metadata and other technical markers, she said, "but self-represented litigants don't have that capacity to necessarily understand to express to the court the concern or the funds to hire an expert" to identify deepfake evidence.
"And so, I think at some point, the court's need to partner with universities for experts to have a bank of people that can be there if needed to be called upon to advise the court if this is something that's reliable or not."
Yew also expressed concerns over what she called "bootstrapping." A forged property title, for instance, could be filed with a county recording office, certified as a public record, and then introduced in court as an inherently reliable document -- with neither the lawyers nor the parties knowing it was AI-generated. That scenario, she said, is one courts will have to grapple with.
The panel also discussed how to deal with attorneys who don't check AI work and end up submitting briefs with incorrect information, most commonly bad citations.
Answering an audience question on whether appropriate mechanisms existed to punish parties who had inappropriately used AI tools, Campos said that the existing suite of evidence sanctions worked, but added that she applied a "cost-benefit analysis" to these.
"You can report a lawyer to the State Bar that can be sanctioned. There's multiple different tools that the courts have to keep trying to motivate people to keep them honest," Campos said. "I deal with so many people. I try not to personalize it. I don't want somebody deciding that they've been antagonized by me and that they want to come after me. Because these are some real life-and-death issues.
"I've never sanctioned a lawyer, although I could have a number of times. Why didn't I sanction a lawyer? Because I knew that reciprocally they could send me to the Commission on Judicial Performance, whether it was valid or not. And I didn't want that headache. And so, I've adopted the philosophy of have a lot of grace and believe in karma."
Jack Needham
jack_needham@dailyjournal.com