Judges warn lawyers after ChatGPT makes up court cases


Two Minnesota attorneys have admitted to submitting legal filings that cited fake cases created by artificial intelligence. Both lawyers said they used AI tools to help draft legal briefs but failed to check whether the cases the AI suggested were real. Judges in the state have now warned the legal profession about the risks of relying on AI without verification.

Judges Catch Fabricated Citations

Hennepin County Judge Laurie Miller discovered the problem during a Friday hearing. Veteran attorney Frederic Knaak had written a six-page memorandum for a housing case. Three cases he cited to support his argument did not exist. According to KARE 11, Judge Miller said she does not trust ChatGPT at this point.

Knaak told the judge he was relying on the tool for efficiency, not deception, and said the mistake mortified him beyond belief. He insisted there was no intent to deceive.

Pattern Emerges Across Courts

In September, Judge Christian Sande imposed a $5,000 fine on Minneapolis attorney David Lutz for a similar case. Sande also referred Lutz to the Minnesota Lawyers Professional Responsibility Board for potential punishment. Lutz admitted he used AI for efficiency and did not cross-reference the cases it provided.

A database tracking these incidents shows 134 other cases nationwide in which attorneys cited fabricated case law generated by AI. The problem appears even more often among people who represent themselves in court and use AI instead of hiring an attorney.

Experts Warn of Growing Temptation

Law professor David Larson teaches a course on artificial intelligence and the law at Mitchell Hamline. Larson believes attorneys can use AI ethically if they take proper precautions, but he warns that the temptation to use AI is so strong that people cannot resist it.

Judge Miller said she has seen at least one other case in the past year involving AI hallucinations. She noted that courts across the country are reporting similar problems and called for efforts to stop the trend before it becomes more widespread. Lawyers and paralegals use AI tools to improve efficiency, but chatbots sometimes generate made-up information, a phenomenon known as an AI hallucination.
