Two Minnesota attorneys admitted they submitted legal filings that cited fake cases created by artificial intelligence. Both lawyers said they used AI tools to help draft legal briefs but failed to verify that the cases the AI suggested were real. Judges in the state have now warned the legal profession about the risks of relying on AI without verification.
Judges Catch Fabricated Citations
Hennepin County Judge Laurie Miller discovered the problem during a Friday hearing. Veteran attorney Frederic Knaak had written a six-page memorandum for a housing case. Three cases he cited to support his argument did not exist. According to KARE 11, Judge Miller said she does not trust ChatGPT at this point.
Knaak told the judge he was relying on the tool for efficiency, not deception. He said the mistake mortified him beyond belief. He insisted there was no intent to deceive.
Pattern Emerges Across Courts
In September, Judge Christian Sande imposed a $5,000 fine on Minneapolis attorney David Lutz for a similar case. Sande also referred Lutz to the Minnesota Lawyers Professional Responsibility Board for potential punishment. Lutz admitted he used AI for efficiency and did not cross-reference the cases it provided.
A database tracking these incidents shows 134 other cases nationwide where attorneys cited fabricated case law generated by AI. The problem appears even more often among self-represented litigants, who use AI instead of hiring an attorney.
Experts Warn of Growing Temptation
Law professor David Larson teaches a course on artificial intelligence and the law at Mitchell Hamline. Larson believes attorneys can use AI ethically if they take proper precautions. He warns the temptation to use AI is so strong that people cannot resist it.
Judge Miller said she has seen at least one other case in the past year involving AI hallucinations. She noted that courts across the country are reporting similar problems and called for efforts to stop the trend before it becomes more widespread. Lawyers and paralegals use AI tools to improve efficiency, but chatbots sometimes generate made-up information, a phenomenon known as an AI hallucination.