By Patrick J. Glinka, Esq.

Imagine this: You’ve got a super-smart AI assistant. You are swamped, so you ask it to help you write a very important legal document. The AI, in its infinite digital wisdom, spits out a beautiful brief, complete with fancy-sounding case names and official-looking citations. You high-five your laptop, file the document with the court, and sit back, feeling clever.

Then you get a call from a very unamused judge. It turns out your robot lawyer is a bit of a fibber. It just made things up and passed the work off as accurate.

This isn’t a sci-fi plot; it’s a real and growing problem in courtrooms across the country. As people turn to tools like ChatGPT to help with legal work, judges are seeing a surge in filings that feature “hallucinated” cases—citations that look real but point to court decisions that don’t actually exist. And as a recent string of cases shows, judges are not taking it lightly.

The Big Oops: It’s Not the AI, It’s You

The number one rule judges want you to know is this: The problem is not using AI. The problem is trusting it blindly. When you sign your name to a legal document, you are telling the court, “I stand by this. The facts are true, and the legal arguments are real.”

As the Missouri Court of Appeals put it in Kruse v. Karlen, 692 S.W.3d 43 (Mo. Ct. App. 2024), filing a brief full of AI-generated nonsense is a “flagrant violation” of your duty to be honest with the court. A New York court agreed in Will of Samuel, 82 Misc. 3d 616 (Sur. Ct. 2024), scolding a lawyer for not taking even “minimal steps” to check whether the AI’s work was factual. It’s like turning in your friend’s homework without even reading it—you’re still responsible for what it says.

The Judicial Smackdown: A Menu of Consequences

So what happens when you get caught passing off robot fiction as legal fact? Judges have a whole menu of penalties they can impose, depending on how bad the violation is.

  • The “Pay Up” Penalty: This mistake can be costly. In the Kruse case, a litigant was ordered to pay the other side $10,000 for the trouble they caused.
  • The “Game Over” Ruling: In some of the most dramatic cases, judges have simply thrown the entire case out. In Gutierrez v. Gutierrez, 399 So. 3d 1185 (Fla. 3d DCA 2024), a Florida court dismissed the whole appeal. In New York’s Idehen v. Stoute-Phillip, 2025 N.Y. Slip Op. 50816(U) (Civ. Ct., Qns. Cty. 2025), the judge tossed the proceeding because the lawyer’s arguments were propped up by seven fake cases.
  • The “You’re in Real Trouble Now” Referral: The Minnesota Tax Court in Delano Crossing 2016, LLC v. County of Wright, 2025 WL 1539250 (Minn. Tax Regular Div.), decided not to fine the county attorney’s office, but instead did something arguably worse: it referred the lawyer who signed the brief to the state’s professional responsibility board.
  • The “On Probation” Order: Some judges are getting creative. A Delaware court in An v. Archblock, Inc., 2025 WL 1024661 (Del. Ch. Apr. 4, 2025), put the litigant on notice: any future document he files using AI must come with a signed “certification” confirming a human has checked it for accuracy.

Are Judges Being Unfair? Not Really

Now, courts are not completely heartless. They have shown some understanding, especially for regular folks representing themselves who might not know the risks of AI. In cases like Al-Hamim v. Star Hearthstone, LLC, 564 P.3d 1117 (Colo. App. 2024), and Herigodt v. Louisiana Department of Transportation and Development, 2025 WL 732298 (La. App. 4 Cir. 3/7/25), judges gave pro se litigants a stern warning for their first offense without bringing down the hammer.

But that patience has a very short shelf life. The court in An v. Archblock noted it might have been more lenient if the non-lawyer filer had just admitted his mistake. Instead, he “doubled down” and insisted his fake cases were real, which led the judge to deny his motion with prejudice.

Lawyers get no such leniency. The message from the bench is clear: you are trained professionals. You are officers of the court. You, above all, are supposed to know better.

The Takeaway: Don’t Blame the Robot

AI can be an amazing assistant. But it’s just that—an assistant. It is a powerful tool, not a replacement for your brain and your duty to be truthful and accurate. The ultimate responsibility for every word, every argument, and every single citation rests with the human who files the document.

So, go ahead and use AI to get started. However, before you sign your name, remember the simple rule that could save you from disaster: Trust, but verify. Because when a judge is staring you down, “the robot did it” is one legal argument that will fail. Every single time.


About the Author

Patrick J. Glinka is Special Counsel at Bradley, Gmelich + Wellerstein LLP. Mr. Glinka has defended clients in a broad range of civil litigation matters, including medical malpractice, products liability, premises liability, and amusement and recreation liability. His experience includes many first-chair jury verdicts, as well as appellate advocacy. Mr. Glinka is a graduate of the National Institute of Trial Advocacy and a member of the International Amusement & Leisure Defense Association and the Defense Research Institute.