A panel of five judges and a professor of law and computer science, all members of the Working Group on AI and the Courts of the American Bar Association’s Task Force on Law and Artificial Intelligence, have published recommended guidelines for the responsible use of artificial intelligence in judicial settings.

The guidelines, “Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers,” appearing in the forthcoming Volume 26 of The Sedona Conference Journal, are intended to provide a framework for judges and others who work in state and federal courts to navigate the rapidly evolving landscape of AI tools while maintaining judicial integrity.

The guidelines were written by Senior Judge Herbert B. Dixon Jr. of the Superior Court of the District of Columbia, U.S. Magistrate Judge Allison H. Goddard of the Southern District of California, Research Professor Maura R. Grossman of the University of Waterloo, U.S. District Judge Xavier Rodriguez of the Western District of Texas, Judge Scott U. Schlegel of the Louisiana Fifth Circuit Court of Appeal, and Judge Samuel A. Thumma of the Arizona Court of Appeals.

The guidelines emphasize that while AI can serve as a valuable aid in judicial functions, the ultimate responsibility for decision-making must remain with human judges. “An independent, competent, impartial, and ethical judiciary is indispensable to justice in our society,” the authors write, noting that “judicial authority is vested solely in judicial officers, not in AI systems.”

The document outlines fundamental principles governing AI use in courts, warning against over-reliance on technology that could undermine essential human judgment. Judges are cautioned about the risk of “automation bias” where humans trust AI responses without verification, and “confirmation bias” where users accept AI results that align with pre-existing beliefs.

“Judicial officers must maintain impartiality and an open mind to ensure public confidence in the justice system,” the guidelines say. “The use of AI or GenAI tools must enhance, not diminish, this essential obligation.”

Regarding confidentiality and privacy, the guidelines advise against inputting personally identifiable information, health data, or other sensitive information into AI systems unless there is reasonable confidence the information will be protected. Particular caution is recommended for AI tools used in critical decisions such as pretrial release or post-conviction consequences.

Against that cautionary backdrop, the guidelines provide a list of potentially appropriate uses for AI in judicial settings. The guidelines say gen AI may be used to:

  • Conduct legal research, provided that the tool was trained on a comprehensive collection of reputable legal authorities and the user bears in mind that gen AI tools can make errors.
  • Assist in drafting routine administrative orders.
  • Search and summarize depositions, exhibits, briefs, motions, and pleadings.
  • Create timelines of relevant events.
  • Edit, proofread, or check spelling and grammar in draft opinions.
  • Assist in determining whether filings submitted by the parties have misstated the law or omitted relevant legal authority.
  • Generate standard court notices and communications.
  • Perform court scheduling and calendar management.
  • Conduct time and workload studies.
  • Create unofficial/preliminary, real-time transcriptions.
  • Create unofficial/preliminary translations of foreign-language documents.
  • Analyze court operational data and routine administrative workflows to identify efficiency improvements.
  • Organize and manage documents.
  • Enhance court accessibility services, including assisting self-represented litigants.

The authors acknowledge the current limitations of generative AI, noting that “as of February 2025, no known gen AI tools have fully resolved the hallucination problem.” They stress that human verification of all AI outputs remains essential.

According to the article, the guidelines represent the consensus view of the working group members, but are not official positions of the ABA, the Law and AI Task Force, or The Sedona Conference.

The authors will present a free webinar discussing the guidelines on March 18, 2025, at 1 p.m. EDT. For details and registration information, click here.

Bob Ambrogi

Bob is a lawyer, veteran legal journalist, and award-winning blogger and podcaster. In 2011, he was named to the inaugural Fastcase 50, honoring “the law’s smartest, most courageous innovators, techies, visionaries and leaders.” Earlier in his career, he was editor-in-chief of several legal publications, including The National Law Journal, and editorial director of ALM’s Litigation Services Division.