

‘It Is About Trust’: What an Oklahoma Magistrate Judge’s Order Teaches Us About AI, Advocacy and Professional Courage

By Julie Bays

The most impactful court orders serve not only to resolve motions but also to provide valuable teaching moments. An Oct. 22 order from U.S. Magistrate Judge Jason A. Robertson in the Eastern District of Oklahoma does exactly that. It’s a careful, unsparing explanation of what went wrong when counsel filed briefs laced with fabricated citations and misstatements, and it’s also a roadmap for how lawyers should engage with generative AI without surrendering the duties that make advocacy trustworthy.

Judge Robertson sets the tone from the first page: "This ruling is not about technology. It is about trust." The judge reminds us, "Generative technology can produce words, but it cannot give them belief. It cannot attach courage, sincerity, truth, or responsibility to what it writes. That remains the sacred duty of the lawyer who signs the page."

The opening paragraph resonated with me on a deep, emotional level. Judge Robertson’s words powerfully captured the gravity of legal advocacy and the profound responsibility lawyers bear when engaging with emerging technologies like generative AI. His clarity and candor did not merely outline procedural missteps; they illuminated the ethical foundations that underpin the practice of law. By emphasizing trust, verification and the unwavering need for credibility, the judge offered more than legal instruction. He delivered a poignant reminder of the duty attorneys must uphold for the integrity of our profession.

WHAT HAPPENED AND WHY IT MATTERS

Across 11 pleadings, Judge Robertson identified 28 false or misleading citations (including fabricated and erroneous authorities) and found violations of Rule 11(b). The order is based on the systemic nature of the conduct rather than a single event, and it reflects the importance of ensuring that filed documents are authentic to maintain the integrity of judicial proceedings. As the judge puts it, “Rule 11(b) is the federal lawyer’s first oath in action … [It] demands that an attorney’s signature certify not creativity, but credibility.”

When opposing counsel raised the alarm, the problem was initially characterized as “clerical and formatting errors.” The judge rejected this, saying, “The problem was not form, it was falsity.” From there, the order walks through a clear framework that all of us can learn from: verification and inquiry, candor and correction, and accountability and supervision. Judge Robertson applies this framework to each lawyer and law firm involved.

The sanctions are measured but meaningful (public reprimands, monetary penalties scaled for responsibility, fee shifting and record-restoration measures). The lesson is explicit: "Artificial intelligence may explain an error, but it can never excuse one." And the closing line is one that I'll use in my CLE presentations going forward: "Before this Court, artificial intelligence is optional. Actual intelligence is mandatory."

FEDERAL RULES STILL SAY WHAT THEY’VE ALWAYS SAID

The fact that AI can draft a paragraph changes nothing about Rule 11 or the duties of candor, meritorious advocacy and evidentiary support. The judge's order anchors its analysis in Rule 11(b): truthful factual contentions, warranted legal contentions and reasonable inquiry. It then applies those duties to the realities of AI-assisted drafting. Verification is not optional. The signature on the filing is a personal warranty that the lawyer has read, checked and stands behind what was filed.

The judge’s framework translates effortlessly to everyday practice:

  • Verification and Inquiry: Look up every case in a trusted reporter or database (vLex Fastcase for OBA members, Westlaw, LexisNexis and OSCN). Do not rely on machine-generated citations or quotations – ever.
  • Candor and Correction: If you discover an error, fix it promptly and transparently. Candor after filing mitigates; minimization aggravates.
  • Accountability and Supervision: Responsibility travels up and across the team. Supervising lawyers and local counsel must ensure filings bearing their names are accurate. Firm policies matter.

HOW THIS CONNECTS TO OUR OKLAHOMA RULES OF PROFESSIONAL CONDUCT

Our ORPC already speak to every issue the order surfaces:

  • Competence: ORPC 1.1 and Comment [6] – Staying current on “the benefits and risks associated with relevant technology” includes knowing that generative models hallucinate and that their citations must be verified.
  • Candor to the Tribunal and Meritorious Claims: ORPC 3.3 and ORPC 3.1 – No false statements of law or fact, no frivolous arguments. If the tool invents it and you repeat it, you own it.
  • Supervision: ORPC 5.1 and 5.3 – Partners and managers must adopt and enforce reasonable policies, train teams and review work; lawyers remain responsible for nonlawyer assistants.

These are not new rules. They’re our familiar duties applied to a new workflow.

THE QUIET COURAGE OF ADVOCACY

Judge Robertson’s order is unmistakably practical, but it’s also moral. It insists that the practice of law is an act of trust and courage, calling for "the quiet, disciplined courage to stand for what is right when compromise would be easier." Machines can assemble the words, but only lawyers can believe in them. As the judge concluded, “Generative tools may assist, but they can never replace the moral nerve that transforms thought into advocacy.”

WHY DOES THIS KEEP HAPPENING WITH LAWYERS?

Many lawyers are under the false assumption that legal AI tools do not hallucinate. The branding may be different, and the datasets may be curated, but the underlying generative technology can still produce confident wrong answers. That includes invented citations, misread holdings and quotations that do not appear in the source.

This misconception often stems from a lack of familiarity with how generative technology operates. The rapid adoption of AI in legal practice can outpace the development of understanding and training around its limitations.

Busy practitioners may also see AI as a shortcut to efficiency, overlooking the critical need for manual review and validation. AI-generated content must always be independently checked and verified, or these errors will persist, and the responsibility for any resulting inaccuracies will ultimately rest with the lawyer, not the machine. 

A PRACTICAL PLAYBOOK (YOU CAN ADOPT TODAY)

To translate these principles into daily habits, consider implementing these practical steps in your legal practice. 

Establish a Firmwide AI Policy

Adopting a concise one-page AI policy, whether for a solo practice or a larger firm, helps set clear expectations and boundaries for responsible AI use. The policy should specifically:

  • Name the approved AI tools that have been vetted for security and accuracy. This reduces the risk of using unreliable or unsecure software.
  • Define “verification” by outlining the process for checking AI-generated content against authoritative sources, ensuring accuracy and reliability.
  • Prohibit the inclusion of client identifiers or confidential information in public AI models, thereby protecting client privacy and complying with confidentiality requirements under ORPC 1.6.
  • Require human review of all AI-assisted work before it is filed or shared, maintaining professional responsibility and accountability (ORPC 1.1, 5.1 and 5.3).

Utilize an ‘AI-Assisted Draft’ Checklist

Incorporate a standard checklist for both litigation and transactional matters to ensure the integrity and reliability of your work product:

  • Confirm each citation by cross-checking with official sources (e.g., court databases, statutes) to prevent reliance on fabricated or outdated authority.
  • Verify each quotation by reading the underlying opinion or source to confirm accuracy and proper context.
  • Restate key legal propositions in your own words after reviewing the source material, demonstrating understanding and avoiding parroting potentially erroneous AI output.
  • Review the entire document for fit, context and fairness to ensure the arguments are not misleading or taken out of context, upholding duties of competence and candor (ORPC 1.1 and 3.3).

Supervision and Ongoing Training

Continuous oversight and education are essential as technology evolves:

  • Quarterly spot checks by senior attorneys ensure policy compliance and support accountability.
  • Regularly refresh policies and training as new AI tools emerge and existing platforms update, ensuring that all team members understand current best practices and ethical obligations (ORPC 5.1 and 5.3).

By embedding these habits into your workflow, you reinforce the core ethical duties of competence, confidentiality, supervision and candor.

EMBRACING THE FUTURE

There is a hopeful message beneath Judge Robertson’s admonition. The careful habits that define the legal profession, such as reading the case personally, checking each quotation and verifying every citation, are precisely the practices that will guide lawyers successfully into the AI era. These routines reassure us that legal institutions can evolve while staying anchored to accuracy and rigor.

In a world where technology is rapidly reshaping legal work, foundational methods do more than uphold standards. They operate as guardrails that keep new tools aligned with truth. As artificial intelligence becomes woven into daily workflows, it may be tempting to accept outputs at face value. By steadily confirming sources and validating assertions, lawyers safeguard the reliability of their work and protect the integrity of the system they serve.

Progress in law is not measured by how quickly we adopt innovation but by how faithfully we use it to advance justice. Courage is the thread that binds technology to truth, and lawyers remain the guardians of that bond. By owning every detail and maintaining high standards, lawyers ensure that artificial intelligence strengthens the profession’s values. With this mindset, the legal system can remain strong, open and dedicated to truth, even as it meets the demands of a changing world.

OBA Management Assistance Program Director Julie Bays and OBA Ethics Counsel Richard Stevens have created a tip sheet members can use to ensure they are using artificial intelligence responsibly in their law practices. You can view the tip sheet at www.okbar.org/wp-content/uploads/2025/10/AI-Handout-October-2025.

Ms. Bays is the OBA Management Assistance Program director. Need a quick answer to a tech problem or help solving a management dilemma? Contact her at 405-416-7031, 800-522-8060 or julieb@okbar.org. It’s a free member benefit.

Originally published in the Oklahoma Bar Journal — December, 2025 — Vol. 96, No. 10

 
