Oklahoma Bar Journal

A Lesson From Seinfeld: How Generative AI Issues Remind Us to Be True to Our Oaths

By Jandra Cox


One of the lowlights of a lawyer’s career is what one can call, for ease of reference, an “online research fail.” Attorneys have come to trust the robust flagging systems of online research tools to warn us when a case has been overturned or even called into question by another authority. But online databases occasionally miss one, and we offer the case to the court as “good law,” only to discover it is not good law at all. It happens even to the most careful lawyers.

Clearly, traditional databases are not infallible, just as humans are not infallible. And yet, members of the bar association have come to rely on Westlaw, LexisNexis, the Oklahoma Supreme Court Network (OSCN) and other databases to streamline the time commitment it takes to find law. There are only so many hours in a day, many of which we need to save for drafting briefs that cite those cases. Briefs won’t write themselves.

Or will they?


Enter generative artificial intelligence. “The term ‘artificial intelligence’ means a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.”[1] Generative artificial intelligence “learns how to take actions from past data” and “creates brand new content – a text, an image, even computer code – based on that training, instead of simply categorizing or identifying data like other AI.”[2]

AI is not yet sentient, but you can imagine the uses in law, many of which have been around for years (like online natural-language/Boolean searches and predictive coding in electronic discovery) and many of which are winging their way into our legal lives now, like AI-generated legal briefing.


Many lawyers are not great writers, so it is not surprising that some practitioners have resorted to generative AI to write legal briefs. While AI cannot write like, say, F. Scott Fitzgerald, perhaps one day it will. It is already arguably better than what we see turned out of some law firms on a regular basis. Consider this excerpt by Mr. Fitzgerald:

This unlikely story begins on a sea that was a blue dream, as colorful as blue silk stockings, and beneath a sky as blue as the irises of children’s eyes. From the western half of the sky the sun was shying little golden disks at the sea – if you gazed intently enough you could see them skip from wave tip to wave tip until they joined a broad collar of golden coin that was collecting half a mile out and would eventually be a dazzling sunset.[3]

When I asked AI to write something about a “dazzling sunset and blue ocean,” it wrote:

As I stood on the beach, the sky was painted with a stunning array of colors. The sun slowly sank below the horizon, casting a warm glow across the sky. The ocean was a deep shade of blue, and the waves gently lapped at the shore. It was a moment of pure serenity, and I felt completely at peace in the midst of such natural beauty. The dazzling sunset and blue ocean were a perfect reminder of the beauty that surrounds us every day.[4]

Mr. Fitzgerald’s prose is a good example for us to consider because the best legal writing contains an element of poetry. Consider this poetic quote by United States Supreme Court Associate Justice Louis Brandeis:

Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be the best of disinfectants; electric light the most efficient policeman.[5]

When asked to compose a quote about exposing wrongdoing, AI suggested:

"When we expose wrongdoing, we not only hold those responsible accountable, but we also pave the way for a more just and honest society."[6]

Generative AI is not Mr. Fitzgerald or Justice Brandeis, but it is not bad, and one can see why attorneys are tempted to use it. AI can clean up woefully inadequate syntax, grammar and sentence structure in legal briefs – and the mother of all failings, poor logical flow. Moreover, AI does not ask for vacation days or 401(k) contributions.


Although using AI may be more affordable than hiring a brief writer, it is not always so. Consider its catastrophic cost in a New York case in which a lawyer submitted a brief citing phony cases (generated by ChatGPT) and was, as a result, publicly exposed and monetarily sanctioned for offering the court fake “law.”[7] Not surprisingly, that lawyer “greatly regrets” using AI, citing his surprise that generative AI could create false content.[8]

Problematic AI is likely to rear its ugly head with more regularity in the coming days. Scholars and practitioners are already sounding alarm bells about the ethical dilemmas AI creates for lawyers, citing potential violations of our duties of competence, diligence and supervision.[9] Was the New York lawyer who offered bogus cases competent? Diligent? That would be a hard case to make. Did he “supervise” the drafting properly? No, AI had free rein here. Certainly, nothing about his submission adhered to the standard duties of candor toward the tribunal or of offering meritorious claims and contentions.

AI may also compromise our duties of confidentiality and privilege. Consider that ChatGPT contains a warning that “[c]onversations may be reviewed by our AI trainers to improve our systems.”[10] (“Conversations” is what generative AI creations are called and how they are cited.) Do we breach client confidence if AI trainers are effectively listening in? Who are those people?

Some contend that using AI may constitute the unauthorized practice of law,[11] and some warn its use may also violate goals the bar association has collectively agreed are worth protecting, like diversity and inclusion.[12] AI looks for patterns in large data pools. The “training” of AI is “a statistical process” and “will have biases,” says Dr. Tonya Custis, a research director at Thomson Reuters who leads a team of research scientists developing natural language and search technologies for legal research.[13] “AI requires data – data about actions and decisions made by humans,” explains David Curle, director of the technology and innovation platform at the Legal Executive Institute of Thomson Reuters.[14] “If you have a system that’s reliant on hundreds of thousands or millions of human decisions, and those humans had biases, there’s a risk that the same bias will occur in the AI.”[15] As an example relevant to the legal world, David Lat, founder of Above the Law, says, “In the judicial system, one prominent example is judges making sentencing decisions based in part on AI-driven software that claims to predict recidivism, the likelihood of committing further crimes. There is concern over how the factors used in the algorithms of such software could correlate with race, which judges are not allowed to take into account when sentencing.”[16] Mr. Lat suggests we all 1) reconsider using generative AI, 2) remove privileged content in our drafts before asking AI to peek in and 3) “mask” or “fake” our input, such as by using fake client names, until we ask AI to leave the party.[17]


Most jurisdictions are scurrying even to understand the perils of AI and, therefore, have certainly not adequately addressed its use in their courthouses. One exception is District Judge Brantley Starr of the U.S. District Court for the Northern District of Texas, Dallas Division, who has attempted to stave off AI brief-writing disasters by recently issuing a “judge-specific requirement” that all litigants practicing before him certify in writing that no artificial intelligence program drafted their filings without a human checking them for accuracy.[18] Judge Starr’s “Mandatory Certification Regarding Generative Artificial Intelligence” provides:

All attorneys and pro se litigants appearing before the Court must, together with their notice of appearance, file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI or Google Bard) or that any language drafted by generative artificial intelligence will be checked for accuracy, using print reporters or traditional legal databases, by a human being. These platforms are incredibly powerful and have many uses in the law: form divorces, discovery requests, suggested errors in documents and anticipated questions at oral argument. But legal briefing is not one of them. Here’s why: these platforms in their current states are prone to hallucinations and bias. On hallucinations, they make stuff up – even quotes and citations. Another issue is reliability or bias. While attorneys swear an oath to set aside their personal prejudices, biases and beliefs to faithfully uphold the law and represent their clients, generative artificial intelligence is the product of programming devised by humans who did not have to swear such an oath. As such, these systems hold no allegiance to any client, the rule of law or the laws and Constitution of the United States (or, as addressed above, the truth). Unbound by any sense of duty, honor or justice, such programs act according to computer code rather than conviction, based on programming rather than principle. Any party believing a platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why. Accordingly, the court will strike any filing from a party who fails to file a certificate on the docket attesting that they have read the Court’s judge-specific requirements and understand that they will be held responsible under Rule 11 for the contents of any filing that they sign and submit to the Court, regardless of whether generative artificial intelligence drafted any portion of that filing. 
A template Certificate Regarding Judge-Specific Requirements is provided here.[19]

In addition to a sweeping attestation that “I, the undersigned attorney, hereby certify that I have read and will comply with all judge-specific requirements for Judge Brantley Starr, U.S. District Judge for the Northern District of Texas,” Judge Starr’s template suggests that each attorney attest, “I further certify that no portion of any filing in this case will be drafted by generative artificial intelligence or that any language drafted by generative artificial intelligence – including quotations, citations, paraphrased assertions, and legal analysis – will be checked for accuracy, using print reporters or traditional legal databases, by a human being before it is submitted to the Court. I understand that any attorney who signs any filing in this case will be held responsible for the contents thereof according to Federal Rule of Civil Procedure 11, regardless of whether generative artificial intelligence drafted any portion of that filing.”[20]

When interviewed about his new requirement, Judge Starr explained, “We’re at least putting lawyers on notice, who might not otherwise be on notice, that they can’t just trust those databases. They’ve got to actually verify it themselves through a traditional database.”[21]

A traditional database – like our current online tools – which have failed us all.

Or checked by “a human being” – like all of us – who are known to miss things, too.



Judge Starr’s premise is that generative AI should not be trusted because it is “prone to hallucinations and bias.” Some might respond that hallucinations and bias are not an AI-specific problem but can also be characteristics of people with an agenda, like litigants and those paid to represent them. Judge Starr insists that human drafting, or at least double-checking, is preferable to AI-generated briefing because attorneys “swear an oath to set aside their personal prejudices, biases and beliefs to faithfully uphold the law.” One may question whether that is the lawyer’s oath; it is, rather, the judge’s oath. Lawyers are advocates who swear oaths of zealous representation, which is supposed to be tempered by duty, honor, truth and justice.

Judge Starr further opines that AI programs are “[u]nbound by any sense of duty, honor or justice” and “act according to computer code rather than conviction, based on programming rather than principle.” True enough, but that implies attorneys are always bound by duty, honor and justice and act according to conviction. Although our ethics code mandates that we act with honor and conviction in the pursuit of justice,[22] we are witnessing more and more often these days that the oath and the mandate are not backed by the important part – the actual doing it part. To misquote Jerry Seinfeld: “See, you know how to take the [oath], you just don’t know how to hold the [oath] and that’s really the most important part of the [oath], the holding. Anybody can just take them.”[23]

In essence, what we are seeing is that artificial intelligence, created by people, also mimics people – not just in intellect but also in our infirmities, including dishonesty and, perhaps, laziness. Clearly, generative AI is problematic, and we will have to navigate its complex and thorny path with care – and quickly. But perhaps while we are inspecting AI’s “behavior,” we should inspect our own. In accordance with our ethical duties, are we “consistent with requirements of honest dealing with others?”[24] Or are we trying to win a game? Are we “us[ing] the law's procedures only for legitimate purposes and not to harass or intimidate others?”[25] Or are we trying to earn money however we can? Do we strive to “uphold legal process,” even when we must “challenge the rectitude of official action?”[26] Or are we cutting corners because we think no one is watching? Are we “work[ing] to strengthen legal education?”[27] Or are we pulling the ladder up behind us? Are we “mindful of deficiencies in the administration of justice and of the fact that the poor … cannot afford legal assistance,” and are we therefore “devot[ing] professional time or resources to ensure equal access to our system of justice?”[28] Or not? Let us be people of honor, not just in how we use AI but in how we practice law. This is a good time to examine ourselves, as well as the tools at our disposal.


Jandra Cox has practiced law in Tulsa and Oklahoma City since receiving her J.D. from the OU College of Law. In addition to representing her own clients, she has researched and ghostwritten for dozens of Oklahoma lawyers. She is an adjunct instructor at Southeastern Oklahoma State University, teaching undergraduate and graduate law classes in Oklahoma City.




[1] National Artificial Intelligence Initiative Act of 2020 (Division E, Sec. 5002, effective Jan. 1, 2021).

[2] Jeffrey Dastin, Akash Sriram, Saumyadeb Chakrabarty, “Explainer: What is Generative AI, the technology behind OpenAI's ChatGPT?” (March 17, 2023) https://bit.ly/3NJK9DA.

[3] F. Scott Fitzgerald, “The Offshore Pirate” (1920).

[4] Grammarly, personal communication, June 2, 2023.

[5] Louis D. Brandeis, Harper's Weekly Dec. 20, 1913, Chapter V: What Publicity Can Do.

[6] Grammarly, personal communication, June 3, 2023.

[7] Benjamin Weiser, “ChatGPT Lawyers Are Ordered to Consider Seeking Forgiveness” (June 22, 2023) https://bit.ly/3pBEhV2.

[8] Jacqueline Thomsen, “US Judge Orders Lawyers to Sign AI Pledge, Warning Chatbots 'Make Stuff Up'” (June 2, 2023) https://bit.ly/3XN6ftx.

[9] See, e.g., Nicole Yamane, “Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands,” Georgetown Journal of Legal Ethics Vol. 33:877; Nicholas Boyd, “Do Professional Ethics Rules Allow You to Have a Robot Write Your Brief?” (March 21, 2023) https://bit.ly/3CZUCGf.

[10] Lance Eliot, “Is Generative AI Such As ChatGPT Going To Undermine The Famed Attorney-Client Privilege, Frets AI Law And AI Ethics” (March 30, 2023) https://bit.ly/44buTXd.

[11] See, e.g., Nicole Yamane, “Artificial Intelligence in the Legal Field and the Indispensable Human Element Legal Ethics Demands,” Georgetown Journal of Legal Ethics Vol. 33:877; Thomas Spahn, “Is Your Artificial Intelligence Guilty of the Unauthorized Practice of Law?” 24 Rich J.L. & Tech., no. 4, 2018.

[12] Rebekah Hanley, “Ethical Copying in the Artificial Intelligence Authorship Era: Promoting Client Interests and Enhancing Access to Justice,” Legal Writing Journal, Vol. 26, Issue 2, 2022 (June 15, 2022). https://bit.ly/3PEACjU; David Lat, “The Ethical Implications of Artificial Intelligence,” Above The Law 2020, https://bit.ly/3O3diew.

[13] David Lat, “The Ethical Implications of Artificial Intelligence,” Above The Law 2020, https://bit.ly/3O3diew.

[14] Id.

[15] Id.

[16] Id.

[17] Id.

[18] Jacqueline Thomsen, “US Judge Orders Lawyers to Sign AI Pledge, Warning Chatbots 'Make Stuff Up,'” (June 2, 2023) https://bit.ly/3XN6ftx.

[19] https://bit.ly/44bDZTD.

[20] Id.

[21] Jacqueline Thomsen, “US Judge Orders Lawyers to Sign AI Pledge, Warning Chatbots 'Make Stuff Up,'” (June 2, 2023) https://bit.ly/3XN6ftx.

[22] OK ST RPC Preamble.

[23] https://bit.ly/3NZL4kC.

[24] OK ST RPC Preamble.

[25] Id.

[26] Id.

[27] Id.

[28] Id.

Originally published in the Oklahoma Bar Journal – OBJ 95 Vol 6 (August 2023)

Statements or opinions expressed in the Oklahoma Bar Journal are those of the authors and do not necessarily reflect those of the Oklahoma Bar Association, its officers, Board of Governors, Board of Editors or staff.