

Navigating Generative AI in Legal Practice: Harnessing Technology While Managing Risks[1]

By Caroline Guerra Wolf and Lori Twomey


Potential copyright violations, privacy and security concerns, hallucinations, inaccurate or misleading information, unchecked bias and the fear that technology will replace human jobs and ingenuity – these concerns about generative AI have all been widely reported in the news. While artificial intelligence is nothing new, the mainstream use of generative AI tools, such as the popular ChatGPT, has exploded onto the scene within the last two years. So, unless you have been living off the grid and in isolation, you’ve at least heard some of the horror stories attributed to the improper use of generative AI.

While these concerns are all valid, and the horror stories should serve as cautionary tales to us all, there are very real efficiency and productivity advantages to be gained by the proper understanding and use of generative AI tools, specifically in the legal field.

You may be curious about how generative AI could help you in your law practice, you may feel skeptical about whether this new technology truly has a place within the law, or the horror stories may have caused you to make up your mind never to venture into the use of generative AI. The truth is, however, that as attorneys, we can’t afford to ignore generative AI, just as we can’t afford to ignore the risks and ethical pitfalls that go along with using it. Generative AI is not just the latest technology fad, here today and gone tomorrow like the Walkman; it’s a rapidly growing and evolving tool that will eventually become as indispensable to the practice of law as online legal research is today.

To help us explore the emerging world of generative AI tools, we thought it would be fun to turn to none other than ChatGPT itself. The questions below are actual prompts provided to ChatGPT (the free version), followed by ChatGPT’s answers and our elaborations, clarifications and – at times – refutations.

 

Q: Can you explain, in one paragraph, how generative AI models like ChatGPT work?

ChatGPT: Generative AI models like ChatGPT operate using a type of neural network ... [followed by several sentences you need a computer science degree to decipher].

Authors: A critical thing for users to understand about generative AI is that it is like a supercharged text prediction program. These models are not actually thinking; instead, they produce text based on the most likely next word the model predicts will be responsive to the prompt. As such, the more specific the prompt, the better the response. And luckily for us, we don’t need to start a new conversation; instead, we can use ChatGPT’s existing response as a starting point for our next prompt. So let’s try again.

 

Q: Can you explain that on an eighth-grade level?

ChatGPT: Sure! Generative AI models like ChatGPT use a special kind of computer program called a transformer. They are trained on a lot of written text, like books and articles to learn how words fit together. During training, the model gets really good at guessing what word comes next in a sentence based on the words that came before it. After training, when you give it a prompt, it uses what it learned to come up with sentences that make sense and sound like a real person talking. This way, it can answer questions and have conversations!

Authors: Importantly, not all AI is considered generative AI. For example, social media and other entertainment apps may use AI to curate suggested content based on usage patterns, and search engines may use AI to retrieve already-created content. These uses would not be considered generative AI. That said, the line is getting increasingly blurred. Take Google, for instance, which recently started including an “AI Overview” section in its search results that relies on generative AI to summarize the content of search results (with varying degrees of accuracy and helpfulness). Also, some chatbots and virtual assistants have moved beyond predefined scripts and now utilize generative AI to produce answers. In fact, Microsoft now has an AI-powered chatbot, Copilot, that can “help with creating, understanding, and editing documents” in Word and “drafting a presentation, adding or editing slides, and even summarizing decks” in PowerPoint.[2]

In sum, don’t be surprised when more and more of the products you use on a daily basis start integrating elements of generative AI. Ultimately, some use of generative AI may be inevitable, making it that much more important to understand how it works and what its limitations and risks are.
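
To make the “supercharged text prediction” idea concrete, here is a toy sketch in Python – our illustration, not how ChatGPT is actually built. It counts which word most often follows which in a tiny “training” text, then generates a sentence one predicted word at a time. Real models use neural networks trained on vastly more text, but the core move – predict the next word, append it, repeat – is the same.

from collections import Counter, defaultdict

# A tiny "training corpus." Real models train on billions of words.
corpus = (
    "the court held that the contract was void "
    "the court found that the contract was enforceable "
    "the court held that the statute was constitutional"
).split()

# Count how often each word follows each other word (a "bigram" table).
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    counts = next_words.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate text one predicted word at a time (no understanding involved).
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)
print(" ".join(output))  # prints: the court held that the court held

Notice that the output sounds lawyerly but says nothing: the program is only chaining together statistically likely words, which is also why a fluent-sounding response can be completely wrong.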

 

Q: In one paragraph, how can an attorney tell if someone (e.g., a young associate at the attorney’s firm or opposing counsel) has used generative AI to draft a document?

ChatGPT: An attorney can identify if someone has used generative AI to draft a document by looking for certain indicators, such as a lack of nuanced legal reasoning, unusual phrasing, or inconsistencies in style and tone that differ from the person’s typical writing. Additionally, the document may contain generic language, over-reliance on templates, or fail to address specific legal issues pertinent to the case. Tools that analyze text patterns or detect AI-generated content can also assist in this evaluation. Engaging in a conversation about the document’s content can further reveal whether the writer possesses the necessary depth of understanding expected from a human author.

Authors: Generative AI “detectors,” like GPTZero, have some serious drawbacks. For one, they can be wrong. On its FAQ page, GPTZero admits, “There always exist edge cases with both instances where AI is classified as human, and human is classified as AI.”[3] For that reason, GPTZero discourages educators from punishing students based on its results. And because attorneys rely primarily on published authority to craft rules and arguments, a detector may well assign legal writing a higher probability of being AI-generated than it would other prose. Therefore, if you are a judge or a supervising attorney wondering whether someone used generative AI, we would caution against relying solely on AI detection tools.
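
Why might legal writing trip these detectors? GPTZero’s exact method is proprietary, but detectors commonly rely on statistical signals such as “perplexity” – roughly, how predictable a passage is to a language model. The following sketch illustrates that general technique (our assumption about how such tools work, not GPTZero’s actual code) using the small, publicly available GPT-2 model:

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text):
    """How 'surprised' GPT-2 is by a passage (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Stock legal phrasing tends to be highly predictable ...
print(perplexity("Summary judgment is appropriate when there is no "
                 "genuine dispute as to any material fact."))
# ... while idiosyncratic human prose is not.
print(perplexity("My grandmother's chili recipe doubles as passable "
                 "brief-writing fuel."))

A detector leaning on this kind of signal can easily mislabel careful, convention-bound legal prose as machine-made.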

Furthermore, by uploading documents or pasting text into an AI detection tool, you are sharing that data with a third-party company that may not be under a confidentiality obligation. Depending on the terms of service for the specific tool, that third-party company may even claim ownership rights over the data and use it to train future generative AI models. This raises serious concerns regarding confidentiality obligations, attorney-client privilege and potential violations of the author’s intellectual property rights.

 

Q: Briefly explain what it means for a generative AI model to “hallucinate.”

ChatGPT: When a generative AI model “hallucinates,” it means that the model generates information that is incorrect, made-up or nonsensical, even though it sounds plausible. This can include inventing facts, creating fake references or producing responses that are logically flawed. The AI doesn’t intentionally lie – it simply predicts text based on patterns, but without understanding the accuracy or truth of what it’s generating.

Authors: There are different ways in which generative AI can hallucinate when used for legal writing. The type of hallucination most readers may be familiar with is when AI tools provide citations to nonexistent cases. This occurred in the highly publicized New York case Mata v. Avianca, Inc., where an attorney used ChatGPT while drafting a court filing that ended up containing citations to multiple fake cases.[4] When ordered to produce the opinions themselves, the attorney turned again to ChatGPT, which produced fake opinions. One such fake opinion, “Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019),” bore a docket number and Federal Reporter citation associated with completely different (real) cases, and it even listed real judges as its purported authors.[5] This fake opinion, submitted to the court by the hapless attorney, included internal citations and quotations from yet more fake cases.[6] Some of the cases cited in the fake opinion had correct names and citations but did “not contain the language quoted or support the propositions for which they [were] offered.”[7]

Importantly, AI tools can hallucinate in other ways besides providing completely fake citations. Authors of a forthcoming research article identify two dimensions of legal hallucinations: correctness and groundedness.[8] Information provided by an AI model may be factually correct and relevant to the question/prompt, but the information may be “misgrounded” – in that the cited sources are misinterpreted or inapplicable – or “ungrounded” – where no supporting citations are provided to support the response.[9] The following is an example of a correct but “misgrounded” proposition:

The right to same-sex marriage is protected under the U.S. Constitution. Miranda v. Arizona, 384 U.S. 436 (1966).[10]

The sentence is factually correct, but the citation should be Obergefell v. Hodges, not Miranda v. Arizona.

A slightly different problem can occur when the prompt asks a legal question that has yet to be answered in the jurisdiction specified in the prompt. For example, the authors posed the following prompt to Lexis+ AI: “Does a prospective at-will employee who is terminated prior to beginning work have a claim against the employer for promissory estoppel when the prospective employee incurred moving expenses in reliance on the offer of employment?” In a lengthy response, the generated text stated without authority, “A prospective at-will employee who is terminated prior to beginning work may have a claim … for promissory estoppel.” The response then cited several estoppel cases that had nothing to do with at-will employment, prospective employees or promissory estoppel in the employment context. Most concerning, the response went on to cite what it represented as a court opinion recognizing a prospective employee’s reasonable reliance; in reality, the citation was not to a court’s opinion but to a plaintiff’s petition that was voluntarily dismissed. So instead of responding that the prompt called for an answer that would be a matter of first impression in Oklahoma, Lexis+ AI generated an answer that was part “ungrounded,” part “misgrounded” and part legally incorrect.

As attorneys, we recognize that both correctness and groundedness are extremely important in our work. Unfortunately, when researchers from Stanford evaluated generative AI tools geared specifically toward legal research, the tools failed in one or the other dimension fairly frequently. The researchers found that Lexis+ AI provided both correct and grounded responses on only 65% of queries, and Thomson Reuters’ Ask Practical Law AI did so only 18% of the time.[11] In sum, if you use legal generative AI tools, it is just as imperative to look out for fake cases as it is to check for irrelevant cases – or a lack of support altogether – and you must also check the propositions themselves for accuracy.
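
As a practical matter, part of this verification can be mechanized. The following is a hypothetical helper of our own devising – not a tool discussed in the research – that pulls reporter citations out of an AI-generated draft with a regular expression and flags any citation you have not independently confirmed. It only catches the first problem: confirming that each case actually supports its proposition (“groundedness”) still requires reading the case.

import re

# Matches common patterns like "925 F.3d 1339" or "384 U.S. 436".
# Real citation formats are far more varied than this simple pattern.
CITATION_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|F\.\d?d|F\. Supp\. \d?d|P\.\d?d)\s+\d{1,4}\b")

# Citations you have personally pulled up and read (illustrative).
verified = {"384 U.S. 436"}

draft = ("The right to same-sex marriage is protected under the U.S. "
         "Constitution. Miranda v. Arizona, 384 U.S. 436 (1966); "
         "Varghese v. China Southern Airlines Co., 925 F.3d 1339 "
         "(11th Cir. 2019).")

for cite in CITATION_RE.findall(draft):
    if cite in verified:
        print(cite, "- confirmed to exist (still check what it holds)")
    else:
        print(cite, "- UNVERIFIED: pull the case before filing")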


Q: Can an attorney’s use of generative AI violate the Rules of Professional Conduct?

ChatGPT: Yes, using generative AI can potentially violate the Rules of Professional Conduct if not done carefully. Here are some key considerations:

  • Competence: Lawyers must understand the technology they use, including its limitations. Inadequate knowledge can lead to ineffective representation.
  • Confidentiality: Attorneys must protect client information. Using generative AI could risk exposing sensitive data, especially if the AI processes it in ways that are not secure.
  • Diligence: Relying on AI without appropriate oversight could result in delays or errors, violating the duty to act diligently.
  • Accuracy: AI-generated information may not always be reliable. Lawyers must verify and ensure that the information presented is accurate to avoid misleading clients or the court.
  • Client Communication: If AI is used in ways that affect client representation, attorneys should inform clients and, if necessary, obtain their consent.

To avoid ethical violations, attorneys should use generative AI thoughtfully, ensuring compliance with all applicable rules and standards.

Authors: According to the American Bar Association, “To ensure clients are protected, lawyers using generative artificial intelligence tools must fully consider their applicable ethical obligations, including their duties to provide competent legal representation, to protect client information, to communicate with clients, to supervise their employees and agents, to advance only meritorious claims and contentions, to ensure candor toward the tribunal, and to charge reasonable fees.”[12]

An additional consideration is that clients will increasingly need guidance on their own use of AI, which makes it important that attorneys are up to speed on the advantages and risks of using this technology, as well as any new laws or regulations concerning the use of AI in different industries.

 

Q: What different types of legal tasks would be appropriate for the use of generative AI tools?

ChatGPT: Generative AI tools can assist with a range of legal tasks, including:

  • Document Drafting: Contracts, agreements, legal notices, and pleadings.
  • Legal Research: Summarizing case law, statutes, and generating legal opinions.
  • Contract Review: Identifying clauses, risks, and comparing contracts.
  • Legal Writing: Drafting briefs, memos, and summarizing depositions.
  • Compliance & Due Diligence: Creating regulatory documents and due diligence reports.
  • Litigation Preparation: Summarizing discovery documents and aiding e-discovery.
  • Form Generation: Customizing legal forms.
  • Client Communications: Drafting emails, client updates, and powering chatbots.
  • Data Analytics: Predicting case outcomes and analyzing document sentiment.
  • Translation: Translating legal documents.

AI outputs should always be reviewed by legal professionals to ensure accuracy and compliance.

Authors: While the use of generative AI tools may be appropriate for these legal tasks, it is important to keep in perspective how attorneys can use generative AI effectively and efficiently for each task. Generative AI provides an excellent “starting point” when drafting legal documents. For this type of use, generative AI tools do a competent job if given a clear and detailed prompt and if you patiently revise your “conversation” based on the outputs you receive. You can use generative AI to summarize documents and legal authority, create a checklist to use in your own drafting or explain a complicated legal topic. And, specifically with legally focused generative AI tools, you can prompt the model to provide you with a starting point for your research. In addition, once you have completed a draft of your legal document, you can upload the draft and ask the generative AI tool to evaluate the document and provide suggestions for modifications or help with targeting the tone to a specific audience. But, as advised by ChatGPT itself, “AI outputs should always be reviewed by [you] to ensure accuracy and compliance [with the law].”
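
For readers curious what a “clear and detailed prompt” looks like in practice, here is a minimal sketch using the OpenAI Python library; the model name and prompt are our placeholder choices, and you should check your tool’s terms of service before sending anything confidential, since public chatbots may use your inputs for training.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A detailed, constrained prompt yields a better starting point than a
# vague one. Note the instruction not to cite authority; verifying (or
# supplying) citations remains the attorney's job.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are a legal drafting assistant. Produce a "
                    "first draft only and do not cite any case law."},
        {"role": "user",
         "content": "Draft a one-page demand letter for unpaid invoices "
                    "totaling $12,500, now 90 days past due, in a "
                    "professional but firm tone, with a 14-day deadline "
                    "to respond."},
    ],
)
print(response.choices[0].message.content)

From there, the “conversation” continues: feed the draft back with revision instructions (“shorten to half a page,” “soften the opening”) and review every line of the result yourself.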

 

Q: Can you explain, in a poem, why attorneys should learn about generative AI?

ChatGPT:

In a world where tech evolves each day,

Attorneys must keep pace, not stay away.

Generative AI, a powerful tool,

Can help the sharp, and trip up the fool.

 

It drafts with speed, from contracts to briefs,

It saves time but can hide pitfalls beneath.

Errors may lurk in a polished reply,

So attorneys must learn, not just rely.

 

Know its strengths, and know where it's weak,

Guide its use but stay sharp and sleek.

For those who master this modern skill,

Will shape the future with sharper quill.

 

Authors: Not bad! And as this poem suggests, we encourage all attorneys to explore the possible advantages that generative AI tools might bring to your practice while keeping a discerning and critical eye on any generated content. Our jobs are safe, at least for now.


ABOUT THE AUTHORS

Caroline Guerra Wolf is an assistant city attorney for the city of Tulsa and a former assistant professor of legal writing at the TU College of Law. Before joining the TU College of Law faculty, she worked for Conner & Winters LLP and clerked for Judge John E. Dowdell of the U.S. District Court for the Northern District of Oklahoma. She is a graduate of the University of North Carolina at Chapel Hill and the TU College of Law.


Lori Twomey is a professor of legal writing and the director of the legal writing program at the TU College of Law. She earned her bachelor’s degree from OSU and her J.D. from the University of Houston Law Center. A former litigator and appellate attorney, she also taught at the University of Houston. Ms. Twomey joined the TU College of Law in 2021, where she advises the Board of Advocates and coaches the moot court team.



ENDNOTES

[1] This title was crafted by ChatGPT.

[2] “Microsoft 365 Copilot Frequently Asked Questions: What kinds of things can Microsoft 365 Copilot help me with?,” Microsoft, https://bit.ly/3UQ5DD8 (last visited Oct. 16, 2024).

[3] “Answers to GPTZero’s Most Common Questions,” GPTZero, https://gptzero.me/faq (last visited Oct. 4, 2024).

[4] See Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023).

[5] Id. at 453-54.

[6] Id. at 454.

[7] Id.

[8] Varun Magesh, Faiz Surani, Matthew Dahl, Mirac Suzgun, Christopher D. Manning and Daniel E. Ho, “Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools,” J. Empirical Legal Stud. 7 (preprint manuscript) (forthcoming 2024), https://stanford.io/3ZgUSN3.

[9] Id.

[10] Id.

[11] Id. at 12. Notably, this study did not evaluate Thomson Reuters’ more robust generative AI tool, CoCounsel, which would likely have a higher accuracy rate due to having access to a larger universe of documents than Ask Practical Law AI.

[12] ABA Comm. On Ethics & Pro. Resp., Formal Op. 512 (2024). Readers may access the formal opinion at https://bit.ly/3YRtuUp.


Originally published in the Oklahoma Bar Journal, OBJ 95, No. 10 (December 2024).

Statements or opinions expressed in the Oklahoma Bar Journal are those of the authors and do not necessarily reflect those of the Oklahoma Bar Association, its officers, Board of Governors, Board of Editors or staff.