

ChatGPT, Artificial Intelligence and the Lawyer

By Jim Calloway

Author’s Note: The more you think you are not interested in this subject, the more you need to read this article.

In November 2022, OpenAI released ChatGPT. ChatGPT, along with other artificial intelligence (AI) tools, has dominated the conversation about cutting-edge technology and legal technology tools during 2023. Reactions have ranged from “the most entertaining thing on the internet,” to an incredible new tool that will change society in a positive way, to a corporate tool that will allow companies to be more efficient and profitable (often by a reduction in workforce), to a potentially dangerous development that, if allowed to expand unchecked without regulatory safeguards, could lead to global instability and, possibly, an extinction event. To summarize, on the internet hyperbole scale, predictions about ChatGPT’s impact range from Nirvana to Armageddon. Whatever happens will likely fall between these two extremes.

ChatGPT is a large language model (LLM) AI. This means its training involved digesting almost everything on the internet as of September 2021, including Wikipedia and many books. An often-used cliché among programmers is GIGO (garbage in, garbage out), and it cannot be disputed that there was a fair amount of garbage on the internet by 2021. ChatGPT is aware of the current date based on the date and time stamp of your query, and it sometimes refers to events that took place after September 2021, possibly based on others’ queries. Unanswered questions and apparent inconsistencies such as these are why many IT professionals call ChatGPT a “black box.”

Since its November 2022 introduction, many new products incorporating ChatGPT have appeared. ChatGPT set a record by amassing 100 million monthly active users within two months (for comparison purposes, TikTok required nine months and Instagram more than two years to reach that mark). This reaction was driven by how well the product performs. It is simply stunning. Interacting with a chatbot that converses with you like another human and draws on vast amounts of accurate data is impressive. The speed and clarity of its responses are amazing.

OF COURSE, THERE ARE LIES AND HALLUCINATIONS

ChatGPT displays many human-like traits. Not only will it answer your questions easily and quickly, but like a human friend, it may sometimes tell you what you want to hear, and sometimes it may share outright fabrications (called hallucinations). Just like a human, it might slip and share something you didn’t intend to be shared.

ChatGPT’s responses are very confident and persuasive. As Ed Walters, co-founder of Fastcase who also taught “The Law of Robots” at Georgetown University Law Center, says, “The answers are often totally wrong, but highly convincing.”

Some lawyers will learn of that credibility issue and decide never to use ChatGPT or any other AI. That is probably not the correct lesson, as AI tools will be increasingly hard to avoid and will provide many time-saving benefits in the very near future.

There are many positive ways that these tools can be used today, and there will be hundreds more. For example, suppose you are traveling with your family and an automobile breakdown strands you for the day in a city you never intended to visit. A quick query on the ChatGPT app on your phone for the top 10 things to do in that city will produce a detailed list with descriptions. It is probably quite accurate. But if not, so what? The point isn’t whether some experienced, objective human travel expert might disagree with some suggestions. The point is receiving, in seconds, a list of useful information you didn’t have.

I have Google, DuckDuckGo and ChatGPT installed on my phone. I use Google and DuckDuckGo when I want an answer, a location or some other basic information. But if I want an explanation, ChatGPT is the first option for a search.

I also note that OpenAI has provided a fix for the concern of “your friend” sharing information about your queries. There is a setting to prevent your ChatGPT queries from being used to further train the system. Once lawyers start to do research for client matters, they will probably want to enable that setting – not because it’s likely information would be compromised, but because we don’t fully understand how the system handles that information, and it is the safer course of action.

SO WHY HALLUCINATIONS?

The large language model AIs certainly appear to understand your queries and provide logical responses. Professor Kenton Brice, director of the Donald E. Pray Law Library at the OU College of Law, offered a helpful analogy during our OBA Solo & Small Firm Conference program on ChatGPT and AI. He said to think of the game Mad Libs. The AI does not understand the meaning of its communication with you. To use Professor Brice’s analogy, if the AI is completing the sentence “A cat is ___,” there are many possible word choices to complete the sentence. A cat is a mammal. A cat is black. A cat is a feline. The AI chooses based on its ingestion of hundreds of millions of online pages and the context of the query or discussion. The remarkable thing is not that it sometimes gets things wrong, but how often it selects the perfect word or phrase. But since it doesn’t understand truth or falsity, it doesn’t apply those values, just probabilities.
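
To make the Mad Libs analogy concrete, here is a minimal sketch in Python of weighted next-word selection. The candidate completions and their probabilities are invented for illustration; a real large language model scores an enormous vocabulary of tokens using billions of learned parameters, but the principle of choosing by probability rather than by truth is the same.

```python
import random

# Toy illustration of completing the prompt "A cat is ___".
# These completions and probabilities are invented for this example;
# a real LLM scores a huge vocabulary using billions of parameters.
next_word_probabilities = {
    "a mammal": 0.40,
    "a feline": 0.30,
    "black": 0.20,
    "asleep on the keyboard": 0.09,
    "a licensed attorney": 0.01,  # improbable, but never ruled out
}

def complete(prompt: str) -> str:
    """Pick a completion weighted by probability, not by truth."""
    choices = list(next_word_probabilities)
    weights = list(next_word_probabilities.values())
    pick = random.choices(choices, weights=weights, k=1)[0]
    return f"{prompt} {pick}."

print(complete("A cat is"))  # usually sensible, occasionally wrong
```

Run it a few dozen times and the output is usually sensible, occasionally odd and never accompanied by any notion of whether it is true.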

Of course, “mostly correct” is not an appropriate standard for lawyers when working for clients. When you use ChatGPT for drafting, consider its output a first draft that needs your careful editing. But legal research tools that incorporate AI appropriately are being introduced into the market, as discussed below.

A CAUTIONARY TALE OF POTENTIAL MALPRACTICE AND SANCTIONS FROM POOR USE OF CHATGPT

Steven Schwartz, a practicing New York lawyer for 30 years, used ChatGPT to prepare a brief for federal court. Mr. Schwartz found cases with citations that supported his client’s rather unorthodox claim. At least six cases cited in the brief as filed were hallucinations that did not exist, complete with fictitious quotes and internal citations. When the brief was filed, the fact that ChatGPT could hallucinate cases (including fabricated quotes from them) was well known within the legal technology community, but certainly not all lawyers were aware of it.

Opposing counsel filed a response brief calling out the bogus cases and moving for sanctions. Mr. Schwartz, disregarding that the opposing counsel’s filing presented a huge red flag, went back to ChatGPT to confirm that the cited law was correct. “I apologize for the confusion earlier,” ChatGPT replied. “Upon double-checking, I found the case Varghese v. China Southern Airlines Co. Ltd., 925 F.3d 1339 (11th Cir. 2019), does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis. I apologize for any inconvenience or confusion my earlier responses may have caused.”

The lawyer filed a response reaffirming that the opinions represented good law, without ever locating or reading the actual opinions. Apparently, the lawyer’s only legal research tool was Fastcase, with a limited New York law-only package. As we all appreciate, the proper course was to find another source for the cases. Instead, he “doubled down” and instantly became an internet meme. At the sanctions hearing, Mr. Schwartz was sworn to testify truthfully and then spent two hours being grilled by the judge. While ChatGPT made the national headlines, Mr. Schwartz could not avoid the simple fact that he cited as authority case law he had not read. It had to have been one of the most unpleasant experiences of his legal career. He noted that he had suffered great personal damage from his error. The court imposed sanctions of $5,000. But, no doubt, the two-hour examination was also punishment.

This situation has prompted a few federal judges to issue standing orders requiring counsel to submit affidavits stating either that they did not use AI in preparing the brief or that, if they did, a human checked the AI’s work. Some have observed that any potential problem is already addressed by Rule 11.[i]

CASETEXT TO COCOUNSEL TO THOMSON REUTERS

Casetext has provided AI-powered legal research for some time. Their basic service is a discounted OBA member benefit. Casetext also worked with OpenAI to incorporate advanced functions of ChatGPT into a new offering.

On March 1, as ABA TECHSHOW was beginning, we learned via social media that a national cable news network had hosted the product launch for Casetext’s new offering, CoCounsel, an AI-powered legal research tool. Free trials were offered only for a short period. The results were so stunning that many lawyers immediately subscribed at $500 per user per month, including many who would have said beforehand that they would never subscribe at that price point. Casetext had gained access to ChatGPT in 2022, and the result was quite a success.

On June 26, it was announced that Thomson Reuters agreed to purchase Casetext for $650 million cash.[ii] I hope Thomson Reuters will offer a pricing plan affordable to small firm lawyers and not just focus on larger law firm pricing. 

ARE AI AND CHATGPT REALLY THAT SIGNIFICANT?

The easy answer here is yes. Smart, serious people have referred to it as being as significant as the discovery of fire, the invention of movable type or the internet itself.[iii]

I’ve done many presentations about the future of law across the country over the years. One of the keys to future law firm success will be to automate as much as possible. Creating automated templates is time-consuming. AI tools will make it less so, and some have already reached the market. Did I mention that ChatGPT can also write computer code? That is scary for the programmers of the world. My prediction is that the business world will be transformed by generative AI over the next few years. And if corporate business practices change, the business of law will also change.
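
For readers curious what an “automated template” looks like under the hood, here is a minimal, hypothetical sketch in Python of the sort of boilerplate code a generative AI can now draft in seconds. The letter language and field names are invented for illustration and are not drawn from any real firm’s documents or any particular product.

```python
from string import Template

# Hypothetical engagement-letter template; the wording and field names
# are invented for illustration only.
letter = Template(
    "Dear $client_name,\n\n"
    "Thank you for retaining our firm in the matter of $matter. "
    "Our hourly rate for this engagement is $$${rate} per hour.\n\n"
    "Sincerely,\n"
    "$attorney"
)

# Fill in the blanks for one client; in a real workflow these values
# might come from a case management system or an intake form.
print(letter.substitute(
    client_name="Jane Doe",
    matter="Doe v. Acme Corp.",
    rate="250",
    attorney="John Smith",
))
```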

Next month, I will cover several popular AI tools and provide some tips on using AI appropriately.

Mr. Calloway is the OBA Management Assistance Program director. Need a quick answer to a tech problem or help solving a management dilemma? Contact him at 405-416-7008, 800-522-8060 or jimc@okbar.org. It’s a free member benefit.

[i] See “Judicial Treatment of ChatGPT: Throwing the Baby Out with the Bath?” by lawyer-blogger Stephen Embry.

[ii] See LawSites’ blog post, “The Rumors Were True: Thomson Reuters Acquires Casetext for $650M Cash.”

[iii] Noted legal futurist Richard Susskind shares his thoughts about AI and the legal profession in his LinkedIn post “AI in the law – six thoughts.” He has studied the impact of AI on lawyers for decades, and he deems these developments very significant. Andrew M. Perlman, dean and professor of law at Suffolk University, stated in “The Implications of ChatGPT for Legal Services and Society” that “ChatGPT suggests an imminent reimagination of how we access and create information, obtain legal and other services, and prepare people for their careers.”

Originally published in the Oklahoma Bar Journal — August 2023 — Vol. 94, No. 6
