Here's What Happens When Your Lawyer Uses ChatGPT
Last edited Tue May 30, 2023, 07:32 AM - Edit history (2)
David Weigel Retweeted: "Here's What Happens When Your Lawyer Uses ChatGPT" via @BenWeiserNYT
nytimes.com
Here's What Happens When Your Lawyer Uses ChatGPT
A lawyer representing a man who sued an airline relied on artificial intelligence to help prepare a court filing. It did not go well.
Link to tweet
Here's What Happens When Your Lawyer Uses ChatGPT
A lawyer representing a man who sued an airline relied on artificial intelligence to help prepare a court filing. It did not go well.
By Benjamin Weiser
May 27, 2023
The lawsuit began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.
When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata's lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and "the tolling effect of the automatic stay on a statute of limitations."
There was just one hitch: No one, not the airline's lawyers, not even the judge himself, could find the decisions or the quotations cited and summarized in the brief. ... That was because ChatGPT had invented everything.
The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court on Thursday, saying in an affidavit that he had used the artificial intelligence program to do his legal research, "a source that has revealed itself to be unreliable." ... Mr. Schwartz, who has practiced law in New York for three decades, told Judge P. Kevin Castel that he had no intent to deceive the court or the airline. Mr. Schwartz said that he had never used ChatGPT, and therefore "was unaware of the possibility that its content could be false." ... He had, he told Judge Castel, even asked the program to verify that the cases were real. ... It had said yes.
{snip}
6 replies
Here's What Happens When Your Lawyer Uses ChatGPT (Original Post)
mahatmakanejeeves
May 2023
OP
marble falls
(72,527 posts)
1. If AI can't find cites, it makes them up? AI is basically a college student trying to finish a paper
Nelson.
dalton99a
(95,213 posts)
3. Kick
LetMyPeopleVote
(181,929 posts)
4. Firm's legal malpractice insurance carrier on use of AI in the practice of law
I read all or most of the emails from our malpractice insurance carrier and found this discussion to be interesting. My first question is: what lawyer would be stupid enough to use AI in their legal practice? Here is part of the five-page letter from the carrier:
Loss Prevention Bulletin 23-02
OpenAI's release of ChatGPT 3.5 for public use in late 2022 introduced the world to a powerful and transformative generative artificial intelligence (AI) tool with the ability to rapidly create new, seemingly human-crafted content in response to user prompts. ChatGPT, like other generative AI, is a large language model trained on vast amounts of data that then uses machine learning and sophisticated algorithms to predict the next best word in a response. The rise has been meteoric: in a mere two months after its launch, ChatGPT reached 100 million users, a milestone it took TikTok nine months to achieve and Instagram two and a half years. In this same brief period, ChatGPT and newly introduced competing platforms such as Google's Bard and Meta's LLaMA (Large Language Model Meta AI) have also undergone exponential growth in responsiveness and accuracy.
The development of generative AI models designed specifically for use in the practice of law and trained on appropriately curated or proprietary data sets is potentially transformative, assisting lawyers in tasks such as contract review and management, due diligence, document review, research, and generating initial drafts of letters, contracts, briefs, and other legal documents. Indeed, global law firm Allen & Overy and PricewaterhouseCoopers both recently announced their use of an AI startup called Harvey, a chatbot built on ChatGPT 4.0 technology designed specifically for lawyers' use in due diligence, regulatory compliance, and drafting contracts and client memos. Other legal service vendors, including LexisNexis and contract management platforms, have also introduced or announced the development of generative AI-based tools to aid lawyers' practices.
Existing generative AI tools such as ChatGPT, Microsoft's Bing, Bard, and LLaMA, however, are not designed for law practice. In our view, lawyers should not use these general-purpose AI platforms for client work due to serious shortcomings in the reliability and accuracy of responses the tools generate, as well as the significant risk of potential exposure of confidential or proprietary client information.
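The "predict the next best word" mechanism the bulletin describes can be sketched with a toy bigram model. This is a deliberate oversimplification (real models are neural networks conditioning on far more context, and the corpus here is invented), but it shows why the output is fluent without being checked against reality:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for the web-scale text a real model trains on.
corpus = (
    "the court held that the motion was denied . "
    "the court held that the appeal was dismissed . "
    "the court held that the claim was barred ."
).split()

# Count bigrams: for each word, tally which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word after `word`."""
    return following[word].most_common(1)[0][0]

# Generate text one "next best word" at a time, starting from "the".
words = ["the"]
for _ in range(5):
    words.append(predict_next(words[-1]))

print(" ".join(words))  # fluent legal-sounding text, verified against nothing
```

Every word is chosen only because it frequently followed the previous one in training text; at no point does anything consult a source of truth, which is why plausible-sounding fabrications come out just as easily as facts.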
I got this notice several weeks ago and last night I saw why the malpractice carrier was worried.
Link to tweet
Link to tweet
Link to tweet
I am a corporate/deal attorney and so the concept of using AI in a deal never occurred to me.
IbogaProject
(6,065 posts)
5. Did he even bother to look up the 'cited' cases directly?
Yep, serious malpractice. Asking the same ChatGPT to verify is the final strike. He should have looked those cases up directly. Maybe he was tight on money and let his Westlaw subscription lapse, but even then he should have tried Google.
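The independent check described here, comparing each citation against an authoritative source rather than asking the model that produced it, can be sketched as follows. The `KNOWN_CASES` set is a stand-in of my own for a real research service (Westlaw, Lexis, or a court docket), and the case names are just the ones from the article:

```python
# Stand-in for an authoritative case-law database; a real check would query
# a research service such as Westlaw or Lexis, never the model itself.
KNOWN_CASES = {
    "Zicherman v. Korean Air Lines",
}

def verify_citations(cited):
    """Split a brief's citations into (found, unverified) against the database."""
    found = [c for c in cited if c in KNOWN_CASES]
    unverified = [c for c in cited if c not in KNOWN_CASES]
    return found, unverified

brief_citations = [
    "Zicherman v. Korean Air Lines",
    "Varghese v. China Southern Airlines",  # the fabricated case from the brief
]

found, unverified = verify_citations(brief_citations)
print("Needs manual check:", unverified)
```

The point of the design is that verification runs against a source independent of the generator; anything not found is flagged for a human to pull up directly, instead of being waved through because the chatbot "said yes."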
CaptainTruth
(8,256 posts)
6. I've been reading about this on Twitter the past couple days.
Attorneys and legal analysts started poring through it as soon as the recent motions were filed. It's pretty wild!