I read all or most of the emails from our malpractice insurance carrier and found this discussion to be interesting. My first question is: what lawyer would be stupid enough to use AI in their legal practice? Here is part of the five-page letter from the carrier:
Loss Prevention Bulletin 23-02
OpenAI's release of ChatGPT 3.5 for public use in late 2022 introduced the world to a powerful and transformative generative artificial intelligence (AI) tool with the ability to rapidly create new, seemingly human-crafted content in response to user prompts. ChatGPT, like other generative AI, is a large language model trained on vast amounts of data that then uses machine learning and sophisticated algorithms to predict the next best word in a response. The rise has been meteoric: in a mere two months after its launch, ChatGPT reached 100 million users, a milestone it took TikTok nine months to achieve and Instagram two and a half years. In this same brief period, ChatGPT and newly introduced competing platforms such as Google's Bard and Meta's LLaMA (Large Language Model Meta AI) have also undergone exponential growth in responsiveness and accuracy.
The development of generative AI models designed specifically for use in the practice of law and trained on appropriately curated or proprietary data sets is potentially transformative, assisting lawyers in tasks such as contract review and management, due diligence, document review, research, and generating initial drafts of letters, contracts, briefs, and other legal documents. Indeed, global law firm Allen & Overy and PricewaterhouseCoopers both recently announced their use of an AI startup called Harvey, a chatbot built on ChatGPT 4.0 technology designed specifically for lawyers' use in due diligence, regulatory compliance, and drafting contracts and client memos. Other legal service vendors, including LexisNexis and contract management platforms, have also introduced or announced the development of generative AI-based tools to aid lawyers' practices.
Existing generative AI tools such as ChatGPT, Microsoft's Bing, Bard, and LLaMA, however, are not designed for law practice. In our view, lawyers should not use these general-purpose AI platforms for client work due to serious shortcomings in the reliability and accuracy of responses the tools generate, as well as the significant risk of potential exposure of confidential or proprietary client information.
I got this notice several weeks ago and last night I saw why the malpractice carrier was worried.
I am a corporate/deal attorney, so the concept of using AI in a deal had never occurred to me.