Securely Navigating the AI Sea
October 8, 2024 | Alan S. Rutkin | Joshua Beckham

Artificial intelligence (AI) is affecting many industries, and it will have a growing effect on lawyers. Insurers’ lawyers may be among the first affected. AI provides efficiencies, and those efficiencies matter to insurers. Lawyers will be pushed to use AI.
Generative AI refers to algorithms that create content based on input data. AI can help with researching and drafting. AI can also help with predictive analytics.
AI’s effectiveness turns on the training data. Data must be sufficient for outputs to be reliable.
To address these concerns, lawyers should remember the three Cs: Communication, Confidentiality and Competency.
Communication. Communication is critical. Before using AI, lawyers should tell clients their plans and get clients’ consent. Clients should be told the methods used in legal representation. Transparency is vital. Clients must be told how their data will be used. The potential risks must also be disclosed.
Whatever the rules require, disclosure is the best practice.
Some courts have adopted rules controlling or barring AI. Federal judges have issued 23 standing orders on AI. Most allow AI. But many require identification of the AI-generated material. Courts also require accuracy certifications.
Critics of these orders say that accuracy commitments are part of existing rules. Professional obligations already require lawyers to verify the accuracy of all filings. Excessive or unnecessary court orders could implicate access-to-justice issues. Technological innovation in the legal industry may be stifled.
Lawyers must familiarize themselves with their local court’s rules on AI. Is AI allowed, limited or barred? Even in jurisdictions where the court allows AI, it is best practice to tell the court what documents and filings were generated using AI tools.
Confidentiality. Confidentiality is another core ethical issue. In New York, lawyers cannot disclose confidential information without the client’s informed consent. Lawyers must protect confidential client data. These obligations can clash with the use of AI.
Issues of confidentiality arise when entering client data into AI tools. Generative AI tools are trained by introducing the tool to information, such as case law or legal texts. This information is then used to generate outputs. Vendors can also include client data in the AI technology’s training information.
Confidentiality concerns are heightened when generated results are used as evaluative data. Evaluative data refers to the information that AI systems collect and analyze. This raises additional privilege concerns.
Lawyers must understand how AI tools use client data. As the technology stands, the confidentiality of client data isn’t guaranteed. You will hear about two kinds of AI systems: open and closed.
Open systems give broad access to data. Third-party developers can inspect the code. Developers study how the AI was trained, often to improve the code. Openness promotes innovation. But openness creates security issues.
Conversely, closed AI systems are proprietary. They are developed by specific vendors. These systems limit access to their underlying data. They offer better security. But they lack transparency and adaptability.
Where lawyers use client data to produce AI-generated materials, they must protect the data from being incorporated into the vendor’s AI tools. Contracts with AI vendors should include clear restrictions on how the vendor can use the data. The lawyer must retain ownership of any AI-generated outputs derived from client data.
Competency. Attorneys must provide representation that is competent. The New York rule on competency requires not only knowledge and skill, but also thoroughness and reasonable preparation. Lawyers must understand the risks and pitfalls of generative AI. This duty demands more than simply entering a prompt into generative AI and accepting the results. To avoid expensive mistakes, AI’s operations and limitations must be understood.
Ronald Reagan famously said, “Trust, but verify.” He wasn’t talking about AI, but his maxim fits. Generative AI is anthropomorphic. It engenders trust by mimicking human interaction. Users must be vigilant. AI-generated responses can mislead or be altogether hallucinated.
Ignoring the risks of hallucination leads to humiliation. Consider Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023). There, overreliance on generative AI led to the sanctioning of two New York attorneys and their law firm. The attorneys used ChatGPT to generate court filings, in which the AI tool made up cases to support the attorneys’ legal arguments. The court held, “the filing of papers ‘without taking the necessary care in their preparation’ is an ‘abuse of the judicial system’ that is subject to sanction … An attempt to persuade a court or oppose an adversary by relying on fake opinions is an abuse of the adversary system.” On top of a fine, the court ordered the attorneys to notify the judges to whom the fake opinions were attributed.
Mata illustrates the importance of lawyer competency when using new technologies, such as generative AI. Lawyers should understand how AI systems operate, including factors considered in AI decision-making and the datasets used for training. The attorneys in Mata did not understand that ChatGPT’s training data was limited and that the tool could thus hallucinate case citations.
As a practical matter, some AI uses are easier to verify because they do not turn on specific rules or cases. You might ask AI to generate interrogatories or document requests. You can then use, revise, or dump them. There is little risk of violating a rule or case. And almost no risk of breaching clients’ confidences.
Similarly, we’ve heard of lawyers asking AI to suggest deposition questions. This, too, seems safe. The lawyer need not disclose client information. Still, lawyers must always be wary of automation complacency. Oversight is critical.
Lawyers must be vigilant when using generative AI. Lawyers must evaluate AI’s responses for accuracy, check all sources cited, and tailor the response to the facts. The lawyer’s skill is vital to provide accurate legal context and ensure that ethical considerations are upheld.
Ultimately, lawyers will need to wade into the AI sea. Some lawyers will leap in. Others will wait to be dragged in. However you go in, remember the three Cs: Communication, Confidentiality and Competency.
Reprinted with permission from the New York Law Journal©, ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.