Ethical and Thoughtful Use of AI in the Legal Industry
Generative artificial intelligence (AI) is a rapidly evolving technology that has the potential to revolutionize the legal profession. In an ideal world, generative AI would free up lawyers to focus on more complex and strategic work by automating many of their less sophisticated, more repetitive tasks. This can lead to increased efficiency, productivity and profitability for law firms. In the short term, however, it carries significant risks if not used judiciously. Artificial intelligence won’t replace lawyers, but lawyers who don’t learn how to responsibly leverage the power of AI will be replaced by those who do.
That also applies to law firms and the legal industry as a whole. Anyone who has seen the most impressive of today’s available tools understands that fewer lawyers will be needed in the future. And the reality is that we are still in the early stages of this technology’s development and usefulness.
That said, it is important to separate public AI chatbots from private, legally focused AI tools. Popular AI applications such as ChatGPT, Microsoft Bing and Google’s Bard are for the most part inappropriate for use in legal practice. Private, legally focused tools such as Casetext’s CoCounsel and Henchman’s contract drafting tool are far better suited because they utilize known data sources, protect confidentiality and focus on security.
Some law firms are already using these industry-specific AI tools to automate repetitive tasks such as document production and e-discovery, as well as to analyze key performance indicators and metrics.
Nonetheless, the use of generative AI in the legal profession poses multiple ethical concerns. For example, there is the risk that an attorney could use generative AI to produce work product they treat as a final draft without applying their legal judgment or confirming the accuracy of the information contained therein. Additionally, AI could be used to replace local counsel or associates who are conducting legal research and analysis.
As recent news reports have demonstrated, the existing technology has some shortcomings. This is especially true when it comes to publicly available generative AI tools such as ChatGPT, the large language model chatbot developed by OpenAI that, among other things, can generate text, translate languages, write different kinds of creative content, and perform substantive analyses on topics and situations presented by the user.
In May 2023, a New York lawyer found this out the hard way when he decided to use ChatGPT to research a legal issue in a personal injury case. The attorney asked ChatGPT to provide him with case law that supported his client’s claim. The chatbot responded with six case citations, which the attorney then included in a legal brief filed with the court. When opposing counsel became suspicious because they could not locate any of the cases, the judge ordered the attorney to produce the case law.
The attorney was unable to produce the cases because they did not exist. He admitted to the court that he had used ChatGPT to perform the legal research and did not independently confirm that the cases were real (or even what legal positions they stood for). He told the court that he was “unaware of the possibility that ChatGPT’s content could be false.” He said that it was the first time he had used ChatGPT for legal research and that he had not been trained on how to use it effectively. The attorney has since apologized for the mistake and agreed to withdraw the brief that cited the imaginary case law. As of this writing, the judge has not sanctioned the attorney, but he was visibly displeased at the June 8 hearing on the matter, and it is widely anticipated that sanctions will follow.
The incident in New York highlights the need for lawyers to be extremely careful when using generative AI for legal research. The technology is still in the early stages of development and is not always accurate. In fact, generative AI is known to “hallucinate”—completely making up content while delivering it in a confident, matter-of-fact manner. In addition, lawyers should be aware that human bias is prevalent throughout the results produced by generative AI because of the nature of the data that is used to train the technology, a problem that carries its own ethical implications.
The newness of the technology, as well as this and other high-profile hiccups, has produced skepticism. Less than a week after the New York incident, Brantley Starr, a U.S. District Judge for the Northern District of Texas, issued a standing order requiring attorneys appearing in his court to certify that they did not use generative AI to draft any portion of their filings, or that any language drafted by such AI-powered technology was checked for accuracy by a human being. In his order, Starr cited ChatGPT, Harvey.AI and Google’s Bard as examples of generative AI covered by his directive. He made no distinction between public AI tools and private, legally focused tools. The order states that while generative AI platforms are very powerful and have many uses in the law, “legal briefing is not one of them.”
Starr’s order is a significant development in the use of generative AI in the legal profession. It is the first time that a judge has required attorneys to disclose their use of AI in legal filings. In the short term, the order is likely to have a chilling effect on the use of such tools in legal research and writing.
There are several potential motivations behind Judge Starr’s order. First, he may be concerned about the reliability of the information AI generates because it is still in the early stages of development and is not always accurate. Second, he may be concerned that AI could be used to create fraudulent or misleading documents. Third, he may be concerned that AI may be used to replace lawyers altogether.
Starr’s order is likely to be challenged. Some lawyers argue that the order violates the First Amendment right to free speech. Others contend that it is overly burdensome and will make it more difficult for lawyers to effectively represent their clients.
While the outcome of any challenge to Starr’s order is uncertain, the directive demonstrates growing concern in the legal profession about the use of generative AI. As the technology grows more sophisticated, other judges and courts are likely to take a similar approach to disclosure in the future and to develop new rules and procedures to address its ethical and legal implications.
Consequently, it is important for lawyers to stay up to date on the latest developments and to adopt best practices for using generative AI in an ethical and responsible manner. There are multiple ways law firms can ethically and thoughtfully incorporate generative AI into their practices. Here are a few tips:
- Law firms should adopt a policy regarding the ethical and thoughtful use of AI that reinforces attorneys’ ethical obligations and sets guardrails around use of various AI tools.
- Lawyers must be trained on how to use generative AI effectively and ethically. This training should cover the basics of generative AI, as well as the ethical considerations associated with its use.
- Lawyers should not use generative AI to replace their own judgment and expertise. Generative AI should be used to augment the work of lawyers, not to replace them altogether. Lawyers should still be responsible for reviewing and approving all documents generated by generative AI.
- It goes without saying that generative AI should not be used to create fraudulent or misleading documents. Lawyers should carefully review all documents created using generative AI to ensure that they are accurate and compliant with all applicable laws and regulations.
- Lawyers should keep in mind that generative AI is not perfect and can sometimes produce inaccurate or misleading information.
- Users must verify the accuracy of any information generated by generative AI before relying on it in any legal proceeding.
- Lawyers should make clients aware when generative AI is being used to create their documents. This will help to build trust and ensure clients are comfortable with the process.
- Users must stay current on the latest developments in generative AI and adopt best practices for using it in an ethical and responsible manner.
Reprinted with permission from the June 03, 2023, issue of The Legal Intelligencer© 2023 ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.

