In its first major pronouncement on the ethics of using generative AI in law practice, the American Bar Association has issued an opinion saying that lawyers need not become experts in the technology, but must have a reasonable understanding of the capabilities and limitations of the specific generative AI technology the lawyer might use.
In Formal Opinion 512, issued yesterday, the ABA’s Standing Committee on Ethics and Professional Responsibility sought to identify some of the ethics issues lawyers face when using generative AI tools and offer guidance for lawyers in navigating this emerging landscape.
Acknowledging that the rapid development of gen AI makes it a fast-moving target, the committee said, “It is anticipated that this Committee and state and local bar association ethics committees will likely offer updated guidance on professional conduct issues relevant to specific GAI tools as they develop.”
The opinion offers no earth-shattering insights. Rather, it focuses on much the same ethics issues that have been discussed in state ethics opinions issued so far on gen AI and the same issues that come up in almost every ethics opinion on legal technology of any kind: competence, confidentiality, communication with clients, supervisory responsibilities, and fees.
One issue of particular salience with generative AI that the opinion addresses is that of meritorious claims and candor toward a tribunal.
Recognizing that issues have arisen with lawyers’ use of AI involving citations to nonexistent opinions, inaccurate analysis of legal authorities, and use of misleading arguments, the ABA says that lawyers have a duty — both under their duty of competence and of candor to courts — to review the accuracy of all outputs from generative AI products.
“In judicial proceedings, duties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority, and misleading arguments,” the opinion says.
Duty of Technological Competence
The opinion begins with, and devotes its largest section to, a discussion of a lawyer’s duty of competence in using generative AI under Model Rule 1.1 — both legal competence and technological competence. That duty, the ABA says, does not require that lawyers become experts in AI, but it does require that they develop a reasonable understanding.
“To competently use a GAI tool in a client representation, lawyers need not become GAI experts,” the opinion says. “Rather, lawyers must have a reasonable understanding of the capabilities and limitations of the specific GAI technology that the lawyer might use.”
See here for a list of states that have adopted the duty of technological competence for lawyers.
What does that mean, exactly? The ABA offers these guidelines:
- Lawyers should either have that reasonable understanding themselves or draw on the expertise of others who can provide guidance.
- The duty is not static. Lawyers should keep up with changes in the technology and remain vigilant about its benefits and risks.
With regard to generative AI’s capacity to hallucinate, the ABA says that lawyers should never rely on or submit AI-generated output “without an appropriate degree of independent verification or review of its output.”
What that appropriate level of review is will depend on the tool being used and the task being performed. As an example, the opinion says, “a lawyer’s use of a GAI tool designed specifically for the practice of law or to perform a discrete legal task, such as generating ideas, may require less independent verification or review, particularly where a lawyer’s prior experience with the GAI tool provides a reasonable basis for relying on its results.”
Interestingly, the opinion speculates that there could come a time when lawyers will have to use generative AI “to competently complete certain tasks for clients.” But even without such a requirement, the ABA says, lawyers should become aware of and understand these tools to a sufficient extent that they can make a professional judgment about whether to use them.
Other Ethics Issues with Generative AI
The opinion addresses several other ethics issues implicated by a lawyer’s use of generative AI:
- Confidentiality. A lawyer’s duty under Model Rule 1.6 to protect client information extends to the use of generative AI, the opinion says. Given that, lawyers must evaluate the risk of disclosing confidential information when using these tools and ensure that any information input into an AI tool is adequately safeguarded. The opinion advises lawyers to thoroughly review the terms of use and privacy policies of AI tools and consult with IT or cybersecurity experts if necessary. In some cases, lawyers should obtain a client’s informed consent before using a generative AI tool, particularly a “self-learning” tool whose output could be inadvertently shared with others. “For the consent to be informed,” the opinion says, “the client must have the lawyer’s best judgment about why the GAI tool is being used, the extent of and specific information about the risk, including particulars about the kinds of client information that will be disclosed, the ways in which others might use the information against the client’s interests, and a clear explanation of the GAI tool’s benefits to the representation.”
- Communication. Even when informed consent is not required, a lawyer’s duty under Model Rule 1.4 to communicate with clients may require lawyers to inform clients about their use of generative AI tools. “The facts of each case will determine whether Model Rule 1.4 requires lawyers to disclose their GAI practices to clients or obtain their informed consent to use a particular GAI tool,” the opinion says. “Depending on the circumstances, client disclosure may be unnecessary.”
- Supervisory responsibilities. Model Rules 5.1 and 5.3 require managerial and supervisory lawyers to ensure that all members of their law firm comply with professional conduct rules when using any technology tools, including generative AI. This duty includes establishing clear policies, training on ethical use, and monitoring compliance. Additionally, when outsourcing to third-party providers, lawyers must ensure these providers adhere to confidentiality and professional responsibility standards.
- Fees. When using generative AI, lawyers must adhere to Model Rule 1.5 regarding reasonable fees, the opinion says. Lawyers may charge clients for the time spent using GAI tools, the ABA says, but they must ensure the fees are reasonable and reflect actual time spent. “If a lawyer uses a GAI tool to draft a pleading and expends 15 minutes to input the relevant information into the GAI program, the lawyer may charge for the 15 minutes as well as for the time the lawyer expends to review the resulting draft for accuracy and completeness.” For fixed-fee matters, it may be unreasonable for a lawyer to charge the same fee when using generative AI as when performing the work without it. Lawyers may not charge clients for time spent gaining competence in generative AI, unless explicitly requested by a client. “Lawyers must remember that they may not charge clients for time necessitated by their own inexperience.”
The ABA wraps up its opinion with a reminder to lawyers of their responsibility to remain vigilant in the face of evolving technology.
“Lawyers using GAI tools have a duty of competence, including maintaining relevant technological competence, which requires an understanding of the evolving nature of GAI. … With the ever-evolving use of technology by lawyers and courts, lawyers must be vigilant in complying with the Rules of Professional Conduct to ensure that lawyers are adhering to their ethical responsibilities and that clients are protected.”