This article originally appeared in the Trial Lawyer’s Journal, Vol. II. To get free online access to all articles or subscribe to the print edition, visit www.triallawyersjournal.com
An android, a humanoid robot, confidently addressing a courtroom, its digital eyes gleaming with intelligence. The judge and jurors, captivated. What's remarkable isn't the futuristic spectacle, but that this AI can outspeak, outthink, and match seasoned trial lawyers head-to-head.
If Hollywood has taught us anything, it's that AI always knows more than it lets on. Remember HAL 9000, the sentient computer in "2001: A Space Odyssey"? HAL wasn't just an ordinary assistant; he had intelligence that surpassed human comprehension. And there's Ridley Scott's masterpiece "Blade Runner," a film where AI goes beyond mere data crunching. Replicants, those bioengineered beings from Tyrell Corp., were designed to be "more human than human." And who can forget James Cameron's "Aliens," where the corporate sycophant, Burke, was upstaged by an android named Bishop?
Bishop was the ideal blend of calm logic and ethical conduct, everything Burke was not. In today’s legal arena, AI aims to be our Bishops. It stands as a paragon of responsible logic, dispassionately analyzing reams of legal documents and past cases, spotting patterns that even the keenest human eyes could miss. However, just as Ripley had her initial concerns with Bishop, trial lawyers should be wary of what AI can (and can’t) do.
The regulatory landscape is evolving at warp speed to keep up with the rapid advancements in AI technology. Ethical considerations, data privacy issues, and the fair use of AI-driven insights are all legitimate concerns.
This brings us to the topic of discussion: the regulatory updates seemingly coming out every week. How do we harness the hyper-intelligent power of AI without it spiraling into a legislative HAL 9000 scenario? We dive into the latest regulations, state opinions, the ethical conundrums, and the pragmatic solutions that pave the way for AI to become an indispensable ally in personal injury law.
While the legal sector has begun adopting various AI tools to assist professionals with a host of legal tasks, law firms have run into several ethical and legal challenges that require careful consideration. With attorneys using AI to draw up demand packages, write briefs, qualify leads, or perform other tasks, different state courts and bar associations have begun to weigh in on how the technology can and should be used by attorneys. And there are already cases illustrating how the courts will sanction lawyers when ethical lines are crossed. While certain AI tools can benefit PI law firms, it's critical that lawyers understand where the courts and bar associations stand on these issues.
More Than Meets The AI: Types Of Modern Artificial Intelligence
AI is an advanced technology that enables machines and computers to simulate human intelligence. On its own or when combined with other solutions, AI can perform tasks that would otherwise require human intervention. With the aid of vast amounts of data, these systems learn from past experiences and perform human-like tasks using complex methods and algorithms, enhancing the effectiveness, accuracy, and speed of human efforts. Some argue this technology can also provide greater access to justice for clients. There are four types of AI, classified by function, only two of which are currently in use:
Reactive Machine AI
These AI systems only work with currently available data and are designed to perform a specific task. Examples are Netflix’s recommendation engine, which looks at a viewer’s history, and IBM’s Deep Blue chess-playing computer, which analyzes pieces on a board to predict probable outcomes.
Reactive machine AI doesn’t form memories or use past experiences to make its current decisions. These systems are highly task-specific, so they are ideal for performing repetitive tasks. For law firms, this might include things like seamless document management and scheduling.
Limited Memory AI
Limited memory AI can recall past outcomes and events and monitor specific situations or objects over time. It can use present and past-moment data to decide on an outcome, but it can’t retain data long-term. Examples of limited memory AI include generative AI tools like ChatGPT and DeepAI and virtual assistants like Alexa and Siri.
Limited memory AI relies on natural language processing and machine learning. Machine learning is the most popular and most disruptive of these technologies for the legal industry. Because it relies on inductive reasoning, much like a spellchecker, there is no way to catch every error this type of AI can produce. While it can be useful, it must be used with caution.
Theory Of Mind AI
Theory of mind AI is a type of AI that is still largely undeveloped. The idea is that this type of AI will understand the emotions and thoughts of other entities, so it can better interact with those around it. Once brought to fruition, theory of mind AI will be able to analyze images, voices, and other types of data to respond appropriately to humans on an emotional level. This type of AI may eventually be useful in screening and qualifying leads.
Self-Aware AI
This type of AI has yet to be developed, so it remains strictly theoretical. It would not only be able to understand human emotions but also its own internal traits and conditions. It would have its own beliefs, needs, and emotions. How this type of AI can or should be applied in the legal sector remains completely unknown.
Responsible AI for Personal Injury Firms
In a short period of time, we have seen an explosion of service providers and software platforms offering their versions of an AI solution. Your inbox is probably flooded with claims of increased efficiency and productivity, faster turnaround times, and cost savings. Today, when everything seems to be a "dot AI," how does one filter substance from noise?
A personal injury attorney's mission is to secure justice for their client, which doesn't always mean the fastest settlement. Every business strives to be efficient and productive, but true success in personal injury law demands more than the bits and bytes of any algorithm. It requires empathy, concern, and genuine care, qualities that chatbots, for now, cannot mimic. In times of crisis, people still value a human touch and personalized attention.
That’s why we at CloudLex are taking a slightly different approach, one that commits to building “Responsible AI for Personal Injury Firms.” What does this responsibility mean? It is based on four foundational pillars:
- Enrich & Enhance our day-to-day work by making it more engaging and enjoyable.
- Unlock Efficiencies through automation, but always complemented by human intuition and judgment.
- Create a Win-Win Solution rather than a Zero-Sum Game, designed to empower — not eliminate — the workforce.
- Promote a Culture fueled by human empathy, passion, drive, grit, and good old-fashioned creativity.
AI will be a journey for all of us, and it represents a paradigm shift for knowledge workers. The good news is that we are in the driver's seat, shaping this future. This isn't a time to hastily jump on the AI bandwagon; it is a time to move forward responsibly, embrace new technology while weighing its pros and cons, and actively shape the future we want.
Ethical Considerations, Challenges, And Risks
Maintaining Competence
Law firms have a responsibility to submit accurate information to the courts on behalf of their clients. If they rely entirely on AI for data, they can and should be held accountable when that data is inaccurate.
AI hallucinations are also an ongoing problem, wherein an AI tool perceives objects or patterns that are nonexistent to human observers and creates outputs that are either nonsensical or entirely inaccurate.
In other words, the AI application understands that you want case law to back up your argument, so it fabricates a case to satisfy your query. This is obviously a serious ethical issue in any area of the law, and one the courts have addressed repeatedly in recent rulings.
Confidentiality Of Information
Attorneys have an ethical duty not to disclose information related to the representation of their clients. Similar to any business, law firms must protect the private data of their clients. This can create conflicts when using AI.
AI systems generally rely on vast amounts of data, including confidential and sensitive information, which can be stored and used to make decisions. When using AI technology, lawyers must ensure adherence to strict privacy regulations.
For example, lawyers using ChatGPT must understand the system’s privacy policy and make sure any data the application uses is strictly limited to the stated purpose.
Impartiality Of Opinions
If the data AI draws from is biased, the results it produces will also be biased. This is critical for any industry, but it can be catastrophic for the legal profession because it undermines the principles of equal treatment under the law and justice.
The problem is that predictive analytics have been found to be biased and even discriminatory. For example, AI models that generate a cost-of-living score for settlement purposes are built on historical data. If the algorithm draws on data from a district that isn't representative of the client, it can produce an inaccurate and biased result.
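To make the mechanism concrete, here is a minimal sketch in Python. The district names and dollar figures are entirely hypothetical and are not drawn from any real model or dataset; the point is simply how an estimate trained on one area's history can understate costs for a client who lives somewhere more expensive.

```python
# Minimal sketch (all figures hypothetical): how non-representative training
# data skews an AI-style cost-of-living estimate.

# Historical monthly living-cost figures the model was "trained" on,
# drawn only from District A, a lower-cost area.
district_a_history = [2100, 2200, 2050, 2150, 2250]

# Actual figures for the client's district (District B, a higher-cost area).
district_b_actual = [3400, 3550, 3300, 3500]

def average(values):
    return sum(values) / len(values)

# A naive "model" that simply projects the average of its training data.
predicted_cost = average(district_a_history)   # 2,150
true_cost = average(district_b_actual)         # 3,437.50

print(f"Model's estimate (trained on District A): ${predicted_cost:,.0f}/month")
print(f"Client's actual figure (District B):      ${true_cost:,.0f}/month")
print(f"The estimate understates the client's costs by "
      f"{100 * (1 - predicted_cost / true_cost):.0f}%")
```

In this toy example, the skewed training data understates the client's monthly costs by roughly a third, which is exactly the kind of hidden bias a lawyer should look for before relying on an AI-generated number.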
Before using AI, it’s essential that lawyers understand that bias may exist and how it can impact case outcomes. Instead of blindly relying on analyses produced by AI, lawyers must critically examine the results to identify any potential biases.
Rulings And Guidelines Related To AI In The Legal Sector
Although AI is a relatively new technology, its use has already been tested in the courts and addressed by many state and local bar associations.
Court Rulings Related to AI
Several real-life cases involving AI's use in the law already exist. Here is a sampling of court rulings, many of which involve sanctions and license suspensions for improper use of the technology.
Mata v. Avianca, Inc. (06/22/2023)
In Mata v. Avianca, Inc., the plaintiff's attorneys, Peter LoDuca and Steven Schwartz, prepared an "Affirmation of Opposition" using ChatGPT, which cited numerous nonexistent cases. After the opposing counsel and the court could not locate the cases, the court found that the lawyers acted with "subjective bad faith," ordered the attorneys and their law firm to pay a $5,000 penalty, and required the lawyers to apologize to the judges who were incorrectly identified in the fake cases.
People v. Crabill (11/22/2023)
Colorado attorney Zachariah Crabill was suspended from practicing law for 366 days in addition to a two-year probation period by the Colorado Supreme Court Office of the Presiding Disciplinary Judge. The suspension was due to Crabill’s use of fictitious or incorrect cases generated by ChatGPT. Even though Crabill was aware of the issues, he didn’t withdraw his motion but instead blamed any mistakes on a legal intern.
Smith v. Farwell, et al. (02/12/2024)
This Massachusetts Superior Court civil case addresses the submission of several legal memoranda by the plaintiff's counsel, who cited and relied on wholly fictitious case law generated by AI. While the plaintiff's counsel denied any intent to mislead the court, the attorney was required to pay a $2,000 monetary sanction.
J.G. v. New York City Dept. of Education (02/22/2024)
In a federal case in New York under the Individuals with Disabilities Education Act, the plaintiff sought an award of attorney's fees as permitted by the statute. While the court awarded a portion of the fees requested, it severely chastised the attorneys for using ChatGPT to support their claims. Specifically, the court found that relying on ChatGPT-4 was "utterly and unusually unpersuasive" for determining reasonable legal billing rates.
Bar Association Guidelines
In many jurisdictions, state and local bar associations have issued guidance or made recommendations concerning the use of AI. The common theme among these is that lawyers must understand the risks of using this technology and ensure its use complies with their ethical obligations under the Rules of Professional Conduct. Some examples include:
California (11/16/2023)
The State Bar of California Standing Committee on Professional Responsibility and Conduct issued "Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law." The guidance advises that lawyers should consider all the risks associated with using generative AI in providing legal services.
Florida (01/19/2024)
The Florida Bar's Ethics Opinion 24-1 advises that lawyers may use generative AI in their legal practices provided they do three things: protect client confidentiality, provide competent and accurate services, and avoid improper billing practices. On August 29, the Florida Supreme Court adopted the amendments proposed by the Bar, and the changes went into effect on October 28, 2024.
Kentucky (03/15/2024)
The Kentucky Bar Association issued Ethics Opinion KBA E-457, advising that lawyers have a duty to keep abreast of AI and its use in the practice of law. While lawyers may not need to disclose the "rote" use of AI for basic tasks and research, they may need to reduce their fees if AI shortens the time spent on client matters. Lawyers must continue to comply with all court rules and safeguard client information when using AI.
Michigan (10/27/2023)
The State Bar of Michigan addresses judicial competence and artificial intelligence in its Ethics Opinion JI-155. The opinion concludes that judicial officers must maintain competence with emerging technologies, especially AI, and the ways these technologies can impact their decisions and conduct. It also concludes that legal professionals have an ethical obligation to understand technology and take appropriate steps to ensure that any tools used are within the confines of court rules and the law.
New York (04/06/2024)
The New York State Bar Association's Task Force on Artificial Intelligence issued a 92-page Report and Recommendations in 2024, which affirms that lawyers must continue to comply with the Rules of Professional Conduct. The bar association emphasizes independent judgment. For example, citing Rule 1.1 on competence, the report indicates that attorneys "have a duty to understand the benefits, risks, and ethical implications associated" with using AI tools.
New Jersey (01/24/2024)
The New Jersey Supreme Court Committee on Artificial Intelligence issued "Preliminary Guidelines on New Jersey Lawyers' Use of Artificial Intelligence." The guidelines emphasize that AI doesn't change a lawyer's fundamental duties or ethical responsibilities to their clients. As with any technology, a lack of care could lead to ethical violations and unintentional misconduct, including discrimination.
Pennsylvania (05/22/2024)
The Pennsylvania Bar Association and Philadelphia Bar Association recently issued a joint legal ethics opinion on the use of generative AI in law practice. The 16-page opinion includes 12 points of responsibility for lawyers using AI in their legal practices. Among these points are items like ensuring lawyers are competent in the technology they use, verifying the accuracy of all materials, eliminating biases, maintaining confidentiality, and ensuring transparency when using AI tools.
Texas (11/19/2024)
The State Bar of Texas issued updated guidelines emphasizing the importance of lawyer competence in AI technology, maintaining client confidentiality, ensuring transparency, and addressing privacy concerns. The guidance also highlights the need to verify AI outputs for accuracy, mitigate biases, and incorporate AI and cybersecurity training into CLE programs.
Michael Abdan is a licensed attorney in the states of New York and Florida and a Partner at CloudLex.