Over the past year, there has been a rash of cases in which lawyers used artificial intelligence programs to draft the briefs they submitted to the court, but without reviewing those briefs to discover that the AI program had erred — often to the extent of hallucinating citations to legal authorities that did not even exist. You can read examples of these stories here and here and here and here and … well, you get the point. These instances have often resulted in monetary sanctions for the attorney involved. This is somewhat ironic, considering that judges, including U.S. district judges, have themselves been caught in the same practice, as you can read here and here. “Physician, heal thyself.”
This article is not about those cases specifically, nor is it about the sanctions imposed on the involved attorneys or even judges. It is about something much more serious: ethical violations by the attorneys who engage in this practice, violations serious enough that they could result in suspension from practice or even disbarment.
There are several important ethical issues involved here. Since most states follow the American Bar Association’s Model Rules of Professional Conduct, we will focus on those.
The first rule implicated is, well, the first rule.
“Rule 1.1: Competence: A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.”
When a lawyer provides the court with a brief that contains significant inaccuracies, whether drafted by AI or not, the lawyer has not provided competent representation to the client and has violated Rule 1.1. If the brief is drafted by AI, then for this purpose it is little different from a brief drafted by a law clerk, paralegal or even a junior attorney — it must be reviewed for quality and accuracy.
The use of AI to draft a brief indicates something much worse: the attorney is not doing what she is paid to do, which is thinking. Lawyers are paid to use their minds, not just to toil through the motions — that’s what much lower-rate paralegals are for. Lawyers are paid to use their minds to craft a winning case, not just grind out a work product and call it a day. Drafting briefs gives attorneys a chance to think about what their case is really all about. I’ve had any number of important epiphanies about cases while drafting even simple motions and briefs. Delegating the thinking to AI is nothing less than an abdication of the lawyer’s primary function: using her professionally trained mind for the benefit of the client.
It is also lazy. Rule 1.1 requires not just knowledge (one’s own research) and skill (thinking), but also “thoroughness and preparation”. Using AI to draft a brief without reviewing it for accuracy is the very antithesis of thoroughness and preparation — it is simply an alternate way of doing nothing.
For all these reasons, submitting an AI-drafted brief without reviewing it for accuracy, or considering whether it really reflects the best arguments that could be made for the client, violates Rule 1.1 and exposes the attorney to discipline. While it shouldn’t be, this is probably the Model Rules violation that will result in the least serious discipline, likely a private letter of reprimand. It is still not something one wants on one’s professional record, because at the very least it shouts “lazy lawyer.”
Submitting an unreviewed AI-drafted brief containing inaccuracies and errors also implicates a more serious provision of the Model Rules, Rule 3.1:
“Rule 3.1: Meritorious Claims & Contentions. A lawyer shall not bring or defend a proceeding, or assert or controvert an issue therein, unless there is a basis in law and fact for doing so that is not frivolous, which includes a good faith argument for an extension, modification or reversal of existing law.”
A lawyer violates Rule 3.1 if the lawyer submits to the court an argument that is not supported by law and fact. It doesn’t matter why the lawyer did that, only that it occurred. When a lawyer submits to the court an AI-drafted brief with hallucinated authorities because the lawyer did not bother to review the cited authorities for accuracy, you have a Rule 3.1 violation. You also likely have a violation of Rule 3.4, which provides:
“Rule 3.4: Fairness to Opposing Party & Counsel. A lawyer shall not: (c) knowingly disobey an obligation under the rules of a tribunal except for an open refusal based on an assertion that no valid obligation exists;”
A court’s rules of civil procedure will almost always include a rule (usually Rule 11 in federal court and in most states) that prohibits a lawyer from taking a frivolous position in the case. Urging an argument based on hallucinated authorities can violate that prohibition. Violating the rule against frivolous positions can then also be a violation of Rule 3.4 for disobeying “an obligation under the rules of a tribunal.”
Violations of Rules 3.1 and 3.4 are usually taken more seriously by the Bar. They have frequently resulted in public reprimands with requirements that the offending attorney take certain CLE courses, or sometimes in relatively short suspensions. This is in addition to whatever monetary sanction is doled out by the court that caught the misconduct.
Now let’s move on to the most severely punished violation possible for an attorney who submits an AI-drafted brief with errors.
“Rule 8.4: Misconduct. It is professional misconduct for a lawyer to: (c) engage in conduct involving dishonesty, fraud, deceit or misrepresentation . . . .”
There are several aspects of Rule 8.4 that could come into play here.
First, when a lawyer signs a court submission, whether a motion or brief or anything else, the lawyer is acting as an officer of that court and essentially verifying that the submission and its contents are valid. If the contents are not valid for whatever reason ― say, a cited legal authority does not say what it is represented to say ― that amounts to a deceit upon the court and an act of dishonesty. On this point, there is obvious overlap between Rule 8.4 and Rules 3.1 and 3.4, discussed above.
Second, there is an express or implied representation by the lawyer to her client that the document being filed in court is the authentic work of the lawyer, and not some AI-drafted junk that was not even reviewed by the lawyer before it was filed. After all, if the client wanted an AI-drafted brief filed, the client could have generated that sort of brief himself at no cost. There is thus a significant and invidious element of dishonesty being practiced by the attorney on the client.
This brings us to the third aspect, which depends upon how the attorney was compensated in the case. If the attorney took on the matter on a contingency or fixed-fee basis, then perhaps it matters little. But if the attorney was billing the client on an hourly basis, then the matter can become very serious.
The question becomes how much the attorney billed the client for the AI-generated brief. Did the attorney bill the client for the half-hour or so that it took to write the AI prompts that generated the brief, or did the attorney bill the client as if the attorney had put in the 50 hours of sweat equity needed to produce a quality brief? If the attorney billed for only the half-hour, then perhaps there is no Rule 8.4 problem. But if the lawyer billed for 50 hours as if the lawyer had actually done that work, that is dishonesty, fraud, deceit and misrepresentation to the client all at once. That is billing for hours the lawyer did not actually work, and it calls for the most extreme ethical penalty of disbarment.
This is something that clients need to look out for as well. If their lawyer gets caught submitting an AI-drafted brief, the client will want to review billing statements to see how much the lawyer billed for that work and possibly ask for those fees to be rebated. Law firms themselves have an ethical obligation to ensure that this sort of fraudulent billing is not being done by their partners and associate attorneys.
None of this is to say that artificial intelligence has no proper role in the legal sector. There are many things for which AI can be quite useful, such as sorting and summarizing documents, creating deposition indices and summaries, and the like. There is also nothing wrong with asking AI to draft a brief so that the lawyer can compare it to his own brief to see whether AI spotted any additional issues or arguments. Personally, in addition to some of these other uses, I have found AI useful for summarizing court opinions I have already read, so that I can later remind myself what they are about. But I still treat AI very cautiously in my own law practice.
It is critically important that the use of AI be tempered with the realization that, at the end of the day, it is simply a computer program and can make mistakes, even egregious ones. AI also has the potential to be overused, to the point that the lawyer never develops a solid understanding of a particular case herself. That can end up biting the lawyer, and thus the client she represents.
In the end, the biggest danger of AI to lawyers is that relying upon it will cause a lawyer’s basic skills and thought processes to atrophy. It is thus a crutch to be used lightly, if at all. Perhaps there will come a day, possibly within the next several years, when AI will be able to do a better job at many lawyer functions ― including drafting briefs ― than even the very best lawyer. Yet it will still be the human lawyer who must stand up in court and argue the case to other humans, whether they are wearing black robes or sitting in the jury box.
That requires a healthy legal mind, and one not atrophied through overreliance upon AI.