Artificial Intelligence And The Law: An Early Warning

There has been an interesting case pending in the U.S. District Court for the Southern District of New York, Mata v. Avianca Inc., Case No. 22-CV-1461 (S.D.N.Y., filed Feb. 22, 2022). The case involves two lawyers who used artificial intelligence, in the form of ChatGPT, to draft their legal brief. It didn't turn out well, as related in a May 27, 2023, article by Benjamin Weiser of the New York Times. Essentially, ChatGPT just made up citations and quotations out of thin air, and when opposing counsel and eventually the court started to check those citations, they simply didn't exist. This is part of a larger problem with AI: it has shown a disturbing tendency, like its human creators, to tell porkies and just outright lie about things. For their part, the lawyers threw themselves on the mercy of the court and begged forgiveness — the smartest thing they had done so far. Ultimately, each lawyer was fined $5,000 and required to apologize to the judges whom they (or, rather, ChatGPT) had blatantly misquoted. That was probably the least of their punishment, as they have now been immortalized as professional buffoons.

In federal courts, attorneys are required by Federal Rule of Civil Procedure 11 (popularly known as "Rule 11") to sign off on all things filed with the court, and that signature essentially certifies that everything in the filing is true and correct to the best of that attorney's knowledge and belief. That includes not just evidence put in front of the court, but also legal authorities which argue a particular position. Rule 11 applies to the signing attorney even if the filing was drafted by a law student clerking in their first summer, who slops together a bunch of authorities having nothing to do with anything, as such clerks are so apt to do; the attorney's duty is to carefully check those authorities to make sure they say what the filing claims they say. Here, the attorneys were not even using the wet-behind-the-ears law clerk, but simply (and quite lazily) had ChatGPT generate their filing and then failed to check whatever it was that ChatGPT had ginned up. It was a blatant violation of Rule 11, and they got dinged for it.

The point of this article isn't about this particular case, but rather the dangers to clients of their own attorneys using AI programs to generate planning documents: things such as contracts, trusts, wills, and even legal memoranda upon which a client may desire to rely later as the "advice of counsel". This isn't a 2023 problem; it has already been going on for some years. The first time I saw it was in 2018, when one of my own clients told me about draft documents that were being created for him through a law firm's AI program for a series of transactions that the other law firm was handling. This saved a lot of time and cost for the client, since the AI program was belting out these documents in seconds, before an associate attorney, had one been tasked with the job, could even grab a legal pad. The documents were then carefully reviewed by attorneys of the firm and presented to my client at a much lower cost — albeit the firm did charge a hefty fee for the development costs of the AI program in the first place. The point being that AI drafting of legal documents is already happening, has in fact been happening for some years, and there is certainly the potential for benefits if it is used correctly.

And herein lies the problem: AI offers benefits for drafting documents if it is used correctly, but not everybody is using it correctly, and there are hidden dangers which must be understood.

What constitutes AI is beyond the scope of this article, but suffice it here to say that modern AI is software that is not limited to rules hand-written by a programmer: it adjusts its own internal parameters as it is exposed to data, so that its behavior effectively rewrites itself over and over. As more external inputs in the form of data are added, the program takes this data into account and further adjusts itself to deal with it. Thus, an AI program basically is taught to teach itself as additional data is added. The difference between AI doing this and an organic person doing this is that AI makes millions of self-teaching adjustments per second, whereas a human is lucky to have one really good epiphany once a day. This is wonderful in data-heavy environments, such as looking for patterns to exploit in millions of financial transactions, but there is an important catch: If an AI program teaches itself to misinterpret data, it can go off on a tangent at the rate of millions of times per second. Or, as the old computer science joke used to go back in the 1980s (when computers were much slower): One computer can make as many mistakes in one minute as can ten men working ten years. Because AI programs can take a wrong turn for lack of data or flawed training, leading to conclusions which are simply false or absurd, human intervention is required to constantly tweak AI programs and keep them on Planet Reality.
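For readers who want to see the idea in miniature, here is a minimal sketch of that self-teaching loop in Python. It is a toy one-neuron classifier, nothing like ChatGPT, and every name and number in it is my own illustration; but the principle is the same: the program's "knowledge" is just numbers it adjusts in response to data, and it will adjust them just as faithfully in response to bad data.

```python
# A toy "self-teaching" program: a one-neuron classifier that adjusts
# its own weights from labeled examples. Purely illustrative -- not how
# any real legal AI product works, but the learning principle is the same.

def train(examples, epochs=100, learning_rate=0.1):
    """Learn weights for a simple linear classifier from labeled data."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), label in examples:
            prediction = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = label - prediction           # wrong guess -> nonzero error
            w0 += learning_rate * error * x0     # nudge each weight toward
            w1 += learning_rate * error * x1     # whatever reduces the error
            bias += learning_rate * error
    return w0, w1, bias

# Teach it logical AND: the label is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(data))  # the learned "knowledge" is just these three numbers

# The catch: mislabel the data and the loop will learn the mistake with
# equal speed -- nothing inside it knows what "Planet Reality" looks like.
```

Notice that no programmer wrote a rule for "AND"; the rule emerged from the data, which is both the power and the peril.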

The wedding of the law to AI might seem natural, and to some extent it is. AI is just computer code, meaning lots of rules. A typical line of code might read: “If X is yes, then go to Q.” There is a hierarchy of rules, and a computer program is designed to drill down to the final answer. The law is largely the same. Law is simply a set of rules, and indeed statutes are often referred to as the “Code”. Law also relies on a hierarchy of these rules: In the United States, at the top there is the Constitution, which authorizes the U.S. Code, which authorizes Federal Regulations. The various states and territories have similar hierarchies. If one were just to look at those things in the abstract, a well-constructed program could probably do just as well as the average lawyer, and perhaps even better in some instances such as in keeping up with changes in the law.

The difference between computer code and law arises where rules conflict. Computer code has another set of rules for dealing with such conflicts, and those rules are automatically applied. With law, it is much more difficult, because the outcome of such conflicts is often determined by things outside of the law, such as legislative intent, public policy, vaguely defined notions of morality, and even more amorphous determinations of "what is right" in a given situation. It might be possible for computer code to be developed to take some of these things into account, but we should be skeptical that a computer program will, at least at this time, be able to take into account the entire course of human nature that goes into these decisions.
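To make the contrast concrete, here is a deliberately simplified sketch of how a program resolves conflicting rules: by a fixed precedence order and nothing else. The rule names, hierarchy, and facts below are hypothetical, invented purely for illustration.

```python
# Conflicting rules resolved the way software resolves them: mechanically,
# by precedence. The hierarchy and conditions here are hypothetical.

RULES = [
    # (precedence, source, condition, outcome) -- lowest number wins
    (1, "constitutional provision",
     lambda facts: bool(facts.get("protected_speech")), "law is void"),
    (2, "statute",
     lambda facts: bool(facts.get("commercial_activity")), "conduct regulated"),
    (3, "regulation",
     lambda facts: True, "conduct permitted"),
]

def decide(facts):
    """Apply the highest-ranking rule whose condition matches the facts."""
    for _, source, condition, outcome in sorted(RULES, key=lambda r: r[0]):
        if condition(facts):
            return source, outcome
    return None, "no rule applies"

# Two rules "fire" on these facts, but the machine never weighs
# legislative intent, public policy, or fairness -- it just sorts.
print(decide({"protected_speech": True, "commercial_activity": True}))
```

A human lawyer facing the same conflict would ask why the rules exist and what outcome is right in context; the program cannot even frame the question.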

Another way to think of this is that the law is not merely a set of rules to be automatically applied, but rather a very nuanced and contextual system that, to a significant extent, operates by reaching what is perceived to be the right conclusion and then back-filling the reasoning to support that conclusion, i.e., the classic result-oriented decision. If some are surprised or even offended that this is how the law operates, about the only defense that springs immediately to mind is that this is where we have ended up, and the general public generally desires that the "right" outcome be reached in a given context no matter what the rules strictly say. Otherwise stated, nuance and context drive the application of the law, and nuance and context are very difficult if not impossible to program, whether by a human coder or the best AI program.

We have now defined the limitation of AI in its relationship to the law: AI cannot successfully engage in a nuanced or contextual application of the facts of a given situation to the law, at least at the current time and no matter how good the AI program purports to be. This has significant implications for clients who may end up with documents that are drafted by an AI program.

Note here that I am not a Luddite advocating that AI should not be used at all for legal document drafting, but rather that the limitations of AI must be understood, and whatever document AI delivers must be carefully and thoughtfully reviewed, just as a draft by the aforementioned wet-behind-the-ears law clerk would be. Properly used, AI is a tool that can save clients time and money, but that tool must be closely monitored and strictly kept to its appropriate function. This is what the attorneys who were sanctioned did not do.

Nuance and context are critically important in drafting legal documents. Consider a basic agreement between a wholesaler and a retailer. The contract might look very different if the parties have a new relationship and don't yet trust each other, or if one of the parties has a reputation for not living up to its deals. There will be a completely different contract if one of the parties is reputed to be in financial distress, with additional security interests and the escrowing of product or funds.

Or, take a basic will. It is easy enough to draft a will that leaves everything in equal shares to the kids and their heirs. But what if one of the kids is a ne'er-do-well or has a drug problem? What if one of the kids is already in a bad business deal and there are concerns that she may end up in bankruptcy? Certainly, some of these typical things may be programmed into the code to make it easier to generate standard language, but the odds that the document will come out being a tight fit for the situation (the context) are pretty low. Or what if it is desired that one of the kids be favored, without that causing problems with the other children? That's where you get into nuance.

An attorney who doesn’t take these other factors into account is probably not giving any valuable advice to their clients and is arguably useless. Notably Rule 2.1 of the Model Rules of Professional conduct allows ― if not encourages ― attorneys to take all these other non-legal factors into account when advising their clients:

“In representing a client, a lawyer shall exercise independent professional judgment and render candid advice. In rendering advice, a lawyer may refer not only to law but to other considerations such as moral, economic, social and political factors, that may be relevant to the client’s situation.”

You get the point. AI can produce some basic drafts, but somebody who understands nuance and context is going to have to massage the document to get it to where it needs to be. This isn't too different from what happens now, since most transactional and estate planning attorneys have purchased access to large form databases and formbooks that allow them to quickly assemble the basic document, and then they take it from there. Really, AI just does the same, only in seconds.

But you still have to be careful because AI is just as susceptible as any ordinary computer program to its biggest pitfall: “GIGO”, or “Garbage In, Garbage Out”. For those who don’t know, this means that the output of any program, including AI, can only be as good as the data which is fed into the program in the first place. Put in bad or incomplete data, and the outcome will likely be bad or incomplete. A really good program, including AI, can flag some of what is likely bad or incomplete data, but the problem is that even the best program doesn’t know what else is out there or if somebody is just plain mistaken as to the data they are putting in. Again, whatever is kicked out must be carefully reviewed.
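As a toy illustration of GIGO in the document-drafting setting, consider the kind of intake check a drafting program might run. Everything here (the field names, the checks, the intake dictionary) is my own hypothetical, not any actual product's design.

```python
# A hypothetical intake check for a will-drafting program. It can flag
# some internally suspicious data, but it cannot know about the heir who
# was never entered or the asset the client forgot: garbage in, garbage out.

def flag_suspect_inputs(intake):
    """Return warnings for data the program can see is questionable."""
    warnings = []
    if intake.get("leave_everything_to_kids") and not intake.get("children"):
        warnings.append("estate left to children, but no children listed")
    if intake.get("spouse") and intake.get("spouse_share", 0) == 0:
        warnings.append("spouse listed but given nothing -- intended?")
    return warnings

# The program dutifully flags the inconsistency it can see...
print(flag_suspect_inputs({"leave_everything_to_kids": True, "children": []}))
# ...but if the client simply omits a child from the intake form, no code
# can conjure that fact, and the finished will is silently defective.
```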

We now come to the one thing that AI probably cannot do, at least in its current nascent iterations: Predict the future. If at first glance this seems crazy, it is not.

What you have to understand is that legal documents are not drafted on the assumption that everything is going to go cheerfully right. If that were the case, there would be no need for legal documents, and folks could just memorialize deals on the backs of napkins so that they could remember the basic terms later. But, directly to the contrary, legal documents are drafted in anticipation that things are going to go awfully and horribly wrong. A party doesn't live up to its side of a contract. Partners get into a squabble. Or, as in the case of a will, somebody has died.

Thus, legal documents are not so much about the present as they are about the future, and they seek to contain the economic damage, at least to somebody, when things have gone awfully and horribly wrong or somebody has died. To contain that damage, one must essentially look into the future and imagine the worst possible scenario, or at least the most likely worst possible scenario. That requires, first, an understanding of human nature and economic relations generally and, second, a good idea of what happens in the ensuing litigation. Thus, when an attorney drafts a document, they are basically looking into the future to see the worst possible scenario and then envisioning the course of litigation that would likely follow. This is something that even the best AI programs are simply unable to accomplish, at least as the technology exists today.

A good example of this is arbitration clauses, which I have previously written about. An AI program will usually spit out a contract that contains a typical arbitration clause, because, well, most modern contracts have typical arbitration clauses. That's probably good if you're the big guy in the deal, but possibly bad if you're the little guy. Even if the parties have agreed to arbitration, the clause should give details about what the arbitration will look like, i.e., rights to discovery, rights to an appeal, perhaps rights to a three-person panel, etc., all of which can be critically important when the arbitration actually rolls around ― but that's probably not what you're going to get. Similar considerations go to attorney's fee clauses, meaning whether the loser will have to pay the winner's attorney's fees or not, and a plethora of other things.

This is not to say that AI programs are invariably worse than many attorneys when it comes to drafting these documents; they are not. Bad or simply lazy attorneys will similarly include a bunch of common boilerplate in documents without thinking much about the implications, thus failing to do the thing they are really paid to do: use their brains and anticipate future litigation. Which is to say that attorneys are really paid to do one thing above all others, which is to think through the client's matter. Unfortunately, AI illustrates (and it is not AI's fault) that too many attorneys have become mere assemblers of documents without too much thinking, if any. Conversely, it also illustrates that good attorneys who do think things through are worth their weight in gold.

The sanctioned attorneys I described at the start of this article are a perfect illustration. They were lazy, they were negligent, and they didn't think. To save time and effort, and to avoid having to use their noggins, they took the shortcut of having ChatGPT do their thinking for them, with wholly predictable consequences. They did a terrible disservice to their client, the courts in general, and ultimately themselves. They failed to realize that AI is simply another tool that has utilities and limitations, and particularly the limitations part.

The warning for persons using legal services going forward concerns how their attorney comes up with draft documents. Clients should know whether their attorney is really thinking through the documents, or is simply serving up, with little change, whatever comes out of an AI program, a legal form database, or a formbook. If the attorney is starting with one of these things but then carefully reviewing the language, including the boilerplate, and making the necessary changes based on the nuance and context of the pertinent circumstances, that's fine. If not, well, you'd probably do just as well to buy your own forms at the office supply store and do it yourself, because it's going to be a disaster anyway, and at least you'll keep your up-front costs down.

The warning for attorneys going forward is largely the same: AI is a tool, not a panacea. You still have to think through things, you still have to anticipate what a future dispute might look like, and you still have to consider whether ordinary clauses really make sense in the specific circumstances before you. I can’t emphasize this enough: Your primary job is to think. Going back to MRPC 2.1, this is part of an attorney’s professional responsibility: “In representing a client, a lawyer shall exercise independent professional judgment and render candid advice.” Relying upon anything other than your own professional judgment, such as an AI program to do your thinking for you, is just not “independent”. Moreover, if you’re just working without thinking ― just going through the motions like you’ve done dozens of times before ― you’re probably not doing anybody any good and maybe it is time to look for something else to do. Oh, and make sure that your E&O insurance is fully paid up.

This is an emerging area that must be carefully watched. AI realistically offers attorneys the prospect of significant productivity gains in areas such as sifting through mountains of data, and perhaps eliminating many mundane tasks such as drafting corporate books and records, summarizing depositions, searching for a decedent's property, etc. But the limitations must be understood, and attorneys cannot abdicate to AI their primary responsibility, which is to think independently and render advice based on the totality of all surrounding circumstances, including the pertinent non-legal ones. The bar associations at all levels should likewise get on top of this issue early, and not wait for problems to arise (but I'm not holding my breath on that one).

The expansion of AI into the legal profession will be interesting to watch. It will happen, and one might as well try to turn back the tides as fight it. Good attorneys will embrace the technology, learn its limitations, and come to use it like a master's sword. Everybody else will probably just cut themselves sooner or later, as happens with all emerging technologies.

And for some early adopters, as with these two attorneys who were just sanctioned, AI may turn out to be a little too interesting.
