
How worried should I be about using ChatGPT and other AI tools for my legal practice?

The latest breakthroughs in AI offer unprecedented opportunities for improving efficiency across industries, and the legal industry is no exception. Legal practice is either carried out through highly detailed writing or transcribed into writing, and the latest AI tools have proven very adept at parsing and interpreting such text. In fact, ChatGPT can already pass law exams (here). Besides causing law interns across the globe to dread their future job prospects, this breakthrough can offer law firms and their clients unparalleled efficiency in writing, interpreting and communicating. Given all these benefits, what could possibly be the drawbacks? And how do you avoid them?


Hallucinations


You have probably heard about hallucinations and seen examples of them. Famously, Large Language Models (LLMs) like ChatGPT can be tricked into saying things that aren't true (like 2+2 = 5 here). But they can also say inaccurate things without being prompted to. This is a result of their inherent structure, and it often takes someone who knows the topic being discussed, or some research, to notice that a hallucination has occurred. Hallucinations are difficult to avoid automatically with LLMs, so the best way to deal with them is to always have someone knowledgeable in the topic review the output. Additionally, if the output hinges on specific facts or figures, it is best to do some quick research to confirm them (if the AI does not already do that for you; thankfully, more of these systems are adding references to their output automatically).


Privacy


There are also privacy concerns with LLM-based AIs. It has been found, across different models, that input data can be teased out of a model if a query is structured to be specific enough. The most famous example of this involves the image generation model Stable Diffusion. Researchers were able to craft highly specific queries that produced images strikingly similar to stock images, in some cases even including the watermark of the source (prompting Getty to sue the AI model maker here). Additionally, new tools like the GPT store have been found to have privacy loopholes, with users able to coax out passwords and other private information used to build the store's apps. Since some of these apps will use user data to further train their models, it is possible that documents or other private information you put into LLMs could end up accessible to bad actors. On this front, make sure that any LLM you use with potentially private input specifies that your input data will never be used to train the model. Additionally, seek out LLMs that are specifically sectioned off for sensitive use cases (models of this kind are currently being tested for government and defense applications).
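If you do upload documents, one practical precaution is to redact obvious identifiers before the text ever leaves your machine. Below is a minimal sketch of that idea in Python, using simple regular expressions for a few common identifier types (email addresses, US-style phone numbers, and Social Security numbers). The patterns are illustrative assumptions, not an exhaustive list; real documents should still be reviewed by a person, and dedicated redaction tools will catch far more than this.

```python
import re

# Illustrative patterns only; these will miss many formats and are not a
# substitute for a dedicated redaction tool or human review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Running `redact("Call 555-123-4567 or email jane@firm.com")` would return `"Call [PHONE REDACTED] or email [EMAIL REDACTED]"`, leaving the rest of the sentence intact for the model to work with.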


Conclusion


Large Language Models offer a lot of promise in improving the productivity of a legal firm, but keep the potential risks in mind as well. When using these models, keep private or sensitive context out of your queries. If documents are being uploaded, make sure they are stripped of information you would not want a bad actor to get hold of. If possible, make sure that the LLM you are using will not use your information for further training of the model. And finally, always read any output you intend to re-use thoroughly, and make sure you have the context to judge whether it is correct - LLMs are good at giving the right answer, but they are even better at sounding like they are giving the right answer.
