Results obtained from ChatGPT are often riddled with errors and sometimes outright falsehoods. Additionally, employees may share proprietary, confidential, or trade secret information when engaging in ‘conversations’ with it.
Artificial intelligence (AI) has the potential to change the world we live in. If you have been paying attention, you will have realised that this technology revolution has already begun, growing more and more prevalent across multiple industries. Unsurprisingly, the legal industry is no exception.
AI-powered technologies have played, and will continue to play, a large role in the legal profession, automating tedious tasks and streamlining workflows, allowing employers to save both time and money. One such technology that should definitely be on every lawyer’s radar is the AI chatbot ChatGPT, the latest endeavour of American AI research laboratory OpenAI.
ChatGPT has been the talk of the town since its release in November last year, taking the technology world by storm. Those leading the discussion include the heads of the American multinational technology companies Google, Meta and Microsoft, as the chatbot quickly became the fastest-growing app of all time, surpassing even the short-form video hosting service TikTok.
How can ChatGPT be used as a professional tool by lawyers?
As it exists today, ChatGPT as a stand-alone tool provides a great deal of value for lawyers.
For instance, the current release of ChatGPT can assist with contract review by suggesting improvements to a contract’s language and identifying potential issues in it. Additionally, ChatGPT can be leveraged during legal research to provide summaries of cases, laws or even pleadings filed with the court, as well as to create workable first drafts of demand letters, discovery letters, nondisclosure agreements and employment agreements.
However, in many respects, this AI technology is still very much in its infancy.
Does ChatGPT provide accurate information?
Results obtained from ChatGPT are often riddled with errors and sometimes outright falsehoods. For instance, American author, lawyer and journalist Nicole Black has written that when she asked ChatGPT to draft a LinkedIn post about an article in which she was quoted, it simply created a quote out of thin air. In another case, she noted that the AI referenced a non-existent California legal ethics provision.
These gaps in knowledge can be partially attributed to the fact that the chatbot was trained only on data available through September 2021, and therefore has no information about anything more recent. For queries that clearly require post-September 2021 knowledge, the most appropriate response would be for the AI to decline to answer, since it plainly lacks accurate information on the question. Instead, it answers anyway.
This phenomenon is most notably outlined in an article by American technology journalist June Wan who, upon asking ChatGPT for a review of the iPhone 14 (which was released in 2022), was given a misleading one. Any reasonable writer or researcher assigned an unfamiliar topic would ask for it to be covered by someone better informed, or would spend some time researching it first, Wan writes, instead of diving into the task headfirst without any concern for accuracy.
If employees rely solely on ChatGPT to gain information for work without cross-checking its accuracy, legal problems and risks may arise, depending on how the employees use the information and where they send it.
What are some other limitations of the information generated by ChatGPT?
Moreover, the information an AI provides depends both on the data it was trained on and on the people who decide what data the chatbot receives. Inherent biases are therefore bound to appear in the information it presents in response to questions asked during ‘conversations’.
This inherent bias may deepen with recent developments that allow users to tweak ChatGPT’s behaviour by redefining the chatbot’s ‘values’, producing information that mainly magnifies existing beliefs rather than challenging them with varied information and perspectives. In a field concerning the law, where even slight differences in wording can make the difference between a spirited legal defence and a dismal legal calamity, accuracy of information is of the utmost importance.
What are the privacy and confidentiality risks associated with ChatGPT?
The biggest area of concern would be the infringement of privacy and confidentiality.
There exists the possibility that employees will share proprietary, confidential or trade secret information when engaging in ‘conversations’ with ChatGPT. This could easily happen, say, when using the generative AI to compose a draft of a contract or other legal document, or when passing drafted text to it to identify any gaps that may be present. Here, although the attorney may not realise it, the text of the contract has been taken up by the AI app, to be used as a sort of additional database and as fodder for pattern matching and other ‘computational intricacies’.
Although OpenAI states that it does not retain information provided in conversations, it does ‘learn’ from them. Moreover, since users are entering information into conversations with the chatbot over the internet, there is really no guarantee of the security of such communication.
In trying to speed up your work, you may have inadvertently given away private or confidential client information.
Some may try to excuse their transgressions by claiming they are not tech gurus and could not possibly have known that entering certain text into the generative AI might constitute a breach. But the American Bar Association in particular makes it clear that lawyers owe a duty of technological competence, which encompasses staying up to date on AI and technology: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology, engage in continuing study and education and comply with all continuing legal education requirements to which the lawyer is subject.”
What trajectory will ChatGPT’s usage take?
All this being said, ChatGPT is not going away any time soon; a new and improved version should be out within the year. Employers will ultimately need to address its use in their workplaces by striking a balance between the efficiency gained by letting the generative AI undertake menial tasks, such as writing routine letters and emails or generating simple reports, and the level of review and editing required to ensure accuracy, appropriate tone and legal compliance.
For all its risks, employers can yet leverage its benefits. The long discussion has just begun.
(Miriam Fozia Rahman is a practicing advocate at the Supreme Court and the Delhi High Court.)