According to the American Bar Association (ABA), just 10 months after graduation, nearly 85% of that class (30,512 graduates) were already employed in full-time, long-term Bar Passage Required or J.D. Advantage jobs (https://www.americanbar.org/news/abanews/aba-news-archives/2023/04/aba-legaleducationreleasesemploymentdata/).
This generation of recruits is also very tech-savvy. They grew up using FaceTime, instant messaging, and other technologies as naturally as their parents used the telephone. Millennials now make up the largest share of the workforce, and they bring digital skills that can influence a firm's capabilities in many areas.
They have also likely used generative AI tools like ChatGPT. This, however, is a double-edged sword. Legal leaders can acquire talent ready to use these powerful, time- and money-saving tools, but ChatGPT can be dangerous for a law office if it is not used correctly.
What is the risk associated with ChatGPT?
Make no mistake: generative AI will continue to grow and have a significant impact on every industry. But it is still a young technology taking its first steps, and, as with all disruptive technologies, there is a gap between what it can do and how it should be used. This is particularly true in the legal world.
Our space has already produced some embarrassing ChatGPT gaffes. A Manhattan judge fined two lawyers who submitted a legal brief containing fictitious cases and citations generated by ChatGPT, and ordered them to notify the real judges who were falsely identified as authors of the fabricated opinions. We have also seen a preview of future litigation: OpenAI, the company behind ChatGPT, is the defendant in the first major defamation suit over the tool, brought by a radio host who says it generated a false complaint accusing him of defrauding and embezzling funds from a nonprofit.
You don't need to be a genius to imagine the legal and reputational consequences awaiting law firms that fail to monitor the use of ChatGPT. The temptation will be hard to resist for new recruits who have already used ChatGPT in their studies and want to impress in their work, and the cost savings from research and drafting materials completed in a few clicks will be just as attractive to law firms.
What’s the problem?
The problem with ChatGPT, when you get down to it, is the potential for ethical violations. These include:
- Breach of attorney-client privilege. Information entered into ChatGPT leaves the firm's control and may be retained to train the model. Unredacted confidential details that a new lawyer enters to research an issue or draft a brief could later surface in the answer to someone else's question (see the redaction sketch after this list).
- Intellectual property infringement. ChatGPT pulls material from across the internet, so an attorney relying on its output may unknowingly reproduce someone else's ideas and infringe their intellectual property. Tools like Copyscape can help identify verbatim plagiarism, but lifted concepts are much harder to catch in review.
- Incorrect handling of information. The public internet contains inaccurate, offensive, and personal data. Such material can find its way into the data sets used to train generative models and, if left unchecked, could eventually show up in legal documents or materials for which firms may be held responsible.
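One practical safeguard against the privilege risk above is to strip client-identifying details from any text before it leaves the firm's systems. The sketch below is a minimal illustration only, not a vetted compliance tool; the patterns, the `redact` helper, and the placeholder tokens are all assumptions for demonstration.

```python
import re

# Very rough patterns for a few common identifiers; a real firm would need a far
# more thorough, reviewed list (client names, matter numbers, addresses, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens before the text
    is sent to any external model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize the dispute. Contact: jane.doe@clientco.com, 555-867-5309, SSN 123-45-6789."
print(redact(prompt))
# Summarize the dispute. Contact: [EMAIL REDACTED], [PHONE REDACTED], SSN [SSN REDACTED].
```

Even a crude filter like this makes the "don't paste privileged information" rule enforceable rather than aspirational, though human review of prompts remains the real control.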
Compounding these risks, less experienced lawyers may not check or review ChatGPT's output at all. That problem should diminish over time as cases like the Manhattan sanctions demonstrate how not to use generative AI.
Can you trust it?
To make technology such as ChatGPT work, law firms need to feed it relevant, healthy information. The more quality data firms use to guide and train their AI models, the better those systems will perform. The ethical duty to maintain confidentiality cannot be ignored, however, and complacency will always be the main problem.
There is a black-box element to AI tools: it is often impossible to explain how they arrived at their conclusions, so blind trust is not recommended. Even though it is unclear whether a leak of privileged data could be traced back to a particular attorney or practice, any chance that it might happen is frightening. And even if the information cannot be tracked, it is still privileged and must be protected.
When firms consider ways to harness generative AI, they should train their tools on their own databases. It is easy to assume that information inside a firm is appropriate to share with everyone, but certain privileged data must be kept private even internally, so firms need a way to keep data within its intended purpose. This applies to everyone, especially those whose Juris Doctor ink has not yet dried.
How do you control ChatGPT?
ChatGPT is a powerful tool, but recent grads should use it with caution in their new positions. Law firms should also have protocols in place to guide them toward best practices, and those protocols must be enforced. How can managers protect their teams against the potential vulnerabilities?
Experimentation is the best way to learn. Despite the risks, encourage employees to use generative language tools to experiment and get familiar with them; just don't let them feed in privileged information. As teams become better educated, the tools' strengths and weaknesses will become more evident. The Manhattan attorneys would not have submitted their brief blindly had they better understood ChatGPT. Simply put, ChatGPT is not safe to use as a legal research tool without proper verification.
Require that employees refrain from using open-source or free technologies and stick to databases the firm knows, trusts, and pays for, such as Westlaw, LexisNexis, and Fastcase. There must also be a system of checks and balances and a way to "see behind the scenes" to supervise employees: senior and mid-level managers should work together to audit and cross-check employee usage, as sketched below. If a teacher can notice a teenager submitting homework written by ChatGPT, you can bet someone will catch a law office doing the same thing. This is especially important with younger generations that may be more inclined to push boundaries.
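One way to give managers that "behind the scenes" visibility is to route all generative-AI requests through a firm-controlled wrapper that records who asked what, and when, for later audit. The following is a minimal sketch under assumptions: the `ask_model` stub stands in for whichever paid, vetted service the firm actually uses, and the log format is illustrative only.

```python
import csv
import datetime

AUDIT_LOG = "ai_usage_log.csv"  # reviewed periodically by senior and mid-level managers

def ask_model(prompt: str) -> str:
    """Stub for the firm's approved, paid AI service; replace with the real client call."""
    return "(model response placeholder)"

def supervised_query(user: str, matter_id: str, prompt: str) -> str:
    """Send a prompt to the approved model and record the request for audit."""
    response = ask_model(prompt)
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.datetime.now().isoformat(timespec="seconds"),
            user,
            matter_id,
            prompt,
            response,
        ])
    return response

# Example: a new associate's research query is logged and can be cross-checked later.
supervised_query("j.smith", "2023-0412", "List recent Second Circuit cases on spoliation sanctions.")
```

The point is not the particular log format but that usage flows through a channel managers can actually review, rather than through personal accounts on a public website.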
Successfully integrating new graduates into multigenerational teams is also crucial. With controls in place, new recruits are a great asset because they are less hesitant to use the technology, while older generations might need to be prodded and pushed. Create a task force to investigate these tools and include new hires on it; inclusion in these projects validates both their ambition and their unique perspective, and it is an excellent way to introduce the broader organization to what generative AI can offer. The senior team can then decide the best path forward after the task force reports.
ChatGPT: Is it Worth It?
ChatGPT can give teams a competitive advantage in the market if they are familiar with the technology and how to use it. It is a bit like the early days of the internet, when many lawyers were reluctant to use it to market themselves. The firms that took the plunge may not have achieved perfect results, but they learned along the way, enjoyed greater success, and put distance between themselves and the firms that sat back and did nothing.
ChatGPT automates the drafting process: it can analyze legal documents and suggest language for briefs, contracts, and other materials. Combined with chatbots, it can answer questions and improve customer service, saving law offices a great deal of time while letting them better utilize their staff and reduce costs. Fresh grads can use newer communication channels to reach the younger generation of consumers and help a firm build the digital experiences those clients have come to expect.
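As a concrete illustration of the drafting use case, the sketch below sends a clause-drafting request through OpenAI's Python SDK. The model name, the system prompt, and the routing of output to a human reviewer are assumptions for the example, not a recommended configuration.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_clause(instruction: str) -> str:
    """Ask the model for draft contract language; a lawyer must still review and verify it."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; use whichever model the firm has approved
        messages=[
            {"role": "system", "content": "You draft contract clauses for attorney review. Cite nothing you cannot verify."},
            {"role": "user", "content": instruction},
        ],
    )
    return response.choices[0].message.content

draft = draft_clause("Draft a mutual confidentiality clause for a vendor services agreement.")
print(draft)  # goes to a senior attorney for fact-checking and editing, never straight to the client
```

Used this way, the tool produces a starting point in seconds, and the time saved is realized only because a human still owns the final product.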
In 2023, the focus will be on determining which features are worth investing in and which are not, how much human involvement is required, and how hands-on managers need to be with the language these tools produce. Human intervention, including fact-checking, plagiarism review, and editing, is essential to avoid the dangers described above.
In spite of all this, generative AI is still worth the risk, and will continue to play a significant role in the future. Just as the younger generations will. It’s important to understand the technology and its capabilities, control usage, and train the new tech-savvy graduates on how to use it responsibly.