Steve Whiter is the director of Appurity. Here he offers advice to early adopters on what they should know about this technology, including the risks involved.

Artificial intelligence (AI) is revolutionizing the way we communicate, conduct business and complete everyday tasks. While AI tools have been around for a while, interest in the technology surged when OpenAI released ChatGPT, its artificial intelligence chatbot.

ChatGPT captured the public imagination almost overnight. Its ability to produce copy quickly, perform research tasks and hold humanlike conversations opens up new operational possibilities for businesses and organizations around the world, and law firms are no exception.

In April 2023, Thomson Reuters released a report surveying 440 attorneys in the UK and US about their attitudes towards ChatGPT in law firms. According to the survey, 82% of respondents believed that ChatGPT and generative AI could be “easily applied” to legal work. There is also an ethical question: should law firms and their employees be using ChatGPT and generative AI for legal work at all? 51% of respondents said “Yes”.

Many companies remain wary of ChatGPT’s growing use. The tool can streamline operations, but many firms are concerned about security, privacy and confidentiality requirements. Can ChatGPT help law firms increase productivity? What are the risks? Here are some key considerations for fee earners and partners who want to know how ChatGPT could be used at their firm.

Accuracy, bias and ethical concerns

AI can assist lawyers in a variety of ways: automating clerical tasks, carrying out legal research or even drafting briefs can all improve a firm’s productivity and efficiency. But any such use of AI comes with risks. Sophisticated as ChatGPT is, it is not always accurate.


AI tools have been known to fabricate information. These ‘hallucinations,’ as they are called, are particularly alarming because their cause is not well understood. ChatGPT does not flag content that is incorrect or missing context, so users have no way of knowing when it is providing false information. The only way to guarantee that an AI-generated statement is accurate is to check it yourself. So while AI can save time and money by automating menial tasks, that benefit may be offset by the need for a human to verify everything it produces.

Even the best fact-checker may not be able to mitigate a tool’s bias. The output of a language-processing tool is determined by how it was trained: the people who created it, and the decisions they made about where the training data came from and how it is used, are critical in shaping the information users receive. The bias may not be malicious, but it will still be present, especially if the tool is being used to make decisions or offer ‘opinions.’ Future regulations may well require firms that use language-processing tools to demonstrate that they have addressed bias.

Ethics and accuracy concerns go hand in hand. Can lawyers still serve their clients’ best interests if they rely more heavily on AI to deliver content and complete tasks? And what does it mean for the profession if lawyers spend their time fact-checking work done by language-processing tools? Lawyers and firms are subject to rigorous training and strict regulation; ChatGPT is bound by no such ethical obligations. It is the firm that will be held responsible if ChatGPT’s output is misused, and the malpractice implications are huge.

The implications for client confidentiality

Firms must keep their clients’ data protected and confidential; it is an obligation that cannot be ignored. Mishandling or misusing data can breach industry codes of conduct or data protection laws. AI tools are problematic here because users do not always know what happens to the data they input, which makes relying on them a risky move for firms.

Before using AI tools to assist with legal work, firms must understand how the data they input is processed and used. Where is that data stored? Is it shared with third parties? What security measures are in place to minimise the risk of a data leak? Many firms already have multiple processes and systems in place to safeguard their clients’ information, with separate methods for data stored in the cloud, on-premises and across multiple devices. But it is no longer enough for firms to protect only their own infrastructure: they also need processes that protect against AI-related data breaches and misuse.
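One such process is to strip client identifiers from any text before it leaves the firm’s systems. The sketch below is a minimal illustration of that idea in Python; the patterns and the “CL2023/101” case-reference format are invented for the example, and a real deployment would rely on a vetted data-loss-prevention product rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only -- a real firm would use a vetted DLP tool.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "CASE_REF": re.compile(r"\b[A-Z]{2}\d{4}/\d{3}\b"),  # hypothetical format, e.g. "CL2023/101"
}

def redact(text: str) -> str:
    """Replace sensitive tokens with placeholders before text is sent to an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise claim CL2023/101 for client john.smith@example.com (+44 7700 900123)."
print(redact(prompt))
# Summarise claim [CASE_REF] for client [EMAIL] ([PHONE]).
```

The point is not the specific patterns but the workflow: anything typed into an external tool passes through a redaction step first, so a careless prompt cannot leak a client’s identity.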


Firms may want to extend their existing policies and procedures for digital communications to cover language-processing tools such as ChatGPT. If fee earners or partners currently use WhatsApp or SMS to communicate with clients, those messages must be managed and stored, and a firm’s IT department should have a full record of every message sent through modern communication channels. Firms could adopt the same approach for AI: at a minimum, keep comprehensive records of all the data shared with language-processing tools.
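As an illustration of what such record-keeping might look like, here is a minimal Python sketch. The JSON Lines log file and its field names are assumptions for the example; a real firm would write to central, access-controlled, tamper-evident storage rather than a local file.

```python
import datetime
import json
import os

# Hypothetical log location -- a real deployment would use central,
# access-controlled and tamper-evident storage, not a local file.
LOG_PATH = "ai_prompt_log.jsonl"

def log_ai_interaction(tool: str, prompt: str, path: str = LOG_PATH) -> dict:
    """Append a record of what was shared with an external AI tool, when, and by whom."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": os.getenv("USER", "unknown"),
        "tool": tool,
        "prompt": prompt,
    }
    with open(path, "a", encoding="utf-8") as f:  # append-only JSON Lines log
        f.write(json.dumps(record) + "\n")
    return record

log_ai_interaction("ChatGPT", "Summarise the attached lease agreement.")
```

Wrapping every outbound prompt in a call like this gives the IT department the same complete audit trail it already keeps for WhatsApp or SMS communications.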

Cybersecurity should be a priority

Any firm considering the use of language-processing tools should put cyber security concerns front and centre. Any new technology or tool introduced into a company’s workflow must be treated as a potential attack vector and secured accordingly. Users who do not know who is behind the technology and tools they use at work, or how those tools manage, store and manipulate data, leave themselves vulnerable.

ChatGPT’s advanced language capabilities allow well-articulated messages and emails to be created almost instantly, and bad actors can exploit this to craft sophisticated phishing emails. And although ChatGPT is designed to refuse requests to write malicious code, hackers are already finding ways to use it to produce malware and scripts.

Firms will also need to stay vigilant as new AI tools are developed, educating their lawyers on the risks they face and how to protect both themselves and the firm. To combat AI-generated phishing, firms may need to invest in more advanced security awareness training or in new technologies: some of the newer malware protection software scans all incoming content and flags or quarantines anything that looks suspicious or carries a malicious footprint.
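To make the scan-and-quarantine idea concrete, here is a deliberately naive Python sketch. The phrase list and threshold are invented for illustration; commercial scanners use machine-learning models and live threat-intelligence feeds, not keyword counts.

```python
import re

# Toy heuristics only -- real products use ML models and threat feeds.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click the link below",
]
URL_PATTERN = re.compile(r"https?://\S+")

def phishing_score(message: str) -> int:
    """Count simple phishing indicators in an incoming message."""
    text = message.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    score += len(URL_PATTERN.findall(message))  # embedded links raise suspicion
    return score

def quarantine(message: str, threshold: int = 2) -> bool:
    """Flag the message for human review if it scores at or above the threshold."""
    return phishing_score(message) >= threshold

quarantine("Urgent action required: verify your account at http://example.com/login")  # True
quarantine("See you at the hearing tomorrow.")  # False
```

Even this toy version shows why AI-generated phishing is hard to catch: a fluent, well-articulated email can avoid every keyword trigger, which is why behavioural scanning and user training matter as much as filters.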

AI tools for natural language processing may change the way we work forever, and businesses are not far from automating low-value and clerical tasks by leveraging ChatGPT’s advanced capabilities and other AI innovations. But as with any new technology hailed as the future of business, users and potential adopters must understand the risks as well as the rewards. Firms and partners must consider whether their infrastructure is ready for this disruptive technology and how they will protect themselves from new security threats. Only then can the AI revolution be a successful one for firms, partners and fee earners.


Steve Whiter, Director



Appurity Limited

Clare Park Farm Unit 2, The Courtyard Upper Farnham, GU10 5DT

Tel: +44 (0)330 660 0277

E: [email protected]

Steve Whiter has extensive experience in secure mobile solutions and has worked in the industry for over 30 years. For more than 10 years at Appurity he has delivered secure mobile solutions that not only enhance productivity but also comply with standards such as ISO and Cyber Essentials Plus.

Appurity is a UK-based firm that provides mobile, cloud, cybersecurity and data solutions to businesses. Its staff have a wealth of knowledge of industry-leading technologies, which they use to help clients develop secure and efficient mobile strategies. Working with its technology partners, including Lookout, NetMotion, Google, Apple, Samsung, BlackBerry and MobileIron/Ivanti, Appurity delivers mobile initiatives across multiple verticals, such as legal, finance, retail and the public sector.
