Trust AI for Legal Research, But Don’t Forget to Verify Output

As technology advances, more and more lawyers are using AI-based tools to assist with various aspects of legal research. Generative AI in particular is becoming increasingly popular in the legal industry, where it is used for tasks such as legal document review, predictive analytics, and contract analysis. Despite its ease of use and potential benefits, generative AI is still just a tool, and its output must be verified before it is relied upon.

This reminder was made clear in a recent case involving ChatGPT, in which the Law Society of British Columbia, the disciplinary body for that province's legal profession, sanctioned a lawyer for relying too heavily on AI-based technology and failing to verify its output. The case serves as a warning to lawyers who use AI-based technologies for legal research: although AI can provide powerful assistance, lawyers may trust it only after verifying the relevant output.

Trust, but Verify: Key Tips to Safely Utilize AI for Legal Research

Learn as Much as You Can About the Technology and Tools: As with any software you use, take the time to understand the limits of the AI tool and the context in which it is being applied. Knowing where a tool is likely to fail goes a long way toward ensuring the accuracy and completeness of its output.

Verify the Relevant Output: Lawyers should review any legal documents or results generated by AI-based technology before relying on them. There is no substitute for doing one's homework and putting every work product through the vetting process.

Adhere to Ethical Standards: Develop and maintain ethical standards to ensure that all legal documents, analysis, and research produced remain within the bounds of professional responsibility.

In conclusion, AI-based technologies can be powerful and helpful tools for legal research, but like all tools, they should be used with care and attention to detail. Lawyers must take the time to learn about the technology and its likely results, verify the relevant output, and adhere to ethical and professional standards. It is possible to trust the technology, but it remains essential to verify its output. That is the key reminder from the recent ChatGPT case, and it should serve as a warning to lawyers and to any other professionals who use AI-based technologies.

Published by Steven A Nichols

I am the founder of Banyan Business Outcomes LLC. I've spent my career helping technology companies get closer to their clients, and helping clients leverage technology companies to create value.
