
AI technology continues to make waves in the legal industry, but a recent story involving international law firm Hill Dickinson shows just how critical it is to strike a balance between innovation and risk management.
According to a BBC News article, the firm recently restricted general access to several AI tools after noticing a “significant increase in usage” that didn’t align with its AI policy. Over a seven-day period, Hill Dickinson LLP detected more than 32,000 hits to ChatGPT, 3,000 hits to the Chinese AI service DeepSeek, and nearly 50,000 hits to Grammarly. In response, the firm announced that going forward, access to these tools would only be granted through a request process.
The Challenge for Law Firms
This development highlights a growing challenge for law firms across the UK. Generative AI tools offer undeniable benefits, from streamlining document drafting to supporting legal research. However, their ease of access and use can also create significant risks if not properly managed. Key concerns include:
- Data Security: Uploading client information to public AI tools could breach confidentiality obligations and data protection laws. Protecting sensitive client data is not just a legal requirement but a cornerstone of trust and professional integrity.
- Accuracy and Reliability: AI-generated content may contain inaccuracies or hallucinations that go unnoticed without careful human oversight. Legal professionals must be vigilant in cross-verifying AI-generated information to avoid the dissemination of erroneous advice.
- Compliance and Ethical Considerations: Regulatory bodies like the Solicitors Regulation Authority (SRA) have warned about the potential risks if legal practitioners don’t fully understand the technology they use. Compliance teams must stay updated on the evolving legal landscape surrounding AI use.
The increasing reliance on AI tools necessitates a robust risk management framework to navigate these concerns effectively. Without clear protocols, law firms expose themselves to reputational damage and potential legal consequences.
Insights from the LinksAI English Law Benchmark
A recent report from Linklaters, The LinksAI English Law Benchmark (Version 2), provides valuable context on how AI tools are evolving and the implications for legal practice. The study tested large language models (LLMs) like OpenAI o1 and Google Gemini 2.0 on English law-related tasks. Key findings include:
- Improved Performance: The latest LLMs showed significant improvements, with OpenAI o1 scoring 6.4/10 and Gemini 2.0 scoring 6.0/10. These tools now perform well on tasks like summarisation and document extraction.
- Persistent Risks: Despite improvements, many AI-generated answers still contain inaccuracies or fictional citations, underscoring the need for human oversight.
- Potential Use Cases: The report highlights the potential for LLMs to assist with legal research, contract review, and clause interpretation, provided expert supervision is maintained.
The benchmark illustrates the delicate balance between leveraging AI’s capabilities and mitigating associated risks, especially for in-house teams tasked with compliance and governance.
The Role of the Office of General Counsel and Risk & Compliance Teams
For the Office of the General Counsel (OGC) and Risk & Compliance teams, the rise of AI presents both unique challenges and opportunities. These teams are at the forefront of ensuring the firm’s compliance with legal and regulatory standards.
Key areas of focus include:
- Policy Development: OGC and compliance teams must draft and regularly update AI usage policies that align with evolving regulations and industry standards. Policies should address acceptable use cases, training requirements, and incident response protocols.
- Data Governance: Ensuring that any data fed into AI systems is properly anonymized and handled according to data protection laws (a minimal illustration of this kind of pre-processing follows this list). Compliance teams need to conduct regular audits to confirm adherence to policies and detect potential vulnerabilities.
- Training and Awareness: Conducting training programs to educate staff on the risks associated with AI use and the importance of following established guidelines. These programs should emphasize the human-in-the-loop model, where AI complements human decision-making rather than replacing it.
- Vendor Management: Evaluating third-party AI tools to ensure they meet the firm’s data security and compliance requirements. Firms should require vendors to provide transparency regarding how their models are trained and how data is processed.
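To give a rough sense of what "properly anonymized before it leaves the firm" can mean in practice, here is a minimal Python sketch. The patterns, function name, and sample text are hypothetical, and real anonymization pipelines are considerably more thorough, typically combining named-entity recognition with human review and audit logging; this simply strips obvious identifiers such as email addresses, phone numbers, and matter references from text before it is pasted into an external tool.

```python
import re

# Illustrative patterns only: a real anonymization pipeline would be far more
# thorough (named-entity recognition, reviewer sign-off, audit logging, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b0\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}\b"),
    "MATTER_REF": re.compile(r"\b[A-Z]{2,4}-\d{4,}\b"),  # hypothetical internal reference format
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholder tags before text leaves the firm."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = (
        "Please summarise this advice for jane.doe@example.com "
        "on matter ABC-20231; contact 020 7946 0123 with queries."
    )
    print(redact(sample))
    # -> Please summarise this advice for [EMAIL REDACTED] on matter
    #    [MATTER_REF REDACTED]; contact [PHONE REDACTED] with queries.
```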
AI adoption requires proactive engagement from these teams to develop strategies that align technological advancements with the firm’s ethical and professional responsibilities.
Embracing AI Responsibly
So, what can law firms learn from this situation?
- Monitor Usage: As Hill Dickinson did, firms should track AI tool usage to identify potential risks early (see the sketch after this list). Continuous monitoring helps in detecting unusual patterns of activity that might indicate unauthorized access or misuse.
- Implement Robust Policies: Clear rules around client data, content verification, and tool access help protect the firm and its clients. Policies should be revisited periodically to account for changes in AI capabilities and evolving regulatory requirements.
- Invest in Training: Legal professionals need the skills to use AI tools safely and effectively. Training sessions should be tailored to different roles within the firm to ensure that everyone understands their responsibilities and the associated risks.
- Encourage Open Dialogue: The Information Commissioner’s Office has advised against outright bans, as this can push usage underground. Instead, firms should foster an environment where staff feel comfortable discussing AI use and concerns. An open dialogue ensures that potential risks are identified and addressed proactively.
- Conduct Regular Audits: Routine audits of AI tool usage and effectiveness help firms identify areas for improvement and ensure ongoing compliance with relevant regulations.
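To make the monitoring point concrete, here is a minimal sketch of the kind of tally an IT or compliance team might run over an exported web-proxy log to produce weekly figures like those cited in the Hill Dickinson story. The file name, column name, and domain list are assumptions for illustration, not a reference to any particular product or to the firm's actual setup.

```python
import csv
from collections import Counter

# Hypothetical mapping of destination domains to the tools they represent;
# a real deployment would pull this from the firm's web-proxy or CASB configuration.
AI_TOOL_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "chatgpt.com": "ChatGPT",
    "deepseek.com": "DeepSeek",
    "grammarly.com": "Grammarly",
}

def count_ai_tool_hits(log_path: str) -> Counter:
    """Tally proxy-log rows whose destination matches a known AI tool domain.
    Assumes a CSV export with a 'destination_host' column (an assumption)."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("destination_host") or "").lower()
            for domain, tool in AI_TOOL_DOMAINS.items():
                if host == domain or host.endswith("." + domain):
                    hits[tool] += 1
                    break
    return hits

if __name__ == "__main__":
    # e.g. a week's worth of exported proxy traffic
    weekly_hits = count_ai_tool_hits("proxy_log_week.csv")
    for tool, count in weekly_hits.most_common():
        print(f"{tool}: {count} hits")
```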
By integrating these practices into daily operations, firms can build a culture of responsible AI usage that supports their broader strategic goals.
The Future of AI in Law Firms
The legal industry is at a turning point. According to a recent survey by Clio, 62% of UK solicitors expect AI usage to increase over the next year, particularly in tasks like contract review and legal research. As demand grows, firms that can implement AI safely and strategically will gain a competitive edge.
The Linklaters report indicates that while LLMs can assist with routine legal tasks, they remain unreliable when it comes to nuanced legal advice. Misapplication of case law and fabricated citations are ongoing challenges that legal teams must navigate.
For OGC and Risk & Compliance teams, this is a critical moment to lead the charge in integrating AI responsibly. By collaborating with IT, HR, and legal teams, these professionals can help their firms navigate the complexities of AI adoption while safeguarding client data and maintaining regulatory compliance. This collaborative approach ensures that legal teams remain informed about the latest AI developments and their implications.
AI technology holds the promise of transforming legal work by automating routine tasks, reducing costs, and enhancing analytical capabilities. However, realizing these benefits requires vigilance, education, and a firm commitment to ethical practices.
The key lies in finding the right balance: embracing the benefits of AI while protecting clients, maintaining compliance, and ensuring legal professionals are equipped to use these tools responsibly. As the legal sector continues to evolve, the proactive and informed involvement of OGC and Risk & Compliance teams will be essential in shaping a secure and innovative future.
What are your thoughts? Have you experienced similar challenges with AI adoption in your firm? Let me know in the comments.