

In December 2025, Herbert Smith Freehills published a piece framing 2026 as 'the year AI and legal technology become business as usual.' The framing reflects the pace of adoption at the top end of the market. For most firms, the reality is messier: a growing market of tools with overlapping claims, limited internal expertise to evaluate them properly, and vendor demonstrations that are optimised for the sale rather than for realistic working conditions.
The legal AI procurement process has not yet developed the rigour that the stakes warrant. For software that touches legally privileged materials, generates content that will be filed with courts, and processes client confidential information under data handling terms that most buyers have not read, a demo and a reference call are not sufficient diligence. These are the questions that actually matter.
This is the most important question and the most commonly skipped: where are documents processed, who has access to them, and what happens to them after your subscription ends?
The answer is not always obvious from product materials. Many legal AI tools are built on top of general-purpose large language models operated by third parties. The firm-facing vendor may have a clear privacy policy, but the data handling terms that actually govern your documents are in the agreement between that vendor and their underlying model provider, not always disclosed in the sales process and not always reviewed before a firm starts uploading client files.
Ask explicitly: which third parties process my data, under what terms, and can I see those terms? Some vendors offer private cloud or on-premises deployment. Some can be negotiated into stronger data handling commitments. Knowing what you are agreeing to before documents are in the system is materially different from discovering it mid-engagement when the leverage to negotiate has gone.
This question is missing from most procurement checklists, and it should not be. Australian court guidelines, including the Federal Court's GPN-AI practice note and equivalent guidance issued by state courts during 2024 and 2025, establish that practitioners ordinarily should disclose to each other the assistance provided by AI programs, and must disclose AI use if required to do so by a judge or registrar. Verification of AI-generated content before filing is not optional.
A tool that makes AI use harder to track, or whose outputs are not clearly distinguishable from practitioner-drafted content, creates a compliance problem on top of the procurement decision. Before adopting any AI tool for work that may be filed with a court, the firm needs a clear answer to how AI use will be recorded, and who is responsible for the verification step before filing.
Courts in Australia have been deliberate about terminology here. In JML Rose Pty Ltd v Jorgensen (No 3) [2025] FCA 976, the Court noted that 'hallucinations' is a term that 'seeks to legitimise the use of AI,' and that erroneously generated references are more accurately described as fabricated, fictional, or false. Vendors use 'hallucination' throughout their materials. The legal consequences of relying on fabricated output are the same regardless of what the marketing calls it.
Vendor accuracy claims are not independent evidence. Ask for third-party benchmark data. Better still, run a structured test before signing: give the system research questions where you know the answers, including at least one designed to elicit a confident wrong answer, such as a question about superseded law or a jurisdiction-specific issue the model may not have been trained on.
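To make that concrete, a structured test can be as simple as a scripted question set run before contracts are signed. The sketch below is a minimal Python harness under stated assumptions: query_tool is a hypothetical wrapper around whatever interface the vendor exposes, and the questions are illustrative placeholders, not a recommended test set.

```python
# Minimal sketch of a pre-signing accuracy test. Assumes a hypothetical
# query_tool(question) -> str wrapper around the vendor's interface; the
# questions and expected strings are illustrative placeholders.

def run_pretest(query_tool, cases):
    """Run known-answer and trap questions; every answer still gets human review."""
    results = []
    for question, expected, is_trap in cases:
        answer = query_tool(question)
        # The substring check is a coarse filter only; a practitioner who
        # knows the correct position reads every answer afterwards.
        missed = expected is not None and expected.lower() not in answer.lower()
        results.append({
            "question": question,
            "answer": answer,
            "trap": is_trap,               # designed to elicit a confident wrong answer
            "needs_review": missed or is_trap,
        })
    return results

cases = [
    # Known-answer control the firm can verify directly.
    ("What is the limitation period for a contract claim in NSW?", "six years", False),
    # Trap question: substitute a provision the firm knows is superseded. A
    # reliable tool should flag the change rather than answer confidently
    # from stale training data, so there is no automated expected string.
    ("Summarise the current operation of <a provision the firm knows is repealed>.", None, True),
]
```

The design choice worth noting: trap questions carry no automated pass condition, because the point is to read whether the tool hedges or fabricates, which no substring match can capture.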
In May v Costaras [2025] NSWCA 178, the Court noted that generative AI 'does not relieve the responsible legal practitioner of the need to exercise judgment and professional skill in reviewing the final product to be provided to the Court.' A system that hedges appropriately and flags uncertainty when it does not have a reliable basis is more useful in practice than one that produces authoritative-sounding output regardless of its actual confidence. The former is usable with normal professional care. The latter requires verification on every output that may eliminate most of the efficiency gain.
Most AI vendor agreements disclaim liability for output errors almost entirely. That is commercially understandable. It means that professional responsibility for an error that reaches a client, whether a fabricated citation or a miscategorised document in discovery, sits entirely with the firm. Re Walker [2025] VSC 714 is illustrative: a solicitor was required to implement a formal verification protocol (checking every citation against an authoritative database, recording AI use internally, independently verifying key outputs before filing) after AI-assisted research produced citation errors. The protocol was the firm's cost to bear, not the vendor's.
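The protocol described in Re Walker maps naturally onto a simple gating step before filing. The sketch below is one hypothetical shape for it: lookup_citation is a stand-in for a check against whatever authoritative database the firm uses, and none of the names come from the judgment itself.

```python
# Minimal sketch of a pre-filing verification gate, modelled loosely on the
# protocol described above. lookup_citation is a hypothetical stand-in for a
# check against an authoritative database; field names are illustrative.
import datetime
import json

def verify_before_filing(citations, lookup_citation, reviewer,
                         log_path="ai_use_log.jsonl"):
    """Check every citation, record AI use internally, block filing on failure."""
    unverified = [c for c in citations if not lookup_citation(c)]
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "reviewer": reviewer,              # practitioner responsible before filing
        "citations_checked": len(citations),
        "unverified": unverified,
    }
    with open(log_path, "a") as f:         # internal record of AI use
        f.write(json.dumps(record) + "\n")
    if unverified:
        raise ValueError(f"Do not file: unverified citations {unverified}")
    return record
```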
For larger vendors, the indemnification position is unlikely to shift materially in negotiations. Knowing the allocation of risk before something goes wrong is still different from being surprised by it afterwards. Failures in AI verification can constitute unsatisfactory professional conduct and may warrant referral to a legal services commissioner. Procurement teams should read the indemnification clause and ensure the firm's workflow controls are calibrated to the actual risk profile.
The efficiency gains from AI are substantially higher when the tool sits inside existing workflows rather than running alongside them. A research tool that requires copying text out of your matter management system, through a browser interface, and back again will see adoption drop off within weeks of launch. Novelty sustains engagement for a limited period; after that, ease of use determines whether practitioners continue or quietly revert to what they know.
Ask vendors specifically about integration with your document management and practice management environment. Ask to speak with reference clients who are using the same technology stack, and probe what daily use looks like for a senior associate rather than what a demonstration environment shows.
Selecting the right tool and getting lawyers to use it are different projects. Firms where AI adoption has stalled, and there are more of these than are publicly discussed, have typically got the technology decision right but underestimated the implementation. The result is well-purchased software that the usage metrics show is simply not being used.
Effective implementation involves training, ongoing support, and workflow redesign for the matter types where the tool is most useful. The element most commonly missing is a feedback loop that gives practitioners a way to surface problems when output looks wrong. Without it, errors accumulate quietly. The tool develops a reputation for unreliability, usage falls, and the firm concludes that legal AI is not suited to their practice, when what actually failed was the feedback loop rather than the technology.
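One low-cost shape for that feedback loop is sketched below, with illustrative names throughout: a single report function practitioners can reach from anywhere, plus a query that surfaces tools accumulating complaints before usage quietly collapses. In practice this would sit inside the matter management or intranet system rather than a standalone script.

```python
# Minimal sketch of a practitioner feedback loop; storage and field names are
# illustrative, not a prescribed schema.
import collections
import datetime

REPORTS = []

def report_issue(tool, matter_id, description, severity="suspect"):
    """One-call path for a practitioner to flag output that looks wrong."""
    REPORTS.append({
        "when": datetime.datetime.now().isoformat(),
        "tool": tool,
        "matter": matter_id,
        "description": description,
        "severity": severity,              # e.g. "suspect", "confirmed-error"
    })

def recurring_problems(min_reports=3):
    """Tools flagged at least min_reports times: review before trust erodes."""
    counts = collections.Counter(r["tool"] for r in REPORTS)
    return sorted(tool for tool, n in counts.items() if n >= min_reports)
```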
The legal AI market in 2026 has enough mature tools that the value question is largely settled. AI adds value to legal practice in well-documented ways and at well-documented cost points. Whether a given firm's procurement and implementation process is rigorous enough to capture that value, rather than creating a new category of operational risk, is the question that remains genuinely open.
