
All rights reserved, Habeas 2024

Consider a scenario that plays out in Australian law firms every day.
A junior lawyer drafts a memo. They've run their searches across AustLII or a similar legal search platform, pulled together the relevant authorities they found, and written up their analysis. The memo goes to the supervising partner or senior associate for review.
The senior reads it and knows, from experience, that the research or draft document isn't complete. Maybe the junior searched "negligence" and "duty of care" but didn't think to search "reasonable foreseeability" in the specific factual context. Maybe there's a recent Federal Court decision that shifts the landscape, but it didn't surface because it used different terminology. Maybe the junior missed a few crucial authorities, or failed to incorporate a full analysis.
So the senior sends it back. Run more searches. Try different terms. Check this database. Look at that jurisdiction. The junior goes away, runs another round, updates the memo. It comes back. The cycle may repeat.
This is treated as a completely normal part of legal supervision and training.
It shouldn't be.
The review conversation between a senior and junior lawyer is one of the most valuable interactions in a firm. It's where juniors learn to think like lawyers - to frame issues precisely, to build arguments that hold up under scrutiny, to identify the weak points in their own reasoning before the other side does.
That conversation should be about the junior's legal analysis. How they've framed the problem. Whether the argument is structured persuasively. Whether they've anticipated the counterarguments. Whether their advice is practical and commercially sensible for the client.
Instead, a significant portion of that conversation gets consumed by a more basic question: is the research actually complete?
That isn't a question about legal skill. It's a question about search methodology: whether the junior happened to guess the right combination of keywords to surface the relevant authorities. And it's a tax on senior time that compounds across every matter, every junior, every day.
Think about what this looks like at scale across a mid-tier or top-tier Australian firm.
Partners and senior associates are the firm's scarcest resource. Their time is the bottleneck on matter progression, client service, and business development. Every hour a partner spends checking whether a junior's research is complete is an hour not spent on higher-value work such as refining strategy, engaging with the client, or supervising the analytical quality of the team's output.
Multiply that across dozens of matters and multiple juniors, and you're looking at a material drag on firm productivity that rarely shows up in any efficiency review. It's invisible precisely because it's been normalised. It's "just how supervision works."
But it's not supervision. It's quality assurance on a broken search process.
The root cause is that keyword search puts the burden of recall on the lawyer. You only find what you think to search for. A junior with two years' experience simply doesn't have the pattern recognition to anticipate every relevant search term across every potentially relevant body of law.
Australian legal research compounds this problem. Our law operates across Commonwealth, state, and territory jurisdictions. Legislation and case law use inconsistent terminology across those jurisdictions. A concept might be described one way in a New South Wales Court of Appeal decision and a different way in the Victorian Supreme Court. Keyword search doesn't bridge that gap; it just makes you run more searches.
The result is a research process where completeness depends on the individual lawyer's vocabulary and experience rather than on the actual state of the law. And so the review stage becomes, in part, a check on whether someone's vocabulary was broad enough.
When legal research tools can search by concept rather than by keyword, understanding what you're looking for rather than matching the exact words you type, the review dynamic shifts fundamentally.
A junior can describe the legal issue they're researching, and the tool surfaces relevant authorities based on the meaning of the query, not just the terms used. Decisions that use different language to discuss the same legal principle come back in the same set of results. Cross-jurisdictional equivalents surface without the junior needing to know in advance what each jurisdiction calls it.
The research becomes more reliably complete on the first pass. Not perfect — legal research always requires judgment, depth, creativity and skill — but substantially more complete than keyword search alone can deliver. In practice, lawyers using AI responsibly will often use tools like Habeas in combination with search engines like Jade, which have their own strengths for other aspects of legal research.
And that changes the review conversation entirely. The partner can focus on what actually matters: whether the junior has understood the law correctly, applied it well, and produced advice that serves the client. The supervision becomes developmental rather than procedural.
Overall, deployed effectively, AI can lead to a structural improvement in how senior time gets allocated across the firm.
When partners spend less time verifying research completeness, they have more capacity for the work that only they can do — the high-judgment, high-stakes, client-facing work that drives the firm's reputation and revenue. Juniors develop faster because their supervision conversations focus on analytical skill rather than search technique. Matters move more efficiently because the review loop tightens.
For firms thinking about where AI and modern legal research tools fit into their operations, this is the argument that deserves more attention. It's about what it costs everyone else in the firm when the research isn't reliable, and about the long-term cost of never examining a matter from a range of perspectives.
That cost is real. It's significant. And most firms have simply stopped noticing it. The firms that are modernising, though, are combining the best of the old with the best of the new to inform their practice.
