The Legal Landscape is Shifting to Reflect Increasing AI Adoption

Something shifted in Australian legal practice over the past 12 months, and it happened faster than most practitioners noticed.

We’ve already published a post on the specifics of the shift, but here we want to zoom out and discuss the macro trends that emerge from this change in the landscape.

NSW’s Practice Note SC GEN 23 took effect in February 2025. The Federal Court followed in April with a formal Notice to the Profession. Queensland issued Practice Direction 5 of 2025. Victoria's Law Reform Commission published guidelines for courts and tribunals across the state. South Australia's Supreme Court guidelines came into force on 1 January 2026 and are being incorporated into court rules that apply across the Supreme Court, District Court, Magistrates Court, Youth Court, and the specialist courts. The Fair Work Commission published an exposure draft of guidance on generative AI use in Commission cases on 24 March 2026. The Law Council of Australia has weighed in.

Courts have been watching each other, and the result is a coordinated, profession-wide signal arriving from every direction at once: the institutional period of watching and waiting is over. The question of AI in Australian legal practice has been resolved at the regulatory level. The answer is yes, with conditions, and the conditions are now binding.

What the pattern tells us

Read the instruments together and the individual differences matter less than the agreement underneath them. Every court that has issued guidance has landed in the same place on the basics: verify AI output before it reaches the court, accept professional responsibility for everything filed regardless of how it was produced, and be able to account specifically for AI involvement if asked.

The specifics vary. Some courts require affirmative disclosure; others require it only on request. Some focus specifically on submissions; others extend to research and other work product. But the framework is the same across all of them, which tells you two things. First, one set of practices satisfies every court that has issued guidance. Second, the jurisdictions that haven't yet issued definitive guidance will land in the same place when they do: the trajectory is clear enough that waiting for your specific court to act is a dwindling bet.

The window for being passive has closed

There was a period, roughly 2023 to early 2025, when a wait-and-see approach to AI in legal practice was reasonable. The tools were new, the rules were unclear, and the prudent response was to watch how things developed before committing to anything. That window has closed.

Binding practice directions across the major litigation jurisdictions have produced a professional environment where the question is no longer whether to have an approach to AI, but rather whether your current approach is adequate and responsible.

Practitioners and firms that haven't made deliberate decisions about which tools they use, what their verification process looks like, and how they'd account for AI involvement if asked are exposed in a way they weren’t two years ago. Filing a document that contains a hallucinated, AI-generated citation in the NSW Supreme Court or the Federal Court is now a breach of a specific, binding requirement, which it wasn't in 2023.

The ramifications for legal AI adoption

"We don't use AI" is a valid position, though increasingly unusual at firms of any size. "We use AI but I'm not sure how or what our review process looks like" is not valid.

To develop an adequate approach, start by interrogating the tool your practice uses: make a considered decision about whether it retrieves from a verified legal database or generates output from its training data alone. The professional obligation to verify AI output is considerably harder to satisfy with a tool that produces confident-sounding text with no verifiable source, because that architecture is more prone to hallucination.

After interrogating the tool, firms should establish a verification process with actual teeth: a defined step in the workflow, calibrated to the ways these tools fail. A verification process that doesn't check every citation against primary or authoritative secondary sources isn't doing the job of upholding professional obligations.

Importantly, practitioners must also have the ability to account for AI involvement if asked. That means knowing, at the document level, where AI contributed to the work product. Building that into the workflow from the start is considerably easier than reconstructing it when a judicial officer asks.

The importance of choosing the right toolset and executing on adoption

Australian courts haven't banned AI. They've made it a professional responsibility question, which is in some ways harder. A ban is simple to comply with, whereas a professional responsibility framework requires ongoing judgment about tool selection, process design, verification discipline, and what you'd say if asked to account for your approach in a specific matter.

The firms and practitioners still treating AI as optional exploration have a decision to make, and it is less about whether to use AI than about how to create guardrails that ensure their approach is both adequate and responsible in the eyes of the law.

Habeas is built for this environment: verified retrieval, source-traceable outputs, designed from the ground up for Australian legal practice. Book a demo.
