AI Disclosure in Australian Courts: The Key Trends from Practice Notes

Australian courts are rapidly introducing guidelines governing AI usage and disclosure, and those guidelines vary across jurisdictions.

Australian courts have not waited for a coordinated national framework on AI disclosure, and the result is a set of requirements that varies materially across jurisdictions. For practitioners appearing in more than one court, understanding the specific obligations in each is now a practical professional necessity.

The consistent principle across all current requirements is that no Australian court has prohibited AI use. What each jurisdiction is establishing, at varying pace and with varying specificity, is the framework under which AI assistance is acceptable: verify the output, accept professional responsibility for it, and be able to disclose the extent of AI involvement if directed. The obligations differ in their detail, yet that underlying principle does not.

Federal Court

The Federal Court's Notice to the Profession on the Responsible Use of Artificial Intelligence requires practitioners to verify all AI-generated content before filing. Where AI has materially contributed to the reasoning or drafting of a court document, disclosure may be required on the direction of a judge or registrar. The Court has not mandated blanket disclosure for every document that AI has touched, but has reserved the right to ask, and expects a specific and accurate answer when it does.

New South Wales

Practice Note SC GEN 23, effective February 2025, draws a distinction between acceptable and prohibited uses of generative AI, requires verification of AI-generated submissions before filing, and requires practitioners to be able to explain the extent of AI involvement in any document if asked. Research assistance and drafting support are within acceptable use.

Queensland

Practice Direction 5 of 2025 imposes verification obligations on parties and practitioners preparing written submissions. The direction focuses on submissions rather than research work product. Verification is mandatory; unprompted disclosure is not required, but the practitioner must be able to account for how AI was used if the court asks.

South Australia

South Australia's Guidelines concerning the use of Generative AI in litigation take a principles-based approach. They expressly encourage the appropriate use of AI to improve efficiency and access to justice, while making clear that existing professional and overarching obligations fully apply. There is a strong emphasis on understanding known AI risks, such as hallucinated authorities and confidentiality breaches. Verification of AI-generated material is required as part of existing professional duties, but there is no general obligation to disclose AI use beyond the requirement that practitioners must be able to explain their usage if questioned.

Victoria

The Supreme Court of Victoria's Guidelines for the Responsible Use of Artificial Intelligence in Litigation adopt an approach that prioritises transparency. Practitioners must (1) understand AI tools and their limitations, (2) ensure accuracy, and (3) remain fully responsible for all filed material. In contrast to other jurisdictions, Victoria places stronger emphasis on disclosure, stipulating that AI use should generally be disclosed to other parties and, where appropriate, to the Court, particularly where it affects the provenance or weight of a document.

What this means in practice

For practitioners appearing across multiple courts, the sensible approach is to adopt verification practices that satisfy the most demanding requirement likely to be encountered, rather than calibrating to each court's minimum separately. Keep a record of where AI contributed to document preparation. Verify every cited authority against the primary source before filing. Be in a position to explain, accurately and specifically, how AI was used in any filed document.

The professional liability exposure in this area does not come from using AI. It comes from using AI without the verification discipline that professional obligations already require and that courts are now making explicit. As the frameworks mature, that exposure will grow for practitioners who have not built verification into their workflow as a matter of course.

Sources: Federal Court Notice to the Profession Apr 2025 | NSW SC GEN 23 | QLD SC Practice Direction 5 of 2025 | South Australian Courts, Guidelines concerning the use of Generative AI in litigation (commencing 1 January 2026) | Supreme Court of Victoria, Guidelines for the Responsible Use of Artificial Intelligence in Litigation | Carroll & O'Dea AI Professional Responsibilities
