Why Australian Workplace Relations Law Is One of the Best AI Use Cases Nobody Is Talking About

This article examines how Australia’s complex system of modern awards and enterprise agreements creates persistent wage underpayment risk, and why those same complexities make the area a natural fit for AI-assisted analysis.

Australia has around 120 modern awards currently in operation, according to Fair Work Commission data. Each covers a specific industry or occupation, sets minimum pay rates and conditions, and operates in combination with the Fair Work Act 2009 (Cth), the National Employment Standards, and any applicable enterprise agreement. The correct entitlement often cannot be derived from reading any single instrument in isolation. Classification under the wrong award, or a misreading of how an enterprise agreement modifies award conditions, produces underpayment exposure that can persist for years before it surfaces.

Wage underpayment in Australia can arise from systemic misclassification and payroll error, though courts have also found instances of deliberate or wilfully indifferent conduct. Common examples include the wrong award being applied, overtime being calculated against an incorrect base rate, or allowances being missed because they were embedded in a provision that nobody had read carefully. These errors often remain undetected for years, and by the time they surface the liability has usually accumulated across the whole period.
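To make the scale concrete, here is a toy calculation showing how a small hourly misclassification error compounds once overtime is also calculated against the wrong base rate. Every figure below is invented for illustration and does not come from any actual award.

```python
# Illustrative only: hypothetical figures showing how a per-hour
# misclassification error compounds across pay periods.

correct_base = 32.50    # correct hourly rate under the applicable award (hypothetical)
applied_base = 30.10    # rate actually paid after misclassification (hypothetical)
overtime_loading = 1.5  # a common time-and-a-half loading

ordinary_hours_per_week = 38
overtime_hours_per_week = 4
weeks = 52 * 3          # error persists undetected for three years

# The shortfall flows through both ordinary hours and loaded overtime hours.
shortfall_per_week = (
    ordinary_hours_per_week * (correct_base - applied_base)
    + overtime_hours_per_week * overtime_loading * (correct_base - applied_base)
)
total_liability = shortfall_per_week * weeks

print(f"Weekly shortfall: ${shortfall_per_week:,.2f}")
print(f"Accumulated over {weeks} weeks: ${total_liability:,.2f}")
```

A $2.40 hourly error that would be trivial in any single pay slip becomes a five-figure liability per employee once it has run unnoticed for three years, before any penalties are considered.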

Why this is a structurally strong use case for AI, if applied appropriately

The properties that make award interpretation laborious for human researchers are precisely the properties that make it well-suited to AI assistance. The source set is large but bounded: roughly 120 awards, the Fair Work Act, the National Employment Standards, and a body of Fair Work Commission and Federal Court decisions interpreting them. A system properly configured or trained on that dataset can assist in identifying likely applicable awards, surface relevant provisions across multiple instruments in a single query, and do so considerably faster than any practitioner working manually. That output, however, still requires verification and professional judgment. Habeas supports lawyers working in the employment space by drawing on both primary and secondary material in its analysis, and lawyers can configure custom assistants for employment or HR-facing use cases.

This matters most in the situations where the relevant answer is non-obvious: where a role falls across classification boundaries, where an enterprise agreement modifies award conditions in ways that require cross-referencing, or where a specific allowance depends on reading an award provision against a particular factual pattern. These are precisely the situations where manual research is slow, where errors accumulate, and where the commercial consequences of getting it wrong are most significant.
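The cross-referencing task described above can be sketched in miniature: given a fact pattern, surface every provision across every instrument that could bear on it, rather than stopping at the first plausible match. The instrument extracts and matching logic below are invented for illustration; real awards are far longer, and matching in practice requires far more than keyword overlap.

```python
# Toy sketch of cross-instrument provision surfacing. All clause data is
# invented; this is not how any production legal AI system represents awards.

instruments = {
    "Clerks Award (hypothetical extract)": {
        "cl 16.1": "Level 2 classification covers routine clerical duties",
        "cl 19.4": "first aid allowance where employee holds a certificate",
    },
    "Enterprise Agreement (hypothetical extract)": {
        "cl 8.2": "classification Level 2 absorbed into annualised salary",
        "cl 12.1": "overtime payable at 150% of the base rate in cl 8",
    },
}

def surface_provisions(fact_pattern: set[str]) -> list[tuple[str, str, str]]:
    """Return every (instrument, clause, text) whose text mentions a fact keyword."""
    hits = []
    for instrument, clauses in instruments.items():
        for clause, text in clauses.items():
            if any(term in text.lower() for term in fact_pattern):
                hits.append((instrument, clause, text))
    return hits

for instrument, clause, text in surface_provisions({"classification", "overtime"}):
    print(f"{instrument} | {clause}: {text}")
```

The point of the sketch is the shape of the output: a single query over "classification" surfaces provisions from both the award and the enterprise agreement, which is exactly the cross-referencing a practitioner must otherwise perform by hand before any judgment can be exercised.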

What the FWC is seeing — and what it tells us about general-purpose AI in employment law

The Fair Work Commission is now dealing with the consequences of general-purpose AI being applied to employment law without any domain-specific guardrails. In February 2026, FWC President Justice Adam Hatcher told the Victorian Bar Association that the Commission's total workload had increased by over 70% in three years, with unfair dismissal applications alone up 41% between 2022–23 and 2024–25. He attributed the surge primarily to the increasing use of AI tools by potential litigants.

The issue is not simply volume. The quality of material being filed has deteriorated. Justice Hatcher noted that general-purpose AI tools can give applications a "sheen of legal plausibility" that masks fundamental deficiencies in the underlying claims. Self-represented applicants are receiving unrealistically optimistic assessments of their prospects of success and likely compensation, then filing applications built on those assessments.

The specific cases illustrate the pattern. In Pennisi [2026], an applicant lodged 53 pages of AI-generated forms and submissions that repeated the same arguments multiple times, with the reasoning shifting and evolving with each repetition. The Commission found it difficult to extract the relevant considerations from the volume of material. In Riley v Nuvei Australia Merchant Services [2026], an applicant who had used what they described as a "legally trained" AI tool submitted case law citations that did not exist — the Commission identified them as AI hallucinations with no actual legal basis. An earlier decision, Deysel v Electra Lift Co. [2025], saw Deputy President Slevin note that deficiencies in the application made it clear the applicant had relied on ChatGPT, and that the material failed to address the matters required to establish the claim.

In March 2026, the Commission responded by issuing draft Guidance Notes on the use of generative AI in Commission proceedings. The draft introduces three requirements: disclosure of AI use in applications, a legal checking obligation for AI-generated content, and affirmation by witnesses regarding their witness statements. Legal practitioners and paid agents will also be required to include hyperlinks to all case law referenced in submissions. The consultation period closes on 10 April 2026.

The Commission was careful not to prohibit AI use entirely. Justice Hatcher acknowledged a genuine access-to-justice benefit: a tool that can tell a dismissed employee, within minutes and with no prior knowledge, that the FWC exists, that they can apply for an unfair dismissal remedy, what the criteria are, and that they have 21 days to do so is, in his words, a positive development. The problem is what happens after that initial accessibility gain — when the same tool generates legal submissions without understanding the factual context, invents authorities, and gives applicants a false sense of the strength of their case.

This is precisely the gap that purpose-built legal AI is designed to address, and it is a gap that matters as much on the employer and practitioner side as it does on the applicant side. General-purpose AI hallucinating case law in an unfair dismissal application is visible and embarrassing. An employer or advisor relying on the same kind of tool to determine award coverage, calculate entitlements, or assess compliance obligations faces a different version of the same problem — one where the consequences are financial, potentially criminal, and may not surface until an audit or dispute years later.

The criminal exposure angle

Wage theft provisions under the Fair Work Legislation Amendment (Closing Loopholes) Act 2023 render intentional underpayment a criminal offence. Directors and officers can now face criminal liability where underpayment is proven to be intentional. That legislative change raises the stakes considerably for employers and their advisors: the cost of misclassification is no longer limited to back-pay liability.

AI that reliably identifies applicable award provisions, surfaces relevant allowances, and flags classification issues reduces the probability that a practitioner or HR advisor misses a provision that was always there. The analysis and advice remain entirely the practitioner's responsibility. What AI changes is how efficiently the practitioner reaches the point where that analysis can begin, and how complete the foundational research is when they get there.

Where the adoption gap sits

Anecdotally, employment lawyers have for the most part not been early AI adopters, and a persistent assumption holds that AI is better suited to complex legal research than to award interpretation, which tends to be categorised as procedural. That assumption mischaracterises both what makes the task difficult and what AI does well.

Cross-referencing approximately 120 awards, identifying applicable instruments for a specific employment arrangement, and mapping allowances against a fact pattern are high-volume, precision-dependent tasks where AI's speed advantage is most pronounced and where the consequences of manual error are financially significant. The judgment-intensive work of employment law practice — advising on complex disputes, managing proceedings before the Commission, drafting enterprise agreements — is unchanged. What AI changes is how long it takes to establish the factual and legal foundation on which that judgment operates.

The FWC's experience with AI-generated applications is a cautionary example of what happens when that distinction is collapsed: when a general-purpose tool is asked to do the judgment-intensive work without the domain-specific foundation. The same distinction applies on the advisory side. The value of specialised legal AI in employment law is not that it replaces the practitioner's judgment, but that it ensures the research substrate on which that judgment depends is accurate, complete, and sourced from the correct instruments.

Sources: Fair Work Commission President's Statement (March 2026) | Fair Work Commission Draft Guidance Notes: Use of Generative AI in Commission Cases | Fair Work Legislation Amendment (Closing Loopholes) Act 2023 | Fair Work Ombudsman enforcement data | Australian Industry Group Workplace Relations Issues Mar 2026
