Australia’s Fair Work Commission has warned a sacked worker that he may be ordered to pay his former employer’s legal costs after his unfair dismissal claim relied on AI-generated arguments the Commission described as “egregious” hallucinations, the Australian Financial Review reported on March 30. The Queensland earthmoving business M & JD was invited to seek costs against its former landfill operator for “deliberately” misrepresenting the law through AI-generated submissions.

The prospect of a costs order makes this one of the first cases in Australia’s employment tribunal system in which AI-fabricated legal content has carried direct financial consequences for the person who filed it.

A Pattern Across Multiple Cases

The M & JD case is not isolated. In Riley v Nuvei Australia Merchant Services Pty Ltd [2026], the Commission found that the applicant had used a “legally trained” AI tool to prepare his submissions and that some of the case law he cited did not exist, identifying several of the cited principles and authorities as AI hallucinations with no legal basis.

In a separate case, Pennisi [2026], a worker lodged 53 pages of AI-generated forms and submissions. The Commission found the material repeated the same arguments multiple times, with the reasoning shifting with each repetition. The application, an attempt to file a general protections claim six months late, was rejected.

In a third case reported by Lawyerly, the FWC upheld the dismissal of a senior developer at FujiFilm, ruling that his use of AI in submitting “excessive” workplace complaints “led to his demise.”

70% Workload Increase in Three Years

FWC President Justice Adam Hatcher has directly linked the surge in filings to AI tools. In a February 2026 presentation to the Victorian Bar Association, Hatcher revealed the Commission’s total workload has increased by over 70% in three years, according to an analysis by Fair Workplace Solutions. Until 2023, the FWC dealt with roughly 30,000 matters per year. By 2024-25, that figure had jumped to over 44,000. For the current financial year, the Commission projects between 50,000 and 55,000 lodgments.

Unfair dismissal claims specifically grew 41% between 2022-23 and 2024-25. General protections dismissal claims surged 62%. Other general protections disputes rose 135%.

The historical correlation between retrenchment rates and dismissal applications, which had held steady for decades, has broken down entirely since the release of ChatGPT in November 2022, according to the same analysis.

Approximately 85% of dismissed employees now contest their dismissal through the FWC, up from around 76% one year earlier.

The Expectation Gap

During his Victorian Bar Association presentation, Justice Hatcher demonstrated the problem live. He opened ChatGPT, described a dismissal scenario with basic facts, and within 10 minutes had a ready-to-file application and witness statement, Fair Workplace Solutions reported. ChatGPT told him to expect $15,000 to $40,000 in compensation and generated what Hatcher described as a “substantially invented story” about the dismissal.

Commission data tells a different story. Of general protections dismissal matters resolved in 2024-25 that involved a monetary settlement, 33% settled for less than $4,000, and 61% settled for less than $10,000. The median monetary settlement fell in the $4,000 to $5,999 range, per the same analysis. For unfair dismissal claims, the median conciliation settlement sits at approximately $8,704, and less than 1% of all claims result in a formal judgment.

New Disclosure Rules Coming

The Commission has released a draft guidance note requiring AI disclosure for anyone using generative AI to prepare applications, responses, submissions, or witness material. The three requirements: users must state they used AI in the document; they must confirm facts and legal references are accurate and include working links to cited decisions; and for witness statements, signers must confirm the content reflects their own knowledge.

The FWC’s sibling organization, the Fair Work Ombudsman, has separately flagged an investment in its own AI pilot program, SmartCompany reported. At a meeting with federal workplace regulators in early March, the FWO discussed the risk of commercial AI tools providing incorrect information based on obsolete or inaccurate data, though a spokesperson later said the organization “is not currently piloting an AI-powered tool.”

What This Means for the Agent Ecosystem

The Fair Work Commission cases are a concrete illustration of what happens when AI tools operate as de facto legal agents without verification layers. Every case cited above involved a person treating AI output as authoritative legal work product and filing it directly with a tribunal.
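As a rough illustration of what such a verification layer might look like, the sketch below checks a draft submission for an AI-use disclosure and flags any case citations that cannot be matched against a known index of authorities. Everything here is hypothetical: KNOWN_AUTHORITIES, the citation pattern, and the example citations are placeholders, and a real system would query an official registry or legal research database rather than a hard-coded list.

```python
# A minimal sketch of a pre-filing verification layer, assuming a local
# index of verifiable authorities. KNOWN_AUTHORITIES, the citation pattern,
# and the example citations below are hypothetical placeholders.
import re
from dataclasses import dataclass, field

# Hypothetical index of authorities the tool is able to verify.
KNOWN_AUTHORITIES = {
    "Smith v Jones [2020] FWC 123",
    "Doe v Acme Pty Ltd [2019] FWCFB 456",
}

# Rough heuristic for "Party v Party [year] COURT number" style citations.
CITATION_PATTERN = re.compile(
    r"(?:[A-Z][\w.&]*\s)+v\s(?:[A-Z][\w.&]*\s)+\[\d{4}\]\s\w+\s\d+"
)


@dataclass
class FilingCheck:
    ai_disclosed: bool
    unverified_citations: list[str] = field(default_factory=list)

    @property
    def ready_to_file(self) -> bool:
        # Block filing unless AI use is disclosed and every citation resolves.
        return self.ai_disclosed and not self.unverified_citations


def review_submission(text: str, ai_disclosed: bool) -> FilingCheck:
    """Flag any cited authority that cannot be matched to the known index."""
    cited = CITATION_PATTERN.findall(text)
    unverified = [c for c in cited if c not in KNOWN_AUTHORITIES]
    return FilingCheck(ai_disclosed=ai_disclosed, unverified_citations=unverified)


if __name__ == "__main__":
    draft = (
        "As held in Smith v Jones [2020] FWC 123, the dismissal was harsh. "
        "See also Brown v Nowhere Ltd [2023] FWC 999."  # unverifiable citation
    )
    check = review_submission(draft, ai_disclosed=True)
    print(check.unverified_citations)  # ['Brown v Nowhere Ltd [2023] FWC 999']
    print(check.ready_to_file)         # False
```

The same gate could plausibly be extended to cover the draft guidance note’s other requirements, such as confirming that a witness statement reflects the signer’s own knowledge before it is filed.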

The global scale of this problem is documented: researchers have catalogued 486 cases of AI hallucinations in court filings worldwide, 324 of them in US courts. Self-represented individuals account for 189 of those US cases, but 128 were attributed to licensed lawyers.

For builders working on autonomous agents, the FWC’s new disclosure framework is a preview of what regulatory responses will look like as AI agents move into high-stakes domains. The pattern is consistent across jurisdictions: mandatory disclosure, verification requirements, and financial penalties when AI-generated content contains fabrications. The question for the agent ecosystem is whether these guardrails will be built into the tools themselves or imposed after the damage is done.