Australia's hate speech bill: what community managers need to know
Australia's federal hate speech laws have undergone significant changes with new legislation passing through parliament in early 2026. For those managing online communities and social media, understanding what's changed is crucial.
This guide draws on insights from a regulatory briefing hosted by ACM Director Venessa Paech, featuring Dr Kath Gelber, one of Australia's leading experts on hate speech and freedom of expression, and Larah Kennedy, General Manager at Quiip.
Important: This article provides general information only and does not constitute legal advice. Always seek professional legal counsel for specific situations.
Key takeaways
New federal laws consolidate hate speech protections across Australia, covering race, religion, gender, sexual orientation, and disability
Lower threshold for enforcement: Focus is on whether conduct is "likely to incite hatred" rather than requiring proof of intent
"I didn't mean it" isn't a defence: The test is whether a reasonable person would interpret content as likely to incite hatred
Shift in "reasonable person" test: For some hate crimes, the test is now whether a reasonable member of the target community would feel the content incites hatred
Prohibited hate groups: Providing funding, training, membership, or a platform to designated groups is a federal crime
"Public" has nuanced meanings online: What constitutes "public" isn't clear-cut for semi-public or gated spaces
Context exemptions exist for genuine academic, artistic, religious, or scientific discourse—but aren't blanket protections
State complications ahead: NSW and Queensland are considering banning specific phrases, though constitutional experts warn these may not survive court challenges
Best practice moderation remains your best defence and likely supports compliance
What's actually changed?
Australia has long had fragmented hate speech protections. The new federal laws represent a comprehensive attempt to create uniform protections across the country.
The legislation criminalises public conduct that is likely to incite hatred or serious contempt toward people based on protected characteristics.
The key shift is in threshold: instead of requiring proof of intent to vilify, the law focuses on whether conduct is "likely to" incite hatred—a significantly lower bar.
Lower bar, higher stakes
Under the new laws, you don't need to prove someone meant to incite hatred, only that their conduct was likely to do so. This removes the difficult burden of proving intent.
For community managers, "I didn't mean it that way" isn't a defence. The question is: could a reasonable person interpret this content as likely to incite hatred, regardless of claimed intent?
Who gets to decide what's hateful?
Perhaps the most significant change is the shift in who the "reasonable person" actually is when assessing certain hate crimes.
Traditionally, Australian law used a "reasonable member of the community" test—would an average person in the general community consider this harmful?
For specific offences including advocating violence through property damage and displaying prohibited symbols, the test has changed to a "reasonable member of the target community."
Dr Gelber gave the example of Hezbollah flags at pro-Palestinian protests. Previously, the Australian Federal Police declined to prosecute because a reasonable member of the general community wouldn't view the flags as inciting hatred. Now: would a reasonable member of the Jewish community feel this incites hatred against them?
This aligns some federal criminal provisions with Section 18C of the Racial Discrimination Act, but it's new territory for criminal hate speech law.
For community managers, consider the perspective of affected communities when moderating content, particularly around symbols and imagery.
Australian police out on patrol at a March for Australia protest event [Image: Pexels]
Prohibited hate groups: platforming risk
The federal government maintains a publicly available list of prohibited terrorist and hate organisations (updated regularly) and their symbols. Once designated, it becomes a federal crime to provide these groups with funding, training, resources, membership—or a platform.
If you're managing membership or onboarding for semi-public spaces, exercise care about who you allow in. Knowingly platforming members of prohibited groups could expose you to serious criminal liability.
Most of what prohibited hate groups do would already violate well-managed community guidelines. But formal designation adds legal weight—if an organisation is on that list, their symbols are banned and facilitating their activities is explicitly criminal.
The legislation includes retrospective elements: activities before the law passed can count when determining whether a group should be designated.
The tricky question of "public" in online spaces
Much hate speech legislation hinges on conduct being "public." But what does that mean for online communities?
Australian law is clear that content accessible in Australia is subject to Australian law. But the distinction between public and private spaces online isn't straightforward. Is a closed Facebook group public? A Slack community with sign-up? A Discord server? A membership forum?
The law hasn't fully sorted this out. The general principle: if content can be accessed beyond a genuinely private circle—even with barriers like sign-up forms—it's likely considered public for legal purposes.
For community managers:
Be thoughtful about onboarding for semi-public spaces. Who you allow in matters, particularly regarding potential affiliation with prohibited organisations.
Don't assume gated access makes your space legally "private." If the general public can join through standard processes, it's probably subject to these laws.
Document moderation decisions and membership policies. Clear processes matter if you need to demonstrate reasonable care.
As courts interpret these laws, clearer guidelines will emerge. Until then, err on the side of caution.
State-level complexity: slogan ban proposals
NSW and Queensland are considering criminal laws banning specific phrases—"globalise the intifada" and "from the river to the sea, Palestine will be free."
Constitutional lawyers warned during a NSW parliamentary inquiry that these laws would likely be challenged and overturned for two reasons:
These phrases' meanings are heavily contested—some see legitimate political speech, others see calls to violence.
Australia already has mechanisms to address such conduct through existing vilification laws; experts consider phrase-based bans an impermissible restriction on freedom of expression.
If passed, community managers will need to stay vigilant until the inevitable court challenges are resolved. Dr Gelber suggested these bans may not survive legal scrutiny.
This state variation creates headaches for national platforms, which could face different standards depending on where members access content.
Context still matters
Despite lower thresholds on intent, context remains critically important. The legislation includes exemptions for genuine academic, artistic, religious, or scientific discourse.
Hate speech laws must balance protection from harm with freedom of expression. Content for legitimate purposes—educational discussions, artistic commentary, academic research—generally won't fall foul of these laws, even if confronting.
However, context exemptions aren't blanket protections. What's acceptable in an academic paper won't necessarily be protected as a standalone social media post without scholarly framing.
Community management tips
Review your community guidelines now. You don't need to cite the Bill, but guidelines should clearly define unacceptable speech in your community context while aligning with legal standards.
Lean into governance frameworks. Ensure solid community guidelines, risk matrices, escalation frameworks, and consistency in application. If you haven’t done this yet, or aren’t sure what these are, now is the time. These fundamentals matter more than fancy tech. ACM can help you develop these and set them in place if you need a hand.
Don't rely solely on platform tools. Native moderation tools often use non-Australian definitions and contexts. If you can't control what automation considers "hateful," be cautious about automatic hiding or removal. Major platforms are increasingly politicised (e.g. Meta removing protections for women and LGBTQI+ communities). Configurable third-party tools may be more reliable.
Consistency is your friend. Operationalising guidelines consistently proves they're not window dressing. Regular moderation, clear documentation, and consistent application represent good faith compliance.
Document your approach. Strategic documentation of moderation actions is more important than ever. Documented guidelines, moderation logs, and consistent practices are invaluable and can reduce liability risk. They're also important for compliance under other legislation such as the Online Safety Act.
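To make this concrete, here is a minimal sketch of what a structured moderation log might look like. This is purely illustrative: the field names, file format, and helper names are assumptions, not anything prescribed by the legislation or by any platform. The point is that each decision is timestamped, tied to a specific guideline, and records a plain-language rationale, which is the kind of trail that helps demonstrate consistent, good-faith moderation.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical record structure; all field names are illustrative.
@dataclass
class ModerationRecord:
    content_id: str   # platform identifier for the post or comment
    action: str       # e.g. "removed", "hidden", "warned", "no_action"
    guideline: str    # which community guideline the decision applied
    rationale: str    # moderator's reasoning, in plain language
    moderator: str    # who made the call (supports consistency reviews)
    timestamp: str = ""

    def __post_init__(self):
        # Stamp the record in UTC if no timestamp was supplied.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_decision(record: ModerationRecord, path: str = "moderation_log.jsonl") -> None:
    """Append one decision as a JSON line, building an auditable trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example usage
log_decision(ModerationRecord(
    content_id="post-1234",
    action="removed",
    guideline="No vilification of protected groups",
    rationale="Post displayed the symbol of a listed organisation",
    moderator="cm-alex",
))
```

An append-only log like this (one JSON object per line) is deliberately simple: it needs no database, survives tooling changes, and can be reviewed or exported when you need to show how and why decisions were made.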
Looking ahead: enforcement and uncertainty
Many questions remain about enforcement and interpretation. Real-world application will reveal grey areas not immediately clear from the law's text.
How courts interpret "likely to incite" versus "intended to incite," how they weigh context exemptions, and how they handle online versus offline conduct—all will be worked through over time.
This uncertainty reinforces the importance of:
Staying informed about developments
Maintaining clear documentation of moderation decisions
Connecting with professional organisations like ACM
Seeking legal advice when unsure
Human touch is essential
While hate speech laws provide important boundaries, they can't solve underlying social issues driving hateful conduct. Community managers need both legal tools and social support—from organisations, peers, and the wider profession.
Despite new laws, tools, and challenges, fundamentals of good community management remain unchanged.
Understanding community context, applying consistent standards, maintaining ethical practices, and balancing safety with freedom of expression—these core responsibilities haven't shifted. The new laws provide an augmented legal framework for this essential work.
Essential reading & official resources
The legislation:
Criminal Code Amendment (Hate Crimes) Bill 2025 - The February 2025 hate crimes bill (passed)
Combatting Antisemitism, Hate and Extremism Bill 2026 - The broader January 2026 legislation
Australian Human Rights Commission Explainer - Plain language overview of the new federal and NSW laws
Listed organisations:
National Security: Listed Terrorist Organisations - Official government list of prohibited terrorist and hate organisations (updated regularly)
Protocol for Listing Terrorist Organisations - How organisations get added to the list
ACM resources:
ACM is here to support you with training, resources, research, a community of practice and much more.
Contact ACM at hello@australiancommunitymanagers.com.au anytime to discuss your needs.
Remember: this article provides general information and does not constitute legal advice. For specific situations, always consult with qualified legal professionals who can advise based on your particular circumstances.
Special thanks to Dr Kath Gelber, Larah Kennedy, and all the community professionals who joined this important discussion.

