Moderating the Voice to Parliament

The impending Voice to Parliament referendum in Australia is already generating an increase in racially motivated hate speech online, and is likely to see further increases in misinformation and other online harms.

Those moderating on the front lines of Australian social media are steeling themselves for a challenging few months, and many have been working on Voice-specific governance planning to protect themselves, their community members and their organisations from these risks.

ACM has been formally consulting with, and informally speaking to, numerous organisations and online community practitioners preparing to moderate through these challenges in the coming months.

What trends are we already seeing?

  • Increase in offline incidents (including violence) directed at Aboriginal and First Nations peoples

  • Rise in online micro-aggressions directed toward Aboriginal and First Nations peoples and their allies

  • Rise in racially motivated hate speech online

  • Rise in digital disinformation campaigns from organised actors

  • Increased stress and pressure on front line social media staff

  • Lack of structural supports for those moderating front lines, especially around mental health and wellbeing

  • Lack of social media defence supports (formal or informal) for the Indigenous voices being centred or amplified in social media content or campaigns

  • Lack of consultation with Indigenous voices and perspectives in moderation and crisis planning (leading to both critical risks and effective solutions being overlooked)

Tips for moderating around the Voice

Regardless of your personal or organisational position on the Voice to Parliament, discussion and debate about it are likely to appear in your digital social spaces. Whether you are formally communicating a position, participating in a campaign, or abstaining, it’s wise to be prepared for the risks and issues ahead.

The following tips are a starting point for anyone moderating a digital social space, or supporting moderators in that work:

  • Immediately audit your social media or online community governance processes and infrastructures. Where are there gaps? Where do you need to adjust to align with best practice? Take time before the discourse heats up to tighten up your moderation operations.

  • Ensure you are conducting proactive as well as reactive moderation, and cultural moderation as well as regulatory moderation. (For more on what these forms of moderation are, explore our Training and Resources).

  • Update your Community Guidelines or equivalent terms to cover off discussions around the Voice. Do you want to allow the topic to be discussed? If so, what are the acceptable parameters? Explicitly call out behaviours that won’t be allowed, or that will result in moderation actions.

  • Ensure you have a risk matrix outlining common and key risks for your space, how you’ll deal with them, timelines (including any required response times), and who needs to be involved.

  • Create or update your escalation chart: often part of a risk matrix, this should be a clear list with contact details of key personnel to escalate specific issues to (e.g. when to engage Indigenous specialists, when to tag in your legal team or advisor, when to alert management). This should also include when to alert authorities (such as the eSafety Commissioner or the Police, if explicit threats are being made).

  • Create or update your Response Guide to promote consistency across channels and personnel, and to cover off the Voice-specific risks and scenarios identified in your matrix. A Frequently Asked Questions document for more traditional channels is likely not enough on its own - tailor guidance to likely scenarios and how they will play out in your real-world social spaces.

  • Update pre-moderation tools (such as filters and keyword alerts) with key risk terms, details of known bad actors or disinformation campaigners, publications, URLs and other indicators that invite harm. Where available, these tools can act as an early warning system, giving you or your moderators time to assess and react appropriately, and minimising the appearance of harmful content or behaviour from the outset (see the sketch after this list).

  • Involve Indigenous voices in all of the above documents and processes - don’t make assumptions and don’t speak for others. If you don’t work directly with Indigenous team members, tap into wider community resources that can offer essential sense-checking around factors like cultural safety. Check that your moderation plans (which must consider multiple stakeholders and accountabilities) are not inadvertently silencing Indigenous voices or undermining the agency of Indigenous people in your organisation or community.

  • Ensure you have structural protections in place for those moderating on your front lines (be that you or those you’re responsible for), including access to wellbeing and mental health resources and the ability to step away and take time out. Ensure there is burden sharing, so no single person bears the brunt of the mental health impacts of moderating hate speech, micro-aggressions, misinformation and other harms likely to increase during this time. These protections include strategies such as turning off comments or locking down specific content if you don’t have the resources or resilience to manage it at that time, even when posting that content remains important to you.

  • Honour your duty of care to the Indigenous voices you are working with or amplifying, e.g. an ambassador your organisation is centring in the discussion. Brief them on likely risks, connect them with dedicated resources and assistance, and ensure they have a range of supports they can choose to avail themselves of.
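
If you manage your own platform and can extend its pre-moderation tooling, a simple keyword watchlist is one way to build the early-warning layer mentioned above. The sketch below is a minimal Python illustration only, assuming a hypothetical notify() escalation hook, invented placeholder watchlist terms and an invented example post; it flags matching posts for human review rather than removing them automatically, so context can be assessed before any action is taken.

```python
# Minimal keyword-alert sketch (illustrative only). The watchlist entries,
# the notify() hook and the example post are placeholders; adapt them to
# your own platform's tooling, risk matrix and escalation chart.

import re

# Hypothetical watchlist of risk terms and known bad-actor indicators.
WATCHLIST = [
    r"\bexample risk term\b",      # replace with terms from your risk matrix
    r"knowndisinfosite\.example",  # replace with URLs/publications you track
]

PATTERNS = [re.compile(term, re.IGNORECASE) for term in WATCHLIST]


def flag_for_review(post_text: str) -> list[str]:
    """Return the watchlist patterns matched in a post, if any."""
    return [p.pattern for p in PATTERNS if p.search(post_text)]


def notify(post_id: str, matches: list[str]) -> None:
    """Placeholder escalation hook, e.g. email, chat alert or a moderation queue."""
    print(f"Post {post_id} matched watchlist entries: {matches}")


# Example: hold a matching post for human review rather than auto-removing it,
# so a moderator has time to assess context before acting.
incoming = {"id": "12345", "text": "Read the real story at knowndisinfosite.example"}
matches = flag_for_review(incoming["text"])
if matches:
    notify(incoming["id"], matches)
```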

For assistance with your Voice moderation planning, contact team@australiancommunitymanagers.com.au

Venessa Paech

Venessa is Director and Co-Founder of ACM.

http://www.venessapaech.com