Combating Misinformation in Online Communities

Misinformation and disinformation have become significant challenges in online communities. They can disrupt social cohesion, cause panic, and hinder the flow of accurate information. 

We caught up with Ariel Bogle, Tech Reporter at the Australian Broadcasting Corporation (ABC), to get her take on the risks misinformation poses for online social spaces and how frontline workers can push back on it.

Ariel Bogle is an investigative reporter with a focus on technology, extremism and internet policy.

Defining misinformation and disinformation

Misinformation refers to misleading or incorrect information that is spread without the intention to deceive or confuse. Disinformation, on the other hand, is false or misleading information spread deliberately to deceive and manipulate others.

Identifying markers of misinformation

Detecting misinformation can be challenging due to its nuanced nature. However, there are common markers that can help you identify its presence.

“Look out for content that uses highly emotional language, evoking anger, sadness, or disgust, as these emotions can hinder critical thinking and fact-checking,” says Ariel. 

“Be cautious of information taken out of context, such as screenshots of headlines or partial quotes, as they often miss important details and distort the intended meaning. Misleading visuals, graphs, and graphics can also be used to provoke emotions and drive the sharing of misinformation.”

Understanding the threat of misinformation

Misinformation poses a significant threat to the health of online communities. Ariel emphasises that during crises (such as the pandemic), misinformation can sow confusion, create disruption, friction and interpersonal conflict, and even lead to panic.

She also warns that “the absence of agreed-upon ground truths and the rapid flow of unverified information exacerbate the problem, particularly when mixed with politics and personal beliefs.”

The role of digital frontline workers

“Moderators and community managers play a vital role in addressing misinformation within online spaces,” says Ariel. 

“If you’re responsible for the health, safety and efficiency of an online social space, it’s crucial to understand the impacts of misinformation and take proactive steps to mitigate its risks. 

“You have the power to establish ground rules for communication, set criteria for introducing and organising information, and maintain an atmosphere conducive to healthy discussions.

“By setting guidelines on sourcing, tone, and the number of reliable sources required, you can prevent the spread of misinformation and misleading content. Additionally, pre-bunking (sharing correct information ahead of time and anticipating potential misconceptions within your community) is an effective strategy to counter misinformation and its effects.”

Strategies to combat misinformation

While the field of combating misinformation is evolving, some strategies have shown promise. Fact-checking can be an effective approach, but it’s important to consider that it can sometimes reinforce existing beliefs and perceptions rather than correct them.

“Pre-bunking is a proactive measure where someone overseeing an online group or audience anticipates potential misconceptions and shares correct information ahead of time,” says Ariel.

She points to the value of slowing down conversations to foster more considered responses, encouraging reflection and critical thinking. Technology-dependent methods like implementing delays in posting or creating reflective spaces can be valuable tools.
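As a rough illustration of the posting-delay idea, here is a minimal sketch of a “slow mode” check a moderation bot might apply before accepting a new post. The cooldown length, function names and in-memory store are hypothetical, not drawn from any particular platform’s API.

```python
from datetime import datetime, timedelta

# Hypothetical cooldown: members must wait this long between posts
# in a fast-moving or contentious thread ("slow mode").
SLOW_MODE_COOLDOWN = timedelta(minutes=5)

# Illustrative in-memory record of each member's last post time;
# a real community platform would persist this.
last_post_at: dict[str, datetime] = {}

def can_post(user_id: str, now: datetime | None = None) -> bool:
    """Return True if the member has waited out the slow-mode cooldown."""
    now = now or datetime.utcnow()
    previous = last_post_at.get(user_id)
    if previous is not None and now - previous < SLOW_MODE_COOLDOWN:
        return False  # still inside the cooldown window; ask them to wait
    last_post_at[user_id] = now
    return True
```

The specific numbers matter less than the effect: a small, enforced pause gives members time to reflect before replying, which is the “slowing down” Ariel describes.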

Emerging threats and the role of AI

Misinformation landscapes continue to evolve rapidly, with new digital spaces and communication technologies constantly emerging. Ariel highlights the role of AI, particularly chatbots, which can disseminate misleading information confidently.

“These chatbots, in combination with other technologies, have the potential to flood digital spaces with misinformation, as their authoritative responses may be challenging to identify as false. Online community managers need to stay vigilant and keep an eye on the evolving role of AI in misinformation creation and propagation.”

Moderators, community managers, social media managers - anyone overseeing online social spaces - have the opportunity to influence the informational health of these spaces. By understanding the impacts of misinformation, staying alert to its markers, and implementing effective strategies, we can create healthier and more informed digital spaces. 

Venessa Paech

Venessa is Director and Co-Founder of ACM.

http://www.venessapaech.com