Everything in Moderation: Ben Whitelaw

Everything in Moderation is a weekly newsletter about content moderation and the policies, products, platforms and people shaping its future.

We spoke to its founder and curator, Ben Whitelaw - media consultant, former community manager and occasional moderator - about the trends he’s seeing in moderation globally, and what drew him to begin curating these stories and observations for the wider community.

Ben Whitelaw - expert in online news communities and founder of Everything in Moderation

What prompted you to create Everything in Moderation (EiM)?

I started the newsletter back in 2018 because I was fascinated by the intricacies at play when it came to deciding who could, and couldn’t, share their opinion online. 

I had been working as audience engagement editor at a UK newsroom for seven years and part of the role involved setting our comment moderation policy. It was often challenging to apply those guidelines in the moment and we were forced to make lots of trade-offs that didn’t always feel good or fair.

Brexit and the 2016 US election made that even more difficult, and then the whole Alex Jones Sandy Hook affair happened. I thought this was likely to be the first of many incidents like it, so I wrote the first edition (subject line: ‘moderation is mainstream’) to try to make sense of what was happening.

I’ve been sending it out every Friday ever since and now have almost 2,000 subscribers who must feel the same way.

You cover a diverse range of issues and players across the platform and political landscape. What trends are you seeing around moderation globally? 

Government regulation of online speech has been in the works for years but it remains the most significant trend in my opinion. 

In recent years, India, Brazil, Pakistan and Indonesia have followed Germany, France and Ireland in passing platform intermediary legislation, and that is beginning to shift the centre of gravity away from Silicon Valley, where speech rules have traditionally been conceived and applied. That’s fundamentally a good thing and something I try to reflect in each and every edition of EiM.

I predict that this trendline will continue and that the ramifications will be significant. In the case of the EU’s Digital Services Act, that should be net positive. But it won’t be good for all internet users, especially in countries where democracy isn’t the norm.

All eyes are currently on AI as a moderation support tool. Given recent leaps in AI capability, what are you most excited about - and concerned about - in this area?

Fundamentally, I’m looking forward to seeing where the debate on AI for moderation lands. I say that because the major platforms have been talking about it concretely since 2019, when Mark Zuckerberg touted it as the future of moderation.

I added a product section to the newsletter not long after that speech (about ‘The features, functionality and technology shaping online speech’) because I recognised the importance of AI in making moderation more efficient, safer and better. 

But I’m also a born-and-bred sceptic and worry about who loses out in all of this. So I’m hoping that we’ll see evidence of the benefits of new forms of AI for all internet users, not just the same, select few. 

You have long standing experience moderating or managing teams of moderators for media organisations. What are the biggest insights about moderation you’ve taken away from these experiences?

I learnt that moderation isn’t dissimilar from being an editor but is conferred a very different status, both in newsrooms and in wider society. If you think about it, both involve making decisions on the fly, applying standards and weighing up multiple points of view. But if you go to a party and get asked what you do, the reaction will likely be very different depending on which one you say. That’s interesting to me.


We both agree that moderation has a PR problem and is often misunderstood or marginalised as a practice. In your view, how can we fix this?

It’s a big and multi-faceted problem, but I’d like the media to cover online speech issues with greater depth and nuance.

At the moment, news organisations and publishers tend to skimp on technology reporting and simplify stories relating to content moderation to the lowest common denominator. That’s frustrating to me because it ignores the fact that their readers are butting up against these questions every day: when there are accusations of moderator censorship in their local Nextdoor group, when a tweet they report is deemed acceptable, when an algorithm recommends inappropriate content to them, or when they can’t find an app on their preferred store because Apple or Google has suspended it for a breach of terms.

These are all content moderation issues in the broadest sense and they require thoughtful and sensitive coverage to help readers understand what they mean. Only when that happens will the perception of moderation as ‘janitorial work’ begin to shift. 

Everything in Moderation was a sponsor of All Things in Moderation 2023, ACM’s global conference for humans that moderate.

Venessa Paech

Venessa is Director and Co-Founder of ACM.

http://www.venessapaech.com