
The Essential Guide to Content Moderation

by Emily Heaslip, September 13, 2021

According to a recent CNBC report, Google has seen a rise in posts flagged for racism or abuse on its internal message boards, prompting the company to ask employees to take a more active role in moderating them. 

That’s one way to handle content moderation. But, it also takes an employee’s time and attention away from higher-value tasks. 

Many companies address internal harassment through training and stronger HR policies. While that approach helps, remote work has increased our use of channels like Slack, expanding the domains where HR policies must be applied. This complicates both policy implementation and enforcement, and it can introduce new forms of harassment. 

Content moderation can help HR teams identify and address issues of inappropriate content or harassment. This guide will provide resources and tools to help HR teams protect their coworkers from profanity and harassment efficiently and productively. 

What is content moderation?

Content moderation refers to a group of policies and practices that cover what should and shouldn’t be shared on company systems. Things like profanity, toxicity, and harassment are commonly covered by content moderation policies such as: 

  • Code of Conduct: a statement that informs employees and external partners using an organization’s platforms and communication channels of the standards for behavior and values. 
  • Acceptable Use Policy: a document that outlines the rules to be followed by users of a particular platform or communications channel and states what a user can and can’t do with a particular resource. 

Content moderation policies act as guidelines to determine what type of content is permissible to share in an organization. Content moderation not only supports a collaborative and inclusive work culture but can also reduce the risk of insider threat that comes from sharing inappropriate content.  

Why is content moderation important?

Despite the protections and resources offered by bodies such as the Equal Employment Opportunity Commission (EEOC) and the Workplace Bullying Institute, workplace harassment continues to rise in the U.S. In 2019, more than 90% of employees said they’d been bullied at work. 

The problem is multi-faceted, and technology is undoubtedly a contributing factor to the recent increase in harassment. As more companies shift to working remotely, tools like Slack and other collaboration platforms introduce new avenues for abuse to take place unchecked. 

Toxic content and messages poison any work environment, ruin the employee experience, and damage your organization’s reputation. Content moderation is a key aspect of building an inclusive work culture, and companies that neglect it can face financial consequences, too. In one study, companies with a supportive workplace had 2.3x higher cash flow per employee over a three-year period and were 1.7x more likely to be “innovation leaders” in their sector. 

How to practice content moderation

There are a few different approaches to content moderation that organizations can use to reduce the risk of harassment and bullying in the workplace. Here are a few of the more common options. 

Pre-moderation

As the name suggests, a pre-moderation approach queues all user submissions for review before they can be displayed on a site or channel. This is a very time-consuming approach, as every piece of text, image, or video must be approved by a moderator before it is shared. This strategy works best for public sites and forums like Facebook, rather than internal channels like Slack. 
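To make the mechanics concrete, here is a minimal sketch of a pre-moderation queue in Python. Everything in it is hypothetical: the placeholder blocklist, the data classes, and the review step stand in for whatever policy engine and storage a real platform would use. The key property is that nothing is published until a moderator signs off.

```python
# A minimal pre-moderation queue: nothing is published until a human
# moderator approves it. The blocklist is a hypothetical helper that
# highlights obvious violations for the reviewer.
from dataclasses import dataclass, field
from typing import List

BLOCKLIST = {"slur_example", "threat_example"}  # placeholder terms


@dataclass
class Submission:
    author: str
    text: str


@dataclass
class PreModerationQueue:
    pending: List[Submission] = field(default_factory=list)
    published: List[Submission] = field(default_factory=list)

    def submit(self, submission: Submission) -> None:
        # Submissions never go live directly; they only enter the review queue.
        self.pending.append(submission)

    def flagged_for_review(self) -> List[Submission]:
        # Surface likely violations so moderators can prioritize them.
        return [s for s in self.pending
                if any(term in s.text.lower() for term in BLOCKLIST)]

    def review(self, submission: Submission, approve: bool) -> None:
        # A moderator's decision is what actually publishes (or drops) a post.
        self.pending.remove(submission)
        if approve:
            self.published.append(submission)


queue = PreModerationQueue()
queue.submit(Submission(author="jordan", text="Welcome to the new channel!"))
queue.review(queue.pending[0], approve=True)
```

The same structure also covers post-moderation, described next: publish the submission immediately on submit and run the review step afterward.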

Post-moderation

Users are able to post their submissions immediately, but the message is simultaneously added to a queue for review. Unfortunately, this approach is still labor-intensive, as someone must continuously review every post for appropriateness. This approach is best for public sites where user engagement is a priority, but some content moderation is still required. 

Reactive moderation

Reactive moderation relies on users to flag content they find offensive or that they know breaches community guidelines. Because users self-police, moderators can focus on the content that requires the most attention. 

Reactive moderation is perhaps the most common type of content moderation for internal sites like Slack or Microsoft Teams. But, there is still the risk of content being shared that slips through the cracks and damages the brand. 
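As a rough illustration of how reactive moderation can be wired up in Slack, the sketch below uses Slack’s Bolt for Python framework to forward any message that receives a flag reaction to a private moderators’ channel. The reaction name, channel name, and environment variables are assumptions made for this example, not a required setup.

```python
# Reactive moderation sketch using Slack's Bolt for Python framework:
# when someone adds a :triangular_flag_on_post: reaction, the flagged
# message is copied into a moderators' channel for review.
# Requires: pip install slack_bolt, plus a bot token and signing secret.
import os

from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

MODERATION_CHANNEL = "#moderation-queue"  # assumed channel name


@app.event("reaction_added")
def handle_flag(event, client):
    # Only act on flag reactions added to messages.
    if event["reaction"] != "triangular_flag_on_post":
        return
    item = event["item"]
    if item.get("type") != "message":
        return
    # Look up the flagged message so moderators can see it in context.
    history = client.conversations_history(
        channel=item["channel"], latest=item["ts"], inclusive=True, limit=1
    )
    if not history["messages"]:
        return
    flagged_text = history["messages"][0].get("text", "")
    client.chat_postMessage(
        channel=MODERATION_CHANNEL,
        text=f"Message flagged in <#{item['channel']}>: {flagged_text}",
    )


if __name__ == "__main__":
    app.start(port=3000)
```

In practice the bot would need the appropriate scopes (for example, reading channel history and posting messages), and most teams would pair it with an escalation or audit process for the moderators’ channel.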

Supervisor moderation

An individual or group of moderators is tasked with scanning and flagging inappropriate content, deleting submissions, and restricting access for users who abuse their privileges. This method is scalable as the community grows, but can get expensive quickly. Plus, the company runs the risk of human error — a moderator may miss something that can still negatively impact the employee experience.

Unfortunately, these approaches are all imperfect. There is a better option: automatic content moderation.

How to automate content moderation

Many companies don’t actively practice content moderation because it’s seen as too time-consuming and resource-intensive. Content moderation can quickly become complex as the organization’s digital footprint expands across multiple collaborative, cloud-based applications. Without an automated platform that can scan the apps your workforce uses, it’s hard to identify threats to culture and employee safety. 

Automating content moderation with a tool like Nightfall’s cloud DLP allows you to customize scans for terms that don’t belong on your company’s cloud apps and block those messages from being shared across the company. Nightfall DLP can find not only information that’s at risk, but also inappropriate messages or files that shouldn’t be shared. 

The best part? It all happens without the need for a team of moderators. Nightfall’s machine-learning engine automatically detects toxic content before it can spread to other people or systems. From there, you can delete the messages in question and send notifications educating users on why this type of conduct isn’t allowed at work. 
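For a sense of what that scan-and-remediate workflow looks like, here is a simplified, hypothetical sketch: check each message against a custom term list, and if it violates policy, delete it and send the author an educational notification. The patterns, the `chat_client` object, and its methods are illustrative placeholders, not Nightfall’s actual API, which also layers machine-learning detectors on top of simple term matching.

```python
# Hypothetical automated moderation flow (not Nightfall's actual API):
# scan an incoming message, and if it matches a custom term list,
# delete it and notify the author with policy guidance.
import re
from typing import Optional

CUSTOM_TERMS = [r"\bslur_example\b", r"\bthreat_example\b"]  # placeholder patterns


def find_violation(text: str) -> Optional[str]:
    """Return the first matching policy pattern, or None if the text is clean."""
    for pattern in CUSTOM_TERMS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            return pattern
    return None


def moderate_message(message_id: str, author: str, text: str, chat_client) -> bool:
    """Delete a violating message and educate the author. Returns True if removed."""
    violation = find_violation(text)
    if violation is None:
        return False
    # chat_client is a stand-in for whichever collaboration platform's API
    # the organization uses (Slack, Teams, etc.).
    chat_client.delete_message(message_id)
    chat_client.send_direct_message(
        author,
        "A message you posted was removed because it conflicts with our "
        "acceptable use policy. Please review the code of conduct.",
    )
    return True
```

Deleting the message and notifying the author in the same step keeps remediation consistent and turns each incident into a teaching moment rather than a silent removal.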

Learn more about how Nightfall can keep your information secure by scheduling a demo.
