
Predominance of toxic comments can damage publishers’ credibility: How to identify and deal with trolls effectively

Trolls can ruin the way consumers relate to a brand. A 2019 study found that almost 50% of people would remove themselves from a situation where they have experienced incivility.

Pew Research Center found that roughly four-in-ten Americans have personally experienced online harassment, and 62% consider it a major problem.

“Trolls have the power to ruin the way consumers see your brand”

Another study, “Attacks in the Comment Sections: What It Means for News Sites,” by the Center for Media Engagement at the University of Texas, found that the predominance of toxic comments also damages publishers’ credibility. 

According to the researchers, “People who viewed news stories with only uncivil comments had less positive attitudes toward the site, and saw it as less valuable compared to those who saw stories with only civil comments.”

Trolling has continued to increase in recent years. The anxiety arising out of the pandemic and the fact that people are consuming more online content as they stay at home have added fuel to the fire.

We are swimming in a cesspool of misinformation. The pandemic likely makes it worse because increased levels of uncertainty create the kinds of environments that trolls take advantage of.

Jevin West, Associate Professor at the University of Washington’s Information School

“No matter how interesting and reliable your content is, trolls have the power to ruin the way consumers see your brand,” says Jesse Moeinifar, CEO and Founder of audience engagement solutions provider Viafoura.

A 2017 Pew Research Center study found that 27% of Americans decided not to post something online after witnessing the harassment of others, and 13% said they stopped using an online service after witnessing other users engage in harassing behaviors.

“Without even knowing it, anyone can become a troll”

Viafoura has published a new report, “Everyone is a troll,” that takes an in-depth look into this menace and presents solutions that will help publishers identify and deal with trolls effectively.

“If you want to protect your company’s content and audience against trolls, moderation is your best line of defense,” comments Moeinifar. 

However, it’s important to have a nuanced approach, as even well-meaning people can at times give in to uncivil behavior. 

“Large, polarizing events — like the U.S. election coming up this November — create a perfect storm for trolling,” the report states. “Partisan opinions and controversial topics are a hotbed for triggering conversations. In these cases, people often think irrationally or make quick judgments about others.” 

Without even knowing it, anyone can become a troll — the real-time nature of social spaces can elicit a knee-jerk reaction from consumers. This behavior can even become infectious as people respond to other trolls, causing conversations to escalate.

Source: Everyone is a troll, Viafoura

“Communities that have sophisticated moderation in place see significant on-site engagement growth”

“Although you can’t stop someone from acting as a troll, you can make it much harder for them to create offensive content and interact with your brand’s community — whether through comments on articles or live chats,” the report states.

“Communities that have sophisticated moderation in place see significant on-site engagement growth: including 62% more user likes, 35% more comments per user and 34% more replies per user,” it adds.

The first step is identifying the trolls, and they can often be spotted by the volume of their comments. There are two main types: the first creates multiple user accounts, making toxic comments, getting banned and then creating a new account to continue posting. These users can be very persistent and will refuse to abide by the rules even when those rules have been clearly explained.

The other type maintains a single account and has a high volume of comments and flags. Generally, such users are “passionate people who sometimes go over the line. Typically, a temporary ban and explanation of why they were banned is enough to discourage their bad behavior,” according to the report.

Another way of identifying trolls is to check the disable rate of a user’s posts, i.e. the share of their comments that moderators have had to disable. A disable rate of over 10% likely indicates a troll.
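As a rough illustration, this heuristic can be expressed in a few lines of code. The sketch below assumes hypothetical per-user statistics (total comments, disabled comments, flags received) and pairs the report’s ~10% disable-rate threshold with an arbitrary minimum comment volume; it is not taken from Viafoura’s product or any specific moderation tool.

```python
from dataclasses import dataclass

# Hypothetical per-user moderation stats; field names are illustrative.
@dataclass
class UserStats:
    user_id: str
    total_comments: int
    disabled_comments: int  # comments moderators had to disable
    flags_received: int     # times other users flagged this user

def disable_rate(stats: UserStats) -> float:
    """Share of a user's comments that moderators have disabled."""
    if stats.total_comments == 0:
        return 0.0
    return stats.disabled_comments / stats.total_comments

def looks_like_troll(stats: UserStats,
                     rate_threshold: float = 0.10,   # ~10% per the report
                     min_comments: int = 20) -> bool:  # arbitrary volume floor
    """Flag users whose disable rate exceeds the threshold on a
    meaningful volume of comments."""
    return (stats.total_comments >= min_comments
            and disable_rate(stats) > rate_threshold)

# Example: 6 of 40 comments disabled -> 15% disable rate -> flagged for review
suspect = UserStats("user_123", total_comments=40,
                    disabled_comments=6, flags_received=9)
print(looks_like_troll(suspect))  # True
```

In practice the threshold and minimum volume would be tuned to the community’s size and tone rather than hard-coded.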

Source: Everyone is a troll, Viafoura

Types of moderation

Moderation tools can be very helpful in identifying trolls as they allow publishers to see the queue of new users, their activity, comments and disable rates all in one place. Moderation may also be done manually, but that can be resource-intensive and time-consuming. Here are the different types of moderation listed in the report.

  1. User-to-user moderation: This refers to the ability of users to report or flag others’ comments. It is useful because part of the job is done by the users themselves. However, it is important to have clear community guidelines for them to follow.
  2. Human moderation: This can be incredibly time-consuming and expensive. However, it has value because humans are able to catch harassment or incivility that can’t always be picked up by automated systems. The report recommends a combination of human and machine-based moderation that would allow the former to focus on toxic comments that slip through automated systems.
  3. Automated moderation: This uses natural language processing or machine learning to identify and flag uncivil comments. The algorithm can be customized according to a publisher’s community guidelines and trained on the work of human moderators (a rough sketch of this hybrid approach follows this list).
  4. Full-service moderation: Publishers that don’t have the time or resources for moderation can also outsource the work. There are risks in this: for example, the service provider may use human moderators who are not native speakers of the language they are moderating, or who are unaware of the context of certain messages. This can make them ineffective, so adequate due diligence is required before choosing a vendor.
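Here is a minimal sketch of how automated and human moderation (options 2 and 3 above) might be combined: obvious toxicity is disabled automatically, borderline comments are routed to a human review queue, and everything else is published. The blocklist scorer and the thresholds are placeholder assumptions for illustration; a real system would use an NLP model trained on past moderator decisions and the publisher’s guidelines.

```python
# Illustrative blocklist; a production system would use a trained classifier.
BLOCKLIST = {"idiot", "moron", "scum"}

def toxicity_score(comment: str) -> float:
    """Toy scorer: fraction of words that hit the blocklist."""
    words = comment.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in BLOCKLIST for w in words) / len(words)

def route_comment(comment: str,
                  auto_disable_at: float = 0.30,   # assumed threshold
                  human_review_at: float = 0.05) -> str:
    """Auto-disable obvious toxicity, send borderline cases to human
    moderators, and publish the rest."""
    score = toxicity_score(comment)
    if score >= auto_disable_at:
        return "disabled"
    if score >= human_review_at:
        return "human_review"
    return "published"

print(route_comment("Great reporting, thanks!"))          # published
print(route_comment("You absolute idiot, this is scum"))  # disabled
```

The design choice the report argues for is visible in the middle branch: the automated layer handles the clear-cut cases so human moderators can focus on the comments that slip through.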

“Core of any troll-hunting initiative”

No matter which of the above options is taken by a publisher, “well-written community guidelines should be at the core of any troll-hunting initiative,” the report suggests. “When banning accounts, you want to be able to reference guidelines that are crystal clear, easily available, visible to visitors and in-depth.”

It recommends the following best practices for setting effective community guidelines:

  1. Protect user information: Remind users not to post personal information about themselves or others, and commit to removing any posts that include such information on users’ behalf.
  2. Deal with illegal content: Comments that appear legally objectionable or that encourage or condone a criminal offense or any form of violence or harassment should not be tolerated.
  3. Proactively whitelist or blacklist websites: Users should be informed that anything that looks/acts like spam will be removed and blacklisted. 
  4. Enforce community guidelines: Don’t make empty threats; ban users who violate community guidelines. “If you don’t ban them for fear of lower numbers, you could lose an even higher number of your engaged users,” the report states. Publishers can follow a tiered system in which the length of the ban increases with subsequent violations (see the sketch after this list), and banned users should be given a clear explanation for the step.
  5. Delete spam posts: Similar to #3, publishers can also consider banning users who repeatedly post spam. 
  6. Abuse has different forms: Abuse can go beyond name-calling; certain users harass others by repeatedly flagging their posts even when those posts do not break any rules. Leverage a user’s historical information to make a judgment call.
  7. Make unacceptable content crystal clear: Create a clear, unassailable description in the community guidelines of what is or isn’t allowed in comments.
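The tiered banning idea in point 4 can also be sketched in a few lines. The tier durations and message wording below are illustrative assumptions, not prescriptions from the report; the point is that each repeat violation escalates the ban and that every ban cites the guideline that was broken.

```python
from datetime import timedelta

# Assumed escalation ladder: 1 day, 7 days, 30 days, then permanent.
BAN_TIERS = [timedelta(days=1), timedelta(days=7), timedelta(days=30)]

def apply_ban(violation_count: int, guideline: str) -> str:
    """Return a user-facing ban notice for the Nth violation (1-indexed).
    After the last tier, the ban becomes permanent."""
    tier = violation_count - 1
    if tier < len(BAN_TIERS):
        length = f"{BAN_TIERS[tier].days} day(s)"
    else:
        length = "permanent"
    return (f"Your account has been banned ({length}) for violating our "
            f"community guideline: {guideline}")

print(apply_ban(1, "no personal attacks"))  # 1-day ban with explanation
print(apply_ban(4, "no personal attacks"))  # permanent ban with explanation
```
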

“Hit trolling from all possible angles”

Finally, the publisher should let users know that it reserves the right to remove or edit comments, and permanently block those who violate any of the terms and conditions. This umbrella statement would give the publisher complete control over the content the community produces.

Implementing strong identification methods, moderation tools, and banning procedures will help publishers make their websites safe and enjoyable for all contributors. 

A holistic moderation approach will “hit trolling from all possible angles, keeping you and your audience protected and empowered to keep sharing,” the report concludes.

The full report, “Everyone is a troll,” can be downloaded from Viafoura.