Opinion

An update on Southeast Missourian's changes in moderating comments

The following comment was posted Sunday, Feb. 17, on a semissourian.com Speak Out forum. Legitimate? Civil? Important for community discussion?

"Wow, Joe Biden is overseas telling Europeans that America is an embarrassment! You do not speak for us, Joe. There are millions of proud, liberty loving Americans who think YOU are the embarrassment! Yes, Google god I think that this is a civil comment. Don't you think that this is not what an ex-VP needs to be saying overseas."

Three months ago semissourian.com began a pilot project with the Google subsidiary Jigsaw to use machine-learning technology, called Perspective API, to help us moderate comments. At the same time, the newspaper increased the number of people and hours of the day it scheduled to review comments.
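
For the technically curious, scoring a comment amounts to a single web request to the Perspective API, which returns a "toxicity" probability between 0 and 1. The sketch below is illustrative only; the request format follows Google's public documentation, but the API key is a placeholder and this is not the newspaper's production code.

```python
# A minimal sketch of scoring one comment with the Perspective API.
# The request shape follows Google's public documentation; the API key
# is a placeholder, not the newspaper's actual credentials.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def score_toxicity(text):
    """Return the Perspective API TOXICITY summary score (0.0 to 1.0)."""
    body = json.dumps({
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    request = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```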

The Southeast Missourian set up three internal categories for applying the technology. Comments below a certain threshold of "toxicity" were automatically posted online without any other action. Let's call these "benign."

Comments in a middle band of "toxicity" were posted automatically, but only after the commenter received a prompt to consider the civility of his or her message. Once posted (if still above the threshold), the scoring of such comments initiated a review by Southeast Missourian moderators. If one of them deemed a comment outside the boundary of community standards, it was removed. So far, few of these comments have been removed. Let's call these "reviewed after posting."

The final category was for comments that scored above a higher threshold, where experience led us to believe there might be problems. These comments were not immediately posted online but instead were put into a queue to be reviewed by Southeast Missourian moderators in advance of publishing. If deemed okay, they were posted onsite. If not, they were not posted. (Note: Profane or particularly egregious comments could lead to a commenter being banned, even if not posted.) The commenter was also given an opportunity to rework his or her comment to bring it beneath the maximum threshold, which almost all did so that the comment could be posted immediately. Let's call these "review before posting."
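
Expressed in code, the three categories amount to comparing a comment's score against two cutoffs. The thresholds in the sketch below are illustrative placeholders, not the newspaper's actual settings, which we keep internal and adjust with experience.

```python
# A sketch of how a toxicity score is routed into the three internal
# categories. The threshold values are illustrative placeholders only.
REVIEW_AFTER_THRESHOLD = 0.5   # placeholder lower bound of the middle band
REVIEW_BEFORE_THRESHOLD = 0.8  # placeholder upper threshold

def categorize(score):
    """Map a toxicity score to one of the three moderation categories."""
    if score < REVIEW_AFTER_THRESHOLD:
        return "benign"                  # post immediately, no further action
    if score < REVIEW_BEFORE_THRESHOLD:
        return "reviewed after posting"  # post, then flag for moderator review
    return "review before posting"       # hold in queue until a moderator approves
```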

Reminder: In all cases of comments being removed or not posted, the ultimate decision rests with Southeast Missourian staff. The Google technology is there only to assist the newspaper in managing the flow -- and scrutiny -- of the high number of comments we receive. Without the technology, we would not be able to moderate as quickly.

The project continues, but here are some early results. Toward the end of this column, I'll return to the opening quote.

To conduct our analysis, we ran all comments submitted to semissourian.com since 2016 through the Perspective API, which allowed us to compare trends across a number of different time periods.

First, since we launched the system, the toxicity of comments has dropped precipitously. For example, comments at the "reviewed after posting" level dropped in frequency by 59 percent, from 11.78 percent of all comments to 4.88 percent. Comments submitted at the "review before posting" level dropped 96 percent, from a 3.94 percent frequency to a 0.15 percent frequency. Meanwhile, an even higher "toxic" level that we looked at dropped to virtually zero, from 0.53 percent of all comments to 0.04 percent.
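
For readers who want to check the arithmetic, those drops are relative changes in frequency, computed from each band's share of all comments before and after launch:

```python
# How the relative drops quoted above follow from the band frequencies
# (each figure is a percentage of all comments in that period).
def relative_drop(before, after):
    """Percent decrease in frequency from the earlier to the later period."""
    return (before - after) / before * 100

print(round(relative_drop(11.78, 4.88)))  # ~59 ("reviewed after posting")
print(round(relative_drop(3.94, 0.15)))   # ~96 ("review before posting")
```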

Clearly, the system, in conjunction with more resources dedicated to moderation, is making a difference. On a highly positive note to me personally as publisher, the system has also curtailed the number of egregious comments that lived on the site until they were brought to our attention through community notifications -- which, previously, is what initiated moderation in most cases.

I'll be sharing more data and analysis in the future, as well as results from a survey conducted before the system was launched and another conducted recently. But for today, let me return to the opening quote.

This comment originally scored in the "reviewed after posting" category, which means it would have gone online immediately and been reviewed by moderators afterward. But it was on the very edge of the "review before posting" threshold. In fact, the commenter made several changes, which caused the comment -- and the message the commenter received from our system -- to switch between categories a couple of times, before the commenter settled on what you see (with the added critique of a "Google god").

Why did the system flag the comment? It was the phrase "America is an embarrassment." The Perspective API scored this as being more sensitive and thus -- given how we're using the system -- important to review. Also pushing the toxicity score higher was the phrase "you are the embarrassment." The name Joe Biden had little effect on the scoring.
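
For those curious how one can tell which phrases drive a score, the Perspective API can also annotate individual spans of a comment with their own scores. The sketch below reflects our reading of Google's public documentation (the span-annotation option in particular); it is illustrative only, and the API key is a placeholder.

```python
# A sketch of asking the API which spans of a comment push the score up.
# Based on the publicly documented request format; not our production code.
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def span_scores(text):
    """Return the overall toxicity score and a list of (phrase, score) spans."""
    body = json.dumps({
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
        "spanAnnotations": True,  # ask for per-span contributions
    }).encode("utf-8")
    request = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    toxicity = result["attributeScores"]["TOXICITY"]
    spans = [(text[s["begin"]:s["end"]], s["score"]["value"])
             for s in toxicity.get("spanScores", [])]
    return toxicity["summaryScore"]["value"], spans
```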

I have no stunning conclusions for you today -- or broader comments. Those will come later. This is an update, which I hope reinforces your sense that we are constantly trying to improve what we do, and that we take the opportunity and responsibility of allowing comments seriously. It also begins to underline the complexity of moderating comments through machine learning. Much more to come.

Previous column: Southeast Missourian joins Google in test to elevate civility

Jon K. Rust is publisher of the Southeast Missourian and president of Rust Communications.
