
Designing Community Jury Duty

By Rob Malda

The Slashdot article selection process required us to maintain a sort of memory about users, to identify who was an independent actor and who was acting in the employ of a specific publication. Moderation of comments, however, was a different challenge entirely: not only was this work shared between users and paid staffers, but the volume and pacing of comments was far beyond what any single person could manage.

At the core of the Slashdot moderation system was the notion of “Jury Duty.” No court lets jurors select which cases they want, yet almost every moderation system does just that. But Slashdot users never knew when or where they would be called on to participate.
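
The mechanics are simple enough to sketch. Below is a minimal illustration in Python of what random jury selection might look like; the field names and thresholds are hypothetical rather than Slashdot's actual schema, but the key property holds: the system taps eligible readers instead of letting anyone volunteer for a particular case.

```python
import random

# Hypothetical user record; the field names are illustrative, not Slashdot's actual schema.
class User:
    def __init__(self, uid, karma, reads_regularly, willing_to_moderate):
        self.uid = uid
        self.karma = karma
        self.reads_regularly = reads_regularly          # visits often, posts occasionally
        self.willing_to_moderate = willing_to_moderate  # hasn't opted out of Jury Duty

def pick_jury(users, jury_size=50, tokens_per_moderator=5):
    """Randomly hand moderation points to eligible users.

    Nobody applies for the role and nobody picks their "case"; the system
    taps readers in good standing, which keeps self-selected axe-grinders
    from dominating moderation.
    """
    eligible = [u for u in users
                if u.willing_to_moderate and u.reads_regularly and u.karma >= 0]
    jury = random.sample(eligible, min(jury_size, len(eligible)))
    return {u.uid: tokens_per_moderator for u in jury}
```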

This created a host of workflow issues, starting with the fact that it split participants into two classes. One class (employees) could be trusted with meta information not publicly available to users, because their position was compensated: any abuse on their part would stop that sweet paycheck. (They had also signed a contract to that effect.) The other class was general users who took turns moderating. These users varied wildly in how much they knew about the people they were moderating. Some took their roles seriously, researching users and weighing whether a comment was actually “Funny” or “Interesting.” Others would just bang through the task at hand as fast as possible and get on with their day.

Tools for users on Jury Duty allowed them to step into a user profile and view a post history: they could quickly discern whether a user posted mostly about a particular subject, pushed one viewpoint, or was just generally socially inappropriate. In practice, though, it didn’t seem that many user-moderators took that extra step.

We restricted access to some information for non-staff. For example, a general user obviously had no need to know the IP address of a comment poster. Although we hashed IP addresses to force anonymity among users, editors could quickly identify when a single subnet or IP was a proxy server representing an office or ISP, or when it was a single user trying to make their viewpoint appear more important by pretending to be legion.
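
The hashing scheme is easy to illustrate. The sketch below is a rough approximation in Python, not the code we ran: a salted hash of the full address gives user-moderators an opaque token, while a second hash over just the subnet prefix lets staff notice when a pile of “different” accounts all arrive from the same place.

```python
import hashlib
from collections import Counter

SALT = "rotate-me-periodically"  # illustrative; any per-deployment secret works

def anon_token(ip: str) -> str:
    """Hash the full IPv4 address so user-moderators only ever see an opaque token."""
    return hashlib.sha256((SALT + ip).encode()).hexdigest()[:10]

def subnet_token(ip: str) -> str:
    """Hash only the /24 prefix so staff can group comments by subnet."""
    prefix = ".".join(ip.split(".")[:3])
    return hashlib.sha256((SALT + prefix).encode()).hexdigest()[:10]

def flag_busy_subnets(comment_ips, threshold=5):
    """Return subnet tokens with an unusually high comment count.

    A big office proxy or ISP will trip this legitimately; one user
    pretending to be legion from a single connection trips it too,
    and an editor can usually tell the two apart from context.
    """
    counts = Counter(subnet_token(ip) for ip in comment_ips)
    return {token: n for token, n in counts.items() if n >= threshold}
```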

In hindsight, there was probably a lot of useful user information that wasn’t being surfaced and that could have been helpful. For example, a user’s “Karma” (at its core, simply the sum of all positive and negative moderation points spent on a user) was a decent indicator of whether a user was a good contributor, but it would have been very useful to surface other data points such as posting velocity or subject expertise.

For instance, if a user posted 80% of their comments in stories tagged with ‘Encryption’, that could tell you something useful when it came to moderating their comments in that area. If a user posted nine comments today but averaged 0.01 comments a day over the last year, that could hint that something unusual was happening. Similarly, if someone had been posting a dozen comments every day across all subjects, the moderator might benefit from knowing that this person just likes to jaw.
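
A sketch of what that might have looked like, assuming a hypothetical comment record with a point total, a tag, and a timestamp (none of this is the actual Slashdot schema): it rolls karma, posting velocity, and topic concentration into a single summary a moderator could glance at.

```python
from collections import Counter
from datetime import datetime, timedelta

def user_signals(comments, now=None):
    """Summarize one user's history for a moderator.

    `comments` is a list of dicts with illustrative keys:
    {"points": int, "tag": str, "posted_at": datetime}.
    """
    now = now or datetime.utcnow()
    karma = sum(c["points"] for c in comments)  # sum of all moderation points, as above

    year_ago = now - timedelta(days=365)
    recent_year = [c for c in comments if c["posted_at"] >= year_ago]
    daily_avg = len(recent_year) / 365.0
    today = [c for c in comments if c["posted_at"] >= now - timedelta(days=1)]
    # Nine posts today against a 0.01-per-day average should stand out.
    velocity_spike = len(today) > max(5, 10 * daily_avg)

    tags = Counter(c["tag"] for c in comments)
    top_tag, top_count = tags.most_common(1)[0] if tags else (None, 0)
    topic_share = top_count / len(comments) if comments else 0.0

    return {
        "karma": karma,
        "velocity_spike": velocity_spike,              # posting far above their usual pace?
        "top_tag": top_tag,                            # e.g. 'Encryption'
        "topic_concentration": round(topic_share, 2),  # 0.8 means 80% of posts in one tag
    }
```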

We spent a lot of time developing notions like these but never really saw them through.

Rob “CmdrTaco” Malda is the creator and user #1 of the popular News for Nerds site Slashdot.org. He spent many fruitful years there developing some of the first large-scale, community-driven discussion systems and crowdsourced news systems. After more than a decade he left, and has since logged time working for Washington Post Labs and developing a news app known as Trove.

