Moderation standard

in #steem-standards

This is a proposal for a new standard that allows front-end clients to adjust how comments are displayed in the UI (e.g. allowing auto-hiding) based on the moderation of accounts optionally selected by the creator of a post.

Moderation standard

By default, all discussion threads on steemit.com are unmoderated other than by the up and down votes of SP (Steem Power) holders. This proposal provides an option for the creators of posts to opt in to a moderator account that has the ability to add metadata to each post (via a special reply comment) which will be recognized and respected by front-ends like steemit.com.

First, a small change to the blockchain

Because the metadata is added by a reply comment, comments at the deepest level allowed by the blockchain cannot be moderated. Therefore, as a necessary precondition to this proposal, I first propose that the devs increase the max comment depth enforced by the blockchain from 6 to 7, while steemit.com keeps its max comment depth for replying to comments at 6. Any comment reply with a depth of 6 (meaning the seventh level down, since depth uses 0-based indexing), for example one created with an alternative client such as the CLI wallet, would not be displayed as a comment in the discussion thread on steemit.com.
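
To make this concrete, here is a minimal TypeScript sketch of the filtering a client like steemit.com could apply; the Comment shape here is just an assumption for illustration, not a blockchain API:

```typescript
// A minimal sketch (the Comment shape is an assumption, not a blockchain API):
// the client would render replies up to depth 5 and skip depth-6 comments,
// which exist only so that depth-5 comments can receive moderation replies.
interface Comment {
  author: string;
  permlink: string;
  depth: number; // 0-based; top-level posts have depth 0
}

function isRenderedAsReply(comment: Comment): boolean {
  return comment.depth <= 5;
}
```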

How to determine the valid set of moderators for each target post

Anyone can create a reply comment that follows the metadata standard for moderation, but such comments would only be considered by the front-end if their author's account name is in the set of approved moderators for the target post being moderated. This set is generated dynamically in one of two ways:

  1. If the top-level post of the discussion thread (depth == 0) has set the moderation.allow_submoderation field to true in its JSON metadata, then the client first initializes the approved moderator set for the target post to the empty set and then walks down the branch from the top-level post to the target post. For each post p along the branch, it takes the array value of the moderation.moderators field of p's JSON metadata and iterates through the account name strings in it; for each corresponding account, if that account is not already in the approved moderator set for the target post, it inserts a tuple of the account and an integer priority into the set, with the priority set to p.depth. (A rough code sketch of this procedure is given at the end of this section.)
  2. Otherwise, the set of approved moderators for every post/comment in the discussion thread consists of the accounts listed in the array value of the moderation.moderators field in the JSON metadata of the top-level post of the discussion thread, each with an associated priority of 0.

In either case, if an account is on a client-side list (called the moderation blacklist), it will not be inserted into the set of approved moderators. A refinement of this is also possible in which there are multiple moderation blacklists, each specific to a certain tag (or later a specific streem, but more on that in a future proposal).

Notice that in case 1 (when moderation.allow_submoderation is true for the top-level post of a discussion thread), commenters can add additional moderators for the discussion sub-tree branching off their comment. However, the moderators imposed on the comment by its ancestors up the discussion tree still have moderation power over comments in that sub-tree. In fact, they have higher priority (a lower value of the priority integer means higher priority), meaning that moderation by moderators defined higher up the discussion tree overrules that of moderators defined lower down.
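
Here is the sketch referenced above of how a client could resolve the approved moderator set. The Post shape, the way the JSON metadata is exposed as TypeScript properties, and the ModeratorEntry helper type are assumptions made for illustration:

```typescript
interface Post {
  depth: number;
  jsonMetadata: {
    moderation?: {
      allow_submoderation?: boolean;
      moderators?: string[];
    };
  };
}

interface ModeratorEntry {
  account: string;
  priority: number; // lower value means higher priority
}

function approvedModerators(
  branch: Post[],          // from the top-level post down to (and including) the target post
  blacklist: Set<string>   // client-side moderation blacklist
): ModeratorEntry[] {
  const topLevel = branch[0];
  const result = new Map<string, ModeratorEntry>();

  const addFrom = (post: Post, priority: number) => {
    for (const account of post.jsonMetadata.moderation?.moderators ?? []) {
      if (blacklist.has(account)) continue;   // blacklisted accounts are never approved
      if (!result.has(account)) {
        result.set(account, { account, priority });
      }
    }
  };

  if (topLevel.jsonMetadata.moderation?.allow_submoderation === true) {
    // Case 1: each post along the branch may add moderators,
    // with priority equal to that post's depth.
    for (const post of branch) addFrom(post, post.depth);
  } else {
    // Case 2: only the top-level post's moderators apply, each with priority 0.
    addFrom(topLevel, 0);
  }

  return [...result.values()];
}
```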

Moderation post

A moderator can moderate any post in a discussion thread if they have authorization to do so, i.e. they are in the approved moderator set for the target post (see the section above), by creating a reply to that post which sets the moderation.moderation_post field of its JSON metadata to true. If there is more than one valid moderation post in reply to a target post (valid meaning properly formed according to the standard and authored by an account with authorization), then a single moderation post is selected as the one relevant for the target post: choose the moderation post whose author has the lowest priority value (i.e. highest priority), and in case of ties choose, from the tying moderation posts, the one with the largest last_update field (i.e. the most recently updated).
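
A sketch of that selection rule follows, reusing the ModeratorEntry output of the earlier sketch; the ModerationReply shape and its lastUpdate property are illustrative assumptions:

```typescript
interface ModerationReply {
  author: string;
  lastUpdate: string;   // the reply's last_update timestamp (ISO 8601, sorts lexicographically)
  jsonMetadata: {
    moderation?: { moderation_post?: boolean };
  };
}

function selectModerationPost(
  replies: ModerationReply[],
  approved: ModeratorEntry[]   // output of approvedModerators() above
): ModerationReply | undefined {
  const priorityOf = new Map<string, number>();
  for (const m of approved) priorityOf.set(m.account, m.priority);

  // Only properly flagged replies by authorized moderators are considered.
  const valid = replies.filter(r =>
    r.jsonMetadata.moderation?.moderation_post === true && priorityOf.has(r.author)
  );

  // Lowest priority value wins; ties go to the most recently updated reply.
  valid.sort((a, b) => {
    const pa = priorityOf.get(a.author)!;
    const pb = priorityOf.get(b.author)!;
    return pa !== pb ? pa - pb : b.lastUpdate.localeCompare(a.lastUpdate);
  });

  return valid[0];
}
```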

The standard for what a moderation post can define regarding the target post it is moderating will likely evolve. However, all the necessary fields to describe this moderation will be stored as subfields within the moderation field of the moderation post's JSON metadata.

There are two things that I think are important to have in an initial standard for moderation posts. One is a flag to auto-hide the entire contents of the target post. The other is to override the explicit field of the target post (see my post on the explicit standard for more detail).
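
As a summary of the fields proposed so far, here is an illustrative set of TypeScript types; the field names come from this proposal, but the grouping into two shapes is just for readability:

```typescript
// Set by post/comment authors who opt in to moderation (under the moderation key).
interface AuthorModerationMetadata {
  moderators?: string[];          // account names approved to moderate
  allow_submoderation?: boolean;  // top-level posts only: let commenters add sub-moderators
}

// Set by moderators on their special reply comment (also under the moderation key).
interface ModeratorPostMetadata {
  moderation_post: true;          // marks the reply as a moderation post
  hide?: 'post' | 'thread';       // collapse just the target, or its whole sub-tree
  override_explicit?: string[];   // replacement values for the target's explicit field
}
```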

Hiding posts and comments

A moderator can hide a comment in a discussion thread by setting the moderation.hide field to either "post" or "thread" in the JSON metadata of their moderation post. If the value is set to "post", then the client will collapse the comment (but only that comment) into a small single-line item showing only the name of the author, the creation time, the voting and payout data (but no voting buttons), and a label telling the user which moderator hid the comment. Any replies to the comment will still be shown as usual (unless they are also explicitly hidden). If the value is set to "thread", the result is nearly the same except that all the replies to the comment are also hidden. The user can always click to reveal the contents of any hidden comment. Whether the value is set to "post" or "thread", a hidden comment does not automatically show its summary contents anywhere else on the site where the comment may appear (such as the Posts or Replies tab of a user's account page).

A moderator can also hide the top-level post of a discussion thread in a similar way: by setting the moderation.hide field of their moderation post to either "post" or "thread". If it is set to "post", then the contents of the post will be collapsed into a single-line item in much the same way as a hidden comment, except that the title will still be shown and the rest of the discussion thread will also be shown. The post itself could still be shown in the feed on the front page of the website or in the tabs of account pages, but its summary contents would be hidden in a similar way as for a hidden comment, as described in the previous paragraph. However, if the moderation.hide field is set to "thread", the behavior is similar to the "post" case but with the following added restrictions: the post would not be shown in the feed on the front page of the website nor in any user's recommended tab; the post would still show up in the author's Blog and Posts tabs, but both the summary contents and the title of the post would be hidden by default; and any reply comments in the discussion thread of that post that show up in an account's Replies tab would have their "Re: <title>" title replaced with "Re: Hidden post", with a tooltip showing the actual top-level post's title.
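
A rough illustration of how a client might map moderation.hide onto display behavior for a comment; the DisplayDecision type is an assumption used only to make the two cases explicit:

```typescript
type HideMode = 'post' | 'thread' | undefined;

interface DisplayDecision {
  collapseSelf: boolean;     // show only author, creation time, payout data and a "hidden by" label
  collapseReplies: boolean;  // also hide every reply in the sub-tree
}

function displayForComment(hide: HideMode): DisplayDecision {
  switch (hide) {
    case 'post':   return { collapseSelf: true,  collapseReplies: false };
    case 'thread': return { collapseSelf: true,  collapseReplies: true  };
    default:       return { collapseSelf: false, collapseReplies: false };
  }
}
```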

An obvious question to ask is: if the top-level post creator is the one who gets to choose the moderators authorized to moderate their post, why would moderation of a top-level post ever happen in the first place? The answer is that community standards for posts submitted under certain tags may effectively compel authors to add a standard set of moderators appropriate for that tag to their post to avoid being downvoted. Furthermore, a user who sets up custom feeds (called "streems", more on that in a future proposal) could have a filter set up for their streem to ignore any top-level post that hasn't set its moderators to include a set of accounts predefined in the streem definition by the user. So, to make sure their posts aren't filtered out by users' streems, it would be in authors' best interest to include those moderators.

Overriding the explicit field

A moderator can override the explicit field of the target post by setting the new explicit values in an array assigned to the moderation.override_explicit field of their moderation post's JSON metadata. In that case, the client will treat the comment as if the overridden explicit values were the ones set by the target post's author. The difference is that a label at the bottom of the comment states that its explicit field has been overridden by the moderator (naming the moderator account), with a link to see the details of how the explicit field was overridden.
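
A sketch of how a client could compute the effective explicit values; the string-array type of the explicit field and this helper function are assumptions made for illustration:

```typescript
interface ExplicitView {
  values: string[];       // effective explicit values the client should respect
  overriddenBy?: string;  // moderator account named in the UI label, if overridden
}

function effectiveExplicit(
  authorExplicit: string[],
  moderationOverride?: { moderator: string; values: string[] }
): ExplicitView {
  if (moderationOverride) {
    // Treat the moderator's values as if the author had set them,
    // but remember who overrode them so the label and link can be shown.
    return { values: moderationOverride.values, overriddenBy: moderationOverride.moderator };
  }
  return { values: authorExplicit };
}
```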

Users are in control

None of this censorship is outside the user's control.

First, the posts/comments aren't really censored: users can click to reveal any post/comment they wish. The system just lets community moderators put in the effort to keep unwanted material away from users by default. If a user wants to see something, they can always see it.

Second, if a user doesn't like the job some moderator is doing, they can choose to ignore the recommendations of that moderator. Any account can be added to a client-side moderation blacklist. When the client is evaluating whether a particular moderation post has authorization or not for a certain post/comment, it will refer to the blacklist to see if the author of that moderation post is on it. If so, that moderation post is automatically unauthorized to moderate the target post from the perspective of that client.

Conclusion

This moderation proposal provides a way to allow members of the community (or some Steem sub-community) to moderate some discussions (possibly only limited to their sub-community) through means other than just up and down voting. This moderation doesn't require the moderators to own a lot of SP, nor does it require them to burn their voting power (as moderation via downvoting would). So the cost (opportunity cost or otherwise) is much lower for moderators. That said, there is still the cost of the moderators' time, which ideally could be compensated by regularly upvoting some of their posts describing their continued moderation efforts (assuming the (sub-)community continues to think they are doing a good job).

It is important to realize that the moderators don't have any power recognized by the blockchain. The blockchain is completely unaware of any concept of moderation other than upvoting or downvoting using SP. The power of any moderator comes entirely from how the front-end client interprets their recommendations (and that is ultimately all their moderation is: a set of recommendations) regarding certain posts and comments. The authorization for their moderation to be recognized by a front-end client is also not global within the system. First, a post needs to opt in to giving any particular moderator authorization to moderate the discussion in their post. Second, the end users are ultimately in control of whether they want their client to pay attention to the recommendations of any particular moderator. The user can always add a moderator to a blacklist so that their client ignores their moderations.

Censorship is not really an issue: ultimately, as discussed above, the users are in control of who the moderators are; and the posts are never actually removed from the client interface, but at most hidden by default by moderators, to be revealed by the user with a single click.

That said, I think this provides a powerful and useful tool that, coupled with community pressure to follow certain standards (for example, threatening to downvote posts under certain tags that do not follow the moderator-selection standards of the community associated with that tag), could do a better job of cleaning up spam and trolling in certain sub-communities as Steem's popularity grows than relying only on downvoting would.


I think this is an interesting and relevant topic. Does the Steem blockchain have a place to post metadata when creating a post? Or was this the proposal, that metadata be added?

The most elegant method would be room to bake metadata into a post, but in reality regular text that we can already use will be the future of 2nd-layer formatting and filtering on the Steem chain (e.g. Markdown, currently used as the open-source formatting language/style sheet across Steem). This will become increasingly true as use expands into the other web interfaces to the chain (like ChainBB, which I'm using right now).

For certain types of communities to grow on Steem, stronger curation tools are needed. Different types of flagging will allow for creativity in formatting. Sites that want to use second-layer moderation already have the tools available to them: this can be achieved the same way as metadata for filtering/formatting a post, only inserted by a moderator as a reply to a post.

The Steem blockchain does allow arbitrary JSON to be included as metadata with each post/comment. You can check it out by viewing the post with steemd.com or steemdb.com.

Anyway, this proposal is old. And although there is still no moderation functionality on steemit.com, it is in the pipeline with the Communities feature, which I am really excited about.

Can you point me towards a post about the Communities feature's moderation, and where I can read more about the JSON metadata?

I'm trying to learn more about these discussions, but I'm not sure where the best places to discuss these things are.

You can use steemdb.com to view the raw data of a post and explore the different ways the JSON metadata is used by clients. For example, here is a link to the raw data for your comment. Scroll down to the json_metadata field. Notice that ChainBB reported the app/version in the app field; a field specifying the format of the content and an array of tags for the comment are also included. There isn't any official documentation for how JSON metadata should be used (it is not enforced by the blockchain anyway), but various unofficial standards seem to have emerged, mostly by following the usage led by the steemit.com client.
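
To give a rough idea, the json_metadata of such a comment looks something like this; the version string, format value, and tags below are made-up placeholders, not the actual values of your comment:

```typescript
// Illustrative only: the actual values depend on the client that created the comment.
const jsonMetadata = {
  app: 'chainbb/0.3',        // client name/version reported in the app field
  format: 'markdown+html',   // format of the comment body
  tags: ['steem-standards'], // tags attached to the comment
};
```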

There is a draft spec available for the Communities feature in the wiki of the Steemit Condenser repository. Keep in mind that this is just a draft and things may change with the design.

Just read this now (lol, 3 months old) -- what's interesting is that people still don't seem to understand that comments need to be curated too, just like regular stories/articles.

I wrote about it in this article. Comments are a big key to the system. I hope we start using them as intended sooner rather than later.

Good proposal, but I think we can do better.