Can we involve society in a self-sustaining Internet culture?





With the development of social media and other online platforms, people have an unprecedented ability to address a wide audience. But this ability to publish and disseminate information is not always used for good. Violence, terrorism, fake news, promotion of illegal drugs, spam, insults and much more spread online and cause public outcry. In such cases, the ability of online platforms to disseminate information starts to work against people.



At the same time, online platforms have real difficulty identifying content that we would not like to see: the volume of content published on the Internet is enormous and constantly growing, and content moderation is far from a trivial task. Despite the progress of automated methods, the number of moderators hired by online platforms to review content manually keeps growing, as does the wave of public criticism of the moderation process.







Above all, disputes over who should be responsible for content, and to what extent, flare up in society more and more often.





In late May 2020, President Donald Trump signed an executive order on social media platforms, calling for them to be held accountable for content posted on their sites and ordering the FTC and the Attorney General to investigate these companies. The trigger was Twitter attaching a fact-checking warning label to some of the president's tweets, which the president perceived as censorship applied selectively and non-transparently.



But what do we want from Internet companies? That they act as a kind of global censor of everything and everyone on their platforms, even at the high cost of manual moderation? Or that they stay out of our communication altogether, in the name of freedom of speech? Where is the line? Who sets the rules of the game? Is a compromise over moderating the information space possible for all parties involved: Internet companies, users, and regulators? Could it be that maintaining Internet culture is a complex problem that concerns not only Internet companies, but also requires society itself to participate in the process and to take on at least part of the responsibility for the content it publishes?



Such questions arise more and more often ...














Currently, companies try to involve users in moderation through the "complain" button, whose implementation follows several basic approaches.



In the first approach, pressing this button sends the selected content for review to a pre-appointed moderator. The advantage of this approach is how easily an end user can point out a violation. The disadvantage is the lack of trust in an individual user's assessment and, as a consequence, the need for a moderator to confirm the violation; the moderator often has no time to respond because the volume of content keeps growing. In this case, the final word, and hence the responsibility for rejecting inappropriate information, rests entirely on the shoulders of Internet companies.



Collective moderation is another major approach. It has many variations, but in general it works as follows: when the "complain" button is clicked by a certain number of users on the same content, the company can consider that a violation has occurred and start an automatic procedure to remove the inappropriate content or flag it in some way, thereby warning its users. The advantage of this approach is that users can influence the information culture entirely on their own, assuming group responsibility for rejecting inappropriate information. The downside is that this technique still does not trust the assessment of an individual user and, as a result, requires a sufficient number of voting users, who may simply never accumulate for a given piece of content.
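To make the two existing approaches concrete, here is a minimal sketch of how a "complain" click might be handled in each of them. The function names, queue structure and the vote threshold are hypothetical, not taken from any particular platform.

```python
from collections import defaultdict

REVIEW_QUEUE = []                  # items waiting for a staff moderator
COMPLAINT_COUNTS = defaultdict(int)
VOTE_THRESHOLD = 10                # hypothetical number of votes that triggers action


def complain_single_moderator(content_id: str, user_id: str) -> None:
    """Approach 1: every complaint goes to a pre-appointed moderator."""
    REVIEW_QUEUE.append({"content": content_id, "reported_by": user_id})
    # The platform still has to review (and take responsibility for) every item.


def complain_collective(content_id: str, user_id: str) -> bool:
    """Approach 2: content is acted on only after enough users complain."""
    COMPLAINT_COUNTS[content_id] += 1
    if COMPLAINT_COUNTS[content_id] >= VOTE_THRESHOLD:
        flag_content(content_id)   # hide, label or remove automatically
        return True
    return False                   # not enough votes yet, nothing happens


def flag_content(content_id: str) -> None:
    print(f"{content_id} flagged as violating the rules")
```

Both sketches show the same weakness described above: the first one piles everything onto staff moderators, the second one stalls whenever fewer than the threshold number of users ever see the content.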



As a result, society's participation in regulating network culture is, in practice, an indirect or cumbersome procedure that is hard to use at scale. And this despite the fact that users are interested in maintaining a decent Internet culture and, as the practice of using the "complain" button shows, are ready to take part when they see violations! Yet their votes carry little weight, simply because on the Internet there is no trust in the assessment of an ordinary user. None, by default.







But can we learn to trust the judgment of an individual user?





It would seem impossible to simply trust the assessment of an ordinary Internet user. But conditions can be created under which that assessment can be trusted, and we already have successful experience of doing so. The method was used by the well-known and tiresome reCAPTCHA, or more precisely by one of its early versions, in which the user was asked to type a couple of words to get access to a site. One of the words was a verification word already known to the system; the other was unknown to the system and still needed to be recognized. Users did not know which word was which, so to get through faster they had to answer honestly rather than guess.



As a result, a user who answered honestly and typed the verification word correctly was considered objective: reCAPTCHA let them through and also accepted their answer for the previously unknown word. And this turned out to be effective. With this simple method, reCAPTCHA users in the first six months of operation successfully recognized about 150 million words that automatic methods had failed to recognize.
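The trust mechanism itself is easy to express in code. Below is a minimal sketch of the idea, with function and variable names of my own choosing rather than anything from reCAPTCHA: check the answer to the known word first, and only then accept the answer to the unknown one.

```python
from typing import Optional


def check_pair(known_word: str, known_answer: str,
               unknown_answer: str) -> Optional[str]:
    """Return the accepted reading of the unknown word, or None if the user failed."""
    if known_answer.strip().lower() == known_word.strip().lower():
        # The user was honest about the word we could verify,
        # so we also accept their reading of the word we could not.
        return unknown_answer.strip()
    return None  # verification failed: discard both answers


# Hypothetical usage: the system knows "river"; the scanned word is still unknown.
print(check_pair("river", "river", "harbour"))   # -> "harbour" is recorded
```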





What if we apply this approach to moderation?





Similar to reCAPTCHA, we can offer an individual user a moderation test: a test in which they assess whether a number of publications comply with certain predefined rules. Among these publications there is one whose assessment is not yet known to us; the rest are verification publications with known assessments. If the user passes the test, i.e. evaluates the verification publications honestly, we accept their assessment of the unknown publication as well; in other words, we trust this user's opinion. Moreover, to obtain an assessment of an unknown publication we no longer need to organize a collective vote: one user is enough.
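A minimal sketch of such a moderation test might look like this. The data layout (a dict with a "violates" label), the pass criterion and the names are assumptions made for illustration: the unknown publication is mixed in with verification publications, and its assessment is accepted only if the user evaluated every verification publication correctly.

```python
import random
from typing import Optional


def run_moderation_test(unknown_post: dict,
                        verification_posts: list,   # each has a known "violates" label
                        ask_user) -> Optional[bool]:
    """Return the user's verdict on unknown_post, or None if they failed verification.

    ask_user(post) is whatever UI callback shows a post and returns True
    ("violates the rules") or False ("does not violate").
    """
    posts = verification_posts + [unknown_post]
    random.shuffle(posts)                      # the user cannot tell which post is which

    unknown_verdict = None
    for post in posts:
        verdict = ask_user(post)
        if post is unknown_post:
            unknown_verdict = verdict
        elif verdict != post["violates"]:      # wrong answer on a known post
            return None                        # do not trust this user's assessment

    return unknown_verdict                     # one honest user is enough
```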



By combining this test with the "complain" button, we can create an environment in which users moderate information entirely on their own. After clicking the "complain" button, the user is first asked to take the test. If they pass it, the post they want to report, the one they believe violates some predefined rule, is sent for review.



While taking the test, the user unwittingly acts as an unbiased moderator of some publication whose assessment is not yet known, hidden among the other publications. The publication that the user has complained about is, in turn, placed into the test of another user who also presses the "complain" button somewhere else, and that user becomes an unwitting, unbiased moderator of it.
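Putting the pieces together, the full loop might be sketched as follows, reusing run_moderation_test from the sketch above; the queue handling and all names are again hypothetical. A complaint first makes the user moderate someone else's pending report, and only then does their own report join the queue for the next complaining user.

```python
from collections import deque

PENDING = deque()   # reports waiting to be assessed by the next complaining user


def handle_complaint(reported_post: dict, verification_posts: list, ask_user) -> None:
    # 1. The complaining user first moderates a post someone else reported earlier.
    #    run_moderation_test is the function from the previous sketch.
    if PENDING:
        unknown_post = PENDING.popleft()
        verdict = run_moderation_test(unknown_post, verification_posts, ask_user)
        if verdict is None:
            PENDING.appendleft(unknown_post)   # failed verification: put it back
            return                             # and ignore this user's complaint
        apply_verdict(unknown_post, verdict)   # e.g. label or hide the post

    # 2. Their own report now waits for the next user who presses "complain".
    PENDING.append(reported_post)


def apply_verdict(post: dict, violates: bool) -> None:
    if violates:
        print(f"post {post.get('id')} marked as violating the rules")
```

When the queue is empty (the very first complaint), the report simply waits for the next user; a real system would also have to decide how long a report may wait and what to do with users who repeatedly fail verification.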







What can this approach offer?



  • This approach simplifies the moderation process and turns it into a mass procedure in which every user can identify information that violates the rules.





This approach seems most useful for moderating online discussions (comments). With its help it is easy to organize users from different sites into a kind of community of moderators: a community in which appropriate, respectful users take precedence over violators and which follows simple rules. A small example of a possible implementation of this approach and of such community rules is given here.



The approach can also be used elsewhere in the information space, for example, to mark up video content and identify rudeness, cruelty, drugs, and so on. This could be a video you watch on YouTube or one sent to you via WhatsApp in one of your groups. If the video shocks you and you want to warn other people, you can use this approach to mark it as containing some type of inappropriate content, after completing the moderation test, of course. To simplify video moderation, you can send for review not the entire video but only the fragment containing the inappropriate content. As the video spreads further, it will then carry a label that may be useful to other users who do not want to see this kind of content.
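For video, the confirmed verdict could be attached to a time range rather than to the whole file. One possible shape of such a label, purely illustrative and not any real platform's format, is sketched below.

```python
from dataclasses import dataclass


@dataclass
class FragmentLabel:
    """A user-supplied label for part of a video, confirmed via the moderation test."""
    video_id: str
    start_sec: float          # beginning of the flagged fragment
    end_sec: float            # end of the flagged fragment
    category: str             # e.g. "violence", "drugs", "spam"

    def to_warning(self) -> str:
        return (f"Fragment {self.start_sec:.0f}-{self.end_sec:.0f}s of video "
                f"{self.video_id} was reported as containing: {self.category}")


label = FragmentLabel("abc123", 42.0, 57.5, "violence")
print(label.to_warning())
```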



The same approach can be applied to other types of violations. In the case of fake news, for example, users could add a fact-checking label themselves or attach a link to a rebuttal.



In conclusion, it should be noted that the violations identified by users with this approach, wherever they occur in the information space, can be fed as input to automatic methods in order to further automate the detection of such violations. This would allow users, together with Internet companies, to tackle the difficult task of moderating the information we publish and disseminate.
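For example, the publications labeled by users in this way could serve as training data for an automatic classifier. A minimal sketch with scikit-learn follows; the library choice and the toy data are my assumptions for illustration, not part of the proposal itself.

```python
# Hypothetical sketch: training a text classifier on user-moderated posts.
# Requires scikit-learn; the example data is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Posts whose verdicts were confirmed through the moderation test.
texts = ["buy cheap pills now", "nice photo, thanks for sharing",
         "I will find you and hurt you", "great article, learned a lot"]
labels = [1, 0, 1, 0]          # 1 = violates the rules, 0 = acceptable

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can then pre-screen new content automatically.
print(model.predict(["limited offer, cheap pills"]))
```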





