Social media companies are under increased scrutiny for their mishandling of hateful speech and fake news on their platforms. There are two ways to consider a social media platform. On one hand, we can view them as technologies that merely enable individuals to publish and share content, a figurative blank sheet of paper on which anyone can write anything. On the other hand, one can argue that social media platforms have now evolved into curators of content. I argue that these companies should take some responsibility for the content that is published on their platforms, and I suggest a set of strategies to help them deal with fake news and hate speech.

Artificial and Human Intelligence Together

At the beginning, social media companies established themselves as holding no accountability for the content published on their platforms. In the intervening years, they have set up a mix of automated and human-driven editorial processes to promote or filter certain types of content. In addition, their users are increasingly using these platforms as their primary source of news. Twitter Moments, which offers a brief snapshot of the daily news, is a prime example of how Twitter is getting closer to becoming a news medium. As social media practically become news media, their level of responsibility over the content they distribute should increase accordingly. While I believe it is naïve to consider social media as merely neutral content-sharing technologies with no responsibility, I also do not believe we should hold social media to the same editorial expectations we have of traditional news media.

The sheer volume of content shared on social media makes it impossible to establish a comprehensive editorial system. Take Twitter as an example: an estimated 500 million tweets are sent per day. Assuming that each tweet contains 20 words on average, the volume of content published on Twitter in one single day is equivalent to that of the New York Times over 182 years. Moreover, the terminology and focus of hate speech change over time, and most fake news articles contain some level of truthfulness. Therefore, social media companies cannot rely solely on artificial intelligence or on humans to monitor and edit their content. They should instead develop approaches that utilize artificial and human intelligence together. To overcome the editorial challenges of so much content, I suggest that the companies focus on a limited number of topics deemed important and with significant consequences. The anti-vaccination movement and believers in flat-earth theory, for example, both spread anti-scientific and fake content.
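The back-of-the-envelope volume comparison above can be checked in a few lines. The New York Times' daily word count is not stated in the text; a commonly cited figure of roughly 150,000 words per day is assumed here.

```python
# Rough check of the Twitter-vs-NYT volume comparison.
TWEETS_PER_DAY = 500_000_000   # estimated tweets sent per day (from the text)
WORDS_PER_TWEET = 20           # assumed average words per tweet (from the text)
NYT_WORDS_PER_DAY = 150_000    # assumption: commonly cited figure, not in the text

twitter_words_per_day = TWEETS_PER_DAY * WORDS_PER_TWEET  # 10 billion words
nyt_equivalent_years = twitter_words_per_day / NYT_WORDS_PER_DAY / 365

print(f"One day of tweets is roughly {nyt_equivalent_years:.1f} years of the NYT")
# prints "One day of tweets is roughly 182.6 years of the NYT"
```

Under these assumptions the arithmetic lands almost exactly on the 182-year figure, which suggests a similar daily word count was used for the original estimate.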
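The hybrid human-and-machine approach argued for above is often sketched as a triage loop: an automated classifier handles the clear-cut cases, and anything in the grey area is routed to a human reviewer. The classifier stub, thresholds, and queue below are hypothetical placeholders for illustration, not a description of any platform's actual system.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModerationQueue:
    """Collects posts that need attention from human reviewers."""
    pending: List[str] = field(default_factory=list)

def classify(post: str) -> float:
    """Hypothetical model: returns the probability the post violates policy.
    A real system would use a trained classifier; this stub keys on a word list."""
    flagged_terms = {"hoax", "cure-all"}  # illustrative only
    hits = sum(term in post.lower() for term in flagged_terms)
    return min(1.0, 0.45 * hits)

def triage(post: str, queue: ModerationQueue,
           remove_above: float = 0.9, review_above: float = 0.3) -> str:
    """Route a post: auto-handle clear cases, send uncertain ones to humans."""
    score = classify(post)
    if score >= remove_above:
        return "removed"              # machine is confident: act automatically
    if score >= review_above:
        queue.pending.append(post)    # human-in-the-loop for the grey area
        return "queued_for_review"
    return "published"                # machine is confident the post is fine
```

For example, `triage("Vaccines are a hoax", queue)` returns `"queued_for_review"`, while an innocuous post is published without human involvement; the two thresholds control how much work falls to human reviewers.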