How Instagram and Twitter Moderate Published Content

A faculty member at Irandoc highlighted that social media platforms moderate published content through human moderators and artificial intelligence (AI).

According to Mehr News Agency, Dr. Somayeh Labafi addressed "intelligent moderation in social networks" at the promotional session on "Ethics and AI Policy" held at Irandoc as one of the pre-events of the Eighth National Information Technology Award. She noted that platforms have a social responsibility to moderate content, a practice in place since these platforms were first developed. Labafi cited Meta founder Mark Zuckerberg, who has stated, "Meta has over 18,000 human content moderators covering 70 languages."

Dr. Labafi explained the types of moderation: pre-moderation, post-moderation, reactive moderation, and distributed moderation. Pre-moderation involves human moderators reviewing unpublished content. Post-moderation is when moderators examine content after publication. Reactive moderation refers to actions taken based on user-reported content. Distributed moderation combines various methods, including intelligent moderation, human moderation, and user moderation, to regulate content.
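
To make these categories concrete, the following is a minimal Python sketch of when each mode sends content for review. The talk described no implementation, so every name here (Post, needs_review, ai_flagged, and so on) is hypothetical, for illustration only:

    from dataclasses import dataclass
    from enum import Enum, auto

    @dataclass
    class Post:
        published: bool = False       # has the post gone live?
        ai_flagged: bool = False      # flagged by an AI classifier
        staff_flagged: bool = False   # flagged by a human moderator

    class ModerationMode(Enum):
        PRE = auto()          # review before publication
        POST = auto()         # review after publication
        REACTIVE = auto()     # review only when users report
        DISTRIBUTED = auto()  # combine AI, staff, and user signals

    def needs_review(post: Post, mode: ModerationMode, user_reports: int = 0) -> bool:
        """Return True when a post should be queued for moderator review."""
        if mode is ModerationMode.PRE:
            return not post.published      # hold everything until approved
        if mode is ModerationMode.POST:
            return post.published          # check content after it goes live
        if mode is ModerationMode.REACTIVE:
            return user_reports > 0        # act only on user reports
        # Distributed moderation: any signal source can trigger review.
        return post.ai_flagged or post.staff_flagged or user_reports > 0

For example, needs_review(Post(), ModerationMode.PRE) returns True, holding an unpublished post until someone approves it, while the reactive mode ignores the same post until users complain.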

Content Moderation on Instagram

Labafi provided examples of moderation on different platforms. Instagram engages in both pre- and post-publication moderation. Before publishing, it employs measures such as media literacy education, algorithm improvements, system design that reduces the spread of false content, user training in detecting fake news, and verification of authentic accounts.

After publishing, Instagram moderates by collaborating with fact-checking organizations, reducing the reach and shareability of flagged content, acting on user reports, applying AI algorithms, marking suspicious content, restricting suspicious accounts, and working with journalists, media outlets, and independent journalists.
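
As a rough sketch of how such a post-publication flow could fit together (the names, the 0-to-1 score, and the thresholds are assumptions for illustration, not Instagram's actual system):

    def moderate_published_post(post, classifier, fact_checker):
        """Hypothetical post-publication pipeline: AI screening, fact-checker
        review, then reduced reach for content rated false."""
        # 1. AI algorithms score the published content (assumed 0.0..1.0 scale).
        score = classifier.score(post.text)
        if score < 0.5 and not post.user_reports:
            return post  # nothing suspicious and no reports: leave it alone

        # 2. Mark the content as suspicious and send it to fact-checkers.
        post.label = "suspicious"
        verdict = fact_checker.review(post)  # assumed external review step

        # 3. Reduce the reach and shareability of content rated false.
        if verdict == "false":
            post.feed_rank_penalty = 0.9   # demote in feeds and explore pages
            post.sharing_enabled = False   # block resharing
        return post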

How Twitter Moderates Content

Twitter also employs both pre- and post-publication moderation. Before publishing, it uses proactive detection strategies, improves its algorithms, monitors suspicious accounts, trains users, verifies accounts, collaborates with researchers, provides educational resources, reactively manages trending topics, encourages user reporting, analyzes behavior patterns, offers reporting tools, and continuously evaluates and refines its policies.
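
The "analyzes behavior patterns" item is the most algorithmic of these. One simple version, sketched below with an assumed z-score approach and invented numbers rather than Twitter's real detector, flags accounts whose activity deviates sharply from their own baseline:

    from statistics import mean, stdev

    def looks_suspicious(daily_post_counts, today_count, z_cutoff=3.0):
        """Flag an account whose activity today is an outlier versus its own
        recent history. The cutoff and method are assumptions."""
        if len(daily_post_counts) < 7:
            return False                          # too little history to judge
        mu = mean(daily_post_counts)
        sigma = stdev(daily_post_counts) or 1.0   # avoid division by zero
        z = (today_count - mu) / sigma
        return z > z_cutoff                       # e.g. a sudden burst of posting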

After publishing, Twitter moderates by marking suspicious content, acting on user reports, collaborating with fact-checking organizations, working with reputable journalists and media, promoting credible sources, partnering with independent organizations, employing AI algorithms, limiting access to suspicious accounts, and enforcing stricter policies on published content.
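
"Limiting access to suspicious accounts" is typically graduated rather than all-or-nothing. A hedged sketch follows, with strike counts and field names invented for illustration:

    def restrict_account(account):
        """Hypothetical graduated restrictions driven by recent flagged posts."""
        strikes = account.flagged_posts_last_30d
        if strikes >= 10:
            account.suspended = True               # repeated abuse: suspend
        elif strikes >= 3:
            account.visibility = "followers_only"  # shrink the audience
            account.eligible_for_trends = False    # keep out of trending surfaces
        elif strikes >= 1:
            account.downranked = True              # demote in recommendations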

Dr. Labafi added that AI-driven intelligent moderation has its own shortcomings, including AI's inability to fully understand the context of a discussion, the potential for bias and racial prejudice in its decisions, and the risk of deepening polarization within social networks.
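
A common way to work around the context problem, sketched below under assumed thresholds rather than anything Labafi prescribed, is to let the model act alone only on high-confidence cases and route ambiguous ones, where context matters most, to human moderators:

    def triage(post, model, human_queue, act_at=0.95, review_at=0.60):
        """Confidence-gated triage: automation for clear-cut cases,
        human judgment for the context-dependent middle band."""
        p = model.violation_probability(post.text)  # assumed: returns 0.0..1.0
        if p >= act_at:
            post.remove()              # near-certain violation: act automatically
        elif p >= review_at:
            human_queue.append(post)   # ambiguous: a person reads the context
        # below review_at: no action; the post stays up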

