According to a recent study, some of the largest social media platforms struggle to detect and remove dangerous content related to suicide and self-harm. Of more than 12 million content moderation decisions made by six of the biggest platforms, Pinterest and TikTok were the only ones that detected and removed more than 95% of such content. The other four platforms covered in the report are Facebook, Instagram, X, and Snapchat, whose responses to such content the Molly Rose Foundation, a UK-based charity focused on suicide prevention, particularly among people under 25, found to be inconsistent, uneven, and not fit for purpose.
The foundation noted that Instagram and Facebook, both owned by Meta, were responsible for only 1% of all detected suicide and self-harm content, while X accounted for just 700 content decisions. The foundation now warns that the UK's Online Safety Act is not strong enough to address what it calls “clear systemic failures” in how social media companies moderate content. The study found that these sites routinely fail to detect harmful material in the highest-risk parts of their services. For example, Instagram detected only one in every 50 suicide and self-harm posts, even though its short-form video feature, Reels, now accounts for half of all time users spend on the app.
In response to the study, a Meta spokesperson said that content encouraging suicide and self-harm violates the company's rules and that the figures in the report do not reflect its efforts. They added that last year alone the company removed 50.6 million pieces of such content across Facebook and Instagram globally, taking action on 99% of it before it was reported.
A Snapchat spokesperson said the safety and well-being of the community are a top priority, stressing that the platform is designed to be different from others and moderates content before it is distributed publicly. They noted that the platform strictly prohibits content that promotes self-harm or suicide and shares prevention and support resources with community members who may be in distress.
A representative of the UK Department for Science, Innovation and Technology said that social media companies have a clear responsibility to keep users of their platforms safe. Under the Online Safety Act, people who intentionally encourage self-harm face up to five years in prison. Once the law is fully in force, platforms will also be required to remove illegal content that encourages serious self-harm and to prevent children from seeing material that promotes self-harm or suicide.