Yang Soo-young, TikTok’s partnership manager for the Trust and Safety Team in Northeast Asia, explains the company’s community guidelines and regulations at the Media Safety Workshop held at TikTok Korea in the Samseong neighborhood of Seoul on Feb. 6.

On Feb. 6, TikTok, the global short-form video platform, unveiled the content moderation methods it uses to maintain a “safe platform environment.” The company explained that its community guidelines are enforced through a combination of artificial intelligence (AI) technology and human content moderation teams to address issues such as deepfakes, child exploitation, and election-related misinformation.

TikTok held the Media Safety Workshop at TikTok Korea, in the Samseong neighborhood of Seoul’s Gangnam District, to coincide with “Safer Internet Day.” Initiated by the European Union (EU) in 2004, Safer Internet Day is now observed in about 190 countries worldwide and falls on the second Tuesday of February each year.

TikTok’s content moderation relies on a dual process of “automated review technologies” and “content moderation personnel.” The automated systems use AI to screen text, videos, and images against the community guidelines from multiple angles. Content that is flagged as suspect but not filtered out by the automated systems is then reviewed by the Global Trust and Safety Team, a group of human content moderators.

Yang Soo-young, TikTok’s partnership manager for the Trust and Safety Team in Northeast Asia, said, “The Global Trust and Safety Team operates 24/7 with around 40,000 members worldwide. They not only assess the context and nuance of content that technology might overlook but also monitor various risk signals, such as user reports and unusually high view counts.”

TikTok’s content moderation is based on its own community guidelines, which include six main themes: safety and civic consciousness, mental and behavioral health, sensitive adult themes, truthfulness and authenticity, regulated goods and commercial activities, and privacy and security. TikTok stated that these guidelines were established in consultation with regulatory bodies and experts from various countries.

The platform places a strong emphasis on youth safety and welfare. In Korea, TikTok services are available to users aged 14 and above, with age-specific restrictions in place. For instance, accounts of users aged 14-15 are set to private, and direct messaging is enabled only for those aged 16 and above. TikTok also offers a “Safety Pairing” feature that allows parents to manage their teenager’s TikTok account.

TikTok adheres to a zero-tolerance policy towards videos of child sexual abuse and exploitation. Manager Yang stated, “We do not allow content that exploits or harms teenagers physically or mentally. Such videos are immediately removed from the platform and reported to local law enforcement and administrative authorities.”

In the third quarter of last year, TikTok removed a total of 136.53 million videos through its content moderation process. Of those, 96.1% were removed proactively for violating the guidelines, before the content could harm users; 90.6% were taken down within 24 hours, and 76.8% were removed before any user had viewed them on the platform.

Copyright © BusinessKorea. Unauthorized reproduction and redistribution prohibited.