Tengrinews.kz – In 2024, the TikTok platform removed over 6.5 million videos in Kazakhstan for violating community guidelines.
This is up from just over 4 million videos deleted the previous year. According to Valdis Balodis, Head of Safety and Integrity for TikTok in Central and Eastern Europe and Central Asia, 95 percent of the removed videos were taken down proactively, meaning before any user complaints were received, and 80 percent were deleted before they could be viewed by other users. Additionally, more than 930,000 live streams were blocked due to violations.
The expert explained that some videos in Kazakhstan are moderated with local traditions in mind. For example, kokpar might be seen as animal cruelty elsewhere, but in Kazakhstan and Kyrgyzstan it is considered cultural heritage, so TikTok does not restrict access to such content.
The platform prioritizes the safety of minors: registration is available from age 13, and accounts belonging to users younger than that are removed. In 2024, nearly 1.9 million such accounts were blocked in Kazakhstan.
For users under 16, specific restrictions apply: their accounts are private by default, video downloads are disabled, and messaging is turned off.
How TikTok moderates content
Content control is carried out through both automated and manual moderation.
- Automated moderation – The system detects certain types of content that violate community guidelines and blocks them.
- Manual moderation – Moderator teams review disputed content, assess its compliance with local laws and regulations, and decide on its removal. In Kazakhstan, moderation is conducted in both Russian and Kazakh languages.
We asked whether TikTok localizes restrictions and what content might be blocked exclusively in Kazakhstan. The expert responded that it is difficult to say for sure. However, TikTok creates country-specific lists of prohibited words, including profanity, insults, and certain symbols, to account for linguistic and cultural nuances.
"We are constantly working on this because users are very creative when it comes to insults and rule violations. Based on this, an individual list of banned words is developed for each country and language," noted Balodis.
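A per-country banned-word list like the one Balodis describes can be pictured as a simple lookup keyed by country. The sketch below is purely illustrative: the country codes, word lists, and matching logic are assumptions for demonstration, not TikTok's actual implementation.

```python
import re

# Illustrative placeholder lists; a real system would hold per-language
# profanity, insults, and banned symbols for each market.
BANNED_WORDS = {
    "kz": {"badword_kk_1", "badword_ru_1"},
    "us": {"badword_en_1"},
}

def contains_banned_word(text: str, country: str) -> bool:
    """Return True if any banned word for the given country appears in the text."""
    tokens = re.findall(r"\w+", text.lower())
    banned = BANNED_WORDS.get(country, set())
    return any(token in banned for token in tokens)
```

In practice such filters are only a first pass: as the quote notes, users constantly invent new spellings, so the lists must be updated continuously and backed by human review.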
Additionally, TikTok prohibits content related to:
- Child exploitation;
- Violence, threats, and harassment;
- Hate speech;
- Dangerous challenges involving self-harm;
- Misinformation and fake news;
- Gambling, alcohol, and drugs.
We inquired about what account data the platform might provide to law enforcement upon request. Balodis responded that user data can only be shared in emergency cases, such as when there is a threat to life.
"Data sharing is a very strict process with a high threshold. For example, if a user intends to harm themselves, we may provide data to the relevant authorities. In TikTok’s Transparency Center, we display information on how many requests we’ve received, how many were approved, and so on," Balodis explained.
According to the expert, issues related to reputation and defamation are reviewed individually. If a user takes legal action, TikTok evaluates the situation from a legal standpoint and can only provide user data with a court order.
How users can protect themselves
TikTok recommends using its built-in safety tools:
- "Report" feature – Allows users to report an account, video, or comment.
- Family settings – Parents can restrict children's access to certain content.
- "Account Review" feature – Helps users track violations associated with their accounts.