What is the secret to TikTok's popularity?
Jakub Olek: There are several factors, particularly the app's design and recommendation algorithm. There has never been anything like this on the market. TikTok is not a social network but an entertainment platform built on a different principle.
Many popular apps are built on a social graph. This means that what you see on the platform is determined by your social connections: the platform knows who your friends are, who follows you, and what they like. In other words, you interact with what other people in your network enjoy.
TikTok uses a fundamentally different approach known as a content graph. Who follows you, who your friends are, and how those users interact with content matters far less. What matters is how you engage with content and what specifically interests you.
In practice, this means that you can see content from people you don't follow. You open TikTok and see recommended content in your feed. This could be five videos, one or two of which you will likely enjoy. However, it is not necessary for you to be subscribed to the people who posted this content.
You do not need to have a large number of followers for your content to appear in the feeds of other users. On TikTok, the number of likes and interactions with your content is much more important. This is what sets it apart from other platforms. TikTok is popular because it is entirely focused on content rather than social interaction.
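The contrast between the two approaches can be illustrated with a toy ranking function. The sketch below is a hypothetical simplification for illustration only, not TikTok's actual algorithm: a social-graph feed scores videos by whether their authors are in the viewer's network, while a content-graph feed scores them by topic affinity and interaction counts, ignoring the follow relationship.

```python
# Hypothetical sketch contrasting social-graph vs. content-graph ranking.
# Illustrates the concept described above; not TikTok's real algorithm.
from dataclasses import dataclass, field

@dataclass
class Video:
    author: str
    topics: set            # e.g. {"food", "travel"}
    likes: int = 0

@dataclass
class Viewer:
    follows: set = field(default_factory=set)            # accounts the viewer follows
    topic_affinity: dict = field(default_factory=dict)   # topic -> past engagement score

def social_graph_score(viewer: Viewer, video: Video) -> float:
    # Feed driven by who you follow: videos from outside your network score zero.
    return 1.0 if video.author in viewer.follows else 0.0

def content_graph_score(viewer: Viewer, video: Video) -> float:
    # Feed driven by engagement: topic match and interaction counts matter;
    # whether the viewer follows the author does not.
    topic_match = sum(viewer.topic_affinity.get(t, 0.0) for t in video.topics)
    return topic_match + 0.01 * video.likes

viewer = Viewer(follows={"alice"}, topic_affinity={"food": 0.9, "sports": 0.2})
candidates = [
    Video(author="alice", topics={"music"}, likes=10),
    Video(author="stranger", topics={"food"}, likes=500),
]
# Under the content graph, the stranger's popular food video outranks the friend's clip.
ranked = sorted(candidates, key=lambda v: content_graph_score(viewer, v), reverse=True)
print([v.author for v in ranked])   # ['stranger', 'alice']
```

The point of the contrast is that the viewer's follow list never appears in the content-graph score, which is why a video from an account with few followers can still reach a large audience.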
Interestingly, entertainment content is the most popular on TikTok. This includes tourism, food, movies, sports, and music. People also watch tutorials on how to do things themselves.
According to recent studies conducted by TikTok analysts, 90% of users learn something new thanks to our platform. It's easy to find everything that interests you through hashtags on TikTok. Educational content is also very popular. However, about 75% of users still come to the platform for entertainment.
Another important factor in TikTok's success is safety. Over 75% of users rate it as a safe platform.
TikTok is often accused of having close ties with the Chinese government. Does the authority of the PRC influence the platform's policies and algorithms?
Jakub Olek: I am very glad you asked this. In reality, TikTok is not a Chinese company. This is a myth. TikTok is owned by ByteDance. Although it originates from China, its headquarters are in Singapore. 60% of ByteDance is owned by various organizations and funds based in the USA, while another 20% belongs to ByteDance employees. Only 20% belongs to the founders of ByteDance. By the way, they live in Singapore and have not been to China in the last ten years. The board of directors consists of five people, three of whom are not Chinese.
The myth about our relationship with the Chinese government is quite widespread, but it is untrue. The Chinese government has never requested user data from us, and if it ever did, we would refuse to provide it.
TikTok is not available in China: we have no office there, no servers, and no Chinese users. However, ByteDance has an app called "Douyin," an equivalent of TikTok created for the Chinese market. It is very similar to TikTok and is very popular in China. Still, these are two different companies and two different teams.
The United States government claims that ByteDance is a Chinese company with close ties to the PRC government. In January 2025, the US may ban TikTok if ByteDance does not sell its stake in the platform. How do you comment on this?
Jakub Olek: We will sue because we believe it is unconstitutional.
Has TikTok faced pressure from Russian authorities regarding censorship of content related to the war in Ukraine?
Jakub Olek: We receive such requests from many governments worldwide. Every request from the government is included in our quarterly reports on content removal. If you search for "TikTok safety center," you will find a site where we publish these reports.
How often do representatives of the Ukrainian government ask you to block content?
Jakub Olek: The safety of our users is extremely important to us. Regarding Ukraine, combating misinformation is our top priority. When Russia began its full-scale invasion, TikTok was one of the first platforms to implement sanctions. From the very beginning, we have been in contact with the Ukrainian authorities and have restricted access to certain types of content and to the (Russian - ed.) state media that produced it.
We also monitor the platform to make sure content does not aid the occupiers: if users post content about military logistics, we immediately remind them not to specify details such as street names. We have a special response team to combat misinformation and other harmful content.
We actively collaborate with the Ukrainian Center for Counteracting Disinformation. They inform us about relevant trends, and we invest significant resources to identify misinformation or hidden informational-psychological operations. Additionally, we strive to quickly detect harmful content generated by artificial intelligence.
The removal rate for content aimed at spreading misinformation has increased significantly this year compared to last, from 81% to 93%.
We aim to act proactively, so the share of harmful content removed before it received a single view has also grown, from 75% last year to 84.4%.
We monitor compliance with our community guidelines, keep them up to date, and pay great attention to content moderation.
Moreover, we actively collaborate with our partners, such as Reuters, which helps us fact-check when it comes to events in Ukraine.
How do you identify accounts that spread misinformation, and on what criteria do you block them?
Jakub Olek: We closely monitor whether users comply with our platform's rules. These rules have to be updated constantly to reflect new government directives as well as recommendations and laws adopted by institutions such as the European Union and the UN.
We also have a two-level content moderation system. At the first level, content is screened by artificial intelligence, which helps identify harmful content and misinformation. Flagged content is then reviewed by human moderators who are fluent in Ukrainian and Russian, because artificial intelligence does not always recognize harmful content.
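As a rough illustration of such a two-level pipeline (a minimal sketch under assumed thresholds, not TikTok's internal system): an automated classifier removes clear violations, escalates uncertain cases to a human reviewer, and lets the rest through.

```python
# Hypothetical two-level moderation pipeline: an automated classifier handles
# clear-cut cases and escalates uncertain ones to human reviewers.
# A simplified sketch of the approach described above, not TikTok's actual system.
from typing import Callable

REMOVE_THRESHOLD = 0.9   # assumed: above this score, remove automatically
REVIEW_THRESHOLD = 0.5   # assumed: above this score, send to a human moderator

def moderate(post_text: str,
             classify: Callable[[str], float],
             human_review: Callable[[str], bool]) -> str:
    """Return 'removed', 'escalated-removed', 'escalated-kept', or 'kept'."""
    score = classify(post_text)          # level 1: model estimates harm probability
    if score >= REMOVE_THRESHOLD:
        return "removed"
    if score >= REVIEW_THRESHOLD:
        # level 2: human moderator (e.g. fluent in Ukrainian and Russian) decides
        return "escalated-removed" if human_review(post_text) else "escalated-kept"
    return "kept"

# Toy stand-ins for the real model and reviewer queue.
fake_classifier = lambda text: 0.95 if "fake" in text else 0.6 if "rumor" in text else 0.1
fake_reviewer = lambda text: "rumor" in text

print(moderate("harmless cooking video", fake_classifier, fake_reviewer))  # kept
print(moderate("obvious fake claim", fake_classifier, fake_reviewer))      # removed
print(moderate("unverified rumor", fake_classifier, fake_reviewer))        # escalated-removed
```

The thresholds here are placeholders; the design point is simply that automation handles volume while humans handle ambiguity.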
Platform users also help combat misinformation by reporting harmful content to us. This can be done in just a few clicks. We also have a team that collaborates with law enforcement agencies: the police and prosecutors. This is one of the channels through which we receive external information about trends and misinformation. We also work closely with fact-checkers. Another important area in combating fakes is educational campaigns, such as media literacy, which we conduct in partnership with our colleagues in Ukraine.
Petr Zhachko: Last year, we ran a powerful media literacy campaign, with content that garnered over 200 million views in Europe. About 80 million unique users saw it. Thus, considering that we have 150 million users in Europe, we can say that we managed to reach every second TikTok user.
In which countries are your content moderators located?
Jakub Olek: I can assure you that they are not in Russia or China. Most of those who moderate content related to Ukraine are in Poland or Ireland.
Why does TikTok regularly block or restrict Ukrainian content creators who document the realities of war or report from combat zones in Ukraine?
Jakub Olek: We have clear community guidelines that prohibit certain types of content, including hate speech, calls for violence, and so on. To answer your question, each case needs to be looked at individually. However, I do not agree that we deliberately limit or remove content from Ukrainian users.
What would you say to people who believe TikTok undermines the information security of Europe?
Jakub Olek: I would respond that I disagree with that. TikTok cares about the information security of its users and has 40,000 specialists who focus exclusively on safety. This is a huge team.
How does TikTok combat the spread of deepfakes and manipulations related to the war in Ukraine?
Jakub Olek: We closely monitor harmful content created by artificial intelligence or edited videos that mislead users. Such content is prohibited. We remove it.
On September 23, we were the first to start using a tool that helps users label content created with AI.
In addition, we check whether the content generated by AI has been shared on other platforms. We have partnered with some of the largest tech companies to avoid misleading users. This year at the Munich Security Conference, we signed a cooperation agreement with several companies to counteract the abuse of AI, as this poses a challenge for our industry.
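The kind of cross-platform checking described above generally relies on provenance metadata that travels with a media file. The sketch below is a hypothetical illustration of that idea; the manifest structure and field names are invented for the example and are not the actual format TikTok or its partners use.

```python
# Hypothetical sketch: apply an "AI-generated" label based on provenance metadata
# attached to a media file. Illustrative only; the manifest format is made up.

def read_provenance(manifest: dict) -> bool:
    """Return True if the attached provenance manifest marks the media as AI-generated."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("type") == "generated_by_ai":
            return True
    return False

def label_upload(upload: dict) -> dict:
    # If provenance metadata from another platform or creation tool marks the
    # content as AI-generated, carry the label over instead of relying on the user.
    manifest = upload.get("provenance", {})
    upload["label"] = "AI-generated" if read_provenance(manifest) else None
    return upload

video = {
    "title": "synthetic newsreader clip",
    "provenance": {"assertions": [{"type": "generated_by_ai", "tool": "example-generator"}]},
}
print(label_upload(video)["label"])   # AI-generated
```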
Do you monitor Russian informational-psychological operations aimed at undermining the morale of Ukrainians?