TikTok

Report March 2025

Submitted

TikTok allows users to create, share and watch short-form videos and live content, primarily for entertainment purposes.

Advertising

Commitment 1

Relevant signatories participating in ad placements commit to defund the dissemination of disinformation, and improve the policies and systems which determine the eligibility of content to be monetised, the controls for monetisation and ad placement, and the data to report on the accuracy and effectiveness of controls and services around ad placements.

We signed up to the following measures of this commitment

Measure 1.1 Measure 1.2 Measure 1.3 Measure 1.4 Measure 1.5 Measure 1.6

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • To improve the granularity of our existing ad policies, we developed a specific climate misinformation ad policy.

  • Continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 

  • Expanded the functionality (including the choices available to advertisers) of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter, in the EEA. 

  • Upgraded our IAB Sweden Gold Standard certification to 2.0.
 
  • We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next report.

Measure 1.1

Relevant Signatories involved in the selling of advertising, inclusive of media platforms, publishers and ad tech companies, will deploy, disclose, and enforce policies with the aims of: - first avoiding the publishing and carriage of harmful Disinformation to protect the integrity of advertising supported businesses - second taking meaningful enforcement and remediation steps to avoid the placement of advertising next to Disinformation content or on sources that repeatedly violate these policies; and - third adopting measures to enable the verification of the landing / destination pages of ads and origin of ad placement.

QRE 1.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 1.1 and will link to relevant public pages in their help centres.

To help keep our platform welcoming and authentic for everyone, we are focused on ensuring it is free from harmful misinformation. 

(I) Our policies and approach

Our Integrity & Authenticity (I&A) policies within our Community Guidelines (CGs) are the first line of defence in combating harmful misinformation and deceptive behaviours on our platform. All users are required to comply with our CGs, which set out the circumstances in which we will remove, or otherwise limit the availability of, content.

Paid ads are also subject to our ad policies and are reviewed against these policies before being allowed on our platform. Our ad policies specifically prohibit inaccurate, misleading, or false content that may cause significant harm to individuals or society, regardless of intent. They also prohibit other misleading, inauthentic and deceptive behaviours. Ads deemed in violation of these policies will not be permitted on our platform, and accounts deemed in severe or repeated violation may be suspended or banned.

In 2023, in order to improve our existing ad policies, we launched four granular policies in the EEA. The policies cover:
  • Medical Misinformation
  • Dangerous Misinformation
  • Synthetic and Manipulated Media
  • Dangerous Conspiracy Theories 

We have been working continuously on improving the implementation of these policies, and reflecting on whether there are further focus areas for which we should develop new policies. At the end of 2024, we launched a fifth granular ad policy covering climate misinformation. It prohibits false or misleading claims relating to climate change, such as denying the existence and impacts of climate change, falsely stating that the long-term impacts of climate mitigation strategies are worse than those of climate change itself, or undermining the validity or credibility of data or research that documents well-established scientific consensus.

Our ad policies require advertisers to meet a number of requirements regarding the landing page. For example, the landing page must be functioning and must contain complete and accurate information including about the advertiser. Ads risk not being approved if the product or service advertised on the landing page does not match that included in the ad.

In line with our approach of building a platform that brings people together, not divides them, we have long prohibited political ads and political branded content. Specifically, we do not allow paid ads (or landing pages) that promote or oppose a candidate, current leader, political party or group, or content that advocates a stance (for or against) on a local, state, or federal issue of public importance in order to influence a political decision or outcome. Similar rules apply to branded content.

We also classify certain accounts as Government, Politician, and Political Party Accounts (GPPPA) and have introduced restrictions on these at an account level. This means accounts belonging to governments, politicians and political parties automatically have their access to advertising features turned off. We make exceptions for governments in certain circumstances, e.g., to promote public health.

We make various brand safety tools available to advertisers to help ensure that their ads are not placed adjacent to content they do not consider to fit their brand values. While any content that violates our CGs, including our I&A policies, is removed, the brand safety tools are designed to help advertisers further protect their brand. For example, a family-oriented brand may not want to appear next to videos containing news-related content. We have adopted the industry-accepted framework in support of these principles.

(II) Verification in the context of ads

We provide verified badges to some accounts, including those of certain advertisers. Verified badges help users make informed choices about the accounts they choose to follow. It's an easy way for notable figures to let users know they’re seeing authentic content, and it helps to build trust among high-profile accounts and their followers. For individuals, non-profits, institutions, businesses, or official brand pages, this badge builds an important layer of clarity with the TikTok community. We consider a number of factors before granting a verified badge, such as whether the notable account is authentic, unique, and active.

We strengthen our approach to countering influence attempts by:

  • Making state-affiliated media accounts that attempt to reach communities outside their home country on current global events and affairs ineligible for recommendation, which means their content won't appear in the For You feed.
  • Prohibiting state-affiliated media accounts in all markets where our state-controlled media labels are available from advertising outside of the country with which they are primarily affiliated.
  • Investing in our detection capabilities of state-affiliated media accounts.
  • Working with third party external experts to shape our state-affiliated media policy and assessment of state-controlled media labels.

SLI 1.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict advertising on pages and/or domains that disseminate harmful Disinformation.

Methodology of data measurement: 

We have set out the number of ads that have been removed from our platform for violation of our political content policies, as well as our more granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media and dangerous conspiracy theories. We launched our granular climate misinformation policy towards the end of the reporting period and we look forward to sharing data on it once we have a full reporting period of data.

The majority of ads that violate our granular misinformation ad policies (previously four, now five) would in any case have been removed under our pre-existing policies. Where an ad violates both another policy and one of our more recent granular misinformation policies, the removal is counted under the pre-existing policy. The second data column below therefore shows only the number of ads removed where the sole reason was one of the four reported granular misinformation policies, and does not include ads already removed under our pre-existing policies or where misinformation policies were not the driving factor for the removal.
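To make the counting rule above concrete, the following is a minimal, purely illustrative sketch of the attribution logic in Python. The policy identifiers and the shape of the ad records are hypothetical stand-ins, not TikTok's internal schema; the sketch only encodes the rule described above, namely that a removal is credited to a granular misinformation policy only when that policy is the sole reason for removal.

```python
from collections import Counter

# Hypothetical policy identifiers; not TikTok's internal names.
EXISTING_POLICIES = {"political_content", "inaccurate_misleading_false"}
GRANULAR_POLICIES = {
    "medical_misinformation",
    "dangerous_misinformation",
    "synthetic_manipulated_media",
    "dangerous_conspiracy_theories",
}

def attribute_removal(violations):
    """Return the bucket a removal is counted under.

    Pre-existing policies take precedence: a granular misinformation
    policy is credited only when it is the sole reason for removal.
    """
    if violations & EXISTING_POLICIES:
        return "existing_policy"
    if violations & GRANULAR_POLICIES:
        return "granular_misinformation_policy"
    return "other"

# Toy example: three removed ads and the policies each violated.
removed_ads = [
    {"political_content"},                            # existing only
    {"political_content", "medical_misinformation"},  # overlap -> existing
    {"dangerous_conspiracy_theories"},                # granular only
]
print(Counter(attribute_removal(v) for v in removed_ads))
# Counter({'existing_policy': 2, 'granular_misinformation_policy': 1})
```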

We have focused on enforcing our political advertising prohibition and on strengthening our internal detection of political content, including by launching specialised training for political content moderators and auto-moderation strategies. The data below suggests that our pre-existing policies (such as our political content policy and our inaccurate, misleading, or false content policy) already cover the majority of harmful misinformation ads, given their expansive coverage.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. We note that H2 2024 covered a very busy election cycle in Europe, including in Romania, France and Ireland. 

Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies
Austria | 746 | 3
Belgium | 1,152 | 1
Bulgaria | 328 | 7
Croatia | 3 | 0
Cyprus | 128 | 0
Czech Republic | 111 | 0
Denmark | 409 | 0
Estonia | 90 | 0
Finland | 235 | 0
France | 4,621 | 7
Germany | 6,498 | 63
Greece | 911 | 8
Hungary | 512 | 2
Ireland | 565 | 1
Italy | 2,781 | 8
Latvia | 131 | 4
Lithuania | 19 | 0
Luxembourg | 86 | 0
Malta | 0 | 0
Netherlands | 1,179 | 3
Poland | 1,118 | 4
Portugal | 438 | 1
Romania | 10,698 | 2
Slovakia | 145 | 4
Slovenia | 52 | 0
Spain | 2,558 | 17
Sweden | 752 | 0
Iceland | 0 | 0
Liechtenstein | 0 | 0
Norway | 474 | 2
Total EU | 36,266 | 135
Total EEA | 36,740 | 137

Measure 1.2

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will tighten eligibility requirements and content review processes for content monetisation and ad revenue share programmes on their services as necessary to effectively scrutinise parties and bar participation by actors who systematically post content or engage in behaviours which violate policies mentioned in Measure 1.1 that tackle Disinformation.

QRE 1.2.1

Signatories will outline their processes for reviewing, assessing, and augmenting their monetisation policies in order to scrutinise and bar participation by actors that systematically provide harmful Disinformation.

All creators must comply with TikTok’s Community Guidelines, including our I&A policies. Where creators fail to comply with our Community Guidelines, they may lose access to monetisation features and/or their account. Users in all EU Member States are notified by an in-app notification in their local language where there has been a restriction of their ability to monetise, a restriction of their access to a feature, removal of or restricted access to their content, or a ban of their account. 

Our policies prohibit accounts verified as belonging to a government, politician or political party from accessing monetisation features. They will, for instance, be ineligible for participation in content monetisation programs such as our Creator Rewards Program. Along with our existing ban on political advertising, this means that accounts belonging to politicians, political parties and governments will not be able to give or receive money through TikTok's monetisation features, or spend money promoting their content (although exemptions are made for governments in certain circumstances such as for public health). 

We launched the Creator Code of Conduct in April 2024. It sets the standards we expect creators involved in TikTok programs, features, events and campaigns to follow on and off platform, in addition to our Community Guidelines and Terms of Service. Being part of these creator programs is an opportunity that comes with additional responsibilities, and the code also helps reassure creators that other participants are meeting these standards. We are actively improving our enforcement guidance and processes for this, including building on proactive signalling of off-platform activity.

SLI 1.2.1

Signatories will report on the number of policy reviews and/or updates to policies relevant to Measure 1.2 throughout the reporting period. In addition, Signatories will report on the numbers of accounts or domains barred from participation to advertising or monetisation as a result of these policies at the Member State level.

Methodology of data measurement:

Our I&A policies within our CGs are the first line of defence in combating harmful misinformation and deceptive behaviours on our platform. All creators are required to comply with our CGs, which set out the circumstances in which we will remove, or otherwise limit the availability of, content. Creators who breach the Community Guidelines or Terms of Service are not eligible to receive rewards. In SLI 1.1.1, we have set out the number of ads removed from our platform for violating our political content policies as well as our four more granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media and dangerous conspiracy theories. Further, SLI 1.1.2 aims to provide an estimate of the potential impact on revenue of demonetising disinformation. We are working towards being able to provide more data for this SLI. 

Country | Number of policy reviews | Number of policy updates | Number of accounts barred from advertising or monetisation | Number of domains barred from advertising or monetisation
Austria | 0 | 0 | 0 | 0
Belgium | 0 | 0 | 0 | 0
Bulgaria | 0 | 0 | 0 | 0
Croatia | 0 | 0 | 0 | 0
Cyprus | 0 | 0 | 0 | 0
Czech Republic | 0 | 0 | 0 | 0
Denmark | 0 | 0 | 0 | 0
Estonia | 0 | 0 | 0 | 0
Finland | 0 | 0 | 0 | 0
France | 0 | 0 | 0 | 0
Germany | 0 | 0 | 0 | 0
Greece | 0 | 0 | 0 | 0
Hungary | 0 | 0 | 0 | 0
Ireland | 0 | 0 | 0 | 0
Italy | 0 | 0 | 0 | 0
Latvia | 0 | 0 | 0 | 0
Lithuania | 0 | 0 | 0 | 0
Luxembourg | 0 | 0 | 0 | 0
Malta | 0 | 0 | 0 | 0
Netherlands | 0 | 0 | 0 | 0
Poland | 0 | 0 | 0 | 0
Portugal | 0 | 0 | 0 | 0
Romania | 0 | 0 | 0 | 0
Slovakia | 0 | 0 | 0 | 0
Slovenia | 0 | 0 | 0 | 0
Spain | 0 | 0 | 0 | 0
Sweden | 0 | 0 | 0 | 0
Iceland | 0 | 0 | 0 | 0
Liechtenstein | 0 | 0 | 0 | 0
Norway | 0 | 0 | 0 | 0

Measure 1.3

Relevant Signatories responsible for the selling of advertising, inclusive of publishers, media platforms, and ad tech companies, will take commercial and technically feasible steps, including support for relevant third-party approaches, to give advertising buyers transparency on the placement of their advertising.

QRE 1.3.1

Signatories will report on the controls and transparency they provide to advertising buyers with regards to the placement of their ads as it relates to Measure 1.3.

We partner with industry leaders to provide a range of controls and transparency tools to advertising buyers with regard to the placement of their ads:

Controls: We offer pre-campaign solutions to advertisers so they can put additional safeguards in place before their campaigns go live, mitigating the risk of their advertising being displayed adjacent to certain types of user-generated content. These measures are in addition to the CGs, which provide overarching rules on the types of content that can appear on TikTok and be eligible for the For You feed:

  • TikTok Inventory Filter: This is our proprietary system that enables advertisers to choose the profile of content they want their ads to run adjacent to. The Inventory Filter is now available in 29 jurisdictions in the EEA and is embedded directly in TikTok Ads Manager, the system through which advertisers purchase ads, and we have expanded its functionality in various EEA countries. More details can be found here. The Inventory Filter is informed by industry standards, and its policies include topics which may be susceptible to disinformation. A conceptual sketch of this kind of pre-campaign filtering follows this list.
  • TikTok Pre-bid Brand Safety Solution by Integral Ad Science (“IAS”): Advertisers can filter content based on industry-standard frameworks at all risk levels (available in France and Germany). Some misinformation content may be captured and filtered out by these industry-standard categories, such as “Sensitive Social Issues”.
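As referenced in the Inventory Filter bullet above, the following is a conceptual, purely illustrative sketch of how a pre-campaign inventory filter of this general kind can work: content is assigned a risk tier, and ads are only eligible to run adjacent to content at or below the tier the advertiser selects. The tier names, the content_risk field and the feed records are hypothetical; this is not the Inventory Filter's actual implementation.

```python
# Hypothetical inventory tiers, ordered from most to least restrictive.
TIERS = {"limited": 0, "standard": 1, "full": 2}

def eligible_adjacencies(contents, advertiser_tier):
    """Yield content items an ad may run adjacent to, given the
    advertiser's chosen inventory tier. Higher-risk content is
    filtered out before the campaign runs."""
    max_risk = TIERS[advertiser_tier]
    for item in contents:
        if TIERS[item["content_risk"]] <= max_risk:
            yield item

feed = [
    {"id": "v1", "content_risk": "full"},     # e.g. news-related content
    {"id": "v2", "content_risk": "limited"},  # broadly suitable content
]
print([c["id"] for c in eligible_adjacencies(feed, "standard")])  # ['v2']
```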

Transparency: We have partnered with third parties to offer post-campaign solutions that enable advertisers to assess the suitability of user content that ran immediately adjacent to their ad in the For You feed, against their chosen brand suitability parameters:

  • Zefr: Through our partnership with Zefr, advertisers can obtain campaign insights into brand suitability and safety on the platform (now available in 29 countries in the EEA). Zefr aligns with the Industry Standards.

  • IAS: Advertisers can measure brand safety, viewability and invalid traffic on the platform with the IAS Signal platform (post campaign is available in 28 countries in the EEA). As with IAS’s pre-bid solution covered above, this aligns with the GARM Framework. 

  • DoubleVerify: We are partnering with DoubleVerify to provide advertisers with media quality measurement for ads. DoubleVerify is working actively with us to expand their suite of brand suitability and media quality solutions on the platform. DoubleVerify is available in 27 EU countries.

Measure 1.4

Relevant Signatories responsible for the buying of advertising, inclusive of advertisers, and agencies, will place advertising through ad sellers that have taken effective, and transparent steps to avoid the placement of advertising next to Disinformation content or in places that repeatedly publish Disinformation.

QRE 1.4.1

Relevant Signatories that are responsible for the buying of advertising will describe their processes and procedures to ensure they place advertising through ad sellers that take the steps described in Measure 1.4.

When TikTok advertises, we buy advertising space only through ad networks (either directly or through publishers or agencies) that allow for direct measurement of brand safety and suitability via tagging, using leading brand safety tools across all digital media channels. This mitigates the risk of TikTok ads appearing next to sources of disinformation and keeps us in control of the environments our content appears in.

We use DoubleVerify to help ensure our own ads run on or near suitable content, while running and monitoring brand safety and suitability metrics across other placements and keeping the context and content of our blocklists up to date, so that the TikTok brand is protected in any context. 

For instance, we monitor the placement of our ads closely, especially in the context of politically sensitive events such as the war in Ukraine or the Israel / Hamas conflict. If our ads appear adjacent to or on sources of disinformation, we are able to identify and investigate the content in question and assess risks using DoubleVerify dashboards. Once identified, we adjust our filters or add the publication to our blocklist (which is regularly reviewed and updated) to prevent recurrence. 
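The following is a minimal, purely illustrative sketch of the kind of monitor-and-blocklist loop described above. The placement records, the classifier callback and the blocklist handling are hypothetical placeholders, not DoubleVerify's or TikTok's actual tooling.

```python
def review_placements(placements, blocklist, is_disinfo_source):
    """Flag ad placements on suspected disinformation sources and add
    those publishers to the blocklist so the placement cannot recur."""
    flagged = []
    for placement in placements:
        if placement["publisher"] in blocklist:
            continue  # already excluded going forward
        if is_disinfo_source(placement["publisher"]):
            blocklist.add(placement["publisher"])  # prevent recurrence
            flagged.append(placement)              # surface for investigation
    return flagged

blocklist = {"known-bad.example"}
placements = [
    {"ad_id": "a1", "publisher": "suspect.example"},
    {"ad_id": "a2", "publisher": "reputable.example"},
]
flagged = review_placements(placements, blocklist,
                            lambda pub: pub == "suspect.example")
print([p["ad_id"] for p in flagged])    # ['a1']
print("suspect.example" in blocklist)   # True
```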

Measure 1.5

Relevant Signatories involved in the reporting of monetisation activities inclusive of media platforms, ad networks, and ad verification companies will take the necessary steps to give industry-recognised relevant independent third-party auditors commercially appropriate and fair access to their services and data in order to: - First, confirm the accuracy of first party reporting relative to monetisation and Disinformation, seeking alignment with regular audits performed under the DSA. - Second, accreditation services should assess the effectiveness of media platforms' policy enforcement, including Disinformation policies.

QRE 1.5.1

Signatories that produce first party reporting will report on the access provided to independent third-party auditors as outlined in Measure 1.5 and will link to public reports and results from such auditors, such as MRC Content Level Brand Safety Accreditation, TAG Brand Safety certifications, or other similarly recognised industry accepted certifications.

We have achieved the TAG Brand Safety Certified seal and the TAG Certified Against Fraud seal from the Trustworthy Accountability Group (“TAG”) in the EEA and globally. This required appropriate verification by external auditors. Details of our TAG seals can be found by searching for “TikTok” on TAG's public register, which can be found here.

We have been certified by the Interactive Advertising Bureau (“IAB”) for the IAB Ireland Gold Standard 2.1 (listed here) and IAB Sweden Gold Standard 2.0.

QRE 1.5.2

Signatories that conduct independent accreditation via audits will disclose areas of their accreditation that have been updated to reflect needs in Measure 1.5.

We have achieved the TAG Brand Safety Certified and TAG Certified Against Fraud seals, as well as the IAB Ireland Gold Standard 2.1 and IAB Sweden Gold Standard 2.0 certifications.

Measure 1.6

Relevant Signatories will advance the development, improve the availability, and take practical steps to advance the use of brand safety tools and partnerships, with the following goals: - To the degree commercially viable, relevant Signatories will provide options to integrate information and analysis from source-raters, services that provide indicators of trustworthiness, fact-checkers, researchers or other relevant stakeholders providing information e.g., on the sources of Disinformation campaigns to help inform decisions on ad placement by ad buyers, namely advertisers and their agencies. - Advertisers, agencies, ad tech companies, and media platforms and publishers will take effective and reasonable steps to integrate the use of brand safety tools throughout the media planning, buying and reporting process, to avoid the placement of their advertising next to Disinformation content and/or in places or sources that repeatedly publish Disinformation. - Brand safety tool providers and rating services who categorise content and domains will provide reasonable transparency about the processes they use, insofar that they do not release commercially sensitive information or divulge trade secrets, and that they establish a mechanism for customer feedback and appeal.

QRE 1.6.1

Signatories that place ads will report on the options they provide for integration of information, indicators and analysis from source raters, services that provide indicators of trustworthiness, fact-checkers, researchers, or other relevant stakeholders providing information e.g. on the sources of Disinformation campaigns to help inform decisions on ad placement by buyers.

We offer a variety of brand safety tools to prevent ads from being placed beside specific types of content. 

We continue to invest in our existing partnerships with leading third party brand safety and suitability providers (including DoubleVerify, Integral Ad Science, and Zefr). 

We evaluate, on an ongoing basis, whether there are potential new partnerships, including with researchers, that may be appropriate for our platform. Furthermore, our advertising policies help to ensure that the categories of content which are most likely to require such checks and integration of information do not make it onto the platform in the first place. 

QRE 1.6.2

Signatories that purchase ads will outline the steps they have taken to integrate the use of brand safety tools in their advertising and media operations, disclosing what percentage of their media investment is protected by such services.

We only purchase ads through ad networks which make robust and reputable brand safety tools available to us. All of our media investment is therefore protected by such tools. 

QRE 1.6.3

Signatories that provide brand safety tools will outline how they are ensuring transparency and appealability about their processes and outcomes.

We have partnered with several third parties (IAS, DoubleVerify and Zefr) to offer post-campaign solutions that enable advertisers to assess the suitability of user content that ran immediately adjacent to their ads in all feeds.

QRE 1.6.4

Relevant Signatories that rate sources to determine if they persistently publish Disinformation shall provide reasonable information on the criteria under which websites are rated, make public the assessment of the relevant criteria relating to Disinformation, operate in an apolitical manner and give publishers the right to reply before ratings are published.

Not applicable as TikTok does not rate sources.

Commitment 2

Relevant Signatories participating in advertising commit to prevent the misuse of advertising systems to disseminate Disinformation in the form of advertising messages.

We signed up to the following measures of this commitment

Measure 2.1 Measure 2.2 Measure 2.3 Measure 2.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • To improve the granularity of our existing ad policies, we developed a specific climate misinformation ad policy.
  • Continued to enforce our four granular harmful misinformation ad policies in the EEA. As mentioned in our H2 2023 report, the policies cover:
    • Medical Misinformation
    • Dangerous Misinformation
    • Synthetic and Manipulated Media
    • Dangerous Conspiracy Theories 
  • Expanded the functionality (including the choices available to advertisers) of our in-house pre-campaign brand safety tool, the TikTok Inventory Filter, in the EEA. 
  • Upgraded our IAB Sweden Gold Standard certification to 2.0. 
  • We continue to engage in the Task-force and its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next report.

Measure 2.1

Relevant Signatories will develop, deploy, and enforce appropriate and tailored advertising policies that address the misuse of their advertising systems for propagating harmful Disinformation in advertising messages and in the promotion of content.

QRE 2.1.1

Signatories will disclose and outline the policies they develop, deploy, and enforce to meet the goals of Measure 2.1 and will link to relevant public pages in their help centres.

Paid ads are subject to our strict ad policies, which specifically prohibit misleading, inauthentic and deceptive behaviours. Ads are reviewed against these policies before being allowed on our platform. In order to improve our existing ad policies, we launched four more granular policies in the EEA in 2023 (covering Medical Misinformation, Dangerous Misinformation, Synthetic and Manipulated Media and Dangerous Conspiracy Theories) which advertisers also need to comply with. Towards the end of 2024, we launched a fifth granular policy covering climate misinformation.

SLI 2.1.1

Signatories will report, quantitatively, on actions they took to enforce each of the policies mentioned in the qualitative part of this service level indicator, at the Member State or language level. This could include, for instance, actions to remove, to block, or to otherwise restrict harmful Disinformation in advertising messages and in the promotion of content.

Methodology of data measurement:

We have set out the number of ads that have been removed from our platform for violating our political content policies, as well as our four granular policies on medical misinformation, dangerous misinformation, synthetic and manipulated media and dangerous conspiracy theories. We launched our climate misinformation policy towards the end of the reporting period and look forward to sharing data on it, along with our four other granular misinformation ad policies, once we have a full reporting period of data.

The majority of ads that violate our newly launched misinformation policies would have been removed under our existing policies. Where an ad is deemed violative of other policies as well as these additional misinformation policies, the removal is counted under the older policy. The second data column below therefore shows only the number of ads removed where the sole reason was one of these four additional misinformation policies, and does not include ads already removed under our existing policies or where misinformation policies were not the driving factor for the removal.

The data below suggests that our existing policies (such as our political content policy) already cover the majority of harmful misinformation ads, given their expansive coverage.

Note that numbers have only been provided for monetised markets and are based on where the ads were displayed. 

Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies
Austria | 746 | 3
Belgium | 1,152 | 1
Bulgaria | 328 | 7
Croatia | 3 | 0
Cyprus | 128 | 0
Czech Republic | 111 | 0
Denmark | 409 | 0
Estonia | 90 | 0
Finland | 235 | 0
France | 4,621 | 7
Germany | 6,498 | 63
Greece | 911 | 8
Hungary | 512 | 2
Ireland | 565 | 1
Italy | 2,781 | 8
Latvia | 131 | 4
Lithuania | 19 | 0
Luxembourg | 86 | 0
Malta | 0 | 0
Netherlands | 1,179 | 3
Poland | 1,118 | 4
Portugal | 438 | 1
Romania | 10,698 | 2
Slovakia | 145 | 4
Slovenia | 52 | 0
Spain | 2,558 | 17
Sweden | 752 | 0
Iceland | 0 | 0
Liechtenstein | 0 | 0
Norway | 474 | 2
Total EU | 36,266 | 135
Total EEA | 36,740 | 137

Measure 2.2

Relevant Signatories will develop tools, methods, or partnerships, which may include reference to independent information sources both public and proprietary (for instance partnerships with fact-checking or source rating organisations, or services providing indicators of trustworthiness, or proprietary methods developed internally) to identify content and sources as distributing harmful Disinformation, to identify and take action on ads and promoted content that violate advertising policies regarding Disinformation mentioned in Measure 2.1.

QRE 2.2.1

Signatories will describe the tools, methods, or partnerships they use to identify content and sources that contravene policies mentioned in Measure 2.1 - while being mindful of not disclosing information that'd make it easier for malicious actors to circumvent these tools, methods, or partnerships. Signatories will specify the independent information sources involved in these tools, methods, or partnerships.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:
  • Dangerous Misinformation
  • Dangerous Conspiracy Theories
  • Medical Misinformation
  • Synthetic and Manipulated Media
  • Climate Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary. 

TikTok also operates a "recall" process whereby ads already on TikTok go through an additional stage of review if certain conditions are met, including reaching certain impression thresholds. TikTok also conducts additional reviews of random samples of ads to ensure its processes are functioning as expected.
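The following is a minimal, purely illustrative sketch of how an impression-threshold "recall" check of this general kind can be structured. The threshold value, the sampling rate and the ad records are hypothetical placeholders; the actual recall conditions are not disclosed beyond the description above.

```python
import random

IMPRESSION_THRESHOLD = 10_000  # hypothetical value; real thresholds undisclosed
SAMPLE_RATE = 0.01             # hypothetical random-audit rate

def needs_rereview(ad):
    """Decide whether a live ad should be recalled for another review."""
    if ad["impressions"] >= IMPRESSION_THRESHOLD and not ad["rereviewed"]:
        return True                       # recall: high-reach ads get a second look
    if ad["user_reports"] > 0:
        return True                       # user reports always trigger re-review
    return random.random() < SAMPLE_RATE  # random sampling audits the pipeline

live_ads = [
    {"id": "a1", "impressions": 25_000, "user_reports": 0, "rereviewed": False},
    {"id": "a2", "impressions": 120, "user_reports": 2, "rereviewed": False},
]
print([ad["id"] for ad in live_ads if needs_rereview(ad)])
# ['a1', 'a2'] (a1 via the impression threshold, a2 via user reports)
```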

We work with 14 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

Measure 2.3

Relevant Signatories will adapt their current ad verification and review systems as appropriate and commercially feasible, with the aim of preventing ads placed through or on their services that do not comply with their advertising policies in respect of Disinformation to be inclusive of advertising message, promoted content, and site landing page.

QRE 2.3.1

Signatories will describe the systems and procedures they use to ensure that ads placed through their services comply with their advertising policies as described in Measure 2.1.

In order to identify content and sources that breach our ad policies, ads go through moderation prior to going “live” on the platform. 

TikTok places considerable emphasis on proactive moderation of advertisements. Advertisements and advertiser accounts are reviewed against our Advertising Policies at the pre-posting and post-posting stage through a combination of automated and human moderation.

The majority of ads that violate our misinformation policies would have been removed under our existing policies. Our granular advertising policies currently cover:
  • Dangerous Misinformation
  • Dangerous Conspiracy Theories
  • Medical Misinformation
  • Synthetic and Manipulated Media
  • Climate Misinformation

After the ad goes live on the platform, users can report any concerns using the “report” button, and the ad will be reviewed again and appropriate action taken if necessary.

TikTok also operates a "recall" process whereby ads already on TikTok go through an additional stage of review if certain conditions are met, including reaching certain impression thresholds. TikTok also conducts additional reviews of random samples of ads to ensure its processes are functioning as expected.

SLI 2.3.1

Signatories will report quantitatively, at the Member State level, on the ads removed or prohibited from their services using procedures outlined in Measure 2.3. In the event of ads successfully removed, parties should report on the reach of violatory content and advertising.

We are pleased to be able to report, in this report, on the ads removed for breaching our political content policies as well as our more granular misinformation ad policies, including the impressions of those ads. We launched our climate misinformation policy towards the end of the reporting period and look forward to sharing data on it, along with our four other granular misinformation ad policies, once we have a full reporting period of data.

Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies | Number of impressions for ads removed under the political content ad policy | Number of impressions for ads removed under the four granular misinformation ad policies
Austria | 746 | 3 | 2,405,688 | 0
Belgium | 1,152 | 1 | 414,078 | 16,971
Bulgaria | 328 | 7 | 21,839 | 0
Croatia | 3 | 0 | 69 | 0
Cyprus | 128 | 0 | 10,838 | 0
Czech Republic | 111 | 0 | 187,494 | 0
Denmark | 409 | 0 | 1,333,325 | 12,268
Estonia | 90 | 0 | 14,889 | 0
Finland | 235 | 0 | 7,543,943 | 0
France | 4,621 | 7 | 14,427,406 | 510
Germany | 6,498 | 63 | 45,161,261 | 0
Greece | 911 | 8 | 512,170 | 12,873
Hungary | 512 | 2 | 3,675,505 | 0
Ireland | 565 | 1 | 1,341,419 | 0
Italy | 2,781 | 8 | 6,836,564 | 12,029
Latvia | 131 | 4 | 4,551 | 0
Lithuania | 19 | 0 | 59,348 | 0
Luxembourg | 86 | 0 | 5,472 | 0
Malta | 0 | 0 | 0 | 0
Netherlands | 1,179 | 3 | 879,250 | 1,048
Poland | 1,118 | 4 | 610,009 | 0
Portugal | 438 | 1 | 409,358 | 0
Romania | 10,698 | 2 | 27,208,895 | 0
Slovakia | 145 | 4 | 52,215 | 0
Slovenia | 52 | 0 | 53,989 | 0
Spain | 2,558 | 17 | 9,622,981 | 8,551
Sweden | 752 | 0 | 4,565,753 | 0
Iceland | 0 | 0 | 0 | 0
Liechtenstein | 0 | 0 | 0 | 0
Norway | 474 | 2 | 120,449 | 1,367
Total EU | 36,266 | 135 | 127,358,309 | 64,250
Total EEA | 36,740 | 137 | 127,478,758 | 65,617

Measure 2.4

Relevant Signatories will provide relevant information to advertisers about which advertising policies have been violated when they reject or remove ads violating policies described in Measure 2.1 above or disable advertising accounts in application of these policies and clarify their procedures for appeal.

QRE 2.4.1

Signatories will describe how they provide information to advertisers about advertising policies they have violated and how advertisers can appeal these policies.

We are clear with advertisers that their ads must comply with our strict ad policies (see the TikTok Business Help Centre). We explain that all ads are reviewed before going live on our platform, usually within 24 hours. Ads already on TikTok may go through an additional stage of review if they are reported, if certain conditions are met (e.g., reaching certain impression thresholds), or as part of random sampling conducted at TikTok’s own initiative.

Where an advertiser has violated an ad policy, they are informed by way of a notification, visible in their TikTok Ads Manager account and/or sent by email (if they have provided a valid email address); where an advertiser has booked their ad through a TikTok representative, the representative will inform them of any violations. Advertisers can use appeal functionality to challenge rejections of their ads in certain circumstances. 

As part of our overarching DSA compliance programme, we have improved how we notify advertisers and increased transparency towards them. Notifications of restrictions include the restriction itself, the reason for it, whether we made the decision by automated means, how we detected the violation (e.g., via a user report or proactive TikTok initiatives), and the advertiser's rights of redress. Advertisers can access online functionality to appeal restrictions on their account or ads. These appeals are then reviewed against our ad policies, and additional information may be provided to advertisers to help them understand the violation and what to do about it.
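To illustrate the elements such a restriction notice carries, the following sketch models a notification record as a Python dataclass. The field and type names are hypothetical; they simply mirror the items listed above (the restriction, the reason, whether the decision was automated, how the violation was detected, and the available redress) and are not an actual TikTok API.

```python
from dataclasses import dataclass
from enum import Enum

class DetectionSource(Enum):
    USER_REPORT = "user_report"
    PROACTIVE_REVIEW = "proactive_review"  # TikTok-initiated detection

@dataclass
class RestrictionNotice:
    """Hypothetical shape of an advertiser restriction notification."""
    restriction: str          # e.g. "ad_rejected", "account_suspended"
    policy_violated: str      # which ad policy was breached
    reason: str               # human-readable explanation of the decision
    automated_decision: bool  # whether the decision was made by automated means
    detection_source: DetectionSource
    redress: str              # how the advertiser can appeal

notice = RestrictionNotice(
    restriction="ad_rejected",
    policy_violated="medical_misinformation",
    reason="Ad makes prohibited medical claims.",
    automated_decision=False,
    detection_source=DetectionSource.PROACTIVE_REVIEW,
    redress="Appeal via the functionality in TikTok Ads Manager.",
)
print(notice.restriction, notice.policy_violated)
```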

SLI 2.4.1

Signatories will report quantitatively, at the Member State level, on the number of appeals per their standard procedures they received from advertisers on the application of their policies and on the proportion of these appeals that led to a change of the initial policy decision.

We are pleased to be able to share the number of appeals of ads removed under our political content ad policy and our four granular misinformation ad policies, as well as the number of respective overturns. The data shows a reduced number of appeals for ads removed under the political content policy, evidencing our improved moderation and decision-making processes. We launched our climate misinformation policy towards the end of the reporting period and look forward to sharing data on it, along with our four other granular misinformation ad policies, once we have a full reporting period of data.

Country | Number of ad removals under the political content ad policy | Number of ad removals under the four granular misinformation ad policies | Number of impressions for ads removed under the political content ad policy | Number of impressions for ads removed under the four granular misinformation ad policies
Austria | 746 | 3 | 2,405,688 | 0
Belgium | 1,152 | 1 | 414,078 | 16,971
Bulgaria | 328 | 7 | 21,839 | 0
Croatia | 3 | 0 | 69 | 0
Cyprus | 128 | 0 | 10,838 | 0
Czech Republic | 111 | 0 | 187,494 | 0
Denmark | 409 | 0 | 1,333,325 | 12,268
Estonia | 90 | 0 | 14,889 | 0
Finland | 235 | 0 | 7,543,943 | 0
France | 4,621 | 7 | 14,427,406 | 510
Germany | 6,498 | 63 | 45,161,261 | 0
Greece | 911 | 8 | 512,170 | 12,873
Hungary | 512 | 2 | 3,675,505 | 0
Ireland | 565 | 1 | 1,341,419 | 0
Italy | 2,781 | 8 | 6,836,564 | 12,029
Latvia | 131 | 4 | 4,551 | 0
Lithuania | 19 | 0 | 59,348 | 0
Luxembourg | 86 | 0 | 5,472 | 0
Malta | 0 | 0 | 0 | 0
Netherlands | 1,179 | 3 | 879,250 | 1,048
Poland | 1,118 | 4 | 610,009 | 0
Portugal | 438 | 1 | 409,358 | 0
Romania | 10,698 | 2 | 27,208,895 | 0
Slovakia | 145 | 4 | 52,215 | 0
Slovenia | 52 | 0 | 53,989 | 0
Spain | 2,558 | 17 | 9,622,981 | 8,551
Sweden | 752 | 0 | 4,565,753 | 0
Iceland | 0 | 0 | 0 | 0
Liechtenstein | 0 | 0 | 0 | 0
Norway | 474 | 2 | 120,449 | 1,367
Total EU | 36,266 | 135 | 127,358,309 | 64,250
Total EEA | 36,740 | 137 | 127,478,758 | 65,617

Commitment 3

Relevant Signatories involved in buying, selling and placing digital advertising commit to exchange best practices and strengthen cooperation with relevant players, expanding to organisations active in the online monetisation value chain, such as online e-payment services, e-commerce platforms and relevant crowd-funding/donation systems, with the aim to increase the effectiveness of scrutiny of ad placements on their own services.

We signed up to the following measures of this commitment

Measure 3.1 Measure 3.2 Measure 3.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

We continue to engage in the Task-force and all its working groups and subgroups such as the working subgroup on Elections (Crisis Response).

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next report.

Measure 3.1

Relevant Signatories will cooperate with platforms, advertising supply chain players, source-rating services, services that provide indicators of trustworthiness, fact-checking organisations, advertisers and any other actors active in the online monetisation value chain, to facilitate the integration and flow of information, in particular information relevant for tackling purveyors of harmful Disinformation, in full respect of all relevant data protection rules and confidentiality agreements.

QRE 3.1.1

Signatories will outline how they work with others across industry and civil society to facilitate the flow of information that may be relevant for tackling purveyors of harmful Disinformation.

As set out later in this report, we cooperate with a number of third parties to facilitate the flow of information that may be relevant for tackling purveyors of harmful misinformation. This information is shared internally to help ensure consistency of approach across our platform.

We also continue to be actively involved in the Task-force working group for Chapter 2, specifically the working subgroup on Elections (Crisis Response), which we co-chaired. We work with other signatories to define and outline metrics regarding the monetary reach and impact of harmful misinformation, and we collaborate closely with industry to ensure alignment and clarity on the reporting of these Code requirements.

We work with 14 fact-checking partners who provide fact-checking coverage in 23 EEA languages, including at least one official language of every EU Member State, plus Georgian, Russian, Turkish, and Ukrainian.

Measure 3.2

Relevant Signatories will exchange among themselves information on Disinformation trends and TTPs (Tactics, Techniques, and Procedures), via the Code Task-force, GARM, IAB Europe, or other relevant fora. This will include sharing insights on new techniques or threats observed by Relevant Signatories, discussing case studies, and other means of improving capabilities and steps to help remove Disinformation across the advertising supply chain - potentially including real-time technical capabilities.

QRE 3.2.1

Signatories will report on their discussions within fora mentioned in Measure 3.2, being mindful of not disclosing information that is confidential and/or that may be used by malicious actors to circumvent the defences set by Signatories and others across the advertising supply chain. This could include, for instance, information about the fora Signatories engaged in; about the kinds of information they shared; and about the learnings they derived from these exchanges.

We work with industry partners, in appropriate fora, to discuss common standards and definitions that support consistency in categorising content and in adjacency and measurement topics. We work closely with IAB Sweden, IAB Ireland and other organisations such as TAG, in the EEA and globally. We also sit on the board of the Brand Safety Institute. 

We continue to share relevant insights and metrics in our quarterly transparency reports, which aim to inform industry peers and the research community. We also continue to engage in the subgroups set up for insight sharing between signatories and the Commission.

Measure 3.3

Relevant Signatories will integrate the work of or collaborate with relevant third-party organisations, such as independent source-rating services, services that provide indicators of trustworthiness, fact-checkers, researchers, or open-source investigators, in order to reduce monetisation of Disinformation and avoid the dissemination of advertising containing Disinformation.

QRE 3.3.1

Signatories will report on the collaborations and integrations relevant to their work with organisations mentioned.

We continue to work closely with IAB Sweden, IAB Ireland and other organisations such as TAG in the EEA and globally.

Political Advertising

Commitment 4

Relevant Signatories commit to adopt a common definition of "political and issue advertising".

We signed up to the following measures of this commitment

Measure 4.1 Measure 4.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

As we prohibit political advertising, we are continuing to focus on enforcing this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 4.1

Relevant Signatories commit to define "political and issue advertising" in this section in line with the definition of "political advertising" set out in the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

QRE 4.1.1

Relevant Signatories will declare the relevant scope of their commitment at the time of reporting and publish their relevant policies, demonstrating alignment with the European Commission's proposal for a Regulation on the transparency and targeting of political advertising.

TikTok is first and foremost an entertainment platform, and we're proud to be a place that brings people together through creative and entertaining content. While sharing political beliefs and engaging in political conversation is allowed as organic content on TikTok, our policies prohibit our community, including politicians and political party accounts, from placing political ads or posting political branded content.

Specifically, our Politics, Culture and Religion policy prohibits ads and landing pages which: 
  • reference, promote, or oppose candidates or nominees for public office, political parties, or elected or appointed government officials;
  • reference an election, including voter registration, voter turnout, and appeals for votes;
  • include advocacy for or against past, current, or proposed referenda, ballot measures, and legislative, judicial, or regulatory outcomes or processes (including those that promote or attack government policies or track records); and
  • reference, promote, or sell merchandise that features prohibited individuals, entities, or content, including campaign slogans, symbols, or logos.
Where accounts are designated as Government, Politician, and Political Party Accounts (“GPPPA”), those accounts are banned from placing ads on TikTok, from accessing monetisation features, and from campaign fundraising. We may allow some cause-based and public-service advertising from government agencies, non-profits and other entities if they meet certain conditions and work with a TikTok sales representative.

We prohibit political content in branded content, i.e., content posted in exchange for payment or any other incentive from a third party.

We have been reviewing our policies to ensure that our prohibition is at least as broad as that defined by Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising. Our prohibition on political advertising is one part of our election integrity efforts, which you can read more about in the elections crisis reports. 

QRE 4.1.2

After the first year of the Code's operation, Relevant Signatories will state whether they assess that further work with the Task-force is necessary and the mechanism for doing so, in line with Measure 4.2.

Not applicable at this stage. 

Commitment 5

Relevant Signatories commit to apply a consistent approach across political and issue advertising on their services and to clearly indicate in their advertising policies the extent to which such advertising is permitted or prohibited on their services.

We signed up to the following measures of this commitment

Measure 5.1

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcing this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 5.1

Relevant Signatories will apply the labelling, transparency and verification principles (as set out below) across all ads relevant to their Commitments 4 and 5. They will publicise their policy rules or guidelines pertaining to their service's definition(s) of political and/or issue advertising in a publicly available and easily understandable way.

QRE 5.1.1

Relevant Signatories will report on their policy rules or guidelines and on their approach towards publicising them.

Not applicable as TikTok does not allow political advertising, as outlined in our Politics, Culture and Religion policy. We do not allow political content in any form of advertising; this prohibition extends both to government, politician, and political party accounts and to non-political advertisers expressing political views in ads.

Commitment 6

Relevant Signatories commit to make political or issue ads clearly labelled and distinguishable as paid-for content in a way that allows users to understand that the content displayed contains political or issue advertising.

We signed up to the following measures of this commitment

Measure 6.1 Measure 6.2 Measure 6.3 Measure 6.4 Measure 6.5

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcing this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 6.1

Relevant Signatories will develop a set of common best practices and examples for marks and labels on political or issue ads and integrate those learnings as relevant to their services.

QRE 6.1.1

Relevant Signatories will publicise the best practices and examples developed as part of Measure 6.1 and describe how they relate to their relevant services.

Not applicable as TikTok does not allow political advertising.

Measure 6.2

Relevant Signatories will ensure that relevant information, such as the identity of the sponsor, is included in the label attached to the ad or is otherwise easily accessible to the user from the label.

QRE 6.2.1

Relevant Signatories will publish examples of how sponsor identities and other relevant information are attached to ads or otherwise made easily accessible to users from the label.

Not applicable as TikTok does not allow political advertising.

QRE 6.2.2

Relevant Signatories will publish their labelling designs.

Not applicable as TikTok does not allow political advertising.

Measure 6.3

Relevant Signatories will invest and participate in research to improve users' identification and comprehension of labels, discuss the findings of said research with the Task-force, and will endeavour to integrate the results of such research into their services where relevant.

QRE 6.3.1

Relevant Signatories will publish relevant research into understanding how users identify and comprehend labels on political or issue ads and report on the steps they have taken to ensure that users are consistently able to do so and to improve the labels' potential to attract users' awareness.

Not applicable as TikTok does not allow political advertising.

Measure 6.4

Relevant Signatories will ensure that once a political or issue ad is labelled as such on their platform, the label remains in place when users share that same ad on the same platform, so that they continue to be clearly identified as paid-for political or issue content.

QRE 6.4.1

Relevant Signatories will describe the steps they put in place to ensure that labels remain in place when users share ads.

Not applicable as TikTok does not allow political advertising.

Measure 6.5

Relevant Signatories that provide messaging services will, where possible and when in compliance with local law, use reasonable efforts to work towards improving the visibility of labels applied to political advertising shared over messaging services. To this end they will use reasonable efforts to develop solutions that facilitate users recognising, to the extent possible, paid-for content labelled as such on their online platform when shared over their messaging services, without any weakening of encryption and with due regard to the protection of privacy.

QRE 6.5.1

Relevant Signatories will report on any solutions in place to empower users to recognise paid-for content as outlined in Measure 6.5.

This commitment is not applicable as TikTok is not a messaging app.

Commitment 7

Relevant Signatories commit to put proportionate and appropriate identity verification systems in place for sponsors and providers of advertising services acting on behalf of sponsors placing political or issue ads. Relevant signatories will make sure that labelling and user-facing transparency requirements are met before allowing placement of such ads.

We signed up to the following measures of this commitment

Measure 7.1 Measure 7.2 Measure 7.3 Measure 7.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcing this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force, with the majority of its provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 7.1

Relevant Signatories will make sure the sponsors and providers of advertising services acting on behalf of sponsors purchasing political or issue ads have provided the relevant information regarding their identity to verify (and re-verify where appropriate) said identity or the sponsors they are acting on behalf of before allowing placement of such ads.

QRE 7.1.1

Relevant Signatories will report on the tools and processes in place to collect and verify the information outlined in Measure 7.1.1, including information on the timeliness and proportionality of said tools and processes.

Where accounts are designated as Government, Politician, and Political Party Accounts (“GPPPA”), those accounts are banned from placing ads on TikTok (with the exception of certain government agencies that may have a specific reason to advertise, e.g. to promote public health initiatives) and from monetisation features. We publish the details of our GPPPA policy on our website, where we set out who we consider to be a GPPPA and the restrictions on those types of account. In our TikTok Business Help Centre, we explain how a government agency should act on our platform and what it may advertise.

In the EU, we apply an internal label to accounts belonging to a government, politician, or political party. Once an account has been labelled in this manner, a number of policies will be applied that help prevent misuse of certain features e.g., access to advertising features and solicitation for campaign fundraising are not allowed.

Measure 7.2

Relevant Signatories will complete verifications processes described in Commitment 7 in a timely and proportionate manner.

QRE 7.2.1

Relevant Signatories will report on the actions taken against actors demonstrably evading the said tools and processes, including any relevant policy updates.

Not applicable as TikTok does not allow political advertising. 

Our Actor Policy aims to protect the integrity and authenticity of our community and prevent actors from evading our tools and processes. If an actor consistently demonstrates behaviour that deceives, misleads or is inauthentic to users and/or to TikTok, we apply account-level enforcement. This is not exclusive to ads containing political content.

TikTok is dedicated to investigating and disrupting confirmed cases of covert influence operations (CIOs) on the platform. CIOs are organised attempts to manipulate or corrupt public debate while also misleading TikTok systems or users about identity, origin, operating location, popularity, or overall purpose. Suspension logic is strike-based, taking into account ad-level violations and advertiser account behaviours. Confirmed critical policy violations lead to permanent suspension. Further information on our policy can be found in our Business Help Centre Article.

QRE 7.2.2

Relevant Signatories will provide information on the timeliness and proportionality of the verification process.

Not applicable as TikTok does not allow political advertising.

Measure 7.3

Relevant Signatories will take appropriate action, such as suspensions or other account-level penalties, against political or issue ad sponsors who demonstrably evade verification and transparency requirements via on-platform tactics. Relevant Signatories will develop - or provide via existing tools - functionalities that allow users to flag ads that are not labelled as political.

QRE 7.3.1

Relevant Signatories will report on the tools and processes in place to request a declaration on whether the advertising service requested constitutes political or issue advertising.

Not applicable as TikTok does not allow political advertising.

QRE 7.3.2

Relevant Signatories will report on policies in place against political or issue ad sponsors who demonstrably evade verification and transparency requirements on-platform.

Not applicable as TikTok does not allow political advertising.

Measure 7.4

Relevant Signatories commit to request that sponsors, and providers of advertising services acting on behalf of sponsors, declare whether the advertising service they request constitutes political or issue advertising.

QRE 7.4.1

Relevant Signatories will report on research and publish data on the effectiveness of measures they take to verify the identity of political or issue ad sponsors.

Not applicable as TikTok does not allow political advertising.

Commitment 8

Relevant Signatories commit to provide transparency information to users about the political or issue ads they see on their service.

We signed up to the following measures of this commitment

Measure 8.1 Measure 8.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcement of this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force and the majority of provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 8.2

Relevant Signatories will provide a direct link from the ad to the ad repository.

QRE 8.2.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard.

Not applicable as TikTok does not allow political advertising. 

Commitment 9

Relevant Signatories commit to provide users with clear, comprehensible, comprehensive information about why they are seeing a political or issue ad.

We signed up to the following measures of this commitment

Measure 9.1 Measure 9.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcement of this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force and the majority of provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 9.2

Relevant Signatories will explain in simple, plain language, the rationale and the tools used by the sponsors and providers of advertising services acting on behalf of sponsors (for instance: demographic, geographic, contextual, interest or behaviourally-based) to determine that a political or issue ad is displayed specifically to the user.

QRE 9.2.1

Relevant Signatories will describe the tools and features in place to provide users with the information outlined in Measures 9.1 and 9.2, including relevant examples for each targeting method offered by the service.

Not applicable as TikTok does not allow political advertising.

Commitment 10

Relevant Signatories commit to maintain repositories of political or issue advertising and ensure their currentness, completeness, usability and quality, such that they contain all political and issue advertising served, along with the necessary information to comply with their legal obligations and with transparency commitments under this Code.

We signed up to the following measures of this commitment

Measure 10.1 Measure 10.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcement of this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force and the majority of provisions applying from October 2025.

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 10.2

The information in such ad repositories will be publicly available for at least 5 years.

QRE 10.2.1

Relevant Signatories will detail the availability, features, and updating cadence of their repositories to comply with Measures 10.1 and 10.2. Relevant Signatories will also provide quantitative information on the usage of the repositories, such as monthly usage.

Not applicable as TikTok does not allow political advertising.

In compliance with our obligations pursuant to the Digital Services Act, TikTok maintains a publicly searchable Ad Library that features ads that TikTok has been paid to display to users, including those that are not currently active or have been paused by the advertisers. This includes information on the total number of recipients reached, with aggregate numbers broken down by Member State for the group or groups of recipients that the ad specifically targeted, including for political ads which have been removed. Each ad entry is available for the duration that it is shown on TikTok and for a year afterwards in compliance with the Digital Services Act. 

Article 39(3) of the Digital Services Act requires that, where an ad has been removed for incompatibility with a platform’s terms and conditions, such libraries must not include the content of the ad, the identity of the person on whose behalf it was presented, or who paid for it. As political ads are prohibited on TikTok, in order to comply with its legal obligations TikTok must remove these specific details of any political ads that have been removed from its platform (as such ads breach its terms and conditions). For this reason, TikTok’s ad library is required to display different information in respect of political ads compared with platforms that do allow them.

Commitment 11

Relevant Signatories commit to provide application programming interfaces (APIs) or other interfaces enabling users and researchers to perform customised searches within their ad repositories of political or issue advertising and to include a set of minimum functionalities as well as a set of minimum search criteria for the application of APIs or other interfaces.

We signed up to the following measures of this commitment

Measure 11.1 Measure 11.2 Measure 11.3 Measure 11.4

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Commitment 13

Relevant Signatories agree to engage in ongoing monitoring and research to understand and respond to risks related to Disinformation in political or issue advertising.

We signed up to the following measures of this commitment

Measure 13.1 Measure 13.2 Measure 13.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

No

If yes, list these implementation measures here

N/A

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

We prohibit political advertising and are continuing to focus on enforcement of this policy in light of Regulation EU 2024/900 on the Transparency and Targeting of Political Advertising coming into force and the majority of provisions applying from October 2025. 

If yes, which further implementation measures do you plan to put in place in the next 6 months?

N/A

Measure 13.1

Relevant Signatories agree to work individually and together through the Task-force to identify novel and evolving disinformation risks in the uses of political or issue advertising and discuss options for addressing those risks.

QRE 13.1.1

Through the Task-force, the Relevant Signatories will convene, at least annually, an appropriately resourced discussion around novel risks in political advertising to develop coordinated policy.

Whilst we do not allow political advertising, we remain engaged in discussions held through the Task-force and other fora to ensure our policies, processes and enforcement remain current and address emerging threats.

Measure 13.2

TikTok does not allow political advertising, and this prohibition continues during blackout periods.

Integrity of Services

Commitment 14

In order to limit impermissible manipulative behaviours and practices across their services, Relevant Signatories commit to put in place or further bolster policies to address both misinformation and disinformation across their services, and to agree on a cross-service understanding of manipulative behaviours, actors and practices not permitted on their services. Such behaviours and practices include:
  • The creation and use of fake accounts, account takeovers and bot-driven amplification
  • Hack-and-leak operations
  • Impersonation
  • Malicious deep fakes
  • The purchase of fake engagements
  • Non-transparent paid messages or promotion by influencers
  • The creation and use of accounts that participate in coordinated inauthentic behaviour
  • User conduct aimed at artificially amplifying the reach or perceived public support for disinformation

We signed up to the following measures of this commitment

Measure 14.1 Measure 14.2 Measure 14.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Building on our new AI-generated label for creators to disclose content that is completely AI-generated or significantly edited by AI, we have expanded our efforts in the AIGC space by:
    • Implementing the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognize and automatically label AIGC.
    • Supporting the coalition’s working groups as a C2PA General Member.
    • Joining the Content Authenticity Initiative (CAI) to drive wider adoption of the technical standard.
    • Publishing a new Transparency Center article Supporting responsible, transparent AI-generated content.
    • Building on our new AI-generated content label for creators, and implementation of C2PA Content Credentials, we launched a number of media literacy campaigns with guidance from expert organisations like Mediawise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content. This AIGC Transparency Campaign informed by WITNESS has reached 80M users globally, including more than 8.5M and 9.5M in Germany and France respectively.
  • Continued, alongside industry partners, to be a party to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • Continued to participate in the working groups on integrity of services and Generative AI.
  • We have continued to enhance our ability to detect covert influence operations. To provide more regular and detailed updates about the covert influence operations we disrupt, we have a dedicated Transparency Report on covert influence operations, which is available in TikTok’s transparency centre. In this report, we include information about operations that we have previously removed and that have attempted to return to our platform with new accounts.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report.

Measure 14.1

Relevant Signatories will adopt, reinforce and implement clear policies regarding impermissible manipulative behaviours and practices on their services, based on the latest evidence on the conducts and tactics, techniques and procedures (TTPs) employed by malicious actors, such as the AMITT Disinformation Tactics, Techniques and Procedures Framework.

QRE 14.1.1

Relevant Signatories will list relevant policies and clarify how they relate to the threats mentioned above as well as to other Disinformation threats.

In addition to safeguarding against harmful misinformation (see QRE 18.2.1), our I&A policies in our CGs also expressly prohibit deceptive behaviours. Our policies on deceptive behaviours relate to the TTPs as follows:

TTPs which pertain to the creation of assets for the purpose of a disinformation campaign, and to ways to make these assets seem credible: 

Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts)  

Our I&A policies which address Spam and Deceptive Account Behaviours expressly prohibit account behaviours that may spam or mislead our community. You can set up multiple accounts on TikTok to create different channels for authentic creative expression, but not for deceptive purposes.

We do not allow spam including:
  • Operating large networks of accounts controlled by a single entity, or through automation;
  • Bulk distribution of a high volume of spam; and
  • Manipulation of engagement signals to amplify the reach of certain content, or buying and selling followers, particularly for financial purposes.

We also do not allow impersonation including:
  • Accounts that pose as another real person or entity without disclosing that they are a fan or parody account in the account name, such as using someone's name, biographical details, content, or image without disclosing it
  • Presenting as a person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform

If we determine someone has engaged in any of these deceptive account behaviours, we will ban the account, and may ban any new accounts that are created.

Use of fake / inauthentic reactions (e.g. likes, up votes, comments) and use of fake followers or subscribers

Our I&A policies which address fake engagement do not allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok’s recommendation system. We do not allow our users to: 

  • facilitate the trade or marketing of services that artificially increase engagement, such as selling followers or likes; or
  • provide instructions on how to artificially increase engagement on TikTok.

If we become aware of accounts or content with inauthentically inflated metrics, we will remove the associated fake followers or likes. Content that tricks or manipulates others as a way to increase engagement metrics, such as “like-for-like” promises and false incentives for engaging with content (to increase gifts, followers, likes, views, or other engagement metrics), is ineligible for our For You feed.

Creation of inauthentic pages, groups, chat groups, fora, or domains 
TikTok does not have pages, groups, chat groups, fora or domains. This TTP is not relevant to our platform.

Account hijacking or Impersonation
Again, our policies prohibit impersonation, which refers to accounts that pose as another real person or entity, or that present as a person or entity that does not exist (a fake persona), with a demonstrated intent to mislead others on the platform. Our users are not allowed to use someone else's name, biographical details, or profile picture in a misleading manner. 

In order to protect freedom of expression, we do allow accounts that are clearly parody, commentary, or fan-based, such as where the account name indicates that it is a fan, commentary, or parody account and not affiliated with the subject of the account. We continue to develop our policies to ensure that impersonation of entities (such as businesses or educational institutions) is prohibited and that accounts which impersonate people or entities who are not on the platform are also prohibited. We also issue warnings to users of suspected impersonation accounts and do not recommend those accounts on our For You Feed.

We also have a number of policies that address account hijacking. Our privacy and security policies under our CGs expressly prohibit users from providing access to their account credentials to others or enabling others to conduct activities against our CGs. We do not allow access to any part of TikTok through unauthorised methods; attempts to obtain sensitive, confidential, commercial, or personal information; or any abuse of the security, integrity, or reliability of our platform. We also provide practical guidance to users if they have concerns that their account may have been hacked.

TTPs which pertain to the dissemination of content created in the context of a disinformation campaign, which may or may not include some forms of targeting or attempting to silence opposing views: 

  • Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation)
  • Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about popularity of content, including by influencers)
  • Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers
  • Coordinated mass reporting of non-violative opposing content or accounts

We fight against CIOs as our policies prohibit attempts to sway public opinion while also misleading our systems or users about the identity, origin, approximate location, popularity or overall purpose.

When we investigate and remove these operations, we focus on behaviour, assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing. We know that CIOs will continue to evolve in response to our detection, and networks may attempt to reestablish a presence on our platform; that is why we take continuous action against these attempts, including banning accounts found to be linked with previously disrupted networks. We continue to iteratively research and evaluate complex deceptive behaviours on our platform and develop appropriate product and policy solutions over the long term. We publish all of the CIO networks we identify and remove voluntarily in a dedicated report within our transparency centre here.

Use “hack and leak” operation (which may or may not include doctored content) 

We have a number of policies that address hack and leak related threats (some examples are below):
  • Our hack and leak policy, which aims to further reduce the harms inflicted by the unauthorised disclosure of hacked materials on the individuals, communities and organisations that may be implicated or exposed by such disclosures
  • Our CIO policy addresses use of leaked documents to sway public opinion as part of a wider operation
  • Our Edited Media and AI-Generated Content (AIGC) policy captures materials that have been digitally altered without an appropriate disclosure
  • Our harmful misinformation policies combat conspiracy theories related to unfolding events and dangerous misinformation
  • Our Trade of Regulated Goods and Services policy prohibits trading of hacked goods

Deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)  

Our ‘Edited Media and AI-Generated Content (AIGC)’ policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed. We also do not allow content that contains the likeness of young people, or the likeness of adult private figures used without their permission.

For the purposes of our policy, AIGC refers to content created or modified by artificial intelligence (AI) technology or machine-learning processes, which may include images of real people, and may show highly realistic-appearing scenes, or use a particular artistic style, such as a painting, cartoons, or anime. ‘Significantly edited content’ is content that shows people doing or saying something they did not do or say, or that alters their appearance in a way that makes them difficult to recognise or identify. Misleading AIGC or edited media is audio or visual content that has been edited, including by combining different clips together, to change the composition, sequencing, or timing in a way that alters the meaning of the content and could mislead viewers about the truth of real world events.
 
In accordance with our policy, we prohibit AIGC that features:
  • Realistic-appearing people under the age of 18
  • The likeness of adult private figures, if we become aware it was used without their permission
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
    • A crisis event, such as a conflict or natural disaster
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
      • being politically endorsed or condemned by an individual or group

As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

Non-transparent compensated messages or promotions by influencers 

Our Terms of Service and Branded Content Policy require users posting about a brand or product in return for any payment or other incentive to disclose their content by enabling the branded content toggle, which we make available to users. We also provide functionality that enables users to report suspected undisclosed branded content; this reminds the user who posted the suspected undisclosed branded content of our requirements and prompts them to turn the branded content toggle on if required. We made this requirement even clearer to users in our Commercial Disclosures and Paid Promotion policy in our March 2023 CG refresh, by expanding the information around our policing of this policy and providing specific examples.

In addition to branded content policies, our CIO policy can also apply to non-transparent compensated messages or promotions by influencers where it is found that those messages or promotions formed part of a covert influence campaign.

QRE 14.1.2

Signatories will report on their proactive efforts to detect impermissible content, behaviours, TTPs and practices relevant to this commitment.

At TikTok, we place considerable emphasis on proactive content moderation and use a combination of technology and safety professionals to detect and remove harmful misinformation (see QRE 18.1.1) and deceptive behaviours on our Platform before they are reported to us by users or third parties. 

For instance, we take proactive measures to prevent inauthentic or spam accounts from being created. To that end, we have created and use detection models and rule engines (a simplified sketch follows the list below) that:

  • prevent inauthentic accounts from being created based on malicious patterns; and
  • remove registered accounts based on certain signals (i.e., uncommon behaviour on the platform).
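
To make the mechanics concrete, the following is a minimal, hypothetical sketch of a signal-based rule engine of the kind described above. Every signal name and threshold here is an invented placeholder for illustration, not a description of TikTok's actual detection systems.

```python
from dataclasses import dataclass


@dataclass
class RegistrationSignals:
    """Illustrative signals observed at or shortly after sign-up (all hypothetical)."""
    accounts_from_device_last_hour: int  # burst of registrations from one device
    captcha_failures: int                # repeated challenge failures
    actions_per_minute: float            # post-registration activity rate


def is_suspect(s: RegistrationSignals) -> bool:
    """Apply simple illustrative rules; production systems combine ML models with rules."""
    rules = [
        s.accounts_from_device_last_hour > 5,  # hypothetical burst threshold
        s.captcha_failures >= 3,               # automation-style failures
        s.actions_per_minute > 60,             # inhumanly fast activity
    ]
    # A flagged registration would be blocked outright or queued for removal/review.
    return any(rules)
```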

We also manually monitor user reports of inauthentic accounts in order to detect larger clusters or similar inauthentic behaviours.

However, given the complex nature of the TTPs, human moderation is critical to success in this area, and TikTok's moderation teams therefore play a key role in assessing and addressing identified violations. We provide our moderation teams with detailed guidance on how to apply the I&A policies in our CGs, including providing case banks of harmful misinformation claims to support their moderation work, and allow them to route new or evolving content to our fact-checking partners for assessment. 

In addition, where content reaches certain levels of popularity in terms of the number of video views, it will be flagged for further review. Such review is undertaken given the extent of the content’s dissemination and the increase in potential harm if the content is found to be in breach of our CGs including our I&A policies.
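
As a minimal sketch of this kind of popularity-based escalation, assuming an invented view threshold (TikTok does not publish its actual values):

```python
# Hypothetical escalation rule: widely viewed videos receive additional human review.
REVIEW_VIEW_THRESHOLD = 100_000  # illustrative placeholder only


def needs_escalated_review(view_count: int, already_reviewed: bool) -> bool:
    """Escalate popular content for further review, reflecting its wider potential harm."""
    return view_count >= REVIEW_VIEW_THRESHOLD and not already_reviewed
```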

Furthermore, during the reporting period, we improved automated detection and enforcement of our ‘Edited Media and AI-Generated Content (AIGC)’ policy, resulting in an increase in the number of videos removed for policy violations. The number of views per removed video also decreased over the reporting period, indicating an effective control strategy as the scope of enforcement increased.

We have also set up specifically-trained teams that are focused on investigating and detecting CIO on our Platform. We've built international trust & safety teams with specialized expertise across threat intelligence, security, law enforcement, and data science to work on influence operations full-time. These teams continuously pursue and analyse on-platform signals of deceptive behaviour, as well as leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis. When we investigate and remove these operations, we focus on behaviour and assessing linkages between accounts and techniques to determine if actors are engaging in a coordinated effort to mislead TikTok’s systems or our community. In each case, we believe that the people behind these activities coordinate with one another to misrepresent who they are and what they are doing.

Accounts that engage in influence operations often avoid posting content that would be violative of platforms' guidelines by itself. That's why we focus on accounts' behaviour and technical linkages when analysing them, specifically looking for evidence that:

  • They are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or are working together to spread the same narrative.
  • They are misleading our systems or users. For example, they are trying to conceal their actual location, or using fake personas to pose as someone they're not.
  • They are attempting to manipulate or corrupt public debate to impact the decision making, beliefs and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.

These criteria are aligned with industry standards and guidance from the experts we regularly consult with. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views). However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions would be prohibited and disrupted.
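
A highly simplified sketch of this behaviour-and-linkage analysis appears below. The account records, fields, and clustering rule are invented for illustration; real investigations weigh many more signals and rely on human analysts for the final call.

```python
from itertools import combinations

# Hypothetical account records: technical fingerprints plus the narratives each account pushes.
accounts = {
    "acct_a": {"device_ids": {"d1"}, "narratives": {"n1", "n2"}},
    "acct_b": {"device_ids": {"d1"}, "narratives": {"n1"}},
    "acct_c": {"device_ids": {"d9"}, "narratives": {"n7"}},
}


def linked(a: dict, b: dict) -> bool:
    """Treat two accounts as linked if they share a device or push the same narrative."""
    return bool(a["device_ids"] & b["device_ids"]) or bool(a["narratives"] & b["narratives"])


# Group linked accounts into candidate clusters (a naive union of pairwise links).
clusters: list[set[str]] = []
for x, y in combinations(accounts, 2):
    if linked(accounts[x], accounts[y]):
        for c in clusters:
            if x in c or y in c:
                c.update({x, y})
                break
        else:
            clusters.append({x, y})

print(clusters)  # [{'acct_a', 'acct_b'}] -> candidates for human CIO investigation
```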

Measure 14.2

Relevant Signatories will keep a detailed, up-to-date list of their publicly available policies that clarifies behaviours and practices that are prohibited on their services and will outline in their reports how their respective policies and their implementation address the above set of TTPs, threats and harms as well as other relevant threats.

QRE 14.2.1

Relevant Signatories will report on actions taken to implement the policies they list in their reports and covering the range of TTPs identified/employed, at the Member State level.

The implementation of our policies is ensured by different means, including specifically-designed tools (such as toggles to disclose branded content - see QRE 14.1.1) or human investigations to detect deceptive behaviours (for CIO activities - see QRE 14.1.2).

The implementation of these policies is also ensured through enforcement measures applied in all Member States. 

CIO investigations are resource intensive and require in-depth analysis to ensure high confidence in proposed actions. Where our teams have the necessary high degree of confidence that an account is engaged in CIO or is connected to networks we took down in the past as part of a CIO, it is removed from our Platform.

Similarly, where our teams have a high degree of confidence that specific content violates one of our TTP-related policies (see QRE 14.1.1), such content is removed from TikTok.

Lastly, we may reduce the discoverability of some content, including by making videos ineligible for recommendation in the For You feed section of our platform. This is, for example, the case for content that tricks or manipulates users in order to inauthentically increase followers, likes, or views.

SLI 14.2.1

Number of instances of identified TTPs and actions taken at the Member State level under policies addressing each of the TTPs as well as information on the type of content.

TTP No. 1: Creation of inauthentic accounts or botnets (which may include automated, partially automated, or non-automated accounts) 


Methodology of data measurement

We based both (i) the number of fake accounts removed and (ii) the number of followers of those fake accounts (identified at the time of removal) on the country in which the fake account was last active.

We have updated our methodology to report the ratio of the monthly average of fake accounts over monthly active users, based on the latest publication of monthly active users, in order to better reflect TTP-related content relative to overall content on the service.
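
As a worked illustration of this ratio, with invented figures rather than the numbers reported in the tables below:

```python
# Hypothetical illustration of the reported ratio (all numbers invented).
fake_accounts_removed_in_period = 600_000  # removals over a six-month reporting period
months_in_period = 6
monthly_active_users = 100_000_000         # latest published MAU figure

monthly_avg_fake = fake_accounts_removed_in_period / months_in_period  # 100,000 per month
ratio = monthly_avg_fake / monthly_active_users                        # 0.001, i.e. 0.1%
print(f"{ratio:.4f}")  # 0.0010
```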

TTP no. 2: Use of fake / inauthentic reactions (e.g. likes, up votes, comments)  


Methodology of data measurement:

We based both the number of fake likes removed and the number of fake likes prevented on the country of registration of the user.

TTP No. 3: Use of fake followers or subscribers  


Methodology of data measurement:

We based both the number of fake followers removed and the number of fake followers prevented on the country of registration of the user.

TTP No. 4: Creation of inauthentic pages, groups, chat groups, fora, or domains


TikTok does not have pages, groups, chat groups, fora or domains. This TTP is not relevant to our platform.

TTP No. 5:  Account hijacking or impersonation  

Methodology of data measurement:

The number of accounts removed under our impersonation policy is based on the approximate location of the users. We have updated our methodology to report the ratio of the monthly average of impersonation accounts banned over monthly active users, based on the latest publication of monthly active users, in order to better reflect TTP-related content relative to overall content on the service.

TTP No. 6. Deliberately targeting vulnerable recipients (e.g. via personalised advertising, location spoofing or obfuscation)  


Methodology of data measurement:

The number of new CIO network discoveries found to be targeting EU markets relates to our public disclosures for the period 1 July 2024 to 31 December 2024. We have categorised disrupted CIO networks by the country we assess the network to have targeted. We have included any network which we assess to have targeted one or more European markets, or to have operated from an EU market. We publish all of the CIO networks we identify and remove within our transparency reports here.

CIO networks identified and removed are detailed below, including the assessed geographic location of network operation and the assessed target audience of the network, which we assess via technical and behavioural evidence from proprietary and open sources. The number of followers of CIO networks has been based on the number of accounts that followed any account within a network as of the date of that network’s removal.

Note: TTP No. 6 data cannot be shown on this page due to limitations with the website. We provide a full list of CIOs disrupted originating in Member States in our full report, which can be downloaded from this website.

TTP No. 7: Deploy deceptive manipulated media (e.g. “deep fakes”, “cheap fakes”...)

We based the number of videos removed for violations of the Edited Media and AI-Generated Content (AIGC) policy on the country in which the video was posted. The number of views of removed videos is based on the approximate location of the user.

TTP No. 8: Use “hack and leak” operation (which may or may not include doctored content)

We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6. We have also provided data on violations of our Edited Media and AI-Generated Content (AIGC) policy under TTP No. 7. Our hack and leak policy launched only recently, in H1 2024, and we do not yet have meaningful metrics under this policy to report for H2 2024.

TTP No. 9: Inauthentic coordination of content creation or amplification, including attempts to deceive/manipulate platform algorithms (e.g. keyword stuffing or inauthentic posting/reposting designed to mislead people about popularity of content, including by influencers)


We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6.

TTP No. 10: Use of deceptive practices to deceive/manipulate platform algorithms, such as to create, amplify or hijack hashtags, data voids, filter bubbles, or echo chambers


We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6.

TTP No. 11. Non-transparent compensated messages or promotions by influencers 


Methodology of data measurement:
We are unable to provide this metric due to insufficient data available for the reporting period. 

TTP No. 12: Coordinated mass reporting of non-violative opposing content or accounts


We have provided data on the CIO networks that we have disrupted in the reporting period under TTP No. 6.

Country TTP 1 (Number of Fake Accounts Removed) TTP 2 (Number of Fake Likes Removed) TTP 3 (Number of Fake Followers Removed) TTP 5 (Number of Accounts Banned Under Impersonation Policy) TTP 7 (Number of videos removed for violation of Edited Media & AIGC Policy)
Austria 92511 12262551 9980544 177 110859
Belgium 176327 16913076 11916866 300 166222
Bulgaria 423060 6468521 4561129 175 75036
Croatia 74704 1821268 1965426 77 27536
Cyprus 86741 4176517 1706405 54 59263
Czech Republic 194925 3052689 4342681 134 51417
Denmark 155675 4183605 3154022 115 49328
Estonia 111506 687649 482641 29 19687
Finland 99745 3086208 3204999 92 60083
France 2061174 78227394 109481878 2587 1399713
Germany 1678822 131158324 125941360 2277 1380835
Greece 133443 14621872 7880295 215 206528
Hungary 84057 1821268 2589692 141 63319
Ireland 321237 4520433 3213842 235 32936
Italy 672344 60514367 35511559 805 746928
Latvia 60145 1690473 732030 48 99265
Lithuania 79417 1682687 2057659 76 42778
Luxembourg 73258 1920605 1574849 43 40901
Malta 60192 1395676 401869 0 12100
Netherlands 886619 23557961 17070055 567 202203
Poland 360959 8833014 10128172 1251 203835
Portugal 190906 9239486 3714261 206 151389
Romania 294195 11254476 14021343 1300 287851
Slovakia 131567 1208123 4288570 63 21883
Slovenia 298807 727133 678185 43 10131
Spain 709560 38331442 31084803 709 676935
Sweden 239020 15782957 12342226 284 163490
Iceland 31476 230931 120003 15 3353
Liechtenstein 1369 24827 893407 0 357
Norway 92800 5457966 3756414 178 59556
All EU 9750916 459139775 424027361 12003 6362451
All EEA 9876561 464982867 428797185 12196 6425717

SLI 14.2.2

Views/impressions of and interaction/engagement at the Member State level (e.g. likes, shares, comments), related to each identified TTP, before and after action was taken.

Please see SLI 14.2.1 for definitions of individual TTPs

Country TTP 1 (Number of followers of fake accounts identified at the time of removal) TTP 2 (Number of fake likes prevented) TTP 3 (Number of Fake Followers Prevented) TTP 7 (Number of views of videos removed because of Edited Media and AI-Generated Content (AIGC) policy)
Austria 467635 39213306 25000123 216433
Belgium 544073 56682105 34550567 1119223
Bulgaria 188995 40004761 26400841 5977
Croatia 175230 17901159 18990456 58579
Cyprus 124021 6960047 18497473 19441
Czech Republic 348626 31099711 18233387 8287531
Denmark 298306 17585666 23806634 2742457
Estonia 239039 7385026 16887949 2063380
Finland 195684 19264460 20303735 464824
France 20207105 336499329 127136908 312078908
Germany 20545728 357582219 138933948 23904234
Greece 1702918 84211417 38712931 145950
Hungary 184291 28069699 24773097 86870
Ireland 697840 31110363 25239860 103199
Italy 5900534 606697045 158916638 1892355
Latvia 124765 11600082 17952175 4519
Lithuania 300241 11795998 18928046 25410
Luxembourg 611602 7987636 21051498 8729
Malta 226073 3466698 15758979 5811847
Netherlands 1575641 101316771 35162609 9080526
Poland 3192516 208518568 54501610 13404186
Portugal 370719 56146620 26901973 339124
Romania 4045608 83405388 44172801 623525
Slovakia 1347301 18154505 21010637 2014
Slovenia 45359 5843233 1942793 605
Spain 5351682 161280031 73920335 21882268
Sweden 528326 48240073 36451604 377862
Iceland 253997 1564206 2572695 6113
Liechtenstein 11129 70045 1045728 525
Norway 151088 20708187 7242021 139984
All EU 69539858 2398021916 1084139607 404749976
All EEA 69956072 2420364354 1095000051 404896598

SLI 14.2.3

Metrics to estimate the penetration and impact that e.g. Fake/Inauthentic accounts have on genuine users and report at the Member State level (including trends on audiences targeted; narratives used etc.).

Please see SLI 14.2.1 for definitions of individual TTPs

Country Number of unique videos labelled with AIGC tag of "Creator labeled as AI-generated"
Austria 110859
Belgium 166222
Bulgaria 75036
Croatia 27536
Cyprus 59263
Czech Republic 51417
Denmark 49328
Estonia 19687
Finland 60083
France 1399713
Germany 1380835
Greece 206528
Hungary 63319
Ireland 32936
Italy 746928
Latvia 99265
Lithuania 42778
Luxembourg 40901
Malta 12100
Netherlands 202203
Poland 203835
Portugal 151389
Romania 287851
Slovakia 21883
Slovenia 10131
Spain 676935
Sweden 163490
Iceland 3353
Liechtenstein 357
Norway 59556
Total EU 6362451
Total EEA 6425717

SLI 14.2.4

Estimation, at the Member State level, of TTPs related content, views/impressions and interaction/engagement with such content as a percentage of the total content, views/impressions and interaction/engagement on relevant signatories' service.

Please see SLI 14.2.1 for definitions of individual TTPs

Country TTP 1 (Ratio of monthly average of Fake accounts over monthly active users) TTP 5 (Impersonation accounts as a % of monthly active users) TTP 7 (Number of unique videos labelled with AIGC tag of "AI-generated")
Note: the per-country figures below relate to TTP 7 only; the TTP 1 and TTP 5 ratios are reported at the aggregate EU level in the Total EU row.
Austria 38531
Belgium 75316
Bulgaria 78668
Croatia 18595
Cyprus 3165
Czech Republic 89409
Denmark 30694
Estonia 11220
Finland 49106
France 432739
Germany 502916
Greece 7936
Hungary 74704
Ireland 34736
Italy 393642
Latvia 18852
Lithuania 21581
Luxembourg 3319
Malta 3444
Netherlands 29448
Poland 316048
Portugal 64975
Romania 37467
Slovakia 28439
Slovenia 6969
Spain 493675
Sweden 105253
Iceland 4720
Liechtenstein 61
Norway 42172
Total EU 0.001 0.000013 2970847
Total EEA 3017800

Measure 14.3

Relevant Signatories will convene via the Permanent Task-force to agree upon and publish a list and terminology of TTPs employed by malicious actors, which should be updated on an annual basis.

QRE 14.3.1

Signatories will report on the list of TTPs agreed in the Permanent Task-force within 6 months of the signing of the Code and will update this list at least every year. They will also report about the common baseline elements, objectives and benchmarks for the policies and measures.

We collaborated as part of the Integrity of Services working group to set up the first list of TTPs.

Commitment 15

Relevant Signatories that develop or operate AI systems and that disseminate AI-generated and manipulated content through their services (e.g. deepfakes) commit to take into consideration the transparency obligations and the list of manipulative practices prohibited under the proposed Artificial Intelligence Act.

We signed up to the following measures of this commitment

Measure 15.1 Measure 15.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Building on our new AI-generated label for creators to disclose content that is completely AI-generated or significantly edited by AI, we have expanded our efforts in the AIGC space by:
    • Implementing the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enables our systems to instantly recognize and automatically label AIGC.
    • Supporting the coalition’s working groups as a C2PA General Member.
    • Joining the Content Authenticity Initiative (CAI) to drive wider adoption of the technical standard.
    • Publishing a new Transparency Center article Supporting responsible, transparent AI-generated content.
    • Building on our new AI-generated content label for creators, and implementation of C2PA Content Credentials, we launched a number of media literacy campaigns with guidance from expert organisations like Mediawise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content. This AIGC Transparency Campaign informed by WITNESS has reached 80M users globally, including more than 8.5M and 9.5M in Germany and France respectively.
  • Continued, alongside industry partners, to be a party to the “Tech Accord to Combat Deceptive Use of AI in 2024 Elections”, a joint commitment to combat the deceptive use of AI in elections.
  • We continue to participate in relevant working groups, such as the Generative AI working group, which commenced in September 2023.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report. 

Measure 15.1

Relevant signatories will establish or confirm their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content, such as warning users and proactively detecting such content.

QRE 15.1.1

In line with EU and national legislation, Relevant Signatories will report on their policies in place for countering prohibited manipulative practices for AI systems that generate or manipulate content.

Our Edited Media and AI-Generated Content (AIGC) policy includes commonly used and easily understood language when referring to AIGC, and outlines our existing prohibitions on AIGC showing fake authoritative sources or crisis events, or falsely showing public figures in certain contexts including being bullied, making an endorsement, or being endorsed. As AI evolves, we continue to invest in combating harmful AIGC by evolving our proactive detection models, consulting with experts, and partnering with peers on shared solutions.

While we welcome the creativity that new AI may unlock, in line with our updated policy, users must proactively disclose when their content is AI-generated or manipulated but shows realistic scenes (i.e. fake people, places or events that look like they are real). We launched an AI toggle in September 2023, which allows users to self-disclose AI-generated content when posting. When this has been turned on, a tag “Creator labelled as AI-generated” is displayed to users. Alternatively, this can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’. 

We also automatically label content made with TikTok effects if they use AI. TikTok may automatically apply the "AI-generated" label to content we identify as completely generated or significantly edited with AI. This may happen when a creator uses TikTok AI effects or uploads AI-generated content that has Content Credentials attached, a technology from the Coalition for Content Provenance and Authenticity (C2PA). Content Credentials attach metadata to content that we can use to recognize and label AIGC instantly. Once content is labeled as AI-generated with an auto label, users are unable to remove the label from the post.
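
The auto-labelling flow can be sketched roughly as follows. This is a simplified, hypothetical illustration of checking uploaded media for Content Credentials metadata; the `extract_c2pa_manifest` helper and the `generator_type` field are assumptions for the sketch, not TikTok's implementation or the full C2PA specification.

```python
from typing import Optional


def extract_c2pa_manifest(media_bytes: bytes) -> Optional[dict]:
    """Hypothetical helper: parse embedded Content Credentials metadata, if present.

    A real implementation would use a C2PA-conformant parser and verify the
    manifest's cryptographic signatures before trusting any of its claims.
    """
    ...  # parsing and signature verification omitted in this sketch


def auto_label(media_bytes: bytes) -> Optional[str]:
    """Apply an 'AI-generated' label when verified provenance metadata indicates AIGC."""
    manifest = extract_c2pa_manifest(media_bytes)
    if manifest is None:
        return None  # no Content Credentials attached; fall back to creator disclosure
    # Illustrative check: did a generative-AI tool produce or significantly edit this media?
    if manifest.get("generator_type") == "generative_ai":
        return "AI-generated"  # an auto label the poster cannot remove
    return None
```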

We do not allow: 

  • AIGC that shows realistic-appearing people under the age of 18
  • AIGC that shows the likeness of adult private figures, if we become aware it was used without their permission
  • Misleading AIGC or edited media that falsely shows:
    • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation
    • A crisis event, such as a conflict or natural disaster
    • A public figure who is:
      • being degraded or harassed, or engaging in criminal or antisocial behaviour
      • taking a position on a political issue, commercial product, or a matter of public importance (such as an election)
      • being politically endorsed or condemned by an individual or group

Measure 15.2

Relevant Signatories will establish or confirm their policies in place to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices impermissibly distorting their behaviour in line with Union and Member States legislation.

QRE 15.2.1

Relevant Signatories will report on their policies and actions to ensure that the algorithms used for detection, moderation and sanctioning of impermissible conduct and content on their services are trustworthy, respect the rights of end-users and do not constitute prohibited manipulative practices in line with Union and Member States legislation.

We have a number of measures to ensure the AI systems we develop uphold the principles of fairness and comply with applicable laws. To that end:
  • We have in place internal guidelines and training to help ensure that the training and deployment of our AI systems comply with applicable data protection laws, as well as principles of fairness.
  • We have instituted a compliance review process for new AI systems that meet certain thresholds, and are working to prioritise review of previously developed algorithms.

We are also proud to be a launch partner of the Partnership on AI's Responsible Practices for Synthetic Media.

Commitment 16

Relevant Signatories commit to operate channels of exchange between their relevant teams in order to proactively share information about cross-platform influence operations, foreign interference in information space and relevant incidents that emerge on their respective services, with the aim of preventing dissemination and resurgence on other services, in full compliance with privacy legislation and with due consideration for security and human rights risks.

We signed up to the following measures of this commitment

Measure 16.1 Measure 16.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Actively engaged with the Crisis Response working group, sharing insights and learnings about relevant areas including CIOs. 

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?


We are continuously reviewing and improving our tools and processes to fight disinformation and will report on any further development in the next COPD report. 

Measure 16.1

Relevant Signatories will share relevant information about cross-platform information manipulation, foreign interference in information space and incidents that emerge on their respective services for instance via a dedicated sub-group of the permanent Task-force or via existing fora for exchanging such information.

QRE 16.1.1

Relevant Signatories will disclose the fora they use for information sharing as well as information about learnings derived from this sharing.

Central to our strategy for identifying and removing CIOs on our platform is working with stakeholders, drawing on sources that range from civil society partners to user reports. This approach helps us, and others, disrupt a network’s operations in its early stages. In addition to continuously enhancing our in-house capabilities, we proactively review our peers' publicly disclosed findings and swiftly implement necessary actions in alignment with our policies.

To provide more regular and detailed updates about the CIO we disrupt, we have introduced a new dedicated Transparency Report on covert influence operations, which is available in TikTok’s transparency centre. In this report, we have also added new information about operations that we have previously removed and that have attempted to return to our platform with new accounts. The insights and metrics in this report aim to inform industry peers and the research community. 

We share relevant insights and metrics within our quarterly transparency reports, which aim to inform industry peers and the research community. We also review relevant insights and metrics from other industry peers to cross-compare for any similar behaviour on TikTok.

We continue to engage in the subgroups set up for insight sharing between signatories and the Commission. 

As we have detailed in other chapters to this report, we have robust monetisation integrity policies in place and have established joint operating procedures between specialist CIO investigations teams and monetisation integrity teams to work on joint investigations of CIOs involving monetised products.

Measure 16.2

Relevant Signatories will pay specific attention to and share information on the tactical migration of known actors of misinformation, disinformation and information manipulation across different platforms as a way to circumvent moderation policies, engage different audiences or coordinate action on platforms with less scrutiny and policy bandwidth.

QRE 16.2.1

As a result of the collaboration and information sharing between them, Relevant Signatories will share qualitative examples and case studies of migration tactics employed and advertised by such actors on their platforms as observed by their moderation team and/or external partners from Academia or fact-checking organisations engaged in such monitoring.

We publish all of the CIO networks we identify and remove within our transparency reports here. As new deceptive behaviours emerge, we’ll continue to evolve our response, strengthen enforcement capabilities, and publish our findings.

Empowering Users

Commitment 17

In light of the European Commission's initiatives in the area of media literacy, including the new Digital Education Action Plan, Relevant Signatories commit to continue and strengthen their efforts in the area of media literacy and critical thinking, also with the aim to include vulnerable groups.

We signed up to the following measures of this commitment

Measure 17.1 Measure 17.2 Measure 17.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Rolled out two new ongoing general media literacy and critical thinking skills campaigns in the EU and two in EU candidate countries in collaboration with our fact-checking and media literacy partners:
    • France:  Agence France-Presse (AFP)
    • Portugal: Polígrafo
    • Georgia: Fact Check Georgia
    • Moldova: StopFals!
  • This brings the number of general media literacy and critical thinking skills campaigns in Europe to 11 (Denmark, Finland, France, Georgia, Ireland, Italy, Spain, Sweden, Moldova, Netherlands, and Portugal).
  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
  • Expanded our fact-checking coverage to a number of wider-European and EU candidate countries:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia.
    • Kazakhstan: Reuters
    • Moldova: AFP/Reuters 
    • Serbia: Lead Stories
  • We ran 14 temporary media literacy election integrity campaigns in advance of elections, most in collaboration with our fact-checking and media literacy partners:
    • 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
      • Austria: Deutsche Presse-Agentur (dpa)
      • Croatia: Faktograf
      • France: Agence France-Presse (AFP)
      • Germany (regional elections): Deutsche Presse-Agentur (dpa)
      • Germany (federal election): Deutsche Presse-Agentur (dpa)
      • Ireland: The Journal
      • Lithuania: N/A
      • Romania: Funky Citizens.
    • 1 in EEA
      • Iceland: N/A
    • 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
      • Bosnia: N/A
      • Bulgaria: N/A
      • Czechia: N/A
      • Georgia: Fact Check Georgia
      • Moldova: StopFals!
  • During the reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova. 
    • France: Agence France-Presse (AFP)
    • Germany: German Press Agency (dpa)
    • Austria: German Press Agency (dpa)
    • Lithuania: Logically Facts
    • Romania: Funky Citizens
    • Ireland: Logically Facts
    • Croatia: Faktograf
    • Georgia: Fact Check Georgia
    • Moldova: StopFals!
  • Launched four new temporary in-app natural disaster media literacy search guides that link to authoritative 3rd party agencies and organisations:
    • Central & Eastern European Floods (Austria, Bosnia, Czechia, Germany, Hungary, Moldova, Poland, Romania, and Slovakia) 
    • Portugal Wildfires 
    • Spanish floods
    • Mayotte Cyclone
  • Continued our in-app interventions, including video tags, search interventions and in-app information centres, available in 23 official EU languages plus Norwegian and Icelandic for EEA users, around the elections, the Israel-Hamas Conflict, Climate Change, Holocaust Education, Mpox, and the War in Ukraine.
  • Actively participated in the UN COP29 climate change summit by:
    • Working with the COP29 presidency to promote their content and engage new audiences around the conference as a strategic media partner.
    • Re-launching our global #ClimateAction campaign with over 7K posts from around the world. Content across #ClimateAction has now received over 4B video views since being launched in 2021.
    • Bringing 5 creators to the summit, who collectively produced 15+ videos that received over 60M video views.
    • Launching two global features (a video notice tag and search intervention guide) to point users to authoritative climate-related content between 29th October and 25th November, which were viewed 400k times.
  • Our partnership with Verified for Climate, a joint initiative of the UN and social impact agency Purpose, continued to be our flagship climate initiative, which saw a network of 35 Verified Champions across Brazil, the United Arab Emirates, and Spain, work with select TikTok creators to develop educational content tackling climate misinformation and disinformation, and drive climate action within the TikTok community.
  • Partnered with the World Health Organisation (WHO), including a US$ 3 million donation, to support mental well-being awareness and literacy by creating reliable content and combating misinformation through the Fides network, a diverse community of trusted healthcare professionals and content creators in the United Kingdom, United States, France, Japan, Korea, Indonesia, Mexico, and Brazil. 
  • Building on these efforts, we also launched the UK Clinician Creator Network, an initiative bringing together 19 leading NHS qualified clinicians who are actively sharing their medical expertise on TikTok, engaging a community of over 2.2 million followers.
  • Strengthened our approach to state-affiliated media by:
    • Working with third-party external experts to shape our state-affiliated media policy and our assessment of state-controlled media labels, and continuing to expand the label's use. 
    • Continuing to invest in our detection capabilities for state-affiliated media (SAM) accounts, with a focus on automation and scaled detection. 
  • Building on our AI-generated content label for creators and our implementation of C2PA Content Credentials, we launched a number of media literacy campaigns, with guidance from expert organisations like MediaWise and WITNESS, including in Brazil, Germany, France, Mexico and the UK, that teach our community how to spot and label AI-generated content.
    • Our AIGC transparency campaign, informed by WITNESS, has reached 80M users globally, including more than 8.5M users in Germany and 9.5M in France.
  • Brought greater transparency about our systems and our integrity and authenticity efforts to our community by sharing regular insights and updates.  In H2 2024, we continued to expand our Transparency Center with resources like our first-ever US Elections Integrity Hub, European Elections Integrity Hub, dedicated Covert Influence Operations Reports, and a new Transparency Center blog.
  • Continued our partnership with Amadeu Antonio Stiftung in Germany on the Demo:create project, an educational initiative supporting young TikTok users to effectively deal with online hate speech, disinformation and misinformation.
  • Continued to invest in training and development for our human moderation teams. 
  • TikTok continues to co-chair the working group on Elections.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 17.1

Relevant Signatories will design and implement or continue to maintain tools to improve media literacy and critical thinking, for instance by empowering users with context on the content visible on services or with guidance on how to evaluate online content.

QRE 17.1.1

Relevant Signatories will outline the tools they develop or maintain that are relevant to this commitment and report on their deployment in each Member State.

In addition to systematically removing content that violates our I&A policies, we continue to dedicate significant resources to: expanding our in-app measures that show users additional context on certain content; redirecting them to authoritative information; and making these tools available in 23 EU official languages (plus, for EEA users, Norwegian & Icelandic).

We work with external experts to combat harmful misinformation. For example, we work with the World Health Organisation (WHO) on medical information and with our global fact-checking partners. Taking into account their feedback, as well as user feedback, we continually identify new topics and consider which tools are best suited to raising awareness around each topic.

We deploy a combination of in-app user intervention tools on topical issues such as elections, the Israel-Hamas Conflict, Holocaust Education, Mpox and the War in Ukraine.

Video notice tags. A video notice tag is an information bar at the bottom of a video which is automatically applied to a specific word or hashtag (or set of hashtags). The information bar is clickable and invites users to “Learn more about [the topic]”. Users will be directed to an in-app guide, or reliable third party resource, as appropriate.

Search intervention. If users search for terms associated with a topic, they will be presented with a banner encouraging them to verify the facts and providing a link to a trusted source of information. Search interventions are not deployed for search terms that violate our Community Guidelines, which are actioned according to our policies. 

  • For example, the four new ongoing general media literacy and critical thinking skills campaigns rolled out in France, Georgia, Moldova, and Portugal are all supported with search guides to direct users to authoritative sources. 
  • Our COP29 global search intervention, which ran from 29th October to 25th November, pointed users to authoritative climate-related content, and was viewed 400k times.

Public service announcement (PSA). If users search for a hashtag on the topic, they will be served with a public service announcement reminding them about our Community Guidelines and presenting them with links to a trusted source of information. 

Unverified content label. In addition to the above-mentioned tools, to encourage users to consider the reliability of content related to an emergency or unfolding event that has been assessed by our fact-checking partners but cannot be verified as accurate (i.e., 'unverified content'), we apply warning labels and prompt people to reconsider sharing such content. Details of these warning labels are included in our Community Guidelines.

Where users continue to post despite the warning:
  • To limit the spread of potentially misleading information, the video will become ineligible for recommendation in the For You feed.
  • The video's creator is also notified that their video was flagged as unsubstantiated content and is provided with additional information about why the warning label has been added. Again, this is to raise the creator's awareness of the credibility of the content they have shared. 

State-controlled media label. Our state-affiliated media policy is to label accounts run by entities whose editorial output or decision-making process is subject to control or influence by a government. We apply a prominent label to all content and accounts from state-controlled media. The user is also shown a screen pop-up providing information about what the label means, inviting them to “learn more”, and redirecting them to an in-app page. The measure brings transparency to our community, raises users’ awareness, and encourages users to consider the reliability of the source.  We continue to work with experts to inform our approach and explore how we can continue to expand its use. 

In the EU, Iceland and Liechtenstein, we have also taken steps to restrict access to content from the entities sanctioned by the EU in 2024:  
  • RT - Russia Today UK
  • RT - Russia Today Germany
  • RT - Russia Today France
  • RT- Russia Today Spanish
  • Sputnik
  • Rossiya RTR / RTR Planeta
  • Rossiya 24 / Russia 24
  • TV Centre International
  • NTV/NTV Mir
  • Rossiya 1
  • REN TV
  • Pervyi Kanal / Channel 1
  • RT Arabic
  • Sputnik Arabic
  • RT Balkan
  • Oriental Review
  • Tsargrad
  • New Eastern Outlook
  • Katehon
  • Voice of Europe
  • RIA Novosti
  • Izvestija
  • Rossiiskaja Gazeta

AI-generated content labels. As more creators take advantage of Artificial Intelligence (AI) to enhance their creativity, we want to support transparent and responsible content creation practices. In 2023, TikTok launched an AI-generated content label for creators to disclose content that is completely AI-generated or significantly edited by AI. The launch of this new tool to help creators label their AI-generated content was accompanied by a creator education campaign, a Help Center page, and a Newsroom post. In May 2024, we started using Coalition for Content Provenance and Authenticity (C2PA) Content Credentials, which enable our systems to instantly recognize and automatically label AIGC. In the interests of transparency, we also renamed TikTok AI effects to explicitly include "AI" in their name and corresponding effects label, and updated our guidelines for Effect House creators to do the same. 
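
As a purely illustrative sketch of the labelling decision described above: the helper read_content_credentials() below is a hypothetical stand-in for a C2PA Content Credentials parser, not a real TikTok or C2PA API, and its field names are assumptions.

```python
# Illustrative sketch only: labelling logic for AI-generated content (AIGC).
# read_content_credentials() and its return fields are hypothetical stand-ins,
# not a real TikTok or C2PA API.

def read_content_credentials(video_path: str) -> dict:
    """Hypothetical stand-in: return provenance metadata embedded in a file.
    A real system would parse the C2PA manifest attached to the media."""
    return {"generator": "example-ai-tool", "ai_generated": True}

def should_apply_aigc_label(video_path: str, creator_disclosed: bool) -> bool:
    """Apply the AIGC label if the creator disclosed AI generation, or if
    embedded Content Credentials indicate the content is AI-generated."""
    if creator_disclosed:
        return True
    credentials = read_content_credentials(video_path)
    return bool(credentials.get("ai_generated"))

print(should_apply_aigc_label("clip.mp4", creator_disclosed=False))  # -> True
```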

Dedicated online and in-app information resources. The above-mentioned tools provide users with links to accurate and up-to-date information from trusted sources. Depending on the topic, or the relevant EU country, users may be directed to an external authoritative source (e.g., a national government website or an independent national electoral commission), an in-app information centre (e.g., War in Ukraine), or a dedicated page in the TikTok Safety Center or Transparency Center. 

We use our Safety Center to inform our community about our approach to safety, privacy, and security on our platform, including dedicated information relevant to combating harmful misinformation.

Users can learn more about our transparency efforts in our dedicated Transparency Center, available in a number of EU languages, which houses our transparency reports, including the standalone Covert Influence Operations report and the reports we have published under this Code, as well as information on our commitments to maintaining platform integrity (e.g., Protecting the integrity of elections, Combating misinformation, Countering influence operations, Supporting responsible, transparent AI-generated content) and details of Government Removal Requests.

We also use Newsroom posts to keep our community informed about our most recent updates and efforts across News, Product, Community and Safety. Users can select their country, including in the EU, for their preferred language where available and regionally relevant posts. For example, upon publication of our fourth Code report in September 2024, we provided users with an overview of our continued commitment to Combating Disinformation under the EU Code of Practice. We also updated users about how we are partnering with our industry to advance AI transparency and literacy, and how we protected the integrity of the platform during the Romanian presidential elections. 

SLI 17.1.1

Relevant Signatories will report, at the Member State level, on metrics pertinent to assessing the effects of the tools described in the qualitative reporting element for Measure 17.1, which will include: the total count of impressions of the tool; and information on the interactions/engagement with the tool.

Methodology of data measurement:

The number of impressions, clicks and click-through rates (CTR) of video notice tags, search interventions and public service announcements are based on the approximate location of the users that engaged with the tools. The number of impressions of the Safety Center pages is based on the IP location of the users. 
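
For clarity, each click-through rate (CTR) in the table below is simply clicks divided by impressions. A minimal sketch of this calculation, checked against Austria's state-affiliated media (SAM) label figures from the table itself:

```python
# Minimal sketch: deriving a click-through rate (CTR) from impressions and
# clicks, using Austria's SAM label figures from the table below.

def click_through_rate(clicks: int, impressions: int) -> float:
    """CTR as the fraction of impressions that resulted in a click."""
    return clicks / impressions if impressions else 0.0

# Austria, SAM label: 5,771 clicks on 3,705,075 impressions.
ctr = click_through_rate(5_771, 3_705_075)
print(f"{ctr:.9f}")  # -> 0.001557593, matching the reported value
```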

Each row lists the country followed by 23 values: (1) impressions of the state-affiliated media (SAM) label; (2) clicks on the SAM label; (3) CTR of the SAM label; (4–6) impressions of video interventions (Holocaust misinformation/denial; mpox; elections); (7–9) CTR of video interventions (same three topics); (10–13) impressions of search interventions (Holocaust misinformation/denial; mpox; elections; climate change); (14–17) clicks on search interventions (same four topics); (18–21) CTR of search interventions (same four topics); (22–23) impressions of public service announcements (Holocaust misinformation/denial; mpox). All CTR values are fractions of impressions.
Austria 3705075 5771 0.001557593 3987721 6065332 78371511 0.00277878 0.005051826 0.001992012 156298 467253 708656 228390 6187 2111 3263 196 0.03958464 0.004517895 0.004604491 0.000858181 16 26
Belgium 3789615 6994 0.00184557 3501679 15383226 0 0.003052536 0.004787292 0 200324 669059 0 207808 10626 2501 0 159 0.053044069 0.003738086 0 0.000765129 41 26
Bulgaria 7727480 9390 0.001215144 697482 5869376 29664185 0.004028778 0.00679033 0.001686141 62548 446240 121672 140769 1701 2724 367 183 0.027195114 0.006104338 0.003016306 0.001300002 17 39
Croatia 1656809 2786 0.001681546 658928 5206361 24640666 0.003373661 0.007821202 0.001692933 100586 561621 546661 159478 2631 3321 1767 128 0.026156722 0.00591324 0.003232351 0.000802619 6 13
Cyprus 752546 1166 0.001549407 311308 1309314 0 0.003819369 0.006010017 0 24629 77553 0 19126 445 452 0 20 0.018068131 0.005828272 0 0.001045697 4 4
Czech Republic 7602192 7762 0.001021021 2888696 6134172 329546 0.004759587 0.009942499 0.001007447 88635 587165 17994 163222 2476 4180 56 172 0.027934789 0.007118953 0.003112148 0.00105378 99 76
Denmark 2074577 4350 0.002096813 1690719 4604268 0 0.004065134 0.008214118 0 59881 435391 0 148528 1572 2571 0 134 0.026252067 0.005905037 0 0.000902187 13 17
Estonia 1391192 2402 0.001726577 406818 2279691 0 0.003987041 0.00802872 0 14970 147804 0 29383 476 926 0 44 0.031796927 0.006265054 0 0.001497465 60 12
Finland 3310339 9274 0.002801526 3314306 8904456 0 0.003627305 0.007459524 0 118664 648096 0 238286 2024 4591 0 213 0.017056563 0.007083827 0 0.000893884 27 30
France 32521568 28995 0.000891562 2293975 123453307 1301158781 0.004792554 0.003822433 0.001599036 1592000 3031084 15712577 652102 111776 5826 7306 446 0.070211055 0.001922085 0.000464978 0.000683942 562 473
Germany 37522365 49125 0.001309219 40208515 51643857 209773848 0.002756879 0.004731521 0.001247319 1383744 3901089 7265486 1761399 66652 15278 13805 1488 0.048167869 0.003916342 0.001900079 0.000844783 344 385
Greece 3107902 6491 0.002088547 2559183 9476029 0 0.003910232 0.006915661 0 492728 1145046 0 250015 2946 6796 0 361 0.005978958 0.005935133 0 0.001443913 14 28
Hungary 41012350 27450 0.000669311 4785260 5483667 0 0.003591236 0.007310437 0 118829 495768 0 270274 4853 3857 0 330 0.040840199 0.007779849 0 0.001220983 14 33
Ireland 3250908 6757 0.002078496 3352323 9413972 278245 0.003129173 0.00624168 0.00127226 112337 714141 1651434 221386 1848 2252 16293 118 0.016450502 0.003153439 0.009865971 0.000533006 20 38
Italy 12463432 16462 0.001320824 1987326 31117509 0 0.004668082 0.005592125 0 877504 2604995 0 1113346 11749 36148 0 818 0.013389113 0.013876418 0 0.000734722 74 63
Latvia 3063840 4027 0.001314364 491701 2821714 0 0.004234281 0.008449829 0 18212 206307 0 37048 668 1375 0 63 0.036679113 0.006664825 0 0.001700497 89 9
Lithuania 3025380 5056 0.001671195 685542 5065658 9102070 0.003958911 0.007301519 0.001649515 37622 484805 41034 112397 820 2767 127 176 0.021795758 0.005707449 0.003094994 0.001565878 41 11
Luxembourg 439714 628 0.001428201 181085 823972 0 0.003633653 0.005151874 0 11059 39807 0 14448 571 180 0 17 0.051632155 0.004521818 0 0.001176633 2 1
Malta 417073 605 0.001450585 206924 645827 0 0.002860954 0.004848048 0 7744 33713 0 9186 178 148 0 5 0.022985537 0.004389998 0 0.000544307 2 1
Netherlands 14557284 23801 0.001634989 12745193 15404762 0 0.002615025 0.007486971 0 490423 1010537 0 406345 6942 4547 0 308 0.014155127 0.004499588 0 0.000757977 80 147
Poland 203836052 63752 0.000312761 27175740 25070807 0 0.002600224 0.008385929 0 1069545 2560968 0 888768 842 19079 0 1136 0.000787251 0.007449917 0 0.001278174 154 172
Portugal 1518762 4985 0.003282279 1990257 6017068 0 0.003294047 0.006165794 0 221507 714886 0 198458 2507 3653 0 162 0.011317927 0.005109906 0 0.000816294 11 12
Romania 36420337 58883 0.001616762 3636828 12931412 1093883826 0.004134097 0.00773334 0.001400099 222020 1339325 21733061 375102 5125 7857 70746 513 0.023083506 0.005866388 0.003255225 0.001367628 21 40
Slovakia 1971329 3352 0.001700376 640942 1798295 0 0.003516699 0.007978669 0 42499 329754 0 93322 1331 2027 0 104 0.031318384 0.006147007 0 0.001114421 16 24
Slovenia 724668 1403 0.001936059 462528 2037178 0 0.003353311 0.006032364 0 30051 164647 0 34445 1520 793 0 26 0.05058068 0.004816365 0 0.000754827 3 5
Spain 6639002 11904 0.001793041 8574010 47595074 0 0.003606247 0.003745682 0 2155115 1775339 0 842382 39383 6112 0 503 0.018274199 0.003442723 0 0.000597116 51 82
Sweden 11757565 12977 0.001103715 5540266 15829378 0 0.004684613 0.00741501 0 175333 1106486 0 486125 4056 6128 0 405 0.023133124 0.005538254 0 0.000833119 87 55
Iceland 291908 589 0.002017759 215411 679537 4620010 0.003574562 0.006076196 0.004255618 5203 22964 78449 4668 147 245 1095 7 0.028252931 0.010668873 0.013958113 0.001499572 4 5
Liechtenstein 50186 48 0.000956442 11568 21397 0 0.004062932 0.006169089 0 478 1406 0 548 25 10 0 1 0.052301255 0.007112376 0 0.001824818 1 0
Norway 4367100 8605 0.001970415 2909307 6765469 0 0.004536476 0.009467784 0 89193 539291 0 223306 2505 3472 0 179 0.028085164 0.006438083 0 0.000801591 27 18
Total EU 446259356 376548 0.000843787 134975255 422385682 2747202678 0.003136642 0.005407563 0.001506023 9884807 25698879 47798575 9101538 291905 148200 113730 8228 0.029530673 0.005766789 0.00237936 0.000904023 1868 1822
Total EEA 450968550 385790 0.00085547 138111541 429852085 2751822688 0.00316689 0.005472562 0.001510639 9979681 26262540 47877024 9330060 294582 151927 114825 8415 0.029518178 0.005784932 0.002398332 0.000901923 1900 1845

Measure 17.2

Relevant Signatories will develop, promote and/or support or continue to run activities to improve media literacy and critical thinking such as campaigns to raise awareness about Disinformation, as well as the TTPs that are being used by malicious actors, among the general public across the European Union, also considering the involvement of vulnerable communities.


QRE 17.2.1

Relevant Signatories will describe the activities they launch or support and the Member States they target and reach. Relevant signatories will further report on actions taken to promote the campaigns to their user base per Member States targeted.

In order to raise awareness among our users about specific topics and empower them, we run a variety of on- and off-platform media literacy campaigns. Our approach may differ depending on the topic. We localise certain campaigns (e.g., for elections), meaning we collaborate with national partners to develop an approach that best resonates with the local audience. For other campaigns, such as the War in Ukraine, our emphasis is on scalability and connecting users to accurate and trusted resources. 

Below are examples of the campaigns we have most recently run in-app which have leveraged a number of the intervention tools we have outlined in our response to QRE 17.1.1 (e.g. search interventions and video notice tags).

(I) Promoting election integrity. As well as the election integrity pages on TikTok's Safety Center and Transparency Center, and the new dedicated European Elections Integrity Hub, which bring awareness and visibility to how we tackle election misinformation and covert influence operations on our platform, we launched media literacy campaigns in advance of several elections in the EU and wider Europe.

France Legislative Elections 2024: From 17 June 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 French legislative elections. The centre contained a section about spotting misinformation, which included videos created in partnership with fact-checking organisation Agence France-Presse (AFP).

Germany Regional Elections 2024 (Saxony, Thuringia, Brandenburg): From 8 Aug 2024, we launched an in-app Election Centre to provide users with up-to-date information about the German regional elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).

Austria Federal Election 2024: From 13 Aug 2024, we launched an in-app Election Centre to provide users with up-to-date information about the election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).

Moldova Presidential Election and EU Referendum 2024: From 6 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Moldova presidential election and EU referendum. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation StopFals!

Georgia Parliamentary Election 2024: From 16 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Georgia parliamentary election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Fact Check Georgia.

Bosnia Parliamentary Election 2024: From 17 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Bosnian regional elections, which contained a section about spotting misinformation.

Lithuania Parliamentary Election 2024: From 17 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Lithuanian parliamentary elections, which contained a section about spotting misinformation.

Czechia Regional Elections 2024: From 13 Sept 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Czechia regional elections, which contained a section about spotting misinformation.

Bulgaria Parliamentary Election 2024: From 1 Oct 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Bulgaria parliamentary election, which contained a section about spotting misinformation. 

Romania Presidential and Parliamentary Election 2024: From 11 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Romanian elections. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Funky Citizens. [On 6 Dec 2024, following the Constitutional Court's decision to annul the first round of the presidential election, we updated our in-app Election Centre to guide users on rapidly changing events].

Ireland General Election: From 7 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Irish general election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation The Journal.

Iceland Parliamentary Election 2024: From 7 Nov 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Iceland parliamentary election, which contained a section about spotting misinformation. 

Croatia Presidential Election 2024: From 6 Dec 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2024 Croatia presidential election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Faktograf.

Germany Federal Election 2025: From 16 Dec 2024, we launched an in-app Election Centre to provide users with up-to-date information about the 2025 German federal election. The centre contained a section about spotting misinformation, which included videos created in partnership with the fact-checking organisation Deutsche Presse-Agentur (dpa).


(II) Election Speaker Series.
To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova. 

  1. France: Agence France-Presse (AFP)
  2. Germany: German Press Agency (dpa)
  3. Austria: German Press Agency (dpa)
  4. Lithuania: Logically Facts
  5. Romania: Funky Citizens
  6. Ireland: Logically Facts
  7. Croatia: Faktograf
  8. Georgia: Fact Check Georgia
  9. Moldova: StopFals!

(III) Media literacy (General). We rolled out two new ongoing general media literacy and critical thinking skills campaigns in the EU and two in EU candidate countries in collaboration with our fact-checking and media literacy partners:
  • France:  Agence France-Presse (AFP)
  • Portugal: Polígrafo
  • Georgia: Fact Check Georgia
  • Moldova: StopFals!

This brings the number of general media literacy and critical thinking skills campaigns in Europe to 11 (Denmark, Finland, France, Georgia, Ireland, Italy, Spain, Sweden, Moldova, Netherlands, and Portugal).

(IV) Media literacy (War in Ukraine). We continue to serve 17 localised media literacy campaigns specific to the war in Ukraine in: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania, Czechia, Poland, Croatia, Slovenia, Bulgaria, Germany, Austria, Bosnia, Montenegro, and Serbia.

  • Partnered with Lead Stories: Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, Lithuania.
  • Partnered with fakenews.pl: Poland.
  • Partnered with Correctiv: Germany, Austria.

Through these media literacy campaigns, users searching for keywords relating to the war in Ukraine on TikTok are directed to tips prepared in partnership with local media literacy bodies and our trusted fact-checking partners, to help them identify misinformation and prevent its spread on the platform. 

(V) Israel-Hamas conflict. To help raise awareness and protect our users, we have search interventions which are triggered when users search for neutral terms related to this topic (e.g., Israel, Palestine). These search interventions remind users to pause and check their sources, and also direct them to well-being resources.

(VI) Climate literacy. 
  • Our climate change search intervention tool is available in 23 official EU languages (plus Norwegian and Icelandic for EEA users). It redirects users looking for climate change-related content to authoritative information and encourages them to report any potential misinformation they see.
  • In April 2024, in partnership with The Mary Robinson Centre, TikTok launched the TikTok Youth Climate Leaders Alliance, a programme aimed at 18-30-year-olds looking to make significant changes in the face of the climate crisis.
  • Actively participated in the UN COP29 climate change summit by:
    • Working with the COP29 presidency to promote their content and engage new audiences around the conference as a strategic media partner.
    • Re-launching our global #ClimateAction campaign with over 7K posts from around the world. Content across #ClimateAction has now received over 4B video views since being launched in 2021.
    • Bringing 5 creators to the summit, who collectively produced 15+ videos that received over 60M video views.
    • Launching two global features (a video notice tag and search intervention guide) to point users to authoritative climate-related content between 29th October and 25th November, which were viewed 400k times.
  • As of August 2024, popular hashtags #ClimateChange, #SustainableLiving, and #ClimateAction have more than 800,000 associated posts on TikTok, combined.

SLI 17.2.1

Relevant Signatories report on number of media literacy and awareness raising activities organised and or participated in and will share quantitative information pertinent to show the effects of the campaigns they build or support at the Member State level.

We are pleased to report metrics on the four new general media literacy and critical thinking skills campaigns in France, Georgia, Moldova, and Portugal as well as the existing permanent campaigns that ran through the reporting period in: Denmark, Finland, Ireland, Italy, Spain, Sweden, and Netherlands.

Each row lists the country followed by: (1) total impressions of the H5 page between 1 July and 31 December 2024; (2) impressions of the search intervention; (3) clicks on the search intervention; (4) click-through rate of the search intervention.
France 72861 229676 1370 0.60%
Portugal 3400 107964 426 0.39%
Denmark 1540 10854 30 0.28%
Netherlands 2492 64241 226 0.35%
Ireland 1320 14282 46 0.32%
Finland 595 3725 25 0.67%
Sweden 1197 13444 64 0.48%
Spain 26213 1253955 3220 0.26%
Italy 1948 41297 181 0.44%
Austria and Germany 33220 15072256 45865 0.30%
Bulgaria 741 309132 1095 0.35%
Croatia 811 449332 1452 0.32%
Czech Republic 1025 954741 1722 0.18%
Slovenia 286 118972 407 0.34%

Measure 17.3

For both of the above Measures, and in order to build on the expertise of media literacy experts in the design, implementation, and impact measurement of tools, relevant Signatories will partner or consult with media literacy experts in the EU, including for instance the Commission's Media Literacy Expert Group, ERGA's Media Literacy Action Group, EDMO, its country-specific branches, or relevant Member State universities or organisations that have relevant expertise.

QRE 17.3.1

Relevant Signatories will describe how they involved and partnered with media literacy experts for the purposes of all Measures in this Commitment.

As documented in the TikTok Safety Center Safety Partners page and TikTok's Advisory Councils, we work with an array of industry experts, non-governmental organisations, and industry associations around the world in our commitment to building a safe platform for our community. These include media literacy bodies, which help us develop campaigns that educate users and redirect them to authoritative resources, and fact-checking partners. Specific examples of partnerships within the campaigns and projects set out in QRE 17.2.1 are:

(I) Promoting election integrity. We partner with various media organisations and fact-checkers to promote election integrity on TikTok. For more detail about the input our fact-checking partners provide please refer to QRE 30.1.3.
  • We ran 14 temporary media literacy election integrity campaigns in advance of elections, most in collaboration with our fact-checking and media literacy partners:
    • 8 in the EU (Austria, Croatia, France, 2 x Germany, Ireland, Lithuania, and Romania)
      • Austria: Deutsche Presse-Agentur (dpa)
      • Croatia: Faktograf
      • France: Agence France-Presse (AFP)
      • Germany (regional elections): Deutsche Presse-Agentur (dpa)
      • Germany (federal election): Deutsche Presse-Agentur (dpa)
      • Ireland: The Journal
      • Romania: Funky Citizens
    • 1 in the EEA (Iceland)
    • 5 in wider Europe/EU candidate countries (Bosnia, Bulgaria, Czechia, Georgia, and Moldova)
      • Georgia: Fact Check Georgia
      • Moldova: StopFals!
  • Election speaker series. To further promote election integrity, and inform our approach to elections, we invited suitably qualified local and regional external experts to share their insights and market expertise with our internal teams. During this reporting period, we ran 9 Election Speaker Series sessions, 7 in EU Member States and 2 in Georgia and Moldova. 
    • France: Agence France-Presse (AFP)
    • Germany: German Press Agency (dpa)
    • Austria: German Press Agency (dpa)
    • Lithuania: Logically Facts
    • Romania: Funky Citizens
    • Ireland: Logically Facts
    • Croatia: Faktograf
    • Georgia: Fact Check Georgia
    • Moldova: StopFals!

(II) War in Ukraine.
We continue to run our media literacy campaigns about the war in Ukraine, developed in partnership with our media literacy partners Correctiv in Austria and Germany, Fakenews.pl in Poland, and Lead Stories in Ukraine, Romania, Slovakia, Hungary, Latvia, Estonia, and Lithuania. We also expanded this campaign to Serbia, Bosnia, Montenegro, Czechia, Croatia, Slovenia, and Bulgaria.

Commitment 18

Relevant Signatories commit to minimise the risks of viral propagation of Disinformation by adopting safe design practices as they develop their systems, policies, and features.

We signed up to the following measures of this commitment

Measure 18.1 Measure 18.2 Measure 18.3

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

  • Onboarded two new fact-checking partners in wider Europe:
    • Albania & Kosovo: Internews Kosova
    • Georgia: Fact Check Georgia
  • Continued to improve the accuracy of, and overall coverage provided by, our machine learning detection models. 
  • Served as members of the EDMO working group for the creation of the Independent Intermediary Body (IIB) to support research on digital platforms.
    • Refined our standard operating procedure (SOP) for vetted researcher access to ensure compliance with the provisions of the Delegated Act on Data Access for Research.
    • Participated in the EC Technical Roundtable on data access in December 2024.
  • Invested in training and development for our Trust and Safety team, including regular internal sessions dedicated to knowledge sharing and discussion of relevant issues and trends, and attendance at external events where team members share their expertise and support continued professional learning. For example: 
    • In the lead-up to certain elections, we invite suitably qualified external local/regional experts to share their market expertise with our internal teams as part of our Election Speaker Series. This provides us with insights to better understand areas that could potentially amount to election manipulation, and informs our approach to the upcoming election.
    • In June 2024, 12 members of our Trust & Safety team (including leaders of our fact-checking program) attended GlobalFact11 and participated in an on-the-record mainstage presentation, answering questions about our misinformation strategy and partnerships with professional fact-checkers.
  • Continued to participate in, and co-chair, the working group on Elections.
  • In October 2024, we sponsored, attended, and presented at Disinfo24, the annual EU DisinfoLab Conference, in Riga.

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

We are continuously reviewing and improving our tools and processes to fight misinformation and disinformation and will report on any further development in the next COPD report.

Measure 18.1

Relevant Signatories will take measures to mitigate risks of their services fuelling the viral spread of harmful Disinformation, such as: recommender systems designed to improve the prominence of authoritative information and reduce the prominence of Disinformation based on clear and transparent methods and approaches for defining the criteria for authoritative information; other systemic approaches in the design of their products, policies, or processes, such as pre-testing.

QRE 18.1.1

Relevant Signatories will report on the risk mitigation systems, tools, procedures, or features deployed under Measure 18.1 and report on their deployment in each EU Member State.

TikTok takes a multi-faceted approach to tackling the spread of harmful misinformation, regardless of intent. This includes our policies, products, practices and external partnerships with fact-checkers, media literacy bodies, and researchers.
 
(I) Removal of violating content or accounts. To reduce potential harm, we aim to remove content or accounts that violate our CGs, including our I&A policies, before they are viewed or shared by other people. We detect and take action on this content using a combination of automation and human moderation (a simplified, illustrative sketch follows this list).
  • Automated review. We place considerable emphasis on proactive detection to remove violative content. Content that is uploaded to the platform is typically first reviewed by our automated moderation technology, which looks at a variety of signals across content, including keywords, images, captions, and audio, to identify violating content. We work with various external experts, like our fact-checking partners, to inform our keyword lists. If our automated moderation technology identifies content that is a potential violation, it will either be automatically removed from the platform or flagged for further review by our human moderation teams. In line with our safeguards to help ensure accurate decisions are made, automated removal is applied when violations are the most clear-cut. We also carry out targeted sweeps of certain types of violative content, including harmful misinformation, where we have identified specific risks or where our fact-checking partners or other experts have alerted us to specific risks. 
  • Human moderation. While some misinformation, such as repetitions of previously debunked content, can be addressed through technology alone, misinformation evolves quickly and is highly nuanced. That is why we have misinformation moderators with enhanced training and access to tools like our global repository of previously fact-checked claims from our IFCN-accredited fact-checking partners, which helps them assess the accuracy of content. We also have teams on the ground who partner with experts to prioritise local context and nuance. We may also issue guidance to our moderation teams to help them more easily spot and take swift action on violating content. Human moderation will also occur if a video gains popularity or has been reported. Community members can report violations in-app and on our website. Our fact-checking partners and other stakeholders can also report potentially violating content to us directly.
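
The following is a purely illustrative sketch of this two-stage triage flow; the thresholds, confidence signal, and keyword list are hypothetical and do not represent TikTok's actual systems.

```python
# Illustrative sketch only: a toy triage flow combining automated review and
# escalation to human moderation. All thresholds and keywords are hypothetical.

AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical confidence for clear-cut violations
HUMAN_REVIEW_THRESHOLD = 0.60  # hypothetical confidence for escalation

# Stand-in for a repository of previously fact-checked, debunked claims.
DEBUNKED_CLAIM_KEYWORDS = {"example debunked claim"}

def triage(caption: str, model_confidence: float) -> str:
    """Route a video to automatic removal, human review, or no action."""
    # Repetitions of previously debunked claims can be actioned automatically.
    if any(claim in caption.lower() for claim in DEBUNKED_CLAIM_KEYWORDS):
        return "auto_remove"
    # Clear-cut model detections are removed automatically.
    if model_confidence >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"
    # Nuanced or borderline cases go to trained misinformation moderators.
    if model_confidence >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"
    return "no_action"

print(triage("example debunked claim resurfaces", 0.30))  # -> auto_remove
print(triage("breaking news footage", 0.70))              # -> human_review
```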

(II) Safety in our recommendations. In addition to removing content that clearly violates our CGs, we have a number of safeguards in place to ensure the For You feed (as the primary access point for discovering original and entertaining content on the platform) has safety built-in.

  1. For content that does not violate our CGs but may negatively impact the authenticity of the platform, we reduce its prominence on the For You feed and/or label it. The types of misinformation we may make ineligible for the For You feed are made clear to users here: general conspiracy theories, unverified information related to an emergency or unfolding event, and potential high-harm misinformation that is undergoing a fact-check. We also label accounts and content of state-affiliated media entities to empower users to consider the sources of information. Our moderators take additional precautions to review videos as they rise in popularity, to reduce the likelihood of inappropriate content entering our recommender system. 
  2. Providing access to authoritative information is an important part of our overall strategy to counter misinformation. There are a number of ways in which we do this, including launching information centres with informative resources from authoritative third-parties in response to global or local events, adding public service announcements on hashtag or search pages, or labelling content related to a certain topic to prompt our community to seek out authoritative information. 

(III) Safety by Design. Within our Trust and Safety Product and Policy teams, we have subject matter experts dedicated to integrity and authenticity. When we develop a new feature or policy, these teams work closely with external partners to ensure we are building safety into TikTok by design and reflecting industry best practice. For example:

  • We collaborate with Irrational Labs to develop and implement specialised prompts that encourage users to pause and consider before sharing unverified content (as outlined in QRE 21.3.1).
  • Yad Vashem created an enrichment program on the Holocaust for our Trust and Safety team. The five-week program aimed to give our team a deeper understanding of the Holocaust, its lessons, and misinformation related to antisemitism and hatred.
  • We worked with local/regional experts through our Election Speaker Series to ensure their insights and expertise inform our internal teams ahead of particular elections throughout 2024.

QRE 18.1.2

Relevant Signatories will publish the main parameters of their recommender systems, both in their report and, once it is operational, on the Transparency Centre.

The For You feed is the interface users first see when they open TikTok. It is central to the TikTok experience and where most of our users spend their time exploring the platform. User interactions act as signals that help the recommender systems predict content they are more likely to be interested in as well as the content they might be less interested in and may prefer to skip. User interactions across TikTok can impact how the system ranks and serves content. 
These are some examples of information that may influence TikTok content in your For You feed:
  • User interactions: Content you like, share, comment on, and watch in full or skip, as well as the accounts you follow back.
  • Content information: Sounds, hashtags, number of views, and the country in which the content was published.
  • User information: Device settings, language preference, location, time zone and day, and device type.

For most users, interaction signals, such as the time spent watching a video, are generally weighted more heavily than other signals. 
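
As a purely illustrative sketch of how such weighted signals might be combined into a ranking score (the weights and feature names below are hypothetical, not TikTok's actual parameters):

```python
# Illustrative sketch only: a toy linear ranking score over the kinds of
# signals described above. Weights and feature names are hypothetical.

WEIGHTS = {
    "watch_completion": 3.0,  # user interaction: watched in full vs. skipped
    "liked_similar":    2.0,  # user interaction: liked similar content before
    "hashtag_match":    1.0,  # content information: overlaps with interests
    "same_language":    0.5,  # user information: matches language preference
}

def rank_score(features: dict[str, float]) -> float:
    """Combine normalised (0..1) signal values into a single ranking score."""
    return sum(WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS)

candidates = {
    "video_a": {"watch_completion": 0.9, "liked_similar": 1.0, "hashtag_match": 0.2},
    "video_b": {"watch_completion": 0.4, "hashtag_match": 0.9, "same_language": 1.0},
}
# Rank candidate videos by descending score: video_a (4.9) before video_b (2.6).
for vid in sorted(candidates, key=lambda v: rank_score(candidates[v]), reverse=True):
    print(vid, round(rank_score(candidates[vid]), 2))
```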

Aside from the signals users provide by how they interact with content on TikTok, there are additional tools we have built to help them better control what kind of content is recommended to them.

  • Not interested: Users can long-press on the video in their For You feed and select ‘Not interested’ from the pop-up menu. This will let us know they are not interested in this type of content and we will limit how much of that content we recommend in their feed.
  • Video keyword filters: Users can add keywords – both words and hashtags – that they'd like to filter from their For You feed.
  • For You refresh: To help users discover new content, they can refresh their For You feed, enabling them to explore entirely new sides of TikTok.
We share more information about our recommender systems in our Help Center and Transparency Center and below in our response to QRE 19.1.1.

QRE 18.1.3

Relevant Signatories will outline how they design their products, policies, or processes, to reduce the impressions and engagement with Disinformation whether through recommender systems or through other systemic approaches, and/or to increase the visibility of authoritative information.

We take action to prevent and mitigate the spread of inaccurate, misleading, or false content that may cause significant harm to individuals or the public at large. We do this by removing content and accounts that violate our rules, investing in media literacy and connecting our community to authoritative information, and partnering with external experts. Our I&A policies make clear that we do not allow activities that may undermine the integrity of our platform or the authenticity of our users. We remove content or accounts that involve misleading information that causes significant harm or, in certain circumstances, reduce the prominence of content. The types of misinformation we may make ineligible for the For You feed are set out in our Community Guidelines.

  •  Misinformation
    • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society".
    • Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness.
    • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest.
    • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study.
    • Unverified claims related to an emergency or unfolding event.
    • Potential high-harm misinformation while it is undergoing a fact-checking review.
  • Civic and Election Integrity
    • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied.
    • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill.

  • Fake Engagement
    • Content that tricks or manipulates others as a way to increase gifts, or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content.

To enforce our CGs at scale, we use a combination of automated review and human moderation. While some misinformation, such as repetitions of previously debunked content, can be addressed through technology alone, misinformation evolves quickly and is highly nuanced. Assessing harmful misinformation requires additional context and assessment by our misinformation moderators, who have enhanced training, expertise and tools to identify such content, including our global repository of previously fact-checked claims from our IFCN-accredited fact-checking partners and direct access to those partners where appropriate.

Our independent fact-checking partners do not moderate content directly on TikTok; instead, they assess whether a claim is true, false, or unsubstantiated so that our moderators can take action based on our Community Guidelines. We incorporate fact-checker input into our broader content moderation efforts through:

  • Proactive insight reports that flag new and evolving claims they’re seeing across the internet. This helps us detect harmful misinformation and anticipate misinformation trends on our platform.
  • A repository of previously fact-checked claims to help misinformation moderators make swift and accurate decisions. 

Working with our network of independent fact-checking organisations enables TikTok to identify and take action on misinformation and to connect our community to authoritative information around important events, which is a key part of our overall strategy to counter misinformation. There are a number of ways in which we do this, including launching information centers with resources from authoritative third parties in response to global or local events, adding public service announcements (PSAs) on hashtag or search pages, or labelling content related to a certain topic to prompt our community to seek out authoritative information.

We are also committed to civic and election integrity and to mitigating the spread of false or misleading content about electoral or civic processes. We work with national electoral commissions, media literacy bodies and civil society organisations to ensure we are providing our community with accurate, up-to-date information about an election through our in-app election information centers, election guides, search interventions and content labels.


SLI 18.1.1

Relevant Signatories will provide, through meaningful metrics capable of catering for the performance of their products, policies, processes (including recommender systems), or other systemic approaches as relevant to Measure 18.1 an estimation of the effectiveness of such measures, such as the reduction of the prevalence, views, or impressions of Disinformation and/or the increase in visibility of authoritative information. Insofar as possible, Relevant Signatories will highlight the causal effects of those measures.

Methodology of data measurement:

The share cancel rate (%) following the unverified content label share warning pop-up indicates the percentage of users who do not share a video after seeing the label pop up. This metric is based on the approximate location of the users that engaged with these tools.
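
A minimal sketch of this calculation, using hypothetical counts rather than reported figures:

```python
# Minimal sketch: computing a share cancel rate from raw event counts.
# The counts below are hypothetical, not reported figures.

def share_cancel_rate(popup_impressions: int, shares_after_popup: int) -> float:
    """Percentage of users who saw the warning pop-up and did not share."""
    cancels = popup_impressions - shares_after_popup
    return 100.0 * cancels / popup_impressions

# Hypothetical: 10,000 pop-up impressions; 6,700 users shared anyway.
print(f"{share_cancel_rate(10_000, 6_700):.1f}%")  # -> 33.0%
```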

Country Share cancel rate (%) following the unverified content label share warning pop-up (users who do not share the video after seeing the pop up)
Austria 31.80%
Belgium 33.80%
Bulgaria 34.00%
Croatia 33.70%
Cyprus 32.90%
Czech Republic 29.50%
Denmark 30.20%
Estonia 28.50%
Finland 27.20%
France 37.10%
Germany 30.10%
Greece 32.10%
Hungary 31.40%
Ireland 29.60%
Italy 37.70%
Latvia 30.90%
Lithuania 30.80%
Luxembourg 33.60%
Malta 35.40%
Netherlands 27.80%
Poland 28.90%
Portugal 33.10%
Romania 30.10%
Slovakia 28.90%
Slovenia 33.30%
Spain 34.10%
Sweden 29.40%
Iceland 27.90%
Liechtenstein 19.60%
Norway 25.40%
Total EU 32.20%
Total EEA 32.10%

Measure 18.2

Relevant Signatories will develop and enforce publicly documented, proportionate policies to limit the spread of harmful false or misleading information (as depends on the service, such as prohibiting, downranking, or not recommending harmful false or misleading information, adapted to the severity of the impacts and with due regard to freedom of expression and information); and take action on webpages or actors that persistently violate these policies.

QRE 18.2.1

Relevant Signatories will report on the policies or terms of service that are relevant to Measure 18.2 and on their approach towards persistent violations of these policies.

We take action against misinformation that causes significant harm to individuals, our community, or the larger public regardless of intent. We do this by removing content and accounts that violate our rules, by investing in media literacy and connecting our community to authoritative information, and by partnering with experts.

Our Terms of Service and I&A policies under our CGs are the first line of defence in combating harmful misinformation and (as outlined in more detail in QRE 14.1.1) deceptive behaviours on our platform. These rules make clear to our users what content we remove or make ineligible for the For You feed when they pose a risk of harm to our users and our community.

Specifically, our policies do not allow:

  • Misinformation 
    • Misinformation that poses a risk to public safety or may induce panic about a crisis event or emergency, including using historical footage of a previous attack as if it were current, or incorrectly claiming a basic necessity (such as food or water) is no longer available in a particular location.
    • Health misinformation, such as misleading statements about vaccines, inaccurate medical advice that discourages people from getting appropriate medical care for a life-threatening disease, or other misinformation which may cause negative health effects on an individual's life.
    • Climate change misinformation that undermines well-established scientific consensus, such as denying the existence of climate change or the factors that contribute to it.
    • Conspiracy theories that name and attack individual people.
    • Conspiracy theories that are violent or hateful, such as making a violent call to action, having links to previous violence, denying well-documented violent events, or causing prejudice towards a group with a protected attribute.

  • Civic and Election Integrity
    • Election misinformation, including:
      • How, when, and where to vote or register to vote;
      • Eligibility requirements of voters to participate in an election, and the qualifications for candidates to run for office;
      • Laws, processes, and procedures that govern the organisation and implementation of elections and other civic processes, such as referendums, ballot propositions, or censuses;
      • Final results or outcome of an election.

  • Edited Media and AI-Generated Content (AIGC)
    • Realistic-appearing people under the age of 18.
    • The likeness of adult private figures, if we become aware it was used without their permission.
    • Misleading AIGC or edited media that falsely shows:
      • Content made to seem as if it comes from an authoritative source, such as a reputable news organisation;
      • A crisis event, such as a conflict or natural disaster.
      • A public figure who is:
        • being degraded or harassed, or engaging in criminal or antisocial behaviour;
        • taking a position on a political issue, commercial product, or a matter of public importance (such as an election);
        • being politically endorsed or condemned by an individual or group.

  • Fake Engagement
    • Facilitating the trade or marketing of services that artificially increase engagement, such as selling followers or likes.
    • Providing instructions on how to artificially increase engagement on TikTok.

We have made it even clearer to our users here that the following content is ineligible for the For You feed:

  • Misinformation 
    • Conspiracy theories that are unfounded and claim that certain events or situations are carried out by covert or powerful groups, such as "the government" or a "secret society"
    • Moderate harm health misinformation, such as an unproven recommendation for how to treat a minor illness
    • Repurposed media, such as showing a crowd at a music concert and suggesting it is a political protest
    • Misrepresenting authoritative sources, such as selectively referencing certain scientific data to support a conclusion that is counter to the findings of the study
    • Unverified claims related to an emergency or unfolding event
    • Potential high-harm misinformation while it is undergoing a fact-checking review

  • Civic and Election Integrity
    • Unverified claims about an election, such as a premature claim that all ballots have been counted or tallied
    • Statements that significantly misrepresent authoritative civic information, such as a false claim about the text of a parliamentary bill

  • Fake Engagement
    • Content that tricks or manipulates others as a way to increase gifts or engagement metrics, such as "like-for-like" promises or other false incentives for engaging with content

As outlined in QRE 14.1.1, we also remove accounts that seek to mislead people or use TikTok to deceptively sway public opinion. These activities range from inauthentic or fake account creation to more sophisticated efforts to undermine public trust.

We have policy experts within our Trust and Safety team dedicated to the topic of integrity and authenticity. They keep these policies under continual review and collaborate with external partners and experts to understand whether updates or new policies are required, and to ensure our policies are informed by a diversity of perspectives, expertise, and lived experiences. In particular, our Safety Advisory Council for Europe brings together independent leaders from academia and civil society who represent a diverse array of backgrounds and perspectives, including experts in free expression, misinformation, and other safety topics. They work collaboratively with us to inform and strengthen our policies, product features, and safety processes.

Enforcing our policies. We remove content – including video, audio, livestream, images, comments, links, or other text – that violates our I&A policies. Individuals are notified of our decisions and can appeal them if they believe no violation has occurred. We also make clear in our CGs that we will temporarily or permanently ban accounts and/or users that are involved in serious or repeated violations, including violations of our I&A policies.

We enforce our CGs, including our I&A policies, through a mix of technology and human moderation. To do this effectively at scale, we continue to invest in our automated review processes as well as in people and training. At TikTok, we place considerable emphasis on proactive content moderation, meaning our teams work to detect and remove harmful material before it is reported to us.

However, misinformation is different from other content issues: context and fact-checking are critical to enforcing our misinformation policies consistently and accurately. So while we use machine learning models to help detect potential misinformation, ultimately our approach today is to have our moderation team assess, confirm, and remove misinformation violations. We have misinformation moderators who have enhanced training, expertise, and tools to take action on harmful misinformation. This includes a repository of previously fact-checked claims, which helps misinformation moderators make swift and accurate decisions, and direct access to our fact-checking partners, who help assess the accuracy of new content.
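
To make that flow concrete, the following is a minimal, purely illustrative Python sketch of such a triage step. All names, thresholds, routes, and the claim repository are hypothetical assumptions for the example and do not describe TikTok's actual systems:

```python
# Verdicts from a (hypothetical) repository of previously fact-checked claims
FACT_CHECKED_CLAIMS = {"claim:miracle-cure": "false"}

def triage(model_score, claim_id, potential_harm):
    """Route a machine-flagged video to human review with helpful context."""
    if model_score < 0.8:  # assumed flagging threshold
        return {"route": "none"}

    prior_verdict = FACT_CHECKED_CLAIMS.get(claim_id)
    decision = {"route": "misinformation_moderator_queue",
                "prior_verdict": prior_verdict}

    if prior_verdict is None:
        # New claim: ask a fact-checking partner to assess its accuracy
        decision["escalate_to_fact_checking_partner"] = True
        if potential_harm == "high":
            # Potential high-harm misinformation is ineligible for the
            # For You feed while the fact-checking review is ongoing
            decision["for_you_feed_eligible"] = False
    return decision

print(triage(0.93, "claim:new-rumour", "high"))
```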

We strive to maintain a balance between freedom of expression and protecting our users and the wider public from harmful content. Our approach to combating harmful misinformation, as stated in our CGs, is to remove content that is both false and capable of causing harm to individuals or the wider public; it does not extend to information that is merely inaccurate and poses no risk of harm. Additionally, in cases where fact-checks are inconclusive, especially during emergency or unfolding events, content may not be removed and may instead become ineligible for recommendation in the For You feed and be labelled with the "unverified content" label to limit the spread of potentially misleading information.

We are pleased to include in this report the number of videos made ineligible for the For You feed under the relevant I&A policies as explained to users here.

Note that, in relation to the metrics we have shared at SLI 18.2.1 below, of all the views that occurred in H2 2024, less than approximately 1 in every 10,000 occurred on content identified and removed for violating our policies around harmful misinformation.
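
Expressed as a computation, that prevalence framing amounts to the following sketch (the figures in the example are hypothetical, not reported data):

```python
def views_per_10k(removed_content_views, total_views):
    """Views on later-removed violating content per 10,000 total views."""
    return 10_000 * removed_content_views / total_views

# Hypothetical numbers: 0.9 views per 10,000 is under the 1-per-10,000 mark
print(views_per_10k(90_000, 1_000_000_000))  # 0.9
```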

SLI 18.2.1

Relevant Signatories will report on actions taken in response to violations of policies relevant to Measure 18.2, at the Member State level. The metrics shall include: Total number of violations and Meaningful metrics to measure the impact of these actions (such as their impact on the visibility of or the engagement with content that was actioned upon).

Methodology of data measurement:

We have based the numbers of videos removed for violating our Misinformation, Civic and Election Integrity, and Edited Media and AIGC policies on the country in which the video was posted.

The number of views of videos removed for violating each of these policies is based on the approximate location of the user.

We have also updated the methodology for the number of videos made ineligible for the For You feed under our Misinformation policy.

Columns, in order: (1) videos removed for violating the Misinformation policy; (2) views of videos removed under the Misinformation policy; (3) videos removed for violating the Civic and Election Integrity policy; (4) views of videos removed under the Civic and Election Integrity policy; (5) videos removed for violating the Edited Media and AI-Generated Content (AIGC) policy; (6) views of videos removed under the Edited Media and AIGC policy; (7) videos made ineligible for the For You feed under the Misinformation policy.

Country (1) (2) (3) (4) (5) (6) (7)
Austria 2888 1313102 472 843182 414 216433 1696
Belgium 3902 2844929 1002 107828 2092 1119223 2688
Bulgaria 1568 5435715 182 110186 227 5977 1600
Croatia 789 973202 64 3753 1361 58579 616
Cyprus 511 1241327 86 1333 948 19441 326
Czech Republic 2720 4705302 275 25952 465 8287531 6470
Denmark 1455 2979180 335 14082 315 2742457 1157
Estonia 319 77555 41 866 208 2063380 453
Finland 984 1784968 199 1944 716 464824 811
France 44354 61693484 4390 8369126 8563 312078908 24035
Germany 50335 162220869 12231 3510858 11199 23904234 30934
Greece 4198 4431258 649 1726365 8742 145950 1735
Hungary 2002 9947587 308 273247 261 86870 957
Ireland 4676 4802257 2051 568596 1063 103199 2154
Italy 21035 39078480 3910 1578217 3574 1892355 19481
Latvia 694 3745925 48 9 129 4519 459
Lithuania 520 1122197 57 26 203 25410 647
Luxembourg 279 162787 66 2180 223 8729 121
Malta 168 5599 70 97 183 5811847 173
Netherlands 5422 2811880 1046 55695 1883 9080526 6189
Poland 13028 59545691 768 3942081 772 13404186 9872
Portugal 2629 31071224 535 28529 1010 339124 1400
Romania 14103 64183832 4276 33123122 937 623525 11739
Slovakia 1365 4714713 41 677 98 2014 1472
Slovenia 574 22494 28 111 66 605 346
Spain 22581 37024505 2126 3554918 4392 21882268 54592
Sweden 3489 9893681 633 6424 762 377862 2423
Iceland 122 153566 26 19 85 6113 77
Liechtenstein 35 0 20 0 48 525 33
Norway 1798 5158745 313 1152478 679 139984 1200
Total EU 206588 517833743 35889 57849404 50806 404749976 184546
Total EEA 208543 523146054 36248 59001901 51618 404896598 185856

Measure 18.3

Relevant Signatories will invest and/or participate in research efforts on the spread of harmful Disinformation online and related safe design practices, will make findings available to the public or report on those to the Code's taskforce. They will disclose and discuss findings within the permanent Task-force, and explain how they intend to use these findings to improve existing safe design practices and features or develop new ones.

QRE 18.3.1

Relevant Signatories will describe research efforts, both in-house and in partnership with third-party organisations, on the spread of harmful Disinformation online and relevant safe design practices, as well as actions or changes as a result of this research. Relevant Signatories will include where possible information on financial investments in said research. Wherever possible, they will make their findings available to the general public.

We regularly consult with third-party experts and researchers in relation to the development of policies and features designed to reduce the spread of disinformation. For example, we engaged with experts globally on our Election Misinformation policies, which helped inform updates to our I&A policies.

We are proud of our close work with behavioural psychologists at Irrational Labs, which led to the development of the following warning and labelling features (more detail at QRE 21.3.1):
  • specialised prompts for unverified content, which alert viewers to unverified content identified during an emergency or unfolding event; and
  • our state-controlled media label, which brings transparency to our community in relation to state-affiliated media entities and encourages users to consider the reliability of the source.

We are proud to be a signatory to the Partnership on AI's (PAI) Responsible Practices for Synthetic Media. We contributed to developing this code of industry best practices for AI transparency and responsible innovation, balancing creative expression with the risks of emerging AI technology. And, in accordance with our commitments as a launch partner, we worked on a case study outlining how the Practices informed our policy making on synthetic media.

Commitment 19

Relevant Signatories using recommender systems commit to make them transparent to the recipients regarding the main criteria and parameters used for prioritising or deprioritising information, and provide options to users about recommender systems, and make available information on those options.

We signed up to the following measures of this commitment

Measure 19.1 Measure 19.2

In line with this commitment, did you deploy new implementation measures (e.g. changes to your terms of service, new tools, new policies, etc)?

Yes

If yes, list these implementation measures here

Do you plan to put further implementation measures in place in the next 6 months to substantially improve the maturity of the implementation of this commitment?

No

If yes, which further implementation measures do you plan to put in place in the next 6 months?

No

Measure 19.1

Relevant Signatories will make available to their users, including through the Transparency Centre and in their terms and conditions, in a clear, accessible and easily comprehensible manner, information outlining the main parameters their recommender systems employ.

QRE 19.1.1

Relevant Signatories will provide details of the policies and measures put in place to implement the above-mentioned measures accessible to EU users, especially by publishing information outlining the main parameters their recommender systems employ in this regard. This information should also be included in the Transparency Centre.

The For You feed is the interface users first see when they open TikTok. It is central to the TikTok experience and is where most of our users spend their time exploring the platform.

We make clear to users in our Terms of Service and CGs (and also provide more context in our Help Center article and Transparency Center page) that each account holder's For You feed is based on a personalised recommendation system. The For You feed is curated to each user, and safety is built into our recommendations. As well as removing harmful misinformation content that violates our CGs, we take steps to avoid recommending certain categories of content that may not be appropriate for a broad audience, including general conspiracy theories and unverified information related to an emergency or unfolding event. We may also make some of this content harder to find in search.

Main parameters. The system recommends content by ranking videos based on a combination of factors, including:
  • user interactions (e.g. content users like, share, comment on, and watch in full or skip, as well as accounts users follow back);
  • content information (e.g. sounds, hashtags, number of views, and the country in which the content was published); and
  • user information (e.g. device settings, language preferences, location, time zone and day, and device types).


The main parameters help us make predictions on the content users are likely to be interested in. Different factors can play a larger or smaller role in what’s recommended, and the importance – or weighting – of a factor can change over time. For many users, the time spent watching a specific video is generally weighted more heavily than other factors. These predictions are also influenced by the interactions of other people on TikTok who appear to have similar interests. For example, if a user likes videos 1, 2, and 3 and a second user likes videos 1, 2, 3, 4 and 5, the recommendation system may predict that the first user will also like videos 4 and 5.
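
The following toy Python sketch illustrates both ideas: a weighted combination of ranking signals, and a prediction drawn from users with overlapping interests. The signal names and weights are invented for the example and are not TikTok's actual parameters:

```python
def rank_score(signals, weights):
    """Combine per-video signals into a single score via a weighted sum."""
    return sum(weights[name] * value for name, value in signals.items())

# Watch completion is weighted more heavily than the other (invented) signals
weights = {"watch_completion": 3.0, "liked_similar": 1.0, "language_match": 0.5}
video_a = {"watch_completion": 0.9, "liked_similar": 0.2, "language_match": 1.0}
video_b = {"watch_completion": 0.3, "liked_similar": 0.8, "language_match": 1.0}
print(rank_score(video_a, weights) > rank_score(video_b, weights))  # True

# Similar-interest prediction from the example above: user 1 liked videos
# {1, 2, 3}; user 2 liked {1, 2, 3, 4, 5}; the overlap suggests 4 and 5
likes = {"user_1": {1, 2, 3}, "user_2": {1, 2, 3, 4, 5}}
print(sorted(likes["user_2"] - likes["user_1"]))  # [4, 5]
```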

Users can also access the "Why this video" feature, which allows them to see, for any particular video that appears in their For You feed, the factors that influenced why it was recommended. This feature provides added transparency into how our ranking system works and empowers our users to better understand why a particular video has been recommended to them. In essence, it explains to users how their past interactions on the platform have shaped the videos they are recommended. For further information, see our newsroom post.

User preferences. Together with the safeguards we build into our platform by design, we empower our users to customise their experience to their preferences and comfort. This includes a number of features to help users shape the content they see. For example, in the For You feed:

  • Users can click on any video and select “not interested” to indicate that they do not want to see similar content.
  • Users are able to automatically filter out specific words or hashtags from the content recommended to them (see here).

Users are also able to refresh their For You feed if they feel recommendations are no longer relevant to them or are too similar. When the For You feed is refreshed, users are shown a number of new videos, including popular videos (e.g. videos with a high view count or a high like rate). Their interactions with these new videos will inform future recommendations.

As part of our obligations under the DSA (Article 38), we introduced non-personalised feeds on our platform, which provide our European users with an alternative to recommender systems. Users are able to turn off personalisation so that their feeds show non-personalised content. The For You feed, for example, will instead show popular videos from their region and internationally. See here.
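
Conceptually, the setting works like the sketch below, in which the function, fields, and fallback behaviour are hypothetical simplifications of the user-facing choice described above:

```python
def for_you_feed(user, personalised_ranker, popular_videos):
    """Personalised feed by default; popular regional and international
    videos when the user has turned personalisation off."""
    if user.get("personalisation_enabled", True):
        return personalised_ranker(user)
    region = user.get("region", "global")
    feed = list(popular_videos.get(region, []))
    if region != "global":
        feed += popular_videos.get("global", [])
    return feed

popular = {"FR": ["fr_vid_1"], "global": ["global_vid_1"]}
user = {"personalisation_enabled": False, "region": "FR"}
print(for_you_feed(user, lambda u: ["personalised_vid"], popular))
# ['fr_vid_1', 'global_vid_1']
```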

Measure 19.2

Relevant Signatories will provide options for the recipients of the service to select and to modify at any time their preferred options for relevant recommender systems, including giving users transparency about those options.

SLI 19.2.1

Relevant Signatories will provide aggregated information on effective user settings, such as the number of times users have actively engaged with these settings within the reporting period or over a sample representative timeframe, and clearly denote shifts in configuration patterns.

Methodology of data measurement:

The number of users who have filtered hashtags or keywords to set preferences for the For You feed, the number of times users clicked "not interested" in relation to the For You feed, and the number of times users clicked on the For You feed refresh are all based on the approximate location of the users who engaged with these tools.

The number of videos tagged with the AIGC label includes both automatic and creator-generated labelling.

Columns, in order: (1) users who filtered hashtags or words; (2) users who clicked "not interested"; (3) times users clicked on the For You feed refresh; (4) videos tagged with the AIGC label.

Country (1) (2) (3) (4)
Austria 53057 886639 52559 149390
Belgium 67734 1322561 83721 241538
Bulgaria 34081 744333 38568 153704
Croatia 20196 486259 23134 46131
Cyprus 7895 176600 13456 62428
Czech Republic 45392 753417 35791 140826
Denmark 35294 573821 27747 80022
Estonia 11648 151267 11558 30907
Finland 45185 586897 43657 109189
France 332521 7939397 486316 1832452
Germany 503549 7977800 648033 1883751
Greece 52519 1344879 68577 214464
Hungary 46966 1020692 28543 138023
Ireland 54952 801523 52714 67672
Italy 261272 6455485 295958 1140570
Latvia 15527 279241 24888 118117
Lithuania 21247 325564 23209 64359
Luxembourg 4519 76244 5508 44220
Malta 3137 77760 4923 15544
Netherlands 135944 2081920 150651 231651
Poland 196496 3383567 175988 519883
Portugal 57677 1152515 61327 216364
Romania 85551 2629162 165990 325318
Slovakia 18482 347681 13822 50322
Slovenia 9983 177990 19591 17100
Spain 275604 6889325 381588 1170610
Sweden 82868 1371265 111934 268743
Iceland 4720 57250 3175 8073
Liechtenstein 129 3563 291 418
Norway 48188 685406 63483 101728
Total EU 2479296 50013804 3049751 9333298
Total EEA 2532333 50760023 3116700 9443517

Commitment 21