Brand Safety @Twitter


Our purpose is to serve the public conversation

Improving the health of the public conversation is a top priority for Twitter. Violence, harassment, and similar types of behavior discourage people from expressing themselves and create an unsafe environment, and this behavior has no place on Twitter. In addition to our work to combat this negative activity platform-wide, Twitter has invested in a suite of solutions aimed at ensuring a safe advertising experience for everyone who uses Twitter.

 

The Twitter Rules

Our Rules are in place to ensure all people can participate in the public conversation freely and safely. These policies are enforced for all people who use Twitter, and set the standard for content and behavior not permitted on the platform. These policies address Violence, Terrorism/violent extremism, Child sexual exploitation, Abuse/harassment, Hateful conduct, Suicide or self-harm, Sensitive media, Illegal or regulated goods & services, Private information, Non-consensual nudity, Platform manipulation and spam, Civic integrity, Impersonation, Synthetic and manipulated media, and Copyright & trademark. Learn more about the Twitter Rules here, and our range of options to enforce them here.

 

Twitter Brand Safety Policies

Our Brand Safety policies, as well as the controls we offer people and advertisers, build upon the foundation laid by the Twitter Rules to promote a safe advertising experience for all users and brands. Our Brand Safety policies inform the context in which we serve ads, and exclude the following types of content:

  • Violent and Graphic Content 

  • Hateful, Abusive and Sensitive Content

  • Restricted and Illegal Products and Services 

  • Adult Sexual Content

  • Profanity & Offensive Language

For more information on how our Brand Safety policies are enforced across the platform, see “Safeguarding Advertising Experiences on Twitter” below. Learn more about our Brand Safety policies and their application in the Amplify Pre-Roll program here.

While our Brand Safety policies help inform ad placement on Twitter, we also have Advertising Policies that determine permissible content in ads and conduct of advertisers on Twitter. Learn more about our Ads Policies here.

 

Controls for Advertisers

We strongly believe in empowering our advertisers to control the placement of their ads on Twitter, and we’re actively working to expand our available controls. Here’s what we offer today:

Amplify Pre-Roll Brand Safety Controls
Advertisers can exclude any of our standard IAB categories and can exclude up to 100 specific Content Partner handles.

Campaign Placement Controls
Advertisers can control where their campaigns run by opting out of delivering Ads on Profiles or within Search results for all objectives aside from Amplify Pre-Roll and Amplify Sponsorships. We are prioritizing bringing these controls to Amplify campaigns.

Twitter Audience Platform Controls
Advertisers running campaigns on the Twitter Audience Platform (TAP) can select up to 2,000 apps to exclude from delivery. Note that TAP placement is only available as an option for Website Clicks, App Download or App Re-engagement objectives.
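To make these controls concrete, here is a minimal, hypothetical sketch of how the campaign-level exclusions described above might be represented and validated. The class, field names, and placeholder category code are illustrative assumptions, not the Twitter Ads API; only the numeric limits (100 Content Partner handles, 2,000 TAP apps) come from the descriptions above.

```python
# Hypothetical sketch of campaign-level brand safety settings (not the Twitter Ads API).
from dataclasses import dataclass, field

MAX_EXCLUDED_PARTNER_HANDLES = 100  # Amplify Pre-Roll: up to 100 Content Partner handles
MAX_EXCLUDED_TAP_APPS = 2000        # Twitter Audience Platform: up to 2,000 excluded apps


@dataclass
class BrandSafetySettings:
    excluded_iab_categories: set[str] = field(default_factory=set)
    excluded_partner_handles: set[str] = field(default_factory=set)
    excluded_tap_apps: set[str] = field(default_factory=set)
    serve_on_profiles: bool = True  # campaign placement opt-outs
    serve_in_search: bool = True

    def validate(self) -> None:
        """Enforce the documented limits before the settings are applied to a campaign."""
        if len(self.excluded_partner_handles) > MAX_EXCLUDED_PARTNER_HANDLES:
            raise ValueError(
                f"At most {MAX_EXCLUDED_PARTNER_HANDLES} Content Partner handles can be excluded."
            )
        if len(self.excluded_tap_apps) > MAX_EXCLUDED_TAP_APPS:
            raise ValueError(
                f"At most {MAX_EXCLUDED_TAP_APPS} TAP apps can be excluded."
            )


# Example: a campaign that opts out of Search placement and excludes one IAB category.
settings = BrandSafetySettings(
    excluded_iab_categories={"IAB14"},  # placeholder category code
    serve_in_search=False,
)
settings.validate()
```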

 

Controls for Everyone

We’re experimenting with new ways to give people additional control over the conversations they start on Twitter, and several features are already actively in use today.

Hidden Replies
All people on Twitter can hide any reply to their Tweets that they deem abusive or irrelevant to the conversation. Learn more here.

Conversation Controls
We are testing a new capability that allows people on Twitter to specify who is allowed to reply to their Tweets. Learn more here.

 

Safeguarding Advertising Experiences on Twitter

In addition to the controls available to everyone on Twitter and our brand safety controls for advertisers, Twitter leverages a combination of machine learning and human review to ensure that ads do not serve around objectionable content.

Adjacency to Sensitive Media in Timeline
Twitter prevents ad placement adjacent to Tweets that have been deemed “Sensitive Content” by our Twitter Service Team. Our Sensitive Media policy covers media that is excessively gory and media depicting sexual violence and/or assault. People on Twitter are also able to self-classify their Tweets as sensitive.

Ensuring brand safety in the Amplify Pre-Roll program
Every Tweet from our partners goes through two levels of scrutiny before it becomes monetizable through our Amplify Pre-Roll program:

  1. An algorithmic check that scans the Tweet text for unsafe language

  2. Manual human review of every monetized video to ensure that it meets our Brand Safety standards

We also hold regular proactive educational sessions with our partners to help them successfully monetize their content on Twitter within our brand safety standards. 

Promoting brand-safe placement in Search
Twitter monitors conversations and trending topics around the world 24 hours a day, and maintains an internal keyword denylist used to remove ads from search results we deem unsafe for advertising. The denylist is updated on a regular basis and applies to all campaigns globally. When a search is conducted, the denylist is checked; if the search term appears on the list, no Promoted Tweets will serve on that term’s search results page. The same denylist applies when users click a trending topic and are taken to the results page for that trend.
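As a rough illustration of this search-time check, the sketch below gates Promoted Tweet serving on a denylist lookup. The denylist contents and function names are placeholders, not Twitter's internal implementation.

```python
# Illustrative sketch of the search-time denylist check described above.
# The denylist contents and function names are placeholders.
SEARCH_KEYWORD_DENYLIST = {"example-unsafe-term"}  # maintained and updated globally


def normalize(term: str) -> str:
    return term.strip().lower()


def can_serve_promoted_tweets(search_term: str) -> bool:
    """Return False when the query is denylisted, so no ads serve on that results page."""
    return normalize(search_term) not in SEARCH_KEYWORD_DENYLIST


# The same check would apply when a trending topic click leads to its results page.
assert can_serve_promoted_tweets("weather")
assert not can_serve_promoted_tweets("Example-Unsafe-Term")
```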

Brand Safety Controls for Ads on Profiles
Every time a profile is updated, our machine learning model scans the content of the profile page against our Brand Safety policies before any Promoted Tweet is served there. We only serve ads on profiles that we deem to be safe for ads. We may also block ads from serving on individual user profiles based on the content or behavior of the account and its lack of alignment with our Brand Safety policies.
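The flow described above could be sketched as follows: a classifier (stubbed here) is re-run whenever a profile changes, and ad serving consults the stored result. The classifier, storage, and marker terms are illustrative assumptions, not Twitter's model or infrastructure.

```python
# Hedged sketch of profile gating: re-classify on profile update, check the stored
# result at serve time. Classifier, storage, and marker terms are placeholders.
profile_safety: dict[str, bool] = {}  # profile handle -> "safe for ads" flag


def classify_profile(profile_text: str) -> bool:
    """Stand-in for the machine learning model; True means the profile is brand safe."""
    unsafe_markers = {"example-unsafe-term"}
    return not any(marker in profile_text.lower() for marker in unsafe_markers)


def on_profile_update(handle: str, profile_text: str) -> None:
    # Re-evaluate brand safety every time the profile content changes.
    profile_safety[handle] = classify_profile(profile_text)


def can_serve_ads_on_profile(handle: str) -> bool:
    # Default to not serving ads until the profile has been evaluated as safe.
    return profile_safety.get(handle, False)


on_profile_update("@example", "Tweets about gardening and the weather.")
assert can_serve_ads_on_profile("@example")
```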

Keyword Targeting Restrictions
Twitter maintains a global denylist of Keyword Targeting terms that are not permitted to be used as parameters for positive keyword targeting (audiences associated with these terms can still be excluded through negative keyword targeting). This list is continually updated and includes a wide variety of terms that most brands would consider to be unsafe, as well as terms that are not allowed to be targeted per our Ads Policies. Learn more about our policies for Keyword Targeting here.
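The asymmetry described above, where denylisted terms are blocked for positive targeting but still usable for exclusion, might look roughly like the sketch below; the denylist contents and function name are assumptions for illustration.

```python
# Minimal sketch of the keyword targeting restriction described above.
# Denylist contents and the function name are placeholders.
KEYWORD_TARGETING_DENYLIST = {"example-unsafe-term"}


def validate_keyword_targeting(positive: set[str], negative: set[str]) -> None:
    blocked = {kw for kw in positive if kw.lower() in KEYWORD_TARGETING_DENYLIST}
    if blocked:
        raise ValueError(f"Denylisted terms cannot be positively targeted: {blocked}")
    # Negative (exclusion) targeting of denylisted terms is permitted, so `negative`
    # is intentionally not checked against the denylist.


validate_keyword_targeting(positive={"coffee"}, negative={"example-unsafe-term"})
```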

Audience Filtering and Validation
Twitter excludes accounts we suspect may be automated from monetizable audiences, helping to ensure valid traffic on ads. We also offer viewability measurement through integrations with multiple MRC-accredited third parties.

Private Conversations
Twitter is a public platform, and we work to ensure this open forum remains healthy through our policies and platform capabilities. Direct Messages, while private between the sender and recipients (up to a maximum of 50), are subject to the Twitter Rules, as are all individuals and content on Twitter. In a Direct Message conversation, when a participant reports another person, we will stop the reported person from sending messages to the person who reported them. The conversation will also be removed from the reporter's inbox. We will review reports and take action as appropriate.

 

Transparency and Measurement

First published in July 2012, our biannual Twitter Transparency Report highlights trends in legal requests, intellectual property-related requests, Twitter Rules enforcement, platform manipulation, and email privacy best practices. The report also provides insight into whether or not we take action on these requests. See our latest report here.

Transparency in advertising is also a core belief for us. Twitter provides additional transparency into campaign performance through measurement solutions and third-party studies based on your objectives. Our goal is to empower advertisers with measurement solutions to help you understand how your campaigns help achieve your broader marketing and business goals. Learn more about our measurement solutions here.

 

Our Commitment to Health Over Time

We’ve made significant improvements in health and brand safety over the past several years. Health has always been, and will remain, a top priority for Twitter, and our work is ever-evolving. Here are a few notable changes we’ve made in the last few years:

2020

June

  • We made our latest disclosure of information on more than 30,000 accounts in our archive of state-linked information operations, the only one of its kind in the industry, regarding three distinct operations that we attributed to the People's Republic of China (PRC), Russia, and Turkey.

May

  • We began testing new settings that let you choose who can reply to your Tweet and join your conversation.
  • We introduced new labels and warning messages that provide additional context and information on some Tweets containing disputed or misleading information.

March

  • We further expanded our rules against dehumanizing speech to prohibit language that dehumanizes on the basis of age, disability, or disease.

  • We broadened our definition of harm to address content that goes directly against guidance on COVID-19 from authoritative sources of global and local public health information.

February

  • Informed by public feedback, we launched our policy on synthetic and manipulated media, outlining how we’ll treat this content when we identify it.

January

  • We launched a dedicated search prompt intended to protect the public conversation and help people find authoritative health information around COVID-19. This work is constantly evolving, and the latest information can be found here.

2019

November

  • We made the decision to globally prohibit the promotion of political content. We made this decision based on our belief that political message reach should be earned, not bought.

  • We made the option to hide replies to Tweets available to everyone globally.

  • Twitter became certified against the DTSG Good Practice Principles from JICWEBS.

  • We asked the public for feedback on a new rule to address synthetic and manipulated media.

October

  • We clarified our principles & approach to reviewing reported Tweets from world leaders.

  • We published our most recent Transparency Report covering H1 2019.

  • We launched 24/7 internal monitoring of trending topics to promote brand safety on search results.

August

  • We updated our advertising policies to reflect that we would no longer accept advertising from state-controlled news media entities.

July

  • Informed by public feedback, we launched our policy prohibiting dehumanizing speech on the basis of religion.

June

  • We joined the Global Alliance for Responsible Media at Cannes.

  • We refreshed our Rules with simple, clear language, paring down from 2,500 words to under 600.

  • We clarified our criteria for allowing certain Tweets that violate our rules to remain on Twitter because they are in the public’s interest.

April

  • We shared an update on our progress towards improving the health of the public conversation, one year after declaring it a top company priority.

2018

October

  • We released all of the accounts and related content associated with potential information operations that we found on our service since 2016. This was the first of many disclosures we've since made to our public archive of state-backed information operations.

September

  • We asked the public for feedback on an upcoming policy expansion around dehumanizing speech, and took this feedback into consideration to update our rules.

March

  • We launched 24/7 human review of all monetized publisher content for Amplify Pre-Roll, along with an all-new Brand Safety policy for the program.

  • Jack publicly announced our commitment and approach to making Twitter a safer place.

 

Opportunities

We recognize the importance of measurable, brand-safe ad placement, and we're actively exploring opportunities to enhance the controls available to advertisers, our measurement capabilities, and our third-party reporting in this space.
