Facebook is working on controls to keep ads away from certain topics

Facebook is developing tools that advertisers can use to keep their ad placements away from certain topics in its news feed.

The company announced that it would begin testing topic-exclusion controls with a small group of advertisers. For example, a children’s toy company could avoid “Crime and Tragedy” content if it wished. Other topics include “News & Politics” and “Social Issues”.

The company said it would take “much of the year” to develop and test the tools.

Facebook, along with players like Google’s YouTube and Twitter, has worked with marketers and agencies through a group called the Global Alliance for Responsible Media (GARM) to develop standards in this area. They have worked on measures to support “consumer and advertiser safety”, including establishing definitions of harmful content, reporting standards, independent monitoring, and a commitment to develop tools to better manage ad adjacency.

Facebook’s news feed tools build on controls that already run in other areas of the platform, such as in-stream video or the Audience Network, which enables mobile software developers to deliver in-app advertising to users based on Facebook data.

The concept of “brand safety” matters to any advertiser who wants to make sure their company’s ads do not appear near certain topics. But the advertising industry is also increasingly pushing for platforms like Facebook to be made safer overall, not just in the immediate vicinity of their ad placements.

The CEO of the World Federation of Advertisers, which founded GARM, told CNBC last summer that the shift was from “brand safety” toward a broader focus on “societal safety”. The point is that even if ads don’t appear in or next to certain videos, many platforms are essentially funded by advertising dollars; in other words, advertising helps subsidize all of the free content on those platforms. Many advertisers say they therefore feel responsible for what happens on the ad-supported web.

This became very apparent last summer, when a number of advertisers temporarily pulled their advertising dollars from Facebook and urged the company to take stricter steps to stop the spread of hate speech and misinformation on its platform. Some of these advertisers not only wanted their ads kept away from hateful or discriminatory content; they also wanted a plan to ensure that such content was removed from the platform entirely.

Twitter said in December that it is working on its own in-feed brand safety tools.
