Andrew Nho, Ikon Digital Director, takes a look at Brand Safety in light of the recent Google boycotts.
On March 18th, 2017, The Times of London published its front-page investigation into Google’s failure to remove ‘virulent antisemitic content’ from its YouTube platform. Whilst The Times’ initial analysis uncovered almost 200 antisemitic videos hosted on the platform, it is now known that the actual figure was considerably higher, with content centred on antisemitic material & hate speech.
In the subsequent weeks, Google experienced an unprecedented global backlash, with over 250 advertisers & brands pulling advertising spend from the YouTube & Google Display Network platforms.
Financial losses have been estimated at nearly $1 billion in lost ad revenue globally, & the fallout cost parent company Alphabet a $35 billion drop in market value. The long-term effect is anyone’s guess.
This has brought the issue of brand safety to the forefront of all digital media conversations. Questions are being raised as to what measures & technologies are being deployed by both publishers & agencies in order to protect brands.
Brand safety is of the utmost importance for all digital activity we run on behalf of Ikon clients.
Whilst platforms such as YouTube play an important role in delivering campaign objectives such as reach and scale, it is also imperative that we continue to monitor complementary quality-control metrics such as viewability and video completion rates in order to best optimise our media.
While the exposé has broadened market understanding, the industry has been aware of the risks for some time.
So, How Did This Happen?
This is not a new issue for, or one limited to, Google.
In 2016 the UK Parliament issued the results of a special task force’s year-long inquiry into whether social media giants YouTube, Facebook & Twitter were ‘consciously failing’ to combat the prevalence of terrorism and other negative content on their platforms. One significant acknowledgement was that “Facebook and Twitter should implement a trusted flagger system similar to Google’s.”
In 2014 Google removed more than 14 million videos globally based on complaints; however, considering that nearly 400 hours of video content is uploaded to YouTube every minute, this was merely a drop in the ocean.
The main criticism of Google was its ‘lackadaisical’ approach to reporting & eliminating offensive content. Google has indicated that it relies on the online community to report negative content, as it cannot police the site itself due to the sheer scale of content being uploaded. Critics hit back, calling this Google’s attempt to minimise its corporate & community responsibilities.
What Are They Doing About It?
Google was swift in its response to the news, stating: “Google believes in the right for people to express views that we and many others find abhorrent, but we do not tolerate hate speech. We have clear policies against inciting violence or hatred and we remove content that breaks our rules or is illegal when we’re made aware of it.”
Let’s review the measures Google has put in place.
Additional Exclusion Parameters: Earlier this year Google introduced the following exclusion strategies across all video inventory:
- Profanity and rough language: Moderate or heavy use of profane language and curse words
- Sexually suggestive content: Provocative pictures, text, and more.
- Sensational and shocking: Content that creates shock value, including sensational, gross, and crude.
They have since introduced the following additional parameters:
- Politics, Broadcast & Headline News (Within ‘News’)
- Ethnic & Identity groups, Social Issues & Advocacy, Religion & Belief (Within ‘People & Society’)
- Religious Music (Within ‘Arts & Entertainment’)
- Opt-Out for content within ‘Tragedy & Conflict’ / ‘Sensitive Social Issues’.
- Exclusion of all live streams.
Global Tech Partnerships: In December 2016, YouTube, Microsoft, Twitter & Facebook agreed to a partnership to combat terrorist content across their platforms.
This partnership creates a shared database of the unique digital fingerprints, or “hashes”, of terrorist images & videos that violate the companies’ content policies. By sharing access to this wider pool of identifiers, each platform can more efficiently flag & remove such content before it is published.
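A minimal sketch of how such a shared fingerprint database might work in principle. This is illustrative only: the partnership’s real systems use proprietary perceptual hashing rather than the exact-match SHA-256 stand-in below, and all function names here are invented for the example.

```python
import hashlib

# Hashes contributed by all partner platforms (illustrative stand-in for
# the shared industry database).
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Return a hex digest acting as the content's 'digital fingerprint'.

    Real systems use perceptual hashes that survive re-encoding; SHA-256
    is used here only to keep the sketch self-contained.
    """
    return hashlib.sha256(content).hexdigest()

def report_content(content: bytes) -> None:
    """One platform removes a violating item & contributes its hash."""
    shared_hash_db.add(fingerprint(content))

def flag_known_content(content: bytes) -> bool:
    """True if an upload matches a fingerprint already in the shared DB,
    letting any partner platform block it before it is published."""
    return fingerprint(content) in shared_hash_db

# One platform reports a violating video; the others can now pre-flag it.
report_content(b"violating-video-bytes")
assert flag_known_content(b"violating-video-bytes")
assert not flag_known_content(b"harmless-cat-video")
```

The design point is that platforms share only opaque identifiers, not the content itself, so each company can pre-screen uploads without exchanging the underlying media.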
It has also been announced that Google is improving its A.I. systems & protocols, which will allow outside parties to verify its ad quality standards across its video inventory.
Breaking Down the Wall(ed) Gardens: In response to industry pushback, Google is in the process of allowing 3rd-party verification & auditing across its platform on all viewability & brand-safety metrics. This is major news, as the issue of transparency & 3rd-party measurement has clearly reached boiling point.
This will allow technologies such as Integral Ad Science & MOAT, which specialise in quality-control metrics such as viewability & brand safety, to monitor & report on Google’s inventory. Whilst they are currently unable to block ads from serving on Google properties, it is a major step towards full transparency & quality-control assurance for brands & advertisers.
What are Ikon doing about it?
Ikon has developed its own brand-safety process via manual mechanisms to provide market-leading brand safety on YouTube, including website, topic and keyword exclusions which are continuously updated.
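The layered-exclusion approach can be sketched as follows. The site names, topics and keywords below are hypothetical examples only, not Ikon’s actual exclusion lists, and the placement structure is an assumption for illustration.

```python
# Hypothetical exclusion layers; real lists are maintained & updated manually.
EXCLUDED_SITES = {"example-unsafe-channel"}
EXCLUDED_TOPICS = {"Sensitive Social Issues", "Tragedy & Conflict"}
EXCLUDED_KEYWORDS = {"hate", "terror"}

def is_brand_safe(placement: dict) -> bool:
    """Reject a candidate placement if any exclusion layer matches."""
    # Layer 1: site/channel-level exclusion.
    if placement["site"] in EXCLUDED_SITES:
        return False
    # Layer 2: topic-level exclusion.
    if EXCLUDED_TOPICS & set(placement["topics"]):
        return False
    # Layer 3: keyword exclusion against the placement title.
    if EXCLUDED_KEYWORDS & set(placement["title"].lower().split()):
        return False
    return True

assert is_brand_safe(
    {"site": "news-channel", "topics": ["Sports"], "title": "Match highlights"})
assert not is_brand_safe(
    {"site": "news-channel", "topics": ["Tragedy & Conflict"], "title": "Report"})
```

Checking the cheapest, broadest layer (site) first and the most granular layer (keywords) last keeps each candidate placement’s evaluation short in the common safe case.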
Ikon has been proactively managing the recent issues nationally with our partners at Google, implementing all newly available exclusion strategies across clients’ Google activity.
We also continue to monitor and optimise all campaigns via our 3rd party measurement tools such as MOAT and Sizmek/DCM to ensure that all digital campaigns are performing to the highest of standards.
Disclaimer: Any opinions expressed in this article are those of the author(s) and do not necessarily reflect the views of Ikon Communications Pty Ltd or its associated entities (Ikon). No responsibility is accepted by Ikon for the accuracy of information contained in this article. Ikon and the author(s) expressly disclaim any liability arising from the contents of this article or reliance on such contents by any person.