Commentary

Video Audience Measurement Is More Broken Than You Think

There has been a lot of debate over the years regarding consumer audience measurement across platforms, across providers, across different types of video, etc. The ARF just concluded its annual AUDIENCExSCIENCE 2024 conference. Unsurprisingly, about 50% of presentation titles included the letters A and I. But many presentation titles also referenced how measurement works or could work across platforms, audiences, creative formats, etc.

Our industry is clearly concerned about the fragmentation of message delivery and the near-total absence of any coherent measurement approach that solves for it. That concern ran through the many topics presented this year. And last year, and five years ago. And I agree that it is a real and vexing challenge.

In the olden days (“Come, children, Grampa explains”), Nielsen measured our viewing behavior with a panel of households who had the Nielsen box. The box captured what channel the TV was tuned to, and for how long (in 15-minute blocks). The panel recorded, in a written diary, who was watching.


Sure, there were problems with the system. Were people faithfully recording how they “really” watched TV in those diaries? Were 15-minute intervals representative of commercial breaks? When cable came along and created the first wave of fragmentation, people questioned whether there were enough households in the panel to capture what people viewed. And how about zapping behavior during ad breaks? And so on.

But in today’s world, we know EXACTLY where people watched something. We just can’t combine the metrics to figure out the totality of their viewing behavior. And… what defines “viewed” is a super-low threshold that I would call almost meaningless.

For an ad to be viewed, it first needs to be viewable. In the old TV era, there was no question about this: TV ads were always viewable. But in the current environment, “viewability” needed a definition.

According to the Media Rating Council’s viewability guidelines, a video ad is considered viewable only if it meets the following criteria:

- At least 50% of the video player must be visible in the viewable space of the browser window.
- For video ads shorter than 30 seconds, the ad must play for at least 2 continuous seconds.
- For video ads 30 seconds or longer, the ad must play for at least 50% of its total duration, with no less than 2 continuous seconds.
- The audio component of the video ad must be initiated and played for the duration required for viewability.
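The criteria above amount to a simple rule check. Here is a minimal sketch of that logic; the function and parameter names are invented for illustration, and this is not an official MRC implementation.

```python
def is_viewable(pct_player_visible: float,
                ad_duration_s: float,
                continuous_play_s: float,
                total_play_s: float,
                audio_played: bool) -> bool:
    """Illustrative check of the MRC video viewability criteria described above."""
    # At least 50% of the player must be in the viewable space of the browser.
    if pct_player_visible < 0.5:
        return False
    # The audio component must be initiated and played.
    if not audio_played:
        return False
    # Ads under 30 seconds: at least 2 continuous seconds of play.
    if ad_duration_s < 30:
        return continuous_play_s >= 2
    # Ads 30 seconds or longer: at least 50% of total duration,
    # with no less than 2 continuous seconds.
    return total_play_s >= ad_duration_s * 0.5 and continuous_play_s >= 2

# A 15-second ad abandoned after 5 seconds still counts as "viewable":
print(is_viewable(0.9, 15, 5, 5, True))   # True
```

Note how low the bar is: in this sketch, an ad skipped at the earliest opportunity can still qualify as a viewable impression.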

The main thing is the 2-second rule. If it passes the 2-second threshold, it is considered “viewable.” But is it?

I would like you to consider your own viewing behavior. If you can skip an ad, do you? You would still be considered in the “viewable” category if you do, because most ads are only skippable after five seconds. If you can fast-forward, do you? Do you mute during the ad breaks? Do you grab your phone during ad breaks?

And of course, ad fraud, such as bot traffic and impression laundering, can artificially inflate viewability metrics, leading to inaccurate reporting and wasted ad spend.

So next time you review your media buy, consider the reality of the numbers you are looking at. In my mind, they are probably nothing more than an approximation of an inflated, unduplicated reality. In that sense, the old Nielsen TV world was a lot more precise.

5 comments about "Video Audience Measurement Is More Broken Than You Think".
  1. Ed Papazian from Media Dynamics Inc, March 29, 2024 at 6:24 p.m.

    Maarten, I assume that you are referencing how "TV's" audience was measured in the U.S.

    In the 1950s, we had Nielsen's national household meter panel of roughly 1,000 homes, which recorded only set usage on a minute-by-minute basis, not by quarter hours. In addition, the American Research Bureau (ARB), later known as Arbitron, conducted a nationwide household diary study one week per month, every month. In these studies, the diary keeper recorded what shows were tuned in and who watched, by quarter hour.

    The procedure for estimating average-minute viewers for national TV shows was to multiply Nielsen's homes-tuned-in estimate by the number of men, women, teen, and kid viewers-per-home from the ARB diary study, which also provided age/sex breaks. This was deemed acceptable practice, as the ARB diaries were producing virtually the same tune-in projections as the meters for virtually all shows.

    In addition, Trendex conducted telephone coincidental studies in the major markets, which produced fast, near-national ratings and viewer demos, since it took Nielsen four weeks to retrieve its meter tapes and process them. Finally, the local markets were served by ARB one-week diary studies of about 400-1,000 homes per city per month, though only the largest markets were measured every month.

    As competition between Nielsen and ARB heated up, Nielsen launched its own local-market rating service using whatever national meter homes fell into each city area, plus household diaries. In addition, Nielsen set up a national household diary panel, separate from its meter panel, which conducted national studies to provide viewers-per-set estimates. These were used in the same manner as the national ARB findings had been: to project age/sex viewer estimates with the separate meter data as the base. This remained in force until 1987, when Nielsen switched to its version of AGB's "people meter," which was, in effect, an electronic diary using meters for set usage plus button pressing by household members to indicate "viewing".

    None of this had anything to do with measuring commercial audiences, as it was never intended that any of these studies could do that. The problem today is not so much how we define commercial audiences but whether we should try to measure them at all. By vetoing the inclusion of attentiveness metrics, the sellers have forced Nielsen and any would-be competitor to provide device usage stats, modified in Nielsen's case by program viewing claims, that are assumed to reflect average-minute, or even average-second, commercial audiences but can't be taken seriously in this regard. The button-pushing system was never intended to be that precise, and it produces commercial viewing projections that are triple reality. That's the real issue.

  2. Maarten Albarda from Flock Associates (USA), March 30, 2024 at 8:46 a.m.

    Ed, thank you for adding the helpful historical perspective. We columnists are given a 600-word limit for our columns. 600! Obviously that's not nearly enough to be nuanced or detailed.

    The other part of the "briefing" for columnists is (or at least 'was' when I started) to deliver food for thought and/or be a little controversial. So I always try to at least stir the pot a little.

    Having said all of that: my point is that, beyond how video is measured, or who measures it, or what platform you measure, the viewability definition for digital video ads on any platform outside of TV is laughable. And it won't be long before TV itself mostly conforms to that same low, low threshold, as more and more households watch TV via an app and it becomes "digital viewing".
    Add to this that the best we have come up with as "new ratings" is the hollow, gross "impressions" metric treated as a non-duplicated currency, and most advertisers reviewing their monthly media analysis are looking at predominantly "empty calories".

  3. Tony Jarvis from Olympic Media Consultancy, April 1, 2024 at 5:42 p.m.

    Or, as Euan MacKay of Route Research UK has astutely suggested, reflecting your conclusion: the US likes to use phony OTS "impressions" rather than REAL OTS.
    A really shameful aspect of the current measurement currency farrago is that, according to the MRC Guidelines (which are NOT "Standards"), the so-called "viewable impressions", aka content-rendered counts, solely reflect device/surface measurement with NO persons-exposure measure whatsoever. And yet the MRC, in the fine print, identifies such "viewable impressions" as OTS. BS!
    Time for everyone to review the ARF Media Model? And remember: with no persons Eyes/Ears-On, contact, or attention measurement (which has been used as OOH currency for 20+ years), there can be no outcomes. Is the ANA Cross-Media Measurement initiative listening?

  4. Ed Papazian from Media Dynamics Inc, April 1, 2024 at 7:55 p.m.

    Right, Tony. The idea that we now have OTS ("opportunity to see") measurements is laughable when you realize that, on average, 30% of the assumed commercial "audience" isn't even present in the room. What attentiveness would give us, for the first time, is a measurement of how many people, and what kinds of people, not only might have seen a commercial but actually saw it. Without that, we are just kidding ourselves by thinking that the currency we have been using for many years, and which will continue in the future, is reporting OTS. It isn't.

  5. John Grono from GAP Research, April 1, 2024 at 10:01 p.m.

    Two points to add to the discussion.

    1. OTS is a valuable metric, but stand-alone it tells you little about what the audience actually was. When we developed MOVE (the OOH metric system) in Australia, to simplify, we had three levels. First, we evaluated the size and location of a billboard: how many people would pass by it. Second, we evaluated what proportion of passers-by would have the "Opportunity To See" the billboard; we removed billboards that, because of height, offset, or obstruction, could not actually be seen, so the OTS might be two-thirds of passers-by. Third, we added the "Likelihood To See" (LTS), using hundreds of hours of recorded video of a representative sample of people to determine what proportion had 'gazed' at the billboard (I can't remember the duration). The end result was believable data: for example, the LTS might be something like one-third of passers-by.
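    The three-level MOVE funnel described above is, in effect, simple multiplication down from raw traffic. Here is a back-of-envelope sketch; all figures are invented for illustration, not real MOVE data.

```python
# Illustrative MOVE-style OOH audience funnel: traffic -> OTS -> LTS.
# All numbers below are made-up examples.

passers_by = 90_000          # people passing the billboard location
ots_fraction = 2.0 / 3.0     # share with a genuine Opportunity To See
lts_fraction = 1.0 / 3.0     # share Likely To See (gazed at the panel)

ots_audience = passers_by * ots_fraction    # opportunity-to-see audience
lts_audience = ots_audience * lts_fraction  # likely-to-see audience

print(f"OTS audience: {ots_audience:,.0f}")   # 60,000
print(f"LTS audience: {lts_audience:,.0f}")   # 20,000
```

    The point of the funnel is visible in the arithmetic: of 90,000 passers-by, only around 20,000 were likely to have actually seen the panel, a far cry from the raw "impressions" count.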

    2. I have also noticed that the US IAB reports monthly data. One of the metrics is viewing 'duration'. As far as I can tell, being a monthly report, the 'duration' seems to be the sum total across the month. IMHO that is a very poor metric. Advertising is a day-to-day industry for marketers and media agencies, but if my observation is right, monthly duration is pretty useless.
