One subject that has not been addressed sufficiently, however, is the critical link among ad and revenue operations, measurement science, and research. It is the combination of the skills, talent, and drive of individuals working in these disciplines that enables better measurement. It is the camaraderie among practitioners that permits the sharing of knowledge, each benefiting from the others’ expertise.
A presentation by Mitch Weinstein of IPG Mediabrands at the recent IAB Ad Ops Summit was a perfect example of how these disciplines and people intersect. In his talk, Weinstein showed a slide with data from IPG Lab demonstrating that a viewable impression, as defined by the Media Rating Council (MRC), is a stepping stone to recall, a key variable in ad effectiveness. The data comes from controlled tests covering both standard display and video.
As the shift to viewability continues, these are the kinds of studies needed to lend additional support to the science behind the MRC standard. The MRC itself analyzed billions of live campaign impressions to develop a true standard for an “opportunity to see,” discussing and writing about the results extensively.
It is encouraging to see new independent research supporting the standard. And it’s important to see work that moves us beyond viewability.
Collaboration across disciplines is going to be more and more important as the market moves to cross-screen currencies and better ad-effectiveness measurement. Without the tech expertise to test and implement new measurement systems -- and the research expertise to test, measure and explain consumer behavior -- we will lack the tools and rigor required to alter the face of measurement. Neither a pure technology solution nor a pure research solution alone is sufficient.
And, even with all the experts together developing standards and changing how we do business, there is still much to be done, which requires an even broader circle.
We need the impartial, media-agnostic leadership that the MRC brings to measurement standardization and the vendor auditing and accreditation that give us a sense of capabilities and quality.
We need to continually assess what the ecosystem needs in order to improve ad quality and delivery, measurement -- and most of all, the consumer experience.
Moreover, with mobile viewability on the horizon, we need to assimilate learnings from the desktop viewability transition:
-- Test, test, test, and test some more.
-- Iterate, iterate, and reiterate.
-- Train, train, train, and train some more.
-- Collaborate to find solutions.
And we most certainly need to be patient. Neither people nor technology can facilitate change in the blink of an eye. We’ll need to tap into a combination of ad operations, measurement science, and research to get there -- in good order and good time.