Following more than a year of extensive auditing of Rentrak's TV audience measurement methods, the Media Rating Council has determined that the company's national and local TV ratings are not yet ready for prime time and will not receive accreditation.
“The products were not sufficiently compliant with MRC’s standards,” the industry ratings watchdog said in a statement announcing that Rentrak failed an audit of its national and local TV ratings systems after more than a year of detailed analysis and evaluation.
The MRC noted that it is not unusual for a complex ratings system to fail its initial audit, and said Rentrak has indicated it plans to follow through on "remediation steps" recommended by the MRC and will submit to the new audit the MRC would need before reconsidering accreditation of the services.
The MRC said it expects a new audit to take place before the end of 2015.
The news follows a number of high-profile deals for Rentrak to license its ratings to TV network and station groups, and some big ad agencies such as Horizon Media, Saatchi & Saatchi, Zenith Media and Starcom MediaVest Group.
While ratings services do not need to be accredited to be used as TV ad marketplace currency -- Nielsen’s local diary ratings currently are not accredited by the MRC -- the stamp of approval gives a higher degree of confidence that the ratings have passed industry muster.
The MRC indicated that Rentrak's methods of deriving TV audience estimates from digital set-top data are complex and require improvements before they can pass accreditation.
“The MRC believes that television return-path data from MVPDs, once adjusted and audience-attributed, can be a viable source for audience ratings presenting opportunities for enhanced granularity and stability,” the MRC said in its statement.
The announcement comes as Nielsen, too, is pushing ahead with aggressive retooling of its local TV measurement systems, including a controversial "viewer assignment" method that will attribute the audiences of people-metered households in some markets to represent viewers in non-people-meter markets.
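Nielsen has not published the mechanics of viewer assignment here, but the general donor-matching idea can be sketched in a few lines of Python. Everything below -- the field names, the similarity rule, the sample records -- is invented for illustration and is not Nielsen's actual model:

```python
# Hypothetical sketch of a "viewer assignment" step: attribute the
# persons-level viewing of the most similar people-meter (donor) home
# to a set-meter home that reports tuning only.
from dataclasses import dataclass, field

@dataclass
class Household:
    size: int                                    # number of residents
    hoh_age: int                                 # age of head of household
    viewers: list = field(default_factory=list)  # persons viewing (donors only)

def assign_viewers(set_meter_home: Household, donors: list) -> list:
    """Pick the donor most similar on household size and head-of-household
    age, then attribute its viewing persons to the set-meter home."""
    best = min(
        donors,
        key=lambda d: abs(d.size - set_meter_home.size)
                      + abs(d.hoh_age - set_meter_home.hoh_age) / 10,
    )
    return best.viewers  # modeled, not measured, persons data

donors = [
    Household(size=2, hoh_age=61, viewers=["F65+"]),
    Household(size=4, hoh_age=38, viewers=["M35-49", "F2-11"]),
]
tuning_only = Household(size=4, hoh_age=41)  # set meter: tuning, no persons
print(assign_viewers(tuning_only, donors))   # -> ['M35-49', 'F2-11']
```

The returned persons are estimates rather than observations, which is the crux of the tabulated-versus-modeled dispute in the comments that follow.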
This is the first pass... Rentrak will come back to meet the MRC requirements, and standards will be set for STB-based measurement. Effort and follow-through are important milestones.
I agree with Gerard. The methods used by Rentrak to tally and weight its findings must be fairly complex and unusual. It will, no doubt, all be sorted out in short order.
Part One: The MRC V. Rentrak and The MRC V. Nielsen
If the MRC was wise enough to ask Rentrak to take a second pass, then the MRC needs to be smart enough to preemptively and conditionally withdraw MRC Accreditation from Nielsen's NPM National Service (NTI NPM), if Nielsen insists on going forward with the inclusion of Set Meters in the National People Meter Panel by September of 2015.
For decades, the NTI service has been based on TABULATED (NOT modeled) data from a TV Household Panel built from a rigorously established and globally respected Area Probability Sample...with unique, patented and enviable People Meter technology introduced in the late '80s in response to a competitive threat from the then London-based AGB, before it was ultimately acquired by Nielsen.
With the People Meter, Nielsen could provide accurate, reliable and useful overnight measurement of national TV TUNING AND VIEWING.
At yesterday's 2015 Spring Client Meeting, Nielsen announced its intention to reduce sample error (rather marginally) by adding Local Set Meters to the National People Meter sample and modeling their (i.e., the Local Set Meter households') demographic data contribution. In non-technical terms, this is anything but progress. It's an utter abomination. What good is a measurement that is somewhat more reliable but substantially invalid, as far as a tabulated service is concerned?
For decades, Nielsen's Clients have paid for ratings based on tabulated, not modeled, data. Now, in the midst of a series of major, non-coterminous MSAs (Master Service Agreements) with global and national media companies, Nielsen is effectively, unilaterally renegotiating all its contracts and downgrading its service quality by introducing modeled data to account for a substantial portion of its projected, tabulated Household and Persons Ratings data.
Another word for modeled data is simulated or fabricated data. (Let's not use "guessed" just yet!) Like the weighting process that Nielsen proposed in 2000, this changes the essential nature of the NTI (Nielsen Television Index) Service by making up or inflating some data and diminishing or discounting other data. If this understanding of the NPX Program (NPX = Nielsen Panel Expansion) is correct, then the current MRC Accreditation would itself be no longer valid, let alone reliable.
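For readers outside the ratings business, the tabulated-versus-modeled distinction being drawn here can be put in rough symbols. The notation below is purely illustrative, not Nielsen's. A tabulated household rating is a weighted count of observed behavior:

$$R \;=\; \frac{\sum_{i \in V} w_i}{\sum_{i \in U} w_i},$$

where $U$ is the full metered panel, $V \subseteq U$ is the set of homes actually measured as tuned to the program, and $w_i$ are sample weights. Once set-meter homes (tuning only, no persons data) are folded in, a persons-level estimate acquires a modeled term:

$$\hat{R} \;=\; \frac{\sum_{i \in V_{\mathrm{PM}}} w_i \;+\; \sum_{j \in V_{\mathrm{SM}}} w_j\,\hat{p}_j}{\sum_{i \in U} w_i},$$

where $\hat{p}_j$ is an estimated probability that a person of the target demographic in tuned set-meter home $j$ was viewing. The second sum is inference rather than measurement, which is precisely the objection being raised.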
("Part Two" should follow immediately in a subsequent "Comment" due to MediaPost space limitations.
Thank you for your time and patience. Sincerely, NPS)
Part Two: “I have seen the future and it works.”
(But not in the future.)
If Nielsen has its way, by September 2015 a world-class media research measurement will produce what could become something less than a third-world currency. CPMs based on guesses! How could we have fallen so far and so fast?
It would appear that Nielsen's haste in moving forward is occasioned by the methodological artifacts of the past 12 months that apparently reflect Nielsen's loss of control over its national measurement.
The rooting interest in Rentrak derives not only from the love of invention and innovation but also from abundant frustration with, if not resignation to, Nielsen's inability to keep up with changes in "TV" technology (e.g., laptops, iPads, iPhones) and the U.S. population's unique viewing patterns (e.g., DVR, OTT, Netflix), all the while demanding what seem like exorbitant price increases vis-à-vis the CPI, especially for the media.
In sum, if this MediaPost report is correct and the MRC has withheld Rentrak Accreditation, then it only seems fair that the MRC evenhandedly prevent the incumbent measurement (i.e., Nielsen) from gaining undeserved marketplace superiority by blessing the introduction of an ersatz national service as the new TV Season begins ... and after decades of quality research service on a national basis.
Anything other than a robust, extensive methodological research investigation and thorough, energetic industry dialogue will signal the end of research quality in the United States.
For an industry to undo in five months that which took 65 years to establish is a foolish and mendacious rush to judgment, the likes of which is unworthy of a respected industry or its professionals.
Thank you for your time and attention.
Sincerely,
Nicholas P. Schiavone,
Nicholas P. Schiavone, LLC
It seems to me that Nick is raising some interesting questions. I wonder what the MRC's take on this will be? How about it, George?
Thinking about this some more, it is possible to envision a future scenario where, in order to fulfill the incessant demands for more data on every possible media platform or venue -- including those with extremely small audiences -- we will see a national people meter panel, augmented by various portable peoplemeter panels and other measurements of set usage, all blended together, using statistical weighting and data modeling, to provide what will be called comparable "cross-platform" ratings.

Since the media will balk at the cost of maintaining the required sample sizes, I also foresee the use of statistical machinations that create what appear to be samples of 200,000 or larger, by taking the actual results from 25,000 or 50,000 homes (or persons) and attributing them to many more phantom households, using various forms of ascription and modeling, to permit all of the "granular" slicing and dicing that is in demand.

Of course, a handful of researchers at the media and the agencies will be aware of what is happening, but most, if not all, of the users -- buyers and sellers of time -- and advertisers will have no idea. They'll think that the samples are large enough (200,000+) and that real data -- checked out by some impartial industry body -- is the "currency" they use to make their buys. I'm not saying that the data will be "made up," as I expect that some sensible thinking and, if we are lucky, some validation will be part of the process. However, I am wondering whether we are asking Nielsen to go too far, too quickly, without thinking about the possible errors or biases that may, eventually, corrupt our precious data.
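Ed's phantom-household worry has a standard statistical measure behind it: Kish's effective sample size, $n_{\mathrm{eff}} = (\sum w_i)^2 / \sum w_i^2$. A toy Python calculation (all numbers invented) shows that ascribing roughly 8x weights to 25,000 real homes produces a panel that looks like 200,000 homes but carries no more information than the homes underneath:

```python
# Toy illustration: weighting 25,000 real homes up to an apparent
# 200,000-home panel leaves the effective sample size near 25,000.
import random

random.seed(1)
real_homes = 25_000
# hypothetical ascription weights averaging ~8 replicas per real home
weights = [random.uniform(6.0, 10.0) for _ in range(real_homes)]

apparent_n = sum(weights)                                # ~200,000 "homes"
n_eff = sum(weights) ** 2 / sum(w * w for w in weights)  # Kish formula

print(f"apparent sample:  {apparent_n:,.0f} homes")
print(f"effective sample: {n_eff:,.0f} homes")           # ~24,500 at best
```

However the replication is dressed up, the sampling error of the blended panel is governed by the effective size, not the apparent one.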
Dear Ed and all concerned - Part One,
I appreciate your wise, thoughtful reply. Your comments always advance the art, science and dialogue of our craft and are an occasion for progress, if possible. For these we should all be grateful to you again.
Just a few thoughts – and you inspired them (but you're not to blame).
You are right to say that the Executive Director of The MRC (Media Rating Council) has the capacity to speak on matters such as these. George Ivie is a good person and a smart individual. He is also humble and disciplined. Hence, we'd never see George "bust a gut" even if some system was terribly, even hilariously, out of alignment. However, this also reminds me that the MRC is not just a reflection of its Executive Director, George Ivie, but of all its membership as well. The MRC was formed in response to a Congressional Mandate in the early '60s with an almost sacred purpose: to see that research methodologies meet basic scientific standards and that the research producer and supplier is doing what it claims to be doing.
[It is unlikely that what Nielsen is proposing is expressed in its Technical Appendix for the NTI-NPM Service yet – but it should be. Further, by Nielsen's own admission on Monday, March 23, at its 2015 Spring Client Meeting in New York City, critical, so-called validation tests have yet to be conducted.
Further, what Nielsen calls validation is SELF-VALIDATION: "Good job, me!" It's like looking at the driver's license in your wallet to make certain you're spelling your name correctly. Nielsen has no external standard of "truth" like we had in the day of Coincidental Testing. Hence, we are left with marginally more statistical reliability BUT NO EXTERNAL VALIDATION. Think about it. More reliable, less valid. Talk about a Faustian Bargain. The Devil would approve! So, Nielsen even does its own "validation" testing. I doubt Congress envisioned such circular self-verification.]
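The reliability-versus-validity trade described above has a standard textbook expression. In conventional notation (mine, not Nielsen's or the MRC's), the total error of a ratings estimate $\hat{\theta}$ decomposes as

$$\operatorname{MSE}(\hat{\theta}) \;=\; \operatorname{Var}(\hat{\theta}) \;+\; \operatorname{Bias}(\hat{\theta})^2 .$$

Enlarging the panel with modeled records can shrink the variance term (reliability), but no amount of added sample touches the bias term (validity), and bias is invisible from inside the sample itself; only an external benchmark, such as the coincidental surveys mentioned above, can expose it.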
-- Due to MediaPost space requirements, the balance of this Comment ought to follow immediately after this.
Thank you for your time and patience.
Sincerely,
Nicholas P. Schiavone,
Nicholas P. Schiavone, LLC
Dear Ed and all concerned - Part Two,
...
In sum, MRC Membership must step up to its fiduciary responsibilities, as all quarters of the industry will (or should) be represented in the statistical testing and scientific review. And basic knowledge can beget more sophisticated working knowledge. But time is a condition of possibility. And much more than five months are required "to do the right things and to do things right" (Source: Peter Drucker). And I have yet to sing the praises of E&Y auditing and technical involvement. Without due process, it's time for Congress, the FTC and the FCC to reconvene and guarantee due diligence.
Final thought: I am reminded of the old phrase "roll your own" used originally in connection with cigarettes. Ultimately, if Nielsen can make a scientifically certified and industry-accepted case for this Rube Goldberg contraption or seeming methodological monstrosity, then it ought to be constructed and accessible in such a way that Nielsen customers have the CHOICE of using tabulated, modeled or tabulated and modeled data as ratings currency. Nielsen's NPower system was designed to give fast, reliable and comprehensive access to Nielsen's databases. That system has fallen far short of its ideal, including its pricey tabulations and fabricated calculations derived from fusions.
Whether or not "viewer assignments" or demographic audience ascriptions can be done right, Nielsen needs to greatly improve its client computer systems. While there may be differences of technical opinion on complex issues in a complicated industry, there should be a single source where the numbers always come out the same, if not right. It would seem to be time for "roll your own" ratings done in a transparent and certifiable manner accompanied by full disclosure from all consenting parties.
Thank you very much for your time and attention.
Sincerely,
Nicholas P. Schiavone
Nicholas P. Schiavone, LLC
I do have the ability to comment on the matters you raised, Ed, and those of your fellow commenter, Mr. Schiavone.
MRC does not grant or remove accreditation pre-emptively. Accreditation decisions are based on rigorous, specialized CPA audits, the evidential matter gathered in these audits, and MRC committee evaluation of these results. That's a process we strongly adhere to, and hopefully we always will.
We have very extensive discussions and audit processes going on at Nielsen to look at their planned changes, in advance of commercialization in the accredited National Service. Nielsen is committed to maintaining accreditation.
As it relates to Rentrak, the original subject of the MediaPost article, we have similar ongoing evaluation processes in-flight and there will be extensive auditing to follow soon. Rentrak is committed to achieving accreditation.
I encourage industry practitioners from media companies who rely on these data to become members of MRC and participate in MRC decision processes -- truly, that's the way to influence these processes and to get involved in accreditation decisions.
I can't comment further about either measurement company.
However, in general: A key goal of measurement services of all types today is clear -- gathering larger, more stable, data sets from which to derive media estimates. One of the problems of our time is gathering and/or attributing accurate audience (or other qualitative) information to these larger data sets…it's becoming clear that most measurers are having to confront that problem. The data assets and inferential processes used to make these attribution decisions, where necessary, vary and are the "sauce" that will differentiate measurement competitors -- on quality, accuracy, etc. Validation of these processes is critical.
Our job at MRC is to help ensure, through our accreditation process, quality and transparency of these new processes by the measurement services as they evolve.
Dear George and all concerned - Part One,
George Ivie writes: "The data assets and inferential processes used to make these attribution decisions, where necessary, vary and are the "sauce" that will differentiate measurement competitors -- on quality, accuracy, etc. Validation of these processes is critical. "
I appreciate the public statements of George Ivie on behalf of the MRC (Media Rating Council). Once again, we need to remember a few critical factors and seek "precisions" when and where necessary.
1. The MRC has a Membership and a set of policies and procedures. George Ivie does not act alone nor should he be expected to shoulder the total burden of integrity where the MRC is concerned. One can draw great solace from the involvement of E&Y Professionals in the Audit and Accreditation Process.
2. The key phrase associated with the use of "attribution decisions" is "WHERE NECESSARY." A measurement service essentially based on tabulation has no need of spurious "inferential processes" where core data are concerned and data sets are sufficient in size, as in the case of the current NTI NPM Service. Significant adjustments to a methodology are unwarranted when they are performed to distract from a core deficiency, which may be poor panel management over an extended period, such as a year.
3. I have reviewed every methodological, statistical and social science textbook in my library, the MRC website and the classic reference text called "Audience Ratings," written by the distinguished MRC Executive Director Hugh M. Beville, Jr. and published in 1985. (I had the privilege of introducing Mal Beville and his classic work to the advertising industry at the ARF's 2nd Research Quality Workshop that I had the honor to chair. Mal's illuminating speech resides in the ARF Archives.) My point is that nowhere do I find the term "sauce" used as a technical term to describe a distinguishing factor in determining the quality of a research study or service. Research is science, and we need verbal, as well as numerical, precision, unless we have finally consigned media research to the realm of "Fuzzy Logic."
Due to MediaPost space requirements, the balance of this Comment ought to follow immediately after this. Thank you for your time and patience. Sincerely, Nicholas P. Schiavone, Nicholas P. Schiavone, LLC
Dear George and all concerned - Part Two, ...
Finally, as researchers we need to distinguish carefully between validated data and "the validation of these" (attribution?) "processes." So, may the MRC continue to fulfill its mission and mandate:
"The objective or purpose to be promoted or carried on by Media Rating Council is:
● To secure for the media industry and related users audience measurement services that are valid, reliable and effective.
● To evolve and determine minimum disclosure and ethical criteria for media audience measurement services.
● To provide and administer an audit system designed to inform users as to whether such audience measurements are conducted in conformance with the criteria and procedures developed."
Thank you, George, for your outstanding commitment and service. Your dedication to research quality over the long haul may just be our salvation...for now.
Your time and attention are greatly appreciated.
Sincerely,
Nicholas P. Schiavone
Nicholas P. Schiavone, LLC
Thanks for your response, George. I think that the core of my comments, and Nick's as well, is the definition of the word "validation". I have always understood that one of the MRC's prime functions was to ensure that the research company was doing exactly what it claimed to be doing. This could be considered "validation"; however, the question then arises whether what is being done produces as accurate a depiction of what is being measured as possible. In other words -- "the truth". To me -- and, I believe, to Nick -- that also needs "validation". With this distinction in mind, does MRC get involved in validating not just the procedures, including technical issues like sample balancing, weighting, etc., but also the "validity" of the resulting data as most laypeople would understand it?
I concur with Ed's clarification of my essential concern.
And I am grateful for his help. I hope wisdom, experience and the continued fine example of Mr. Papazian bring me closer to meaningful brevity. Thanks to all readers for their forbearance as I seek "truth" in the labyrinth that media research has become.
[Please note: There has never been any collaboration between me and Mr. Papazian. Unfortunately, I think we have only met once...about forty years ago.]
Ed and Nick: In short, yes, the MRC does much more than merely verify that a measurement organization "does what it says it does." Because of his intimate knowledge of MRC, Nick was able to quote from our bylaws, which in part state MRC's purpose as -- "To secure for the media industry and related users audience measurement services that are valid, reliable and effective." Notice that does not merely say -- to check the accuracy of disclosures. However, the reality is that for the first 30 years of its existence, validating disclosure was primarily what the MRC did -- we were not really fulfilling our full mission.

Expanding our mission to its full written course was being discussed by the membership during the administrations of Mr. Dimling and Mr. Goldberg, but that expansion was really put into practice by my predecessor, Mr. Weinstein (we all owe these three great men a debt of gratitude). This more aggressive stance by MRC was made necessary by the increasing issues MRC was encountering with products being less able to maintain the effective statistical rigor of probability sampling with sufficient rates of response (and other challenges), which traditionally gave us the statistical underpinnings we needed for valid and reliable research.

Today, this issue has a much more urgent, finer point on it as products are emerging in the marketplace (and being extensively used) which have no or limited formal statistical underpinnings -- methods extensively leveraging non-probability sampling, hybrids, fusion, calibration, data integration, extensive attribution, etc. You've heard these buzzwords, but the challenging part is that our role is to validate this stuff. At the end of the day, people look to MRC as an indicator of trust and, ultimately, of whether "something works or not" -- in the terms of laypeople. Validating methods like this can be difficult…very difficult…sometimes maybe impossible (in which case MRC would need to put its pencil down and not engage an entity further). But we try using various methods to conduct validation. If you continue to be interested in this topic, I'd point you to our website (www.mediaratingcouncil.org), where you can find a set of guidelines we promulgated on Data Integration -- this will give you a good indication of the types of attributes we look for in validation.
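One of the techniques George names, calibration, is simple to sketch even though real implementations are elaborate. The raking (iterative proportional fitting) loop below is a minimal illustration with invented targets, not MRC's or Nielsen's procedure; it adjusts panel weights until weighted totals match known external margins, which is the basic move behind calibration weighting:

```python
# Minimal raking / iterative proportional fitting sketch (hypothetical data):
# rescale panel weights until weighted totals hit external margins.
panel = [
    # [age_group, region, weight]
    ["18-34", "East", 1.0],
    ["18-34", "West", 1.0],
    ["35+",   "East", 1.0],
    ["35+",   "East", 1.0],
    ["35+",   "West", 1.0],
]
age_targets = {"18-34": 40.0, "35+": 60.0}     # e.g., census persons (000)
region_targets = {"East": 55.0, "West": 45.0}

def rake(panel, col, targets):
    """Scale weights so weighted totals match the targets on one dimension."""
    totals = {k: 0.0 for k in targets}
    for row in panel:
        totals[row[col]] += row[2]
    for row in panel:
        row[2] *= targets[row[col]] / totals[row[col]]

for _ in range(20):        # alternate dimensions; converges quickly here
    rake(panel, 0, age_targets)
    rake(panel, 1, region_targets)

for age, region, w in panel:
    print(f"{age:>5} {region:<4} weight={w:6.2f}")
```

Calibration of this kind can correct known demographic imbalances, but, as the thread above keeps stressing, it cannot by itself validate the behavioral data being weighted.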
Thank you, again, George, for your detailed reply. I have always had a high regard for MRC and the work that it does. I would like to ask one more question. You have accredited the PPM measurement for radio listening in a large number of markets. This assumes that the person wearing or carrying the PPM device is "listening" whenever it "hears" an encoded station audio signal coming from a radio receiver. The same assumption will apply when PPMs are used for TV audience measurement. I believe that an unknown percentage of the reported radio "audience" -- probably a fairly large proportion for certain station formats -- is not "listening," and the same may be true for TV when this measurement system is used to "capture" out-of-home TV "viewing". So, my question is this: does MRC accept the basic assumptions outlined above for PPM measurement, or does it require some sort of validation -- or "proof" -- that "listening" or "viewing" actually occurred?
Very good question, Ed, and not an easy one. I'm happy to talk off-line about these matters, but for now, here are some brief thoughts on a complex situation: MRC has been engaged in auditing PPM and validating various aspects of Arbitron's (now Nielsen's) related methodology for over a decade. Some PPMs remain unaccredited -- but to be clear, it's not because of the question you have raised about the PPM device capturing "exposure" to audio (which can include incidental exposure) versus capturing active listening to audio content by a respondent. A few relevant points: (1) when PPM was rolled out, MRC took an active role in ensuring that the marketplace was informed about PPM having a different "basis for measurement" than other measurement products that capture a respondent's active listening or viewing of content/ads; (2) MRC has spent literally thousands of hours testing the PPM device in labs, in respondent households and in out-of-home environments, verifying the functionality of data capture -- this included all sorts of settings, ambient noise levels, etc. We have found through this testing that PPM works and has been calibrated to fairly represent measurement when a person can "hear" the content -- so, for example, it is not tuned to be way more or less sensitive than humans can hear; (3) PPM measurement was innovative in that it can be used across consumer devices and media genres (so long as they contain audio), so there was a potential value proposition to support the move to a potentially different data capture orientation -- but this value is more challenging when PPM's basis for measurement is mixed with other tools that measure active listening, and accordingly this "mixed" orientation became an issue in Nielsen's RADAR product, where PPM is commingled with diary measurement (so we had to evaluate that); and (4) PPM is now being mixed with legacy TV measurement techniques in in-home/out-of-home measurement, which again mixes orientations. We have this issue in front of Nielsen and are seeking to evaluate it as part of potentially approving these in-home/out-of-home products. So, a complex issue, to be continued.
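The distinction George draws between a meter "hearing" an encoded signal and a person listening can be made concrete. The actual PPM encoding is proprietary; the toy Python below uses a generic low-amplitude pseudo-noise code and a correlation detector simply to show what such a device can and cannot prove:

```python
# Toy audio-code detector (NOT the proprietary PPM scheme): a known
# pseudo-noise code is buried in audio at low amplitude; the meter
# correlates what its microphone captures against that known code.
import random

random.seed(7)
N = 4096
code = [random.choice((-1.0, 1.0)) for _ in range(N)]  # station's known code

def capture(radio_on: bool, noise: float = 1.0) -> list:
    """Simulate audio samples reaching the meter's microphone."""
    return [(0.05 * c if radio_on else 0.0) + random.gauss(0.0, noise)
            for c in code]

def detect(audio: list) -> float:
    """Normalized correlation of captured audio with the known code."""
    return sum(a * c for a, c in zip(audio, code)) / N

print(f"radio on : {detect(capture(radio_on=True)):+.4f}")   # ~ +0.05
print(f"radio off: {detect(capture(radio_on=False)):+.4f}")  # ~  0.00
```

A positive correlation establishes only that decodable audio reached the microphone; attentive listening is an assumption layered on top, which is exactly the question Ed is pressing.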
OF MODEL CITIZENS AND WELL-FRAMED QUESTIONS
It is a delight to see a refined, high-level dialogue on real, relevant media research questions conducted, no less, with the respect, knowledge and wisdom that lead to meaningful, useful answers.
You, George, and you, Ed, are model citizens of our industry and profession. And you demonstrate that there are men and women who possess the intelligence and dedication necessary to allay the fears of H. L. Mencken and others, like myself, who are too often forced to observe "For every complex problem there is an answer that is clear, simple, and wrong."
Thought leadership is the exception, and the rest of us need to pay more attention to yours and ask the right questions as soon as possible. If only because there is more wisdom in a question well-framed than in any given answer to it.
Onwards and Upwards.
With gratitude,
Nicholas P. Schiavone,
Nicholas P. Schiavone, LLC