I’m tired of research companies presenting the results of what people say as if it factually represents what they did or are likely to do. Come on. This is the era of digital media and marketing and Big Data. We can take massive, direct measures of most consumer media consumption behaviors, and we have extraordinary abilities to store, analyze and present that information. Why do we still rely so much on sending out surveys asking people to tell us about behaviors that we can much more accurately measure directly?
Let’s look at some trade stories from these past few days. Earlier this week, TV Week reported that 87% of DVR owners don’t skip movie ads. Did they use readily available, anonymous DVR log data on tens of millions of viewers from TiVo or Kantar or Rentrak to find the answer? No. Instead, the Worldwide Motion Picture Group asked 1,500 viewers what they "think" they do. Raise your hand if you think this is an accurate representation of what people who DVR content really do.
Yesterday, research company GfK told us that streaming video may erode traditional TV viewing, since 33% of those surveyed say they watch less "regular" TV. Did they check the above sources, or Nielsen or comScore, all of which have some capacity to answer the question with actual observed data? No. Also yesterday, Forrester told us that younger TV viewers are moving online, citing an 18% rise in online viewing in the U.S. between 2010 and 2012. Did this data come from any one of a number of log-based panels of online video viewing? No. It came from what 60,000 survey respondents said they did, not what they actually did.
Are research companies doing this because they want to deceive us? Of course not. These are reputable firms. Instead, I think they keep to this model because that’s what they’ve done for years and they haven’t felt the need to change yet.
But now it’s time that survey and other “declared” data were put in their place: a distant second to observed data in measuring media behaviors. Here’s why:
Accuracy. We should use the best available data when we compile and present our research. If we can get factual data on actual behaviors from a massive sample of consumers, why should we rely on “remembered” data from surveys and polls? Diaries have given way to People Meters. So too should surveys.
Reporting bias. As many studies have shown, if you ask people about their media behaviors, they tend to significantly underreport their TV viewing and over-report their computer and Internet usage. If you ask them what they watch on TV, it seems everybody spends most of their time watching Discovery, History Channel, C-SPAN and BBC America. Nobody ever watches reality shows or reruns. Log files of actual viewing from TV set-top boxes tell a completely different story -- as do the ratings.
Panel bias. Some people like to take surveys. Many do not. Just as we’ve learned that folks who click on ads do not represent most online media consumers -- see comScore’s “Natural Born Clickers” work -- the same bias exists for those who are willing to take online surveys. You can try to balance your panel according to sex/age demographics, but you can’t entirely take the bias away.
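To make concrete what that balancing involves, here is a minimal sketch of post-stratification weighting -- the usual way panels are balanced to demographics. The cells and shares below are hypothetical, invented purely for illustration:

```python
# A minimal sketch of post-stratification weighting, the kind of
# sex/age "balancing" described above. All numbers are hypothetical.

# Population shares from an assumed census breakdown (hypothetical).
population_share = {"M18-34": 0.15, "M35+": 0.34, "F18-34": 0.16, "F35+": 0.35}

# Shares actually observed in a survey panel (hypothetical).
panel_share = {"M18-34": 0.08, "M35+": 0.30, "F18-34": 0.22, "F35+": 0.40}

# Each respondent's answers get weighted by population share / panel share,
# so over- and under-represented demographic cells are rebalanced.
weights = {cell: population_share[cell] / panel_share[cell]
           for cell in population_share}

for cell, w in sorted(weights.items()):
    print(f"{cell}: weight = {w:.2f}")
```

Note what the weights cannot fix: even after rebalancing, the respondents inside each cell are still the people who chose to take the survey, so the self-selection bias within cells remains.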
People are what they do, not what they say they do. Let's stop pretending that the latter represents the former (or is any way factual). What do you think?
I think you're right. I also see traditional research and associated measures (reported brand and ad awareness and attitudes) being used to evaluate the success of campaigns that use traditional media as only a portion of the overall plan. If behavioral data exists and can be extracted from these media (I don't mean clicks), why would we continue to use traditional surveys?
Thank you. We need more of this plain honesty about what we do and don't know. It isn't just researchers making large assumptions and misinterpreting reality. Even data providers with larger sample sizes have to interpret the data with a set of guesses and assumptions that frame the conclusions. I think the best thing we can say about these studies is, "it's complicated."
An interesting post - thank you. But I'm concerned about the implications of observed behaviors vs. privacy. Are we ready to turn over the records of what consumers actually do and buy to third parties for analysis without determining some way to protect individual privacy? I recognize that this is already happening to some extent, at least in aggregated data, but encouraging it requires a pause.
Dave, I appreciate where you are coming from. I should also admit at the outset my bias, since I am with GfK and have been involved in research that is both observational and survey-based, at GfK and elsewhere. While you make some good points here, I think you should be aware of something else. In the surveys that you denigrate so, we have also found that when people are watching TV, they are doing a lot of other things: checking email, checking social media, putting kids to bed, doing homework, napping (yes, napping) and a lot more. This would be tough to know through research from Nielsen, comScore, etc. Do you know we have found that sometimes 25% or more of people who are watching TV at 10 PM or later are snoozing through most if not all of the programming? Yet if there is a People Meter on that TV, it will register as someone watching. No research is perfect, but the point is that survey research still has an important place in understanding the "how" and "why" of media consumption, if done smartly.
Excellent article, and I thoroughly agree, with one caveat: this certainly applies to self-reported actions like media viewing, shopping, mobile shopping, showrooming, etc. However, far too many may walk away from this idea believing that traditional research methods should be avoided. Not at all. Behavioral tracking lacks a very, very, very critical truth: motivation. Merely seeing WHAT someone does isn't enough for advertising success. We must understand more about WHY. And that takes... traditional research methods.
Thank you, Dave! People lie. They have always lied. If we make decisions based on lies, we are fools.
Dave -- you have nailed one of my pet peeves spot on. Well done!
Larry, thanks for your comment. I agree that the data shows many TV viewers are doing other things while they're watching TV. My point is that we can measure it empirically. We shouldn't assume that people can tell us what they do better in hindsight than observational data can.
Change is HARD.
It used to make me crazy that "qualitative" research meant focus groups and "quantitative" research meant surveys of large groups of people. The implication was that because the survey pool was large, the data was not subjective. Ugh!
I do like survey research for tracking brand equity trends over long periods of time, but this technique gets used to track media behavior and it just makes me nuts.
Q. "Why do we still rely so much on sending out surveys asking people to tell us about behaviors that we can much more accurately directly measure?" A. Because we can only spot trends if we use a consistent methodology. Switching to the latest tech means comparisons with earlier surveys are invalid. Simples!
The fact is that asking people what they did, or even worse what they 'would do', has always been a very poor measure of real-world behaviour, and it takes the extraordinary skill of a handful of moderators in the world to get even close to a real insight. The hidden causalities that sophisticated data analysis or some neuroscience techniques are now unearthing will prove to be far more useful in the medium term. We will look back on focus groups much as we will look back on radiotherapy versus gene therapy. But a multi-billion dollar CYA industry ain't gonna let that happen quickly.
Dave, thanks for the post. I agree with your opinion in principle. However, I take issue with your assessment that the right data is "readily available". It's been my experience that research companies will not provide respondent-level data, or it's too expensive for small-to-midsize marketers. Further, the "massive direct measures" are just that -- useful only for general direction. Our company works with clients to harvest specific data points that are directly related to business drivers/triggers/obstacles. This is information that is unique to each project and in general not "readily available". So we've found interactive behavioral surveys to be an extremely valuable tool.
To paraphrase Shakespeare, "the fault is not in our research, but in ourselves". Self-reported data is always flawed because our ideas about what we do differ from reality. But observed data will also be flawed, because we will tend to read too much into it. For example, most marketers would become gravely concerned if the data showed that people routinely tuned out from ads after a few seconds. What they forget is that the majority of TV advertising is *reminder* advertising: its function is to say "remember Advil? We don't have anything really new to see, but we're still here." As long as the reminder happened, the ad was a success. Engagement and full :30 video views are good things, but they are not critical to success. A challenge of having more data than we've ever had before is that it will tend to make us more panicky than we've ever been before. We should take a deep breath and take all kinds of data for what they actually are: broad, directional signals about what's happening. The deeper we dig, the more likely we are to fall down the rabbit hole.
Excellent points, Tom. Clearly, as we gain access to more and more observed data, we will have to learn to react (or not react) to it much differently than we have reacted to data before. We not only need different types of data scientists, we need different types of data responses. The more we see won't necessarily mean the more we know.