A classic and beautiful ad.
I thought the mafia was no more; I thought those mafia families had gone into honest work instead of running gambling rings. I'm surprised there is even illegal gambling anymore, since I thought that would have ended when most states got into sports betting. Mr. Big Shot Billups is done and is never going to be a head coach again in the NBA or college, in my opinion. I was surprised about Billups; I was a big fan of him when he played for the Pistons, and he was a core member when the Pistons won the NBA Championship in the 2003-04 season.
The NBA will be fine for the most part. Sure, it's a black eye for the league, but it's nearly 80 years old and has had many scandals; they all passed, and this one will too, in my opinion. I was surprised by Mr. Big Shot Billups, as he was part of the Pistons core that won the NBA Championship in the 2003-04 season, and I was a big fan of his. Billups is done as an NBA coach, in my opinion; I was surprised he didn't get fired from the Blazers after last season, as he isn't a very good coach. There's no way he coaches again anywhere, certainly not in the NBA or college. And I thought the mafia was no more; I thought the families were making an honest living. I guess not.
What a joke. The industry that turns a blind eye to the death and destruction caused by people ingesting the wrong foods and drink, that has traditionally been so reticent to raise questions about what patients are ingesting that is causing their diseases, sticking with prescriptions instead to try to help alleviate symptoms (invariably making the self-inflicted diseased conditions worse), will now be tasked with asking patients if they're storing guns properly? How bassackwards is that? 140,000 Americans die annually from use of the recreational drug alcohol... only 30,000 die from gunshot wounds, and 20,000 of those are suicides. Every effort to legislate morality in this country has failed miserably... insanity is attempting the same tactics in expectation of different results.
Damn. The first presidential vote I cast, as a 21-year-old, was for Reagan. Ditto on his re-election four years later. Hearing him allude to potentially needing help so that Congress would be unable to pass legislation that went against his economic principles is so refreshing. Now, the imbecile who could never be 1/100th of the man and leader that Reagan was sickens us all with his contempt for our Constitution. I miss Reagan, and many other statesmen who followed him, including some Democrats who earned my vote over the years. I voted for Trump in 2016 because I figured he couldn't be a worse bullshit artist and scumbag than so many before him, but I was dead wrong. He is so much worse that the scale has been irrevocably broken.
@Andrew Susman: I'm proud to say I broke the story that the Tuesday Team had created the "There's A Bear In The Woods" spot, and got to interview Hal Riney after it broke. If he were updating it today, it would probably be more like, "There's an unbearable in the woods..."
https://youtu.be/FErYyPMbllI
@Joshua Chasin: Tough to beat the NY Post. Thrilled to be a runner-up.
This is the second best headline of the week! The only one keeping you from the top spot was the New York Post's "Just Walk Away, Beret," in regard to Curtis Sliwa and the NYC mayoral race.
Good job, Joe! Hard to believe Reagan would be considered a "moderate" Republican today.
Indeed! Thank you for sharing this and having the courage to write it. Keep it coming!
Ethically messy one: "Fans by the thousands could submit questions and receive personalized, shareable, AI-generated video responses from a star's approved digital clone." Consider who is at fault when the 'clone' advises a fan to break the law or commit self-harm, as LLMs have been known to do. And which of us thinks an actor who cares about their personal brand is going to allow an "approved clone" to make statements on their behalf?
It will be a shame if Magna discontinues its ad forecasting report, as Zenith did, leaving only a handful of companies publishing annual global ad & marketing reports, including WARC and Madison & Wall, referenced in the article, as well as PQ Media, Group M, PWC, and e-Marketer, which were not.
Paul, those "jaw breaking" stats you started out with are totally bogus. They are based mostly on the findings of various panels (which may or may not be representative) that tally device usage, not attentiveness. If you boil it all down, as we do for our MDI Direct subscribers, it turns out that an average person devotes perhaps 5-6 hours a day to media, not 11-13, and much of this consumption takes place when the "user" is not even present or paying any attention. For example, in the case of TV, linear and CTV, we found, based on studies that actually monitor audience presence and whether the "audience" has its eyes on the screen, that an average adult devotes only 9-10 minutes per day to TV commercials; that's eyes-on-screen attentiveness. Hardly an imposing or mind-boggling stat. Otherwise, I agree with much of what you said.
Hi Jon -- Thanks for your feedback. To say I'm totally wrong is a bit harsh. You make some great points, although I think that by now most everyone knows that their data is being used for advertising purposes. The Facebook scandal with Cambridge Analytica was about as public as it gets. Moreover, you didn't speak to the issue of user-data value, which is what this post is mostly about. Best regards, Ed
Actually, that's incorrect, because your analogy is totally wrong. The money goes to the farmer because he owns the trees that produce the apples. If the tree were wild and the apples were harvested, the harvester would collect some of the money for his labor, but the money for the raw material, the apple, would go to whoever planted the tree or owns the property on which it grows. If the person who planted the tree, or who owns the property on which it is growing, is not compensated for the apple, then the harvester has stolen from them. It is no different when Facebook takes the data, although they do say that is the exchange. The problem comes in when platforms don't make that clear and so are defrauding the data owner.
Tony, while I'm the biggest supporter of ad attentiveness measurements that you will find (which is why attentiveness is the key part of our new TV AD Cume service, now operational), I'd give this project a pass on that, so long as its findings are based on "viewer" data. In fact, the folks at Aquila might want to take a look at our cross-platform reach estimates, linear TV plus CTV, as these may herald some of the things they expect to see with their project.
Josh & Ed: HH data "personified" and then used to "assign" viewer presence (e.g., button pushed versus actual viewing)? Set usage (device) data assigned/ascribed to persons and viewing? Three different sources, Kantar, Comscore and Samba, each with various and different bases, video behavior measurements, and an array of assignments/imputations, etc. So, without truly independent validation of an extremely complex data manipulation and integration, it appears that Ed's "distorted picture" may unfortunately be the case. At best, Aquila/HALO would deliver only a ballpark planning R&F estimate based on the very broadest video input specs and the least dependable simulated persons OTS possible. If this is correct (?), using such a broadly scoped campaign R&F estimate for outcomes projections is particularly puzzling, beyond the fact that it is the creative that is the primary driver of campaign outcomes, albeit with the support of optimal synergistic media vehicles that "encourage or enhance" real attention to the brand message.
Interesting, Josh. But I'm still a bit confused. If Kantar is supplying people meter data and Comscore and Samba are supplying set usage data, how are the two measurements reconciled? I gather from your comment that the set usage panels might attempt to provide viewer data as well as set usage by assuming that if a person in the desired target group resides in a home and the home's set is tuned in, then that person is assumed to be viewing. If so, that assumption would generate a viewers-per-set factor of about 2.5, which is at least double the real figure.

So suppose the ANA looks at a schedule aimed at adults 18-49. If the set usage data from, say, Samba, finds that a particular home was tuned in when a brand's ad ran on one of its screens, and one of its residents happens to be aged 18-49, is he or she considered "reached"? And what happens if the people meter data says that even though one 18-49 adult resides in one of its homes along with a teenager, only the teen's button was pressed, so only the teen is "reached", not the 18-49? How is this kind of discrepancy resolved? Whose data takes precedence?

Answering my own question, it's possible that Kantar's people meters will supply viewers-per-set factors which will be applied to Comscore's linear TV and Samba's CTV set usage findings. This would work to calculate GRPs, or ad "impressions", but how do they cume the data across shows, networks and platforms? Again, I'll answer my own question. They may have devised some sort of simulation or ascription process to create a panel of "synthetic" people, all defined demographically, with their ascribed viewing patterns as its core data. Then one can do R&F tabs across platforms, but one must ask, "Have all of these statistical manipulations created a distorted picture?" Maybe so. But, to be fair, maybe not. We shall have to see.
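To make the reconciliation Ed is speculating about a bit more concrete, here is a minimal sketch of the two steps he describes: applying a viewers-per-set factor to device-based counts, then cuming reach and frequency with a panel of "synthetic" persons. This is not Aquila's documented methodology; the factors, probabilities and counts below are illustrative assumptions only.

```python
# Toy sketch (not Aquila's actual method) of: (1) converting device-level
# tuning into person-level impressions via a viewers-per-set factor, and
# (2) cuming cross-platform reach/frequency over a synthetic-persons panel.
# All numbers are invented for illustration.
import random

def device_to_person_impressions(device_impressions: int,
                                 viewers_per_set: float,
                                 target_share: float) -> float:
    """Estimate target-demo person impressions from set-usage counts."""
    return device_impressions * viewers_per_set * target_share

# Step 1: illustrative set-usage counts from two sources
linear_imps = device_to_person_impressions(1_000_000, viewers_per_set=1.2, target_share=0.4)
ctv_imps = device_to_person_impressions(600_000, viewers_per_set=1.1, target_share=0.4)

# Step 2: a toy synthetic-persons panel; each person gets an assumed
# exposure probability per platform, then reach/frequency is tabulated.
random.seed(7)
PANEL_SIZE = 10_000
p_linear, p_ctv = 0.25, 0.15          # assumed per-person exposure odds
exposures = []
for _ in range(PANEL_SIZE):
    n = (random.random() < p_linear) + (random.random() < p_ctv)
    exposures.append(n)

reached = sum(1 for n in exposures if n > 0)
reach_pct = 100 * reached / PANEL_SIZE
avg_freq = sum(exposures) / reached if reached else 0.0

print(f"Person impressions (linear + CTV): {linear_imps + ctv_imps:,.0f}")
print(f"Cross-platform reach: {reach_pct:.1f}%  Avg. frequency: {avg_freq:.2f}")
```

The point of the toy simulation is only that once exposures are ascribed to synthetic persons, cross-platform reach and frequency tabulations become straightforward; the question Ed raises is whether the ascription itself distorts the picture.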
Just 2 things. 1. The Kantar panel deploys people meters, enabling the panelist to provide their individual start and stop times. That puts the Kantar panel on a par with the Nielsen panel as far as viewer (and viewership) identification is concerned.
2. As currently designed, the unit of measurement in Aquila is the person, not the household. Comscore is Aquila's linear TV partner, and their data, based on devices, is personified within the household in order to assign viewing to persons. Aquila receives Comscore data at the person level. Similarly, the Samba data will be personified in order to assign viewing to persons.
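For readers unfamiliar with the term, "personification" here means assigning household-level device tuning to the individual people in that household. The sketch below illustrates the general idea only; it is not Comscore's or Samba's actual assignment model, and the household roster and viewing probabilities are invented.

```python
# Illustrative only: assign one household tuning event to persons using
# assumed probabilities of viewing by demo. Real personification models
# are far more sophisticated and calibrated against panel data.
import random

# Hypothetical household roster with assumed odds each member is watching
# when the household set is tuned in.
HOUSEHOLD = [
    {"person_id": 1, "demo": "A35-49", "p_viewing": 0.55},
    {"person_id": 2, "demo": "A18-34", "p_viewing": 0.35},
    {"person_id": 3, "demo": "Teen",   "p_viewing": 0.60},
]

def personify_tuning_event(household: list, rng: random.Random) -> list:
    """Return the person_ids credited with viewing for one tuning event."""
    viewers = [m["person_id"] for m in household if rng.random() < m["p_viewing"]]
    # Credit at least one viewer, since the set was observed to be on.
    if not viewers:
        viewers = [max(household, key=lambda m: m["p_viewing"])["person_id"]]
    return viewers

rng = random.Random(42)
print("Persons credited with viewing this tuning event:",
      personify_tuning_event(HOUSEHOLD, rng))
```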
Joe, as I understand the ANA project, it's designed to allow comparisons of reach and frequency for sample schedules in linear TV and CTV, and both in combination. Which is fine, providing the findings reflect people "reach", by demos, not whether the ads appeared for 2+ seconds on a TV screen. The reason this distinction is so vital, as we learned 65 years ago, is simply this. Based on set usage, most TV shows peak, "audience"-wise, among younger homes with kids and homes with above-average incomes. That's because such homes have many more residents who can turn on a TV set than older homes with far fewer residents. But who is watching? Without exception, the research tells us that older adults far outview younger adults, while lowbrows also top upscale adults by a fair margin. So you get almost totally oppositional findings, depending on how you are measuring "audience".

As it happens, we at Media Dynamics Inc. have just launched a new service called TV AD Cume. This model allows subscribers to input all sorts of hypothetical schedules for broadcast network, cable and syndication, as well as several types of CTV buys, and see what the monthly reach and frequency would be two ways: one, using standard TV GRPs as provided by the rating surveys, and two, adjusting the findings to reflect the percentage of the target group that actually looks at the brand's ads. So far, this model is showing significant add-ons when CTV is combined with linear TV, and lots of other interesting stuff. Consequently, we are most interested in what the ANA is doing and eager to see some of its results. But for viewers, not homes, please.
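A rough illustration of the "two ways" comparison described above: estimate reach from a schedule's reported GRPs, then again after discounting those GRPs by an assumed eyes-on-screen percentage. The negative-exponential reach curve used here is a generic textbook approximation, not the TV AD Cume model, and every input is a made-up number.

```python
# Hedged sketch: reach on reported GRPs vs. reach on attention-adjusted GRPs.
# Generic reach-curve approximation; not Media Dynamics' proprietary model.
import math

def estimated_reach(grps: float, saturation: float = 0.95, k: float = 0.007) -> float:
    """Generic reach curve: reach rises with GRPs toward a saturation ceiling."""
    return saturation * (1.0 - math.exp(-k * grps))

schedule_grps = 400.0   # reported target-demo GRPs for the month (illustrative)
attention_pct = 0.30    # assumed share of the demo actually looking at the ads

standard_reach = estimated_reach(schedule_grps)
attentive_reach = estimated_reach(schedule_grps * attention_pct)

print(f"Reach on reported GRPs:           {standard_reach:.1%}")
print(f"Reach on attention-adjusted GRPs: {attentive_reach:.1%}")
print(f"Avg. frequency (reported basis):  {schedule_grps / (standard_reach * 100):.1f}")
```

The gap between the two reach figures is the kind of "add-on" or shortfall an attention adjustment surfaces; the exact curve and attention percentage would, of course, come from the service's own data rather than these assumptions.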
As a member of the WFA's HALO Industry Technical Advisory Group (HITAG), I have expressed serious concerns about device-based, and consequently "content-rendered-count", data for Cross Media Measurement (CMM), versus persons-based attention (eyes/ears-on at a minimum) metrics program by program or ad by ad, due to the former's associated media biases and relatively poor relationship to campaign outcomes. With no attention, there can be no outcomes.

HALO, a highly complex, multi-faceted construct and base model for CMM initiatives in various countries, has been developed (with the primary support of the technopolies) to form the basis currently for the ANA's Aquila and ISBA's ORIGIN. However, answers to key questions raised with Aquila are still awaited. They include whether there is a consistent and acceptable definition of "impressions", and/or "viewing", and/or "audience" used throughout each and every data source used and integrated within the HALO/CMM model, along with their independently verified validity (walled-garden data?!). And, if not, how are they harmonized and made comparable, to what final CMM model definition, and by what derivation of sources used or imputed? These questions address the industry's fundamental, ongoing media metrics disconnect. Are there conflicting definitions of metrics across the various data sources and inputs used to produce a final campaign reach and frequency estimate? In other words, what will the final Aquila reach and frequency estimate actually represent?

In the interests of full disclosure, accountability and transparency, it appears that Samba TV relies on ACR data, which is solely device-based, together with detailed household profile data from a panel, but without persons-based, independently verified, actually measured viewing. If correct (?), Samba data would merely reflect content-rendered counts, aka the oft-misrepresented "viewable impressions" (no real OTS), on a screen, likely associated with a projected HH profile of the device owner. What we used to call circulation/distribution data years ago. If this is the case, Ed's concerns are on point, and advertisers and their media agencies should ensure that these basic concerns are resolved by HALO and Aquila.

As a reminder, high-quality samples, when independently validated, are first of all representative of a given universe. Sample size, while not unimportant, is somewhat secondary and depends on the level of detail being sought, e.g., dayparts (planning) versus show-by-show or program-by-program (buying). The latter would require a much larger sample than the former. A non-representative sample, however large, will always produce specious results.
@Ed Papazian: Oh, you were talking about Aquila's calibration panel, not the Samba TV data that will be used to measure the reach/deduplication of the streaming component of its cross-media measurement platform. To be clear, the calibration panel is just a small panel that Big Data-plus measurement services use to calibrate their massive Big Data sources. It's the same model being used by Nielsen and other Big Data-plus panel services. In Nielsen's case, the calibration panel is bigger -- about 42,000 homes/100,000 individuals -- but it's not the same as a conventional audience measurement panel. It's just there to tune the massive Big Data sources that go into the hybrid service. I have no idea whether Aquila's 5,000 household calibration panel should be deemed small, but it's not intended to be used as currency-grade measurement for media-buying. It's intended so marketers can understand their audience reach, deduplicate audiences, minimize excessive frequency and inform their marketing mix models. Maybe another reader can weigh in on whether a 5,000 household calibration panel is too small to do that? I'm just a journalist.
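For readers unfamiliar with the calibration-panel concept Joe describes, here is a generic, purely illustrative sketch of the idea: factors derived from a panel are used to scale the much larger device-based ("Big Data") audience counts. This is not Nielsen's or Aquila's actual calibration math, and every figure below is invented.

```python
# Generic illustration of panel-based calibration of Big Data audiences.
# The device-based source over- or under-counts certain demos; panel-derived
# ratios rescale it. All numbers are hypothetical.

# Big Data (device-based) audience estimates by demo, in thousands
big_data_audience = {"A18-34": 5_200, "A35-54": 6_800, "A55+": 4_100}

# Calibration factors = panel-measured persons audience / Big Data audience,
# derived from homes observed by both sources.
calibration_factor = {"A18-34": 0.78, "A35-54": 0.95, "A55+": 1.20}

calibrated = {
    demo: big_data_audience[demo] * calibration_factor[demo]
    for demo in big_data_audience
}

for demo, aud in calibrated.items():
    print(f"{demo}: {aud:,.0f} (000) after calibration")
```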