Closing the Vast Agency-Publisher Knowledge Gap

Originally Published 09/20/09

When it comes to ad operations, nowhere is there a wider knowledge gap than between publishers and agencies. And I’m not just talking about discrepancies in third-party ad serving. That’s the least of the issues. Over the past three to four years, I’ve had occasion to work on both sides of that fence. When I asked my publishing colleagues how ad operations works on the agency side, I received a shrug of the shoulders. When I asked my agency friends if they knew what went on behind the scenes on the publisher side, the response was “I don’t understand why it is so difficult to deliver an ad campaign in full!”

Ladies and gentlemen, this is like living in a suburban community for 10 years and never getting to know your neighbor’s first name. Really!

In this article, I’ll start by discussing the trends taking place behind the scenes at agencies, why publishers need to know about them, and the future implications for their businesses. For some readers, this is not new — but for many on the publisher side, and specifically in ad operations departments, it has remained behind the veil for too long.

Publishers, don’t be clueless

I am definitely a charter member of the “branding has value” club. A publisher’s job is to supply quality content that attracts a valued audience and deliver that to an advertiser. But to ignore what is happening behind the scenes at agencies in terms of measurement and metrics is simply being clueless. Don’t you want to get the license plate of that truck before it runs you over? Or are you intent on habitually crossing the street without looking both ways? Even while we continue beating the drum for branding (and rightfully so), we need to understand what happens on the buy side.

Let’s take a look at some of the metrics at work, behind the scenes, in agency ad operations:

Attribution modeling

One of the more recent and advanced sets of metrics used on the advertiser side is attribution modeling (also called engagement mapping or “path to conversion”). This helps determine how many ad exposures preceded a conversion, when they happened, and what ad product they were associated with.

For instance, a unique consumer might have been exposed to a specific advertiser’s creative (a retailer, perhaps, like Hugo Boss) in the following sequence, and produced the following results:

Tuesday > 12:15 p.m. > view a leaderboard ad > impression exposure

Tuesday > 5 p.m. > view a skyscraper ad > impression exposure

Wednesday > 8 a.m. > search Google, see text ad > click > welcome page of Hugo Boss, no further action

Wednesday > 5 p.m. > navigate directly to welcome page of site > men’s suit section > shopping cart > $650 purchase

So in this sequence, the agency would see that several ad exposures influenced the purchase, not just the click. All of this would be tracked by the cookie dropped on the user’s browser at the first ad exposure. Branding may be contributing to more conversions when combined with search. And the final action by the consumer may be a result of the entire media mix, not just a single ad exposure.
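To make the idea concrete, here is a minimal Python sketch of attribution applied to the exposure path above. The class and function names are my own invention, and the even-split (linear) rule is only one of several attribution models an agency might apply:

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    when: str        # e.g. "Tuesday 12:15 p.m."
    ad_product: str  # e.g. "leaderboard"
    clicked: bool

def attribute_conversion(path, revenue):
    """Split conversion credit evenly across every exposure in the path.

    Linear attribution is just one possible model; a last-click model
    would hand all of the revenue to the final exposure instead.
    """
    credit = revenue / len(path)
    return {f"{e.when} {e.ad_product}": credit for e in path}

# The Hugo Boss sequence from the article, as illustrative data:
path = [
    Exposure("Tuesday 12:15 p.m.", "leaderboard", clicked=False),
    Exposure("Tuesday 5 p.m.", "skyscraper", clicked=False),
    Exposure("Wednesday 8 a.m.", "search text ad", clicked=True),
]
print(attribute_conversion(path, 650.0))
```

Under the last-click model the two display impressions would receive zero credit, which is exactly the blind spot attribution modeling is meant to correct.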

Ironically, agencies grounded in search engine marketing have shown interest in this metric. They may actually try to get their clients to spend more money on display than they are used to, because the ultimate mix that leads to a conversion includes several types of ad products, generating both impressions and clicks.

Is this model prevalent at every agency, with every client? Of course not. However, the data are being collected, analyzed, and presented. The more analytical agencies and staff will certainly present it as justification for a media plan, and that may actually be beneficial to publishers because the ultimate “mix” will include branding.

Long-term implications? If this analysis catches on, it may mean more frequent campaign revisions as agencies start to define certain types of ad impressions as contributors to a conversion — and ask publishers to change a campaign to achieve the best blend. More frequent campaign revisions would call for applications on the ad operations side to process and document those changes more efficiently.

Conversion: Time lag and frequency

Less exotic than attribution modeling, these types of reports are in use more frequently at agencies. They are used to help fine-tune the length of a flight and the frequency of ad exposures.

For instance, a time-lag-to-conversion report would contain the following metrics, informing the advertiser how many consumers who saw an ad three, five, or 10 times (i.e., at each frequency level up to 10) actually converted:

Placement A:
First ad display > No. of imps > uniques > clicks > unique clicks > post-click event

Second ad display > No. of imps > uniques > clicks > unique clicks > post-click event

Third ad display > No. of imps > uniques > clicks > unique clicks > post-click event

… and so on

This report would track the frequency of up to 10 ad displays and the resulting impressions, clicks, and conversions (or any post-click event) by frequency level.

This type of data forces the agency to look at the correlation between ad frequency and conversions, and avoid the “conventional wisdom” that a frequency cap of three is the most efficient path to conversion.
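The aggregation behind such a report can be sketched in a few lines of Python. The function name, the bucketing scheme, and every number in the sample data are illustrative, not drawn from any real report:

```python
from collections import defaultdict

def conversion_rate_by_frequency(user_log, max_freq=10):
    """user_log: (ad_display_count, converted) pairs, one per unique user.

    Buckets users by how many times they saw the ad (capped at max_freq)
    and reports the conversion rate at each frequency level.
    """
    seen = defaultdict(int)
    converted = defaultdict(int)
    for freq, did_convert in user_log:
        bucket = min(freq, max_freq)
        seen[bucket] += 1
        if did_convert:
            converted[bucket] += 1
    return {f: converted[f] / seen[f] for f in sorted(seen)}

# Illustrative numbers only -- in this made-up data, users who saw the
# ad seven times converted at a higher rate than those capped at three:
log = ([(1, False)] * 90 + [(1, True)] * 2 +
       [(3, False)] * 45 + [(3, True)] * 5 +
       [(7, False)] * 18 + [(7, True)] * 4)
print(conversion_rate_by_frequency(log))
```

A report like this is what lets an agency challenge the frequency-cap-of-three assumption with actual numbers rather than convention.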

In terms of time lag to conversion, the advertiser is measuring the chronological point in the timeline of a campaign at which a consumer converted. Was it during the first half hour, day 1, or day 30? The report structure looks like this:

Placement A > No. of imps > No. of conversions @ first hour = 5 > Day 1 = 10 > Day 2 = 6… Day 30 = 1

Again, the quantitative data here might be counterintuitive. The best duration for a flight might be 10, 20, or 30 days — it depends on what the data show. Conducting an ROI analysis would show the agency the optimum duration for a campaign.
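The time-lag tally itself is simple to express in Python. This is a hypothetical sketch of my own, mirroring the "Day 1 = 10 > Day 2 = 6 ... Day 30 = 1" layout, with invented lag values:

```python
from collections import Counter

def time_lag_report(lags, flight_days=30):
    """lags: days elapsed from first ad exposure to each conversion.

    Tallies conversions per day of the flight so the agency can see
    where in the timeline conversions actually cluster.
    """
    counts = Counter(min(lag, flight_days) for lag in lags)
    return {day: counts.get(day, 0) for day in range(1, flight_days + 1)}

# Hypothetical lags: most conversions come early, with a long tail.
lags = [1] * 10 + [2] * 6 + [5] * 3 + [12] * 2 + [30]
report = time_lag_report(lags)
print({day: n for day, n in report.items() if n})
```

If the tail past day 10 carries meaningful volume, that argues for a longer flight; if it is empty, the budget is better spent on a shorter, heavier burst.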

More-conventional metrics

The more-conventional metrics cited as important by the agencies interviewed include:

Re-targeting. This involves identifying people who did not perform an action or conversion and serving them an ad with a specific message designed to prompt them to take action.

Testing (A/B, multivariate). Some agencies rely heavily on simple A/B testing, which creates two separate landing pages, splits the traffic between them, and tracks conversions to see which version gets the best results. Multivariate testing goes beyond A/B by varying several elements on a single web page and analyzing the results to arrive more quickly at the optimal design. The implication for publishers is that they may be on the receiving end of revisions designed to leverage the test results.

Publisher overlap. When running campaigns across several publishers, the agency may look at how many unique users overlap from one site to another.
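Overlap analysis is essentially set intersection over unique-user IDs. A minimal sketch, with hypothetical site names and ID sets of my own:

```python
from itertools import combinations

def publisher_overlap(audiences):
    """audiences: publisher name -> set of anonymized unique-user IDs.

    Returns the number of users each pair of publishers has in common,
    the deduplication view an agency wants across a multi-site plan.
    """
    return {
        (a, b): len(audiences[a] & audiences[b])
        for a, b in combinations(sorted(audiences), 2)
    }

# Hypothetical user-ID sets for three sites on the same media plan:
audiences = {
    "SiteA": {1, 2, 3, 4, 5},
    "SiteB": {4, 5, 6, 7},
    "SiteC": {1, 7, 8},
}
print(publisher_overlap(audiences))
```

Heavy overlap between two sites on a plan means the agency is paying twice to reach the same people, which is exactly the argument it will bring back to the publishers involved.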


The use of cookies, tracking pixels, and site analytics is increasing the amount of data a digital advertiser has to work with. In the short term, most marketers’ ability to assimilate and rapidly act on these data lags behind the volume of information. However, agencies are using at least one of these types of reports on a regular basis. So this is the beginning of a slow but inevitable trend toward more sophisticated analysis.

Who’s supplying this type of functionality to ad agencies? The same providers that power many publishers: Atlas, Bluestreak, DoubleClick’s DFA, Eyeblaster, and Mediaplex.

As use of this type of data becomes more widespread, I believe the rate of campaign revisions will increase, and as mentioned previously, this will require ad operations on the publisher side and all the links in their workflow (including processing of insertion order revisions from sales all the way through finance) to be more efficient and more accurate.

Publishers could take the same metrics and do a self-analysis of their audiences to model and understand patterns of usage and response. This could in turn be used as a selling tool to agencies that are perhaps more direct-response-oriented than we would like, or to suggest campaign duration and frequency to advertisers that need that guidance.

Long, long term, on the agency side, it’s likely that the analysis of data, like attribution modeling, will be run automatically, resulting in automated decision making on the mix and placement of ad units to achieve the best ROI. This could then be programmed into the agency ad server, which would subject the publisher to more rapid campaign changes.

Is there a danger that more automated crunching of data on the agency side will start to turn digital media into more of a self-service model? Although we can’t deny that Google has proved this out to some extent, traditional publishers and their digital media properties will have a long time to figure this out. Just remember, we’ve been talking about the “convergence” of media for (believe it or not) 20 years now. So even in our digital world, significant change takes time. Combine that with the fact that there is a tremendous legacy and infrastructure inherent in the ad agency/publisher model, and I don’t think we’re anywhere close to “the end is near.” That’s good news for all of us who still believe that advertising at its best is a creative collaboration between the buyer and seller.

On the other hand, if we were to imagine a future, decades from now, where automation of media performance data is so current, and processed so quickly, that only machines can manage decisions on media, it might read something like this:
