From Clicks to Bricks: Getting Digital Campaigns’ In-Store Measurement Right


In OptiMine’s last blog post, we explored the Top 5 reasons why retailers need to measure digital campaigns’ effects on in-store sales. That urgent imperative, measuring digital campaigns’ full cross-channel influence, confronts every retailer, large and small, as the market is disrupted by rapidly evolving consumer behaviors enabled by technology and instant access to information. In this post, we’ll explore the approaches retailers most frequently use to cross the digital-to-in-store divide and highlight their inherent risks, unintended consequences, and other downsides.

Let’s start with the obvious: measuring digital investments’ contributions to in-store revenue and traffic can be difficult, which is why many retailers don’t do it. The potential costs and the IT and data-security risks add to the challenge (more on that later). Boiled down, though, there are really only a handful of ways to accomplish the task:

 

1. Matching consumer identities across devices (and in-store)

2. Marketing “dark tests”

3. A/B split testing

4. Predictive modeling

 

Matching consumer identities across devices (and in-store)

Many retailers we speak with have told us they have completed consumer identity-matching projects with Facebook and Google. They send their customer and transaction databases to Facebook and Google, and the walled gardens then match this data against their own advertising and campaign-tracking data to determine which customers were exposed to the retailer’s ads on their platforms and whether those customers made subsequent purchases in-store, thereby linking the ads to the outcomes. Let that sink in for a moment.
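Mechanically, the matching exercise boils down to joining the retailer’s customer file to the platform’s ad-exposure log and then to in-store transactions. Here is a minimal sketch of that join logic using hypothetical pandas DataFrames; the column names (email_hash, exposed_at, purchased_at) are illustrative assumptions, not any platform’s actual schema:

```python
import pandas as pd

# Hypothetical data: the retailer's customer file, the platform's ad-exposure log,
# and the retailer's in-store transactions. All names and values are illustrative only.
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "email_hash": ["a1", "b2", "c3"]})
exposures = pd.DataFrame({"email_hash": ["a1", "c3"],
                          "exposed_at": pd.to_datetime(["2023-03-01", "2023-03-02"])})
in_store = pd.DataFrame({"customer_id": [1, 2],
                         "purchased_at": pd.to_datetime(["2023-03-05", "2023-03-06"]),
                         "purchase_amount": [120.0, 45.0]})

# Link ad exposure to in-store purchases: exposed customers who later bought in a store.
matched = (customers.merge(exposures, on="email_hash")
                    .merge(in_store, on="customer_id"))
converted = matched[matched["purchased_at"] > matched["exposed_at"]]
print(converted[["customer_id", "exposed_at", "purchased_at", "purchase_amount"]])
```

The simplicity of the join is exactly why the approach is tempting, and also why it requires handing over so much sensitive data.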

 

There are so many problems with this approach that it is hard to know where to begin. Setting aside the risk of a retailer sending its entire customer database and transaction file to companies with poor track records on consumer and data privacy (for more on that topic, click here), the big question remains: why would a retailer trust performance-measurement guidance from the two largest recipients of its digital ad dollars? Finally, there is an entire set of methodological issues with this approach, too many to detail here, but you can read more about them in Determining the Incremental Value of Marketing.

 

Marketing “dark tests”

Another common approach brands use to measure offline impact is to turn off a media campaign completely in a “dark test.” By going dark, the brand can compare sales before and during the blackout to get a sense of that particular media’s contribution. This approach can be effective, but it carries one major risk: if the media does have an impact, sales will drop while it is turned off. It also only provides a point-in-time measure. What if certain media and marketing campaigns perform differently at different times of the year? A dark test only provides a partial answer.
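In its simplest form, the dark-test read is just a before-versus-during comparison of average daily sales. Below is a minimal sketch of that comparison, assuming a daily sales series and a known blackout start date; all dates and figures are made up for illustration:

```python
import pandas as pd

# Hypothetical daily in-store sales around a dark test; values are illustrative only.
sales = pd.Series(
    [100, 104, 98, 102, 99, 101,   # baseline period (campaign on)
     88, 90, 86, 91, 89, 87],      # dark period (campaign off)
    index=pd.date_range("2023-04-01", periods=12, freq="D"),
)
dark_start = pd.Timestamp("2023-04-07")

baseline = sales[sales.index < dark_start].mean()
dark = sales[sales.index >= dark_start].mean()

# Naive estimate of the campaign's contribution: the drop during the blackout.
print(f"Estimated daily contribution: {baseline - dark:.1f} "
      f"({(baseline - dark) / baseline:.1%} of baseline)")
```

Of course, this naive read ignores seasonality and everything else running at the same time, which is exactly why a single dark test only gives a partial, point-in-time answer.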

 

A/B split testing

A/B tests (usually treatment/control tests across different geographies) are very useful for measuring the lift of media, but only where there are excellent controls on all other campaigns during the test period. These tests are difficult to run precisely because of the complexity of controlling every other factor across multiple geographic regions. They are also expensive and time-consuming to design and manage, and they pull marketing teams away from their core focus: marketing.
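When those controls are in place, the read itself is straightforward: compare sales in the treatment geographies against the matched control geographies over the test window. A minimal sketch under those assumptions, with made-up markets and figures:

```python
import pandas as pd

# Hypothetical weekly in-store sales by market during the test window (illustrative only).
geo_sales = pd.DataFrame({
    "market": ["Minneapolis", "Denver", "Portland", "Kansas City"],
    "group":  ["treatment",   "treatment", "control", "control"],
    "weekly_sales": [520_000, 480_000, 455_000, 445_000],
})

means = geo_sales.groupby("group")["weekly_sales"].mean()
lift = means["treatment"] / means["control"] - 1
print(f"Observed lift in treated markets: {lift:.1%}")
```

A real geo test would also match markets on historical sales patterns and test the lift for statistical significance, which is a large part of what makes these tests expensive and time-consuming to run.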

 

Predictive modeling

The use of sophisticated predictive models, like those from OptiMine, can help a retailer measure campaign impacts on any touchpoint: in-store, call center, in-app, channel partners, and e-commerce. Because they take every sales touchpoint into account, the measurement of any given digital campaign is complete in scope. Done properly, these models don’t require PII and are therefore compliant with emerging regulations (the California Consumer Privacy Act, GDPR, and others) that greatly limit consumer tracking. And because advanced models account for all campaigns, digital and traditional, across all conversion points, the media planning and budget allocation process can proceed with high confidence and accuracy, since it reflects the entire scope of the marketing budget. We have a saying at OptiMine: “Nothing happens in a vacuum.” Consumers don’t consume media in a vacuum; they choose whichever touchpoint is most convenient and effective for interacting with a brand at that moment, and they frequently use many touchpoints over time.
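To make the concept concrete (without describing OptiMine’s actual methodology), a toy version of cross-channel predictive modeling is a regression that relates total sales across all conversion points to aggregate spend in each channel, using no PII at all. The sketch below uses simulated data and scikit-learn purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = 104

# Simulated weekly spend by channel (aggregates only, no consumer-level data).
spend = {
    "paid_search": rng.uniform(20, 60, weeks),
    "paid_social": rng.uniform(10, 40, weeks),
    "display":     rng.uniform(5, 25, weeks),
}
X = np.column_stack(list(spend.values()))

# Simulated total sales across e-commerce AND in-store, with assumed per-channel effects.
true_effects = np.array([3.0, 2.0, 0.5])
sales = 500 + X @ true_effects + rng.normal(0, 20, weeks)

# Fit the toy model and read back each channel's estimated contribution per dollar.
model = LinearRegression().fit(X, sales)
for name, coef in zip(spend, model.coef_):
    print(f"{name}: estimated incremental sales per $ of spend ~ {coef:.2f}")
```

A production model would add adstock and lag effects, seasonality, baseline demand, and far more channels, but the principle of measuring every campaign against every conversion point at once is the same.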

So, why should a retailer care about any of this? OptiMine’s own analytics yield some very interesting findings:

 

- When the full cross-channel impact of Paid Social campaigns is added in, these efforts can rival a retailer’s best-performing search efforts.

- Many retailers are over-invested in PLA campaigns, whose economics may not be boosted as much by in-store sales impacts.

- When retail effects and halo impacts on other digital channels are accounted for, many campaigns move from total ROAS losers (when judged on e-commerce alone) to stellar investments (a simple worked example follows below).
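To illustrate that last point with purely hypothetical numbers: a campaign that looks unprofitable on e-commerce revenue alone can clear any reasonable ROAS hurdle once incremental in-store and halo revenue are counted.

```python
# Hypothetical campaign economics (all figures are illustrative only).
ad_spend      = 100_000
ecommerce_rev = 80_000    # revenue directly attributed online
in_store_rev  = 210_000   # incremental in-store revenue driven by the campaign
halo_rev      = 30_000    # lift the campaign creates in other digital channels

ecommerce_only_roas = ecommerce_rev / ad_spend
total_roas = (ecommerce_rev + in_store_rev + halo_rev) / ad_spend

print(f"E-commerce-only ROAS: {ecommerce_only_roas:.1f}x")  # 0.8x: looks like a loser
print(f"Total ROAS:           {total_roas:.1f}x")           # 3.2x: a strong investment
```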

 

Several of these findings, and many more, will be highlighted in OptiMine’s landmark Index project, which will make its public debut soon. The OptiMine Index will provide a panel of indices and insights from across our diverse portfolio of retail clients, highlighting the cross-channel effects and in-store halo of digital campaigns. We’re incredibly excited to share more and look forward to the launch of the OptiMine Index. Stay tuned!