Why You Want to Watch Out for Related Features
Related ad features can mislead you and cost you money. Luckily, there are direct and practical fixes we can use to identify the real drivers of ad success.
Good ad testing tells you what matters about your ads. But when features are related, we need to make sure the ads we test differ in the ways that let us discover what matters.
Understanding the Problems
Can't Tell Features Apart
Imagine that only one of the variables in the diagram above causes your ads to speak to customers. When another feature is always present, carrying the same information, it can get credit for the first feature's work. When variables follow each other around like this, no analysis of the data alone can tell which one is really producing the magic.
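Here's a minimal sketch of the problem. The feature names and numbers are invented for illustration: a warm color palette and a beach background always appear together in the data, and only the palette actually drives clicks. A model fit on either feature alone looks identical, so the data cannot break the tie:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1_000

# Two hypothetical features that always appear together in the data.
warm_palette = rng.integers(0, 2, size=n).astype(float)
beach_background = warm_palette.copy()  # shown whenever the palette is

# Only the palette actually drives clicks; the background does nothing.
clicks = 0.10 + 0.05 * warm_palette + rng.normal(0, 0.02, size=n)

for name, feature in [("palette only", warm_palette),
                      ("background only", beach_background)]:
    X = feature.reshape(-1, 1)
    model = LinearRegression().fit(X, clicks)
    print(f"{name}: coef={model.coef_[0]:.3f}, R^2={model.score(X, clicks):.3f}")

# Both fits are identical -- the data alone cannot say which feature
# is producing the magic.
```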
Unreliable Results
This association between features also makes your estimates less reliable. A related factor can mask another's true impact, and estimates of relationships can shift notably with small changes in the data ("maybe that one is really doing the work").
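A quick way to see this instability is to refit the same model on bootstrap resamples of the data. This sketch reuses the hypothetical palette/background pair, now almost (but not perfectly) correlated:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 300

# Hypothetical features that almost always move together (~99% correlated).
palette = rng.normal(size=n)
background = palette + rng.normal(0, 0.05, size=n)
clicks = 0.05 * palette + rng.normal(0, 0.1, size=n)
X = np.column_stack([palette, background])

# Refit on bootstrap resamples and watch the coefficients swing.
coefs = np.array([
    LinearRegression().fit(X[idx], clicks[idx]).coef_
    for idx in (rng.integers(0, n, size=n) for _ in range(200))
])

print(f"palette coef:    mean {coefs[:, 0].mean():+.3f}, std {coefs[:, 0].std():.3f}")
print(f"background coef: mean {coefs[:, 1].mean():+.3f}, std {coefs[:, 1].std():.3f}")
# The standard deviations are large relative to the means: each
# resample hands credit back and forth between the two features.
```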
Fixes
Direct Testing
The best approach when you can't tell who is doing the work is to ask one feature to take a vacation. Directly test it: produce an ad with the color palette but not the background. Does the work stop when one feature is out? Does the magic continue? This controlled testing identifies true causes rather than mere associations, giving you actionable insights you can trust.
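Once the test runs, a simple two-proportion z-test tells you whether removing the feature changed the click-through rate. The counts below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Invented results from a direct test: one ad keeps the color palette
# but drops the background; the other keeps both.
clicks_a, shown_a = 540, 10_000   # palette only
clicks_b, shown_b = 552, 10_000   # palette + background

p_a, p_b = clicks_a / shown_a, clicks_b / shown_b

# Two-proportion z-test: did removing the background change CTR?
p_pool = (clicks_a + clicks_b) / (shown_a + shown_b)
se = np.sqrt(p_pool * (1 - p_pool) * (1 / shown_a + 1 / shown_b))
z = (p_a - p_b) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"CTR palette-only: {p_a:.2%}, CTR both: {p_b:.2%}")
print(f"z = {z:.2f}, p = {p_value:.3f}")
# A large p-value suggests the background adds nothing on top of the
# palette -- the palette is the likely driver.
```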
Focus on Likely Causes
If direct testing isn't possible, limiting related features to those most likely to be causal can help. Perhaps from tests in other contexts you have a sense of what matters. Run with that hunch and limit the features you include until you can test it directly.
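One lightweight way to apply that hunch before modeling is to scan for highly correlated feature pairs and keep only the member you have prior reason to believe in. Everything here (the feature names, the 0.9 cutoff, the prior beliefs) is a hypothetical sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# Hypothetical feature matrix: palette and background move together;
# headline length is independent of both.
palette = rng.normal(size=n)
features = {
    "palette": palette,
    "background": palette + rng.normal(0, 0.05, size=n),
    "headline_len": rng.normal(size=n),
}

# Prior belief from tests in other contexts: these are the likely causes.
likely_causal = {"palette", "headline_len"}

names = list(features)
X = np.column_stack([features[k] for k in names])
corr = np.corrcoef(X, rowvar=False)

keep = set(names)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.9:  # a highly related pair
            # Drop whichever member we believe in less, until we can
            # test the pair directly.
            drop = names[j] if names[i] in likely_causal else names[i]
            keep.discard(drop)

print("Features kept for modeling:", sorted(keep))
```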
We'll help :)
Cleaner data leads to clearer decisions. We carefully analyze feature relationships before building models, helping you manage these pitfalls, and we'll point out when a specific test (or holding off on a feature) is needed.