This week, we challenge our assumptions in marketing with a look at the results of a four-way A/B test, plus the week’s news in data.
In this issue’s Bright Idea, we look at data and assumptions. I run a personal newsletter every Sunday night, and one of my criteria for what to share has been whether an article received lots of clicks the previous week (as measured by the bit.ly API). One of the most important things to do as a data-driven practitioner in any industry is to question your assumptions, such as how you choose the content to share in a newsletter.
The test I ran was a four-way test, evaluating four different ways of curating content:
- Most clicks the previous week
- Most social shares the previous week
- Highest page authority (an SEO metric)
- Most topically relevant (using text mining techniques; a rough sketch of one such technique follows below)
Qualitatively, when I put together the four editions, the fourth was the newsletter I’d most like to read. But I’m an n=1, and making the broad assumption that my readers are just like me is foolish.
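The newsletter doesn’t specify which text mining technique powered the topic edition, so here’s a rough, hypothetical sketch of one common approach, a bag-of-words cosine similarity in base R:

```r
# Hypothetical sketch: score an article's topical relevance with a simple
# bag-of-words cosine similarity. The actual text mining technique used
# for the topic edition isn't specified; this is one common approach.

tokenize <- function(txt) {
  # Lowercase, strip punctuation, split on whitespace, drop empty tokens
  tokens <- tolower(unlist(strsplit(gsub("[^[:alnum:] ]", " ", txt), "\\s+")))
  tokens[tokens != ""]
}

cosine_sim <- function(a, b) {
  # Align two named word-count vectors on a shared vocabulary
  vocab <- union(names(a), names(b))
  va <- as.numeric(a[vocab]); va[is.na(va)] <- 0
  vb <- as.numeric(b[vocab]); vb[is.na(vb)] <- 0
  sum(va * vb) / (sqrt(sum(va^2)) * sqrt(sum(vb^2)))
}

topic   <- table(tokenize("data driven marketing analytics measurement"))
article <- table(tokenize("A look at analytics and measurement for data driven marketers"))

cosine_sim(topic, article)  # higher scores = more topically relevant
```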
What were the results of the test?
- Click edition: 400 opens, 50 clicks, 12.5% click-to-open rate
- Page authority edition: 398 opens, 51 clicks, 12.8% click-to-open rate
- Share edition: 322 opens, 46 clicks, 14.3% click-to-open rate
- Topic edition: 386 opens, 24 clicks, 6.2% click-to-open rate
My marketing automation software crowned the share edition as the winner. Would you?
Here’s the plot twist: almost no marketing software includes tests of statistical significance. Using the statistical language R, I ran significance tests across all four editions, comparing them pairwise and examining the p-values. Not a single p-value was under 0.27; in most generally accepted scientific literature, a p-value should be under 0.05 for a result to be considered statistically significant.
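If you want to try this at home, here’s a minimal sketch of one such pairwise comparison in R, assuming a standard two-proportion test (prop.test); the exact test used in the original analysis isn’t specified, so treat this as illustrative:

```r
# A minimal sketch, assuming a two-proportion test. This compares the
# crowned "winner" (share edition) against the original click edition.

clicks <- c(share = 46, click = 50)    # clicks in each edition
opens  <- c(share = 322, click = 400)  # opens in each edition

result <- prop.test(clicks, opens)
result$p.value  # well above 0.05: the difference is not statistically significant
```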
Thus, even though there’s a “winner” above, the reality is that the result is statistically insignificant. What do we do in this kind of situation? Like a court case that ends without a verdict, we are remanded for additional testing. This is clearly a test I need to run more than once; only if repeated tests keep coming back statistically insignificant will I know that the algorithm for choosing which content to share doesn’t really matter.
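To see why a single send of this size is unlikely to reach significance, a quick power calculation helps; this is an illustrative sketch assuming a two-proportion test, using R’s built-in power.prop.test:

```r
# Illustrative power calculation: how many opens per edition would be needed
# to reliably (80% power) detect a difference the size of the observed one,
# 12.5% vs. 14.3% click-to-open? Assumes a two-proportion test.

power.prop.test(p1 = 0.125, p2 = 0.143, power = 0.80, sig.level = 0.05)
# The answer is several thousand opens per edition, far more than the
# 300-400 opens each edition actually received, which is one reason a
# single test of this size tends to come back inconclusive.
```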
Some questions for you and your team:
- What assumptions have you tested lately in your data-driven work?
- How have you tested those assumptions?
- Have you evaluated your tests for statistical significance?
- Does the software you use every day tell you whether a result is statistically significant?
- Impact Of The Great Twitter Bot Purge on Elected Officials via Trust Insights
- Marketing automation using RStudio via Trust Insights
- Encouraging Voter Registration via Deep Dive into 2016 Census Data via Trust Insights
- Avoiding Kennel Cough with Predictive Analytics via Trust Insights
- Predictive Analytics Issues via Trust Insights
Social Media Marketing
- Facebook Ads for Webinar Funnels: How to Maximize Your Results via Social Media Examiner
- How to Use Instagram Live Video Chat for Business via Social Media Examiner
- Winning the social media marketing game via Marketing Land
Media Landscape
- Why So Much Mid- and Bottom-Funnel Content Doesn’t Work and What We Can Do About It
- Top 5 Mind-Blowing Statistics from Google Marketing Live 2018 via WordStream
- Your Editorial Calendar is Not Your Content Marketing Strategy
Tools, Machine Learning, and AI
- Academic expert says Google and Facebook’s AI researchers aren’t doing science
- Best Machine Learning Tools: Experts Top Picks via Data Science Central
- Machine learning in finance: Why, what & how via Data Science Central
Analytics, Stats, and Data Science
- Critically Reading Scientific Papers via Data Science Central
- Dimensionality Reduction: Does PCA really improve classification outcome?
- How to Improve Customer Experience With Big Data
SEO, Google, and Paid Media
- The Rules of Link Building (Whiteboard Friday) via Moz
- Do Google reviews impact local ranking? via Search Engine Land
- Why keyword stuffing is bad for SEO via Search Engine Watch
Upcoming Events
Where can you find us in person?
- Greater Los Angeles Chapter of NSA, August 2018, LA
- Health:Further, August 2018, Nashville
- Content Marketing World, September 2018, Cleveland
- INBOUND 2018, September 2018, Boston
- MarketingProfs B2B Forum, November 2018, San Francisco
Can’t wait to pick our brains? Book a Table for Four and spend an hour with us live (virtually) on any topic you like:
https://www.trustinsights.ai/services/insights-foundation/table-for-four-consultation-package/
Going to a conference we should know about? Reach out: (https://www.trustinsights.ai/contact/)
Conclusion
Thanks for subscribing and supporting us. Let us know if you want to see something different or have any feedback for us! (https://www.trustinsights.ai/contact/)
Make sure you don’t miss a thing! Follow Trust Insights on the social channels of your choice.