We’ve reached the end of journaling our marketing efforts for January — our first month of marketing plan execution — and it’s time to review the month’s performance!
We’ll be looking back at the tests we executed in January, and finishing up execution on any tests we haven’t run yet.
We’ll be looking to see whether we drove growth for Ladder with our tests, and what sorts of baselines we’ve set for the business. Since this is retroactive, I can provide full data on how each test performed.
The goal?
To stay in line with the 43% test success rate we see across all marketing tests we run for our clients.
To read the full marketing plan, click here.
To catch up on Week 1 (January Strategy), click here.
For Week 2 (Experiments), click here.
For Week 3 (Analytics), click here.
Interested in following along as we execute our 2017 Marketing Plan? Sign up for our newsletter:
GET WEEKLY UPDATES >>
First, let’s start with the tests we haven’t run yet. As a reminder, here’s what’s left to run:
Let’s quickly talk implementation:
This was relatively straightforward to get done. I executed it in Facebook’s Power Editor, as a slightly more flexible and powerful alternative to their Ads Manager (allowing for easy duplication and batch-editing).
(Note: Power Editor mis-attributes some conversions here, so that 48 number you’re seeing isn’t actually accurate.)
Setting up the campaign was straightforward, and budgeting was simple to get done. Here’s the calculation:
Generally, we prefer to focus on website conversions on Facebook when running ads to gather leads. This is because it lets Facebook’s algorithm optimize for getting the most conversions at the lowest cost. That’s what we went with for this audience.
For the ad creative (above), we went with something that’s worked well in the past.
And for audience settings, here’s what we had running:
A full gamut of accelerators and accelerator-relevant interests like Y Combinator, Angel List, CrunchBase, and Techstars — the audience size came out to 1.1m people in the US and UK.
From there, I ran the campaign and let it gather clicks and data.
Another straightforward ad to run — though LinkedIn’s ads platform tends to be extremely unintuitive (can’t edit sponsored content ads after they’re created?!).
The point of this ad is to tap into our startup audience on LinkedIn. Since Ladder works with both venture-backed startups and Fortune 500 enterprises, we targeted startup founders and CEOs with our first LinkedIn test to see if we can reach that audience effectively.
Budgets and CPCs on LinkedIn tend to be a LOT higher than other platforms, so here’s what we went with:
We went in expecting 5 leads at a $106 cost per lead (matching our AdWords cost per lead), so our daily budget was $75.
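The budget math above is simple enough to sketch out. This is just an illustration of the arithmetic described in the text (5 leads at a $106 target CPL over a 7-day flight), not a tool we actually used:

```python
# LinkedIn budget: aim for 5 leads at the AdWords cost-per-lead benchmark,
# spread over a 7-day flight.
target_leads = 5
target_cpl = 106  # matched to our AdWords cost per lead
flight_days = 7

total_budget = target_leads * target_cpl   # $530 for the test
daily_budget = total_budget / flight_days  # ~$75.71, set as $75/day

print(total_budget, daily_budget)
```

Rounding the daily figure down to $75 keeps the test inside the planned total spend.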
The ad we went for was also similar to what we had working elsewhere — straightforward messaging about how Ladder gives businesses an unfair advantage in their growth.
The audience was fairly easy to set up — job title targeting for CEOs and Founders in businesses from 1-50 employees (startup sizing). I could have gone deeper here and narrowed by industry to make sure tech startups were the primary target, but that would have narrowed the audience too far.
And with that, I set budget and CPC and ran the ad.
Straightforward, and we were hoping for at least 5 high quality leads from this test.
This was a copy test that was designed to run on Facebook to gather a high volume of clicks, rather than a high volume of conversions. Basically, we were optimizing higher up the funnel to gauge interest in a copy approach based on what drove more clicks.
This was the copy for the data-driven ad.
And this was the copy for the ROI-driven ad.
Straightforward, right?
We ran it on a 1% lookalike audience based on website and blog visitors with an $18 a day budget for 7 days. This basically let us test how our current audience, or those similar to our current audience, would react to ROI vs Data.
Finally, the boldest test of our month was to try out a new keyword on AdWords — B2B marketing.
It was bold because “B2B marketing” is a far more competitive keyword than anything we were used to bidding on. With advertisers like LinkedIn and Oracle bidding against us, CPCs were guaranteed to run higher than expected.
Besides that, the keyword was also rather broad, so it was risky.
We ran the keyword straight up with all three match types — broad, phrase, and exact.
The ads were based on what had worked in the past on AdWords for us.
We were also spending more on AdWords than any other channel at the time, so a $75 a day budget for 14 days was ideal to properly test the keyword.
Time for the fun part — final performance numbers for January!
First, overall monthly performance:
That’s a 50% test success rate — above the 43% expected for the month! Pretty happy with how things turned out in Month 1, and we learned a lot that we can apply in Month 2.
Let’s drill down into each test:
Playbook SEO is still running because it’s a long-term suite of SEO changes to a lower-traffic portion of our website. I’ll keep you informed on how it does!
Accelerator Interest Audience failed to drive any leads, despite driving ~500 sessions to our homepage.
Marketing Strategy Landing Page drove 1 lead at $935.34 cost per lead. Uhh. Yeah. Nope. Moving on.
It’s unfortunate that both of those failed, because we were excited about the potential of landing pages and accelerator audiences for growing our startup client base. This doesn’t mean we can’t try them again, but we’d need to test different copy and creative approaches to try to reactivate those audiences.
Emoji vs. No Emoji Subject Line didn’t do anything of significance. It seems like the emoji doesn’t add or detract, so we’ll keep it around where it makes sense as a branding play.
Quuu Content Promotion was inconclusive because we could only run 2 credits worth of promotion for the month. Not much traffic was driven, but we’ll keep trying to use the credits.
LinkedIn Startup Founder/CEO Targeting was expensive — but it also drove 2 ridiculously high quality leads. The cost per lead was $270, which might be more than we’re willing to pay, but the lead quality is unbeatable. Further testing needed.
B2B Marketing Keyword was our big win of the month — 4 leads at an $88 cost per lead (20% lower than our average AdWords cost per lead). On top of that, every lead we got from the keyword was highly qualified — a great addition to our AdWords arsenal, especially at an 8% conversion rate.
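For context, the “20% lower” claim lets you back out the account-average cost per lead it’s being compared against. A quick sketch (these are derived figures, not reported ones):

```python
# Back out the average AdWords CPL from the reported B2B-keyword numbers:
# the keyword's $88 CPL was 20% lower than the account average.
b2b_cpl = 88.00
avg_adwords_cpl = b2b_cpl / (1 - 0.20)  # ~$110 average CPL
b2b_spend = 4 * b2b_cpl                 # 4 leads at $88 each

print(avg_adwords_cpl, b2b_spend)
```

So the keyword delivered leads at roughly $88 vs. a ~$110 account average.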
ROI-Driven vs. Data-Driven Ad Copy was a success: the data-driven ad earned a higher CTR and a lower CPC (1008 clicks at a 3.1% CTR vs. 805 clicks at a 2.7% CTR), and the difference in clickthrough rate was statistically significant. It’s valuable baseline information that data-driven copy may perform better — though further testing is needed.
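If you want to verify a result like this yourself, a two-proportion z-test on clickthrough rate is the standard check. The impression counts weren’t reported, so this sketch back-calculates them from clicks and CTR — it’s an illustration of the method, not the exact tool we used:

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Two-proportion z-test comparing two clickthrough rates."""
    p_a = clicks_a / imps_a
    p_b = clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Impressions estimated as clicks / CTR (approximate)
imps_data = round(1008 / 0.031)  # data-driven ad: ~32,516 impressions
imps_roi = round(805 / 0.027)    # ROI-driven ad: ~29,815 impressions

z, p = two_proportion_z(1008, imps_data, 805, imps_roi)
print(f"z = {z:.2f}, p = {p:.4f}")  # z ≈ 2.97: significant at >99% confidence
```

With a z-score near 3, the CTR gap between the two ads is very unlikely to be noise, which matches the “statistically significant” call in the writeup.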
Port AdWords Campaigns to Bing worked! 6 conversions for $749.67 total spend. Breakdown: 5 conversions from Marketing Strategy @ $83.20 CPA; 1 conversion from Marketing Plan @ $85.26 CPA; 0 conversions from other campaigns. Total = $124.95 CPA. Nice! Looks like Marketing Strategy is a winner on Bing.
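The blended CPA in that breakdown is just total spend over total conversions. A quick sanity check of the reported figures:

```python
# Blended CPA across the ported Bing campaigns (figures from the writeup)
total_spend = 749.67
conversions = {"Marketing Strategy": 5, "Marketing Plan": 1}  # other campaigns: 0

total_conversions = sum(conversions.values())
blended_cpa = total_spend / total_conversions  # ~$124.95 per conversion

print(blended_cpa)
```

The blended number is pulled up by the campaigns that spent without converting, which is why Marketing Strategy’s $83.20 standalone CPA is the figure worth acting on.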
Homepage SEO worked because we saw a 53% spike in organic traffic sessions once our meta descriptions were updated to better reflect what we do.
Hero Copy – Current vs. New Meta Description coincided with the Homepage SEO test and actually saw an increase in performance over the original hero copy — a 14% increase in CVR for our main goal (hero form submission) and a 10% increase in our sub-goal (below the fold form submission).
Scroll Box + List Builder Copy saw a bump in newsletter conversions that was significant enough for us to keep it running.
January was a rocky month on the back end. As my first month executing strategy for Ladder, it took some time to re-acquaint myself with all the platforms, budgeting, and so on.
But it went well, with a 50% test success rate, so I’m pretty happy with the end result.
Here’s to an even better February (journal coming soon!)
Sign up for our newsletter to stay up to date with all the latest movements in the field.