Allies

How to Build a Culture of Optimization for Your Online Communications

by Tim McGovern and Nathaniel Ward
June 01, 2014

There’s no question that the Left has dominated conservatives online in recent years.

The best example is the 2012 Obama campaign, which ran circles around Mitt Romney’s digital operation. The Obama campaign signed up millions of Americans for e-mails, coordinated get-out-the-vote efforts online, and raised a whopping $525 million via the web.

Some of this success doubtless results from the campaign’s investments in personnel and infrastructure. For example, Obama’s data and technology team had 300 members, while Romney’s had 120.

Yet while Obama’s campaign hired dozens of experienced marketers, its success had less to do with experience than with a willingness to ignore “expert opinions” and let testing reveal which marketing worked and which didn’t. The team learned that their expert guesses could not predict which e-mail would generate the most gifts or revenue. “We basically found our guts were worthless,” a senior campaign staffer said.

There is a lesson here for conservatives.

How Heritage Adopted a Strategy of Testing

We have made our share of bad marketing choices at Heritage.org. Not all those choices led to bad results, but they were often made for the wrong reasons. Historically, we made decisions about Heritage.org in three ways:

  1. By HIPPO—the highest-paid person’s opinion. We acted based on the gut instincts, experience, or aesthetic preferences of our higher-ups.
  2. By committee—a standing committee of stakeholders held lengthy meetings that produced some decisions and deferred others. Office politics were a recurring factor in those decisions.
  3. By expert. We found someone with real or claimed expertise to make decisions for us or to validate decisions we’d already made. Often their solutions involved buying software for which they received a commission.

Without question, the most painful decisions were those for which we used a combination of approaches, which meant no clear authority for the decision was established.

The death of the old way of making marketing decisions began on a fall day in 2012, when one of our policy analysts appeared on the eighth floor of the foundation’s headquarters in Washington, D.C. That’s normally where you find the President, then Ed Feulner; the Vice President, Phil Truluck; and the members of the development team. This analyst was, as many Heritage employees are, also a Heritage Foundation member, and he wanted to know how he could renew his membership online. He was a bit flustered because he couldn’t figure out where to find the “renew membership” button at Heritage.org.

At first we couldn’t believe that one of our members didn’t realize that you renew your membership by hitting the “donate” button. After all, that’s what you do to become a member in the first place. Our colleague told us that it wasn’t obvious that you renew just by hitting “donate” again. And if it wasn’t obvious to our own colleague, then maybe we were failing to see the problem as our members were seeing it.

So we in the development team began discussing the idea of adding a “renew membership” button that would sit next to the existing “donate” button at the top of Heritage.org. However, the request triggered concerns about the delicate political balance struck in the existing Heritage.org header. Additionally, such a button would clutter the navigation bar, and best practices say not to do that. We prevailed by framing the change as an experiment: either the button would show results, or it would be removed.

In order to assess the question properly—and also with an eye toward setting a precedent for future testing—we made sure our experiment was a true test: We had a hypothesis about how user behavior would be different in the new version (the treatment) from the old version (the control); in order to control for confounding factors, we randomly selected visitors to see either the treatment version or the control version; and we committed to letting the results guide the final decision regardless of other factors.
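For readers who want to see the mechanics, here is a minimal sketch, in Python, of the two ingredients such a test needs: assignment of visitors to control or treatment (here done deterministically by visitor ID), and a simple statistical check that the observed difference is unlikely to be chance. The function names and counts are illustrative only, not Heritage’s actual system.

```python
import hashlib
import math


def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to 'control' or 'treatment' (50/50 split)."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"


def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p under the normal approximation
    return z, p_value


# Illustrative counts: renewals out of visitors shown each version of the header.
z, p = two_proportion_z_test(conv_a=180, n_a=20000,   # control: "donate" button only
                             conv_b=240, n_b=20000)   # treatment: adds "renew membership"
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The difference is unlikely to be chance; keep the winning version.")
else:
    print("Inconclusive; stick with the control or keep collecting data.")
```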

Our hypothesis—that making it easier to renew would generate more renewals, and more timely ones—was confirmed. Our treatment with the renewal button outperformed the control to the tune of $200,000 per year. The “renew membership” button is still part of the top banner at Heritage.org. We gained, however, not just a “renew” button, but also an appreciation of the value of testing our online communications.

Today, many more decisions—from the design of event registration e-mails to the layout of our policy reports—are made based on data rather than our gut instincts or aesthetic preferences.

How You Can Adopt the Testing Strategy

The easiest way to start building a culture of online optimization is to start testing. This framework guides every test Heritage runs:

1. Choose what element of your program you want to improve. For example, you might want to strengthen your e-mail newsletter or your donation page.

2. Identify how you measure success. You don’t run a test just to see what happens. You’re trying to improve something. What is that something?

So if you’re optimizing your newsletter, what’s the goal of the newsletter? To drive someone to your website, perhaps? Or what’s the goal of your donation form? To capture the most gifts or to capture the most revenue?

You will then measure your test results based on this goal. (The short example after this list shows how much that choice can matter.)

3. Develop a hypothesis about how you will improve that measure. This step is critical. With a clear hypothesis—“more links in my newsletter will drive more traffic to the site,” for example, or “less clutter on the donation page will lead to more gifts”—you have a testable proposition. Your test will either confirm or reject your hypothesis, and you can apply that lesson in the future.
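To make the second step concrete, here is a hypothetical illustration—the numbers are invented—of how the same donation-page test can point to different winners depending on whether your goal is the number of gifts or total revenue:

```python
# Hypothetical results for the same donation-page test, measured two different ways.
results = {
    "control":   {"visitors": 10000, "gifts": 300, "revenue": 21000.00},
    "treatment": {"visitors": 10000, "gifts": 340, "revenue": 19500.00},
}

for name, r in results.items():
    gifts_per_visitor = r["gifts"] / r["visitors"]
    revenue_per_visitor = r["revenue"] / r["visitors"]
    print(f"{name:10s}: {gifts_per_visitor:.1%} of visitors gave, "
          f"${revenue_per_visitor:.2f} raised per visitor")

# Here the treatment wins on number of gifts (3.4% vs. 3.0%) but loses on revenue
# ($1.95 vs. $2.10 per visitor). The goal you chose in step 2 decides the winner.
```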

Not every test you run will yield a clear improvement. Most of the tests we run at Heritage are inconclusive. And more than we’d like to admit, the new version we develop has worse results than the status quo. But this is exactly the point: Every time we run a test, we learn something. We learn what works, what makes no difference, and—probably most importantly—what doesn’t work at all.

Testing, of course, is not unique to online marketing—or even to marketing. However, digital marketing offers three unique characteristics that make it easier to avoid the pitfalls of gut-instinct decision-making: faster turnaround times; more granular measurements; and lower marginal costs for additional tests. Conservative organizations should take advantage of these characteristics of the online world by continually testing what they do.

Winning the war of ideas ultimately requires us to win the hearts and minds of the American people. Only with knowledge about what communications are effective and what communications are not can conservatives hope to win the war of ideas. The strategy of testing is what gives us that knowledge.

Resources You Can Use to Get Started with Testing

• In “Inside the Cave” (enga.ge/projects/inside-the-cave/), Patrick Ruffini explains how the Obama campaign built its online operation.

• You can use tools like Google’s (free) Content Experiments or the (paid) Optimizely (optimizely.com) to run A/B tests on your website. Most commercial e-mail programs allow you to run tests out of the box.

• The blog at ConversionXL.com offers more advanced tips and pointers on how to use A/B testing.

Mr. Ward is Associate Director of Online Membership Programs at The Heritage Foundation, and Mr. McGovern is Director of Marketing Technology at The Heritage Foundation.

