Launching a social campaign without a thorough analysis of your target markets, segments, consumers, and past campaigns can lead to subpar results. That’s why it’s critical to use social data to fuel campaign creative and strategy, which will boost the performance of your campaigns. As you run more campaigns and gather more performance data, you can begin to correlate campaign specifics (e.g., content types, timing, duration, target markets) with success.
Over time this type of analysis will illuminate the types of campaigns that are likely to produce the strongest results and highest ROI for your business. Underperforming programs fall into a variety of categories. The most common, visualized by the charts below, are typically referred to within the industry as “flop,” “bad buzz,” and “no spread.”
A campaign that “flops” is one that fails to produce any significant or sustained increase in buzz. A “bad buzz” campaign is one that generates a boost in buzz, but with most of it negative. Finally, a “no spread” campaign is one that produces a significant but brief and unsustained increase in buzz.
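The three failure modes above can be expressed as simple rules over a campaign’s buzz data. The sketch below is purely illustrative: the specific thresholds (a 2x lift, a 50% positive-sentiment split, 20% of days above baseline) are assumptions for demonstration, not industry standards.

```python
# Illustrative sketch: classify a campaign's buzz time series into the
# failure modes described above. All thresholds are assumed for demonstration.

def classify_campaign(baseline, buzz, positive_share):
    """baseline: average pre-campaign daily buzz volume.
    buzz: daily buzz volumes during the campaign.
    positive_share: fraction of campaign mentions with positive sentiment."""
    peak = max(buzz)
    lift = peak / baseline
    # Share of campaign days with buzz meaningfully above the baseline.
    sustained = sum(1 for v in buzz if v > 1.5 * baseline) / len(buzz)

    if lift < 2:               # no significant increase in buzz
        return "flop"
    if positive_share < 0.5:   # buzz is up, but most of it is negative
        return "bad buzz"
    if sustained < 0.2:        # a brief spike that never spreads
        return "no spread"
    return "healthy"

print(classify_campaign(100, [110, 95, 120, 105], 0.8))           # flop
print(classify_campaign(100, [300, 450, 400, 350], 0.3))          # bad buzz
print(classify_campaign(100, [900, 120, 100, 95, 100, 90], 0.8))  # no spread
```

In practice the inputs would come from your social listening platform, and the thresholds should be tuned against your own historical campaign data rather than hard-coded.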
While this is just one example of a successful program, there are several key points that can be applied generally to your campaign measurement methodology. Ideally, the start of a new campaign will quickly generate a sustained (and steep) upward slope in positive buzz volume (as seen here), which should peak somewhere in the range of 4 to 10x your pre-campaign volume baseline. How long it takes to reach this peak depends on many factors, including the channels in which your material appears, the frequency of content placement, its virality, and more.
Once your campaign has reached its peak, there should then be a gradual downslope of sustained positive buzz. This drop-off will ideally level out to a post-campaign baseline that is markedly, if not greatly, higher than your pre-campaign baseline. It’s important to chart the slope of every campaign, regardless of overall success, because over time you will be able to aggregate this information into an actionable body of historical campaign data.
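The curve shape described above can be reduced to a handful of comparable numbers per campaign: the peak multiple (the 4 to 10x range comes from the discussion above), days to peak, the upward slope, and the post-campaign baseline uplift. A minimal sketch, assuming simple daily volume lists as input:

```python
# Illustrative sketch: summarize a campaign's buzz curve against its
# pre-campaign baseline so campaigns can be compared over time.
# Window sizes and field names are assumptions for demonstration.

def summarize_curve(pre, during, post):
    """pre/during/post: daily buzz volumes before, during, and after the campaign."""
    pre_baseline = sum(pre) / len(pre)
    post_baseline = sum(post) / len(post)
    peak = max(during)
    peak_day = during.index(peak) + 1  # 1-indexed day of the peak
    return {
        "peak_multiple": round(peak / pre_baseline, 1),    # ideally 4-10x
        "days_to_peak": peak_day,
        "upslope_per_day": round((peak - pre_baseline) / peak_day, 1),
        "baseline_uplift": round(post_baseline / pre_baseline, 2),  # > 1 is good
    }

print(summarize_curve(
    pre=[100, 95, 105],
    during=[180, 320, 560, 720, 640, 480, 350],
    post=[160, 150, 155],
))
# → {'peak_multiple': 7.2, 'days_to_peak': 4,
#    'upslope_per_day': 155.0, 'baseline_uplift': 1.55}
```

Storing one such summary per campaign is what turns individual charts into the aggregated, historical body of data the text describes.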
This data can then be called upon with each successive program to fuel predictive insights that help you optimize campaign attributes both pre- and post-launch. For instance, if you have found that high-volume mixed-media campaigns tend to produce faster, higher peaks, that may be a program type you wish to rely upon more heavily in the future.
Moreover, by mapping each campaign you will be able to compare it to previous ones and develop useful predictions based on the incoming data. If the data suggests suboptimal performance vis-à-vis historical campaigns, corrective steps can be taken to boost results. Adjusting campaign messaging, content, target markets and geographies, and budget are a few of the steps brands take to optimize campaigns that seem to flag in the early stages. Once such steps have been taken, it’s important to gather data on the impact of those corrective measures on campaign performance.
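One simple way to operationalize the historical comparison is to check a live campaign’s first few days against the median trajectory of past campaigns. The sketch below assumes baseline-normalized daily volumes and an arbitrary 0.75 underperformance threshold; both are illustrative choices, not a standard method.

```python
# Illustrative sketch: flag a live campaign as a likely underperformer by
# comparing its early days against the median of historical campaigns.
# The 0.75 threshold and the normalization are assumptions for demonstration.

def flag_underperformance(live, history, threshold=0.75):
    """live: baseline-normalized buzz for the campaign's first N days.
    history: baseline-normalized day-by-day series from past campaigns."""
    n = len(live)
    # Median historical buzz on each of the first n days.
    medians = []
    for day in range(n):
        values = sorted(h[day] for h in history)
        medians.append(values[len(values) // 2])
    # Average ratio of live buzz to the historical median over the window.
    ratio = sum(l / m for l, m in zip(live, medians)) / n
    return ratio < threshold, round(ratio, 2)

history = [[1.5, 2.8, 4.2], [1.2, 2.2, 3.6], [1.8, 3.0, 5.0]]
print(flag_underperformance([1.1, 1.6, 2.0], history))  # → (True, 0.59)
```

A campaign flagged this way is a candidate for the corrective steps described above, ideally before its projected peak has passed.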
Did adjustments in messaging and content boost the campaign peak over what was originally projected? Did an injection of extra budget midway through the program help to extend a period of post-peak buzz? Gathering and analyzing these types of data will help you gain a deeper understanding of which corrective actions are promoting higher campaign performance and which are not as effective.
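Answering questions like these requires recording, for each corrective action, what was projected before the change and what was observed after it. A minimal sketch of that bookkeeping, with hypothetical field names:

```python
# Illustrative sketch: log the effect of a mid-flight corrective action by
# comparing the peak projected before the change with the peak observed after.
# Field names and the ">1.0 means it helped" reading are assumptions.

def corrective_impact(projected_peak, observed_peak, action):
    """Peaks expressed as multiples of the pre-campaign baseline."""
    lift = observed_peak / projected_peak
    return {
        "action": action,
        "peak_lift": round(lift, 2),  # > 1.0: the correction beat the projection
        "effective": lift > 1.0,
    }

print(corrective_impact(4.0, 5.2, "refreshed creative + extra budget"))
# → {'action': 'refreshed creative + extra budget',
#    'peak_lift': 1.3, 'effective': True}
```

Accumulating these records across campaigns is what lets you separate the corrective actions that reliably move the needle from those that do not.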