Matthew Lord is the Chief Strategy Officer for Adperio, a mobile acquisition company based in Denver.
Good marketing and acquisition uses data to make decisions, deepen learning, set goals, optimize campaigns, and maximize ROI. Mobile app acquisition is an exciting segment of advertising partly because of the emphasis on, and richness of, the data sets available. That data is the foundation for new levels of collaboration between advertisers, agencies, networks, and publishers. But without collaboration, the data is misunderstood, misused, and often leads to bad decisions that run counter to the advertiser’s interests.
Case Study #1: Low Conversion Rates
A retail app, working through an agency, expressed concerns about the inefficiency of our media buys. We asked for clarification on which inefficiency they were referring to, since our effective cost per goal (eCPG) for post-install events was coming in at $4.00 to $6.00, a KPI the agency had previously told us “outperformed other partners.”
Before we got an answer, the campaigns stopped performing: conversion rates plummeted suddenly, and I was Skyped frantically by a top publisher who threatened to drop the campaign. The entire pub team heard similar feedback.
We found out at the end of the day that the concern was related to low conversion rates. The client had recently launched a campaign for the Android version of their app, and its conversion rate was much lower than that of its iOS version. While the iOS launch had gone smoothly, we were struggling with the Android app. We found sources that were excited to run it, some of whom had great success with the iOS version, but the Android app wasn’t converting and wasn’t backing out for our pubs. The client (or the agency; I could never discern which) felt this indicated that Android was being poorly targeted and was subject to an inefficient media spend.
First, since the campaign was being run on a performance basis, an “inefficient media spend” should have been of little concern: the client was only paying for performance. Inefficiencies in impressions and clicks really matter when that is the model on which the media is bought, i.e. on a CPM or CPC basis. That was not the case here. In performance marketing, click concerns are usually about cost or about misattribution and fraud. In the initial launch of this Android campaign, we were not driving enough click volume or conversions for either to be a concern. This may also be an example of a client and/or agency unable to think beyond old branding models that aren’t relevant to performance marketing.
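A quick sketch of why the pricing model matters here. All numbers are hypothetical and purely illustrative; the point is that under a CPM or CPC model the advertiser pays for the wasted impressions and clicks, while under a performance (CPI) model a low conversion rate costs them nothing extra:

```python
# Hypothetical funnel and illustrative prices (not from the article).
impressions, clicks, installs = 1_000_000, 8_000, 60
cpm, cpc, cpi = 2.00, 0.40, 2.50  # assumed $ prices per 1,000 impressions / click / install

cost_cpm = impressions / 1000 * cpm  # pays for every impression, converting or not
cost_cpc = clicks * cpc              # pays for every click, including accidental ones
cost_cpi = installs * cpi            # pays only when an install actually happens

print(cost_cpm, cost_cpc, cost_cpi)  # 2000.0 3200.0 150.0
```

With the same funnel, the advertiser's exposure to "inefficient" traffic is entirely a function of which line they are buying on.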
What was most damaging to the campaign was the decision, made without consulting partners, to “correct” the inefficiency. Just after noon that day, someone cut the attribution window for the campaign from a standard window down to one hour. This of course not only failed to fix the low conversion rate (the supposed inefficiency), it made it much worse. In addition, the change was made across app stores: a concern about Android data was now impacting distribution of the iOS version of the app, which had been successfully driving installs and purchases for months.
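A minimal sketch of why shortening the window backfires. The click-to-install delays below are hypothetical; real delays vary widely by source, but many installs land hours after the click, so a one-hour window simply stops counting them:

```python
from datetime import timedelta

# Hypothetical click-to-install delays, in hours, for ten real installs.
install_delays_hours = [0.2, 0.5, 0.9, 1.5, 3.0, 6.0, 12.0, 18.0, 22.0, 30.0]

def attributed(delays_hours, window):
    """Count installs whose click-to-install delay falls inside the window."""
    return sum(1 for d in delays_hours if timedelta(hours=d) <= window)

standard = attributed(install_delays_hours, timedelta(hours=24))  # 9 attributed
one_hour = attributed(install_delays_hours, timedelta(hours=1))   # 3 attributed
print(standard, one_hour)
```

The same traffic drives the same installs; the shorter window just attributes fewer of them, so the measured conversion rate drops further.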
The publisher who was Skyping me in a panic? She did drop the campaign. She had been their top publisher for the iOS version almost since the launch of the campaign. (She wasn’t running the Android campaign, as she specializes in iOS inventory.) But a skewed view of a single data point, coupled with a misguided idea of how to fix the concerns, brought down a major app’s acquisition across channels and platforms.
The crisis was short-lived, but up-front communication across the ecosystem, from advertiser to publisher and everyone in between, would have benefited the acquisition program in this case. If we had understood the client’s concerns, we could have explained the differences in iOS and Android distribution sources that likely accounted for some of the gap in conversion rates. We could have also explained that their Android app was simply underperforming their iOS app, even in cases where the two campaigns were running on the same mobile web sources. We could have worked together to solve the problem, rather than use good data points to jump to bad conclusions and worse decisions.
(My frantic publisher did relaunch, but I will tell you that her confidence in the program is shaken. I have had more than one exchange where she sees a drop in conversion rate from natural variance and fears it’s happening all over again.)
In addition to not comparing Android and iOS conversion rates directly, here are some other facts about conversion rates to consider when making informed decisions about the data you are seeing:
- What are standard conversion rates for display? While conversion rates differ across sources and conversion points, a benchmark is useful: the Google Display Network’s average conversion rate is 0.89%.
- Accidental clicks are a problem in mobile. Different ad units increase or decrease the probability and percentage of accidental clicks. More Accidental Clicks = Lower Conversion Rate.
- What are the concerns behind the low conversion rate? Efficiency, click costs, and attribution and/or fraud issues may each require different responses.
- How widespread is the issue? In the example above, the concerns were limited to a new Android campaign, but universal steps were taken that did damage across the program. Can the issue be addressed at the campaign, source, or sub-source level?
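The last two points can be combined into a simple per-source check. The counts below are hypothetical (the source names are invented for illustration); the 0.89% benchmark is the Google Display Network average cited above. The idea is to flag underperformers individually rather than reacting program-wide:

```python
# Hypothetical per-source click/install counts; source names are invented.
BENCHMARK = 0.0089  # Google Display Network average conversion rate

sources = {
    "pub_a_ios":     {"clicks": 12_000, "installs": 150},
    "pub_b_android": {"clicks": 20_000, "installs": 40},
    "pub_c_android": {"clicks": 8_000,  "installs": 90},
}

# Flag sources converting at less than half the display benchmark, so the
# problem can be attacked per source instead of across the whole program.
flagged = [
    name
    for name, s in sources.items()
    if s["installs"] / s["clicks"] < BENCHMARK / 2
]
print(flagged)  # ['pub_b_android']
```

Here only one source would warrant a closer look; pausing or reconfiguring everything, as in the case study, would punish the sources that are converting fine.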
Case Study #2: Terrible Account Management
I saw an email from a longtime client, enraged that the eCPG on their campaign had skyrocketed overnight. (Again, eCPG is the term we use to measure the effective cost of a post-install goal, such as a registration, purchase, or anything else someone might do in the app to indicate that they are a high-value, engaged user. Establishing a value for this event before we ever launch a campaign ensures our Account Management team can proactively optimize on the client’s behalf.) The truth is, this spike in the eCPG was concerning. A campaign that had been running successfully for months was suddenly in trouble. We were scrambling to figure out what was going on. I hadn’t even had my first cup of coffee.
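The metric itself is simple division. A minimal sketch, with hypothetical spend and event counts, of how the same daily budget produces a “skyrocketing” eCPG when goal events collapse:

```python
def ecpg(spend, goal_events):
    """Effective cost per post-install goal: total spend / goal events."""
    return float("inf") if goal_events == 0 else spend / goal_events

# Hypothetical day: $2,400 in spend against 500 goal events -> $4.80,
# inside the $4.00-$6.00 range from the first case study.
normal_day = ecpg(2400.0, 500)  # 4.8
# Same spend, but goal events collapse (say, the app crashes mid-funnel):
bad_day = ecpg(2400.0, 50)      # 48.0
print(normal_day, bad_day)
```

Note that nothing about the media buy has to change for eCPG to spike; anything that blocks the post-install event does it on its own.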
It wouldn’t be a work week without a fire drill. As with any good problem solving, the team worked down the list of the most obvious causes, eliminating them one by one. No new publishers had been added to the campaign. No new inventory or creative units. We reached out to publishers to see if they had any insight (a resource too often neglected), but they were as much in the dark as we were. Campaign saturation was suggested as a possible cause, but that didn’t make sense: saturation would affect installs, not the post-install events of people who had already responded to the advertising.
I started my morning frustrated at the accusation of “terrible account management,” but grew increasingly anxious as we rolled into mid-morning and still couldn’t pin down the culprit. The account manager found a clean phone to test the campaign again. The install came through, but she had trouble when she tried to complete the post-install event: the app crashed. When she looked in the app store, she saw that the most recent version of the app, released to the public just the night before, had 1.5 stars.
Digging in further, we found review after angry review about the buggy new version. We took screenshots of the most informative feedback.
The account manager called the client and then summarized the findings in an email. The update to their app contained a bug that was preventing people from doing what they wanted most: engaging with the app. We forwarded a summary of our testing experience, as well as the reviews, hoping the information would help them fix the troublesome release. We recommended they pause all acquisition activity until the problem was resolved.
Data often leads us to jump to conclusions, and the acquisition manager for the app (who really is a good guy) was following a similar process of elimination to solve the problem he was confronted with as soon as he fired up his computer that morning. His campaign metrics were suddenly way off, on a sizable acquisition budget for which he was accountable. But in problem solving, in getting behind the data to the truth, it’s important, first, to find quick tests that eliminate incorrect assumptions and, second, to think creatively enough to iterate toward other possible (and possibly overlapping) conclusions.
What I like about this example is that none of us considered an app update as a possibility until the account manager, testing the app, discovered what was really behind the data. The acquisition manager didn’t consider the possibility, either. I imagine Product and Engineering in that company (and no doubt in many others) did not communicate well enough with their Marketing arm.
The next time I’m in San Francisco, I will have a drink with the aforementioned acquisition manager and make a toast to terrible account management, lest he doubt us again.
The post App Marketing Case Studies: Good Data, Bad Conclusions, Worse Decisions (Part 1 of 2) appeared first on mobyaffiliates.