Revulytics Blog

Not reaching revenue targets? How to fix conversions and renewal rates.

July 5, 2013


You’re a Product Manager in charge of a well-positioned, technically sound software solution, but revenue growth from your product is not reaching its targets. As a PM you wonder whether (and what) you are doing wrong, and to get a handle on the situation you start looking into the possible causes. Your engineering, sales and marketing departments have been around for a while, so their performance is proven. So how do you go about identifying the real cause for missing your revenue targets?

Sales will argue that conversion rates are unsatisfactory because the product is not meeting customers’ expectations, or because marketing is not generating good-quality leads. Marketing, on the other hand, will argue that lead quality is good, but that sales prefer to follow their own leads or are not putting enough effort into converting marketing leads and sharing the credit.

As a PM you cannot help but wonder if the problem lies in the product itself, not necessarily in terms of feature quality, but possibly in terms of usability, presentation and minor or complementary features that are expected by your customers.

Let’s drill down into each of these arguments…

Lack of conversions: The product evaluation mystery

When analyzing new business revenue, product managers usually look at the volume of leads being generated, the lead qualification rate and the lead conversion rate. In this post, we won’t focus on scenarios where the problem is obvious, such as not enough leads coming from marketing or not enough qualified leads starting an evaluation. Instead, we will focus on a more complex and rather common scenario: you do get enough leads evaluating your product, but the number of conversions is well below your company’s expectations. In this scenario, it is very difficult to determine the cause of the problem using traditional tools and techniques such as CRM systems, surveys and download-log statistics.
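To make these three metrics concrete, here is a small illustrative sketch; the figures are entirely hypothetical, so substitute your own funnel numbers:

```python
# Hypothetical funnel figures for illustration only.
leads_generated = 1000        # raw leads from marketing
leads_qualified = 400         # leads that passed qualification
evaluations_started = 300     # qualified leads that began an evaluation
deals_won = 15                # evaluations that converted to revenue

qualification_rate = leads_qualified / leads_generated    # 0.40
evaluation_rate = evaluations_started / leads_qualified   # 0.75
conversion_rate = deals_won / evaluations_started         # 0.05

print(f"Qualification rate: {qualification_rate:.0%}")
print(f"Evaluation rate:    {evaluation_rate:.0%}")
print(f"Trial conversion:   {conversion_rate:.0%}")
```

With numbers like these, qualification and evaluation look healthy, but only 5% of evaluations convert; that is the gap the rest of this post is about.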

Sales & Marketing metrics example

To understand why evaluating leads are not converting, you must answer three key questions:

  1. What is your product's user experience like?
  2. What are users really looking for in your product? What are their expectations?
  3. Is your product able to meet these expectations?

Knowing these answers is a critical decision-making factor when determining which areas need improvement in order to push new business conversions to the desired level.

Key Point #1: User experience - Did they actually evaluate?

A successful evaluation is a prerequisite of a successful product. To deliver a good evaluation experience, your product needs to be easy to work with and to showcase its value up front. Of course, both factors are subjective and their interpretation varies from one lead to another. So how do you know whether the product is easy to install, configure and run for the particular leads trying it out for the first time?

Evaluating leads very rarely offer any form of feedback, positive or negative. You may know who your leads are, but you do not actually know anything about their experience with your product, the features they tried, what drove them to evaluate it in the first place, or why they bought or dumped it.

Missing install and evaluation statistics

Download logs from your web server or your CRM system are not enough to give a clear picture of the evaluation process. They will only tell you how many evaluating leads you have and whether they downloaded the product, not what happened after the download. Surveys may be good indicators, but you cannot depend on them exclusively for decision making, as their results are often subjective or statistically unsound.

There are many cases where people download software but never get to install it, or if they do get to install it, they never use it, or just run it once and have a look at the user interface before throwing it away. There are also cases where people try out your product in detail but decide not to buy it for other reasons. Below are some typical cases that could expose what may be wrong with your product:

  • Was the evaluating user able to install and configure the product in a satisfactory manner?
    If not, you most probably have a problem with your installation kit, such as too many prerequisites, no automatic means of installing those prerequisites, or non-intuitive dialogs asking for information that is not immediately available to the user. You should also look at the difficulty of your product’s configuration process. Is it wizard-assisted? Can it be automated? Does it fit and scale with the customer’s environment?
  • Was the evaluating user able to run the product and discover/test its main functionality?
    If not, you most probably have a problem demonstrating the product's main value within an acceptable time frame. Evaluating users will NOT spend a lot of time discovering and learning how to use your product's features. If the product does not deliver fast, they will simply move to a competitor's product, so you need to ensure yours delivers its main functionality in the fastest and easiest way possible.
  • How many times and for how long did they run the product? Are you engaging your users enough to keep them coming back for more?
    Assuming the product delivers what it is supposed to, if it is not run more than once, or users run it for just a couple of minutes and then throw it away, you most likely have a significant user experience problem. People are not excited while using your product, and you are not offering enough to keep them engaged with your application.
  • Did your product throw any errors during the evaluation?
    However friendly they might look, errors thrown during evaluation will leave a bad taste in your users’ mouths. You need to look at such errors and see whether they are caused by the evaluator’s installation environment. In that case, examine what is different in that environment and ask yourself whether this is an isolated case or whether others are likely to experience the same issue. The more common the scenario, the greater the need to handle it cleanly.

Of course, we could analyze each of these points further and drill down into specific scenarios, but in the end it all comes down to the same question: where can you get all this information and answer all these questions? The only way to get objective answers is to be able to monitor and analyze how the product is being used, from download all the way to conversion or uninstall, based on built-in runtime intelligence and product telemetry.
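As a rough sketch of what such an analysis could look like, suppose your product emits simple lifecycle events (the event names and data shape here are hypothetical, not any particular vendor’s format). You could then reduce raw telemetry into an evaluation funnel:

```python
from collections import defaultdict

# Hypothetical raw telemetry: (user_id, event) pairs collected from evaluators.
events = [
    ("u1", "download"), ("u1", "install"), ("u1", "first_run"),
    ("u2", "download"), ("u2", "install"),
    ("u3", "download"),
    ("u1", "feature_used"), ("u1", "purchase"),
]

FUNNEL = ["download", "install", "first_run", "feature_used", "purchase"]

def funnel_counts(events):
    """Count distinct users who reached each funnel stage."""
    users_by_stage = defaultdict(set)
    for user, event in events:
        if event in FUNNEL:
            users_by_stage[event].add(user)
    return {stage: len(users_by_stage[stage]) for stage in FUNNEL}

counts = funnel_counts(events)
# e.g. {'download': 3, 'install': 2, 'first_run': 1, 'feature_used': 1, 'purchase': 1}
```

Even a toy breakdown like this immediately shows *where* evaluators drop off — here, one of three downloads never installed, and one install never produced a first run.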

Key Point #2: User expectations - What features do they use and how?

Monitoring which features are used most can give insight into customers’ expectations of your product. Knowing why customers need your software and how they use it is very important in determining the way forward for product development, as well as in targeting your marketing campaigns so that they resonate with your audience. If there is a product feature that your evaluating leads really like, you want to know about it so that you can invest resources in that direction. If, on the other hand, you have a feature that evaluators never use, you want to know whether it is too hidden to be discovered or whether it is time to shift your priorities away from a ‘useless’ feature.

Feature usage tick box

Think how valuable it would be if you had a way to aggregate this information from all your evaluating leads and see the big picture through their eyes! You would get an objective, unbiased view of what people expect from your product. Knowing the real facts may come as a surprise and may alter your beliefs about your own product, but it will surely help you improve things in a way that no phone call or survey can.

However, to get such information from your evaluating leads in a scalable manner, you need an automated approach: some form of on-premises software analytics module that can record granular telemetry on feature usage and send it home.
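A minimal sketch of such a module might look like the following. All names here are hypothetical — this is not a real vendor SDK — but it shows the basic shape: count feature usage in-process, then batch it into a payload for a background upload:

```python
import json
import time
from collections import Counter

class FeatureTracker:
    """Hypothetical in-app tracker: counts feature usage and batches it for upload."""

    def __init__(self, app_version):
        self.app_version = app_version
        self.counts = Counter()
        self.session_start = time.time()

    def track(self, feature_name):
        """Call this wherever a feature is exercised in the UI or API."""
        self.counts[feature_name] += 1

    def flush(self):
        """Serialize the batch that would be sent to your analytics endpoint."""
        payload = {
            "app_version": self.app_version,
            "session_seconds": round(time.time() - self.session_start),
            "features": dict(self.counts),
        }
        self.counts.clear()
        return json.dumps(payload)

tracker = FeatureTracker("2.1.0")
tracker.track("export_pdf")
tracker.track("export_pdf")
tracker.track("spell_check")
batch = tracker.flush()  # JSON ready for a background upload
```

In a real deployment you would also want user consent, an opt-out, and resilient delivery; the point here is only that instrumenting feature usage is a small amount of code relative to the insight it yields.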

Key Point #3: Ability to meet user expectations – Did the product deliver?

Most product managers know the extent to which their product delivers as advertised, but knowing whether your product delivers what your customers expect is a whole different story! Customer satisfaction is a key performance indicator for the business, but getting accurate KPI measurements is difficult. The most popular ways to measure customer satisfaction rely on questionnaires and surveys; however, the American Customer Satisfaction Index (2012) found that response rates for paper-based surveys were around 10%, while response rates for e-surveys (web, WAP and e-mail) averaged between 5% and 15%. These numbers are too low to properly assess customer satisfaction and may not be statistically sound.

Product Expectations and Capabilities

To get more data alongside your customer satisfaction surveys, it is strongly advisable to invest in a software analytics solution that provides product runtime intelligence: the occurrence of particularly important events within your product, feature usage trends and statistics, and the errors being generated. This will help you determine which parts of your application are being used and whether the product actually works as designed for your users. By following runtime session execution trends, you can also gauge how vital your product is to your customers and thus how loyal they are to your software. User loyalty is a key indicator affecting renewals and up-sell/cross-sell opportunities.

Lack of renewals: Why do they die?

When it comes to a shortage of renewals, as a Product Manager you have little choice but to analyze your own work: product positioning, product features, pricing and competition. You are dealing with people who invested in your product once but no longer find value in renewing their license agreements. As a bare minimum, you need to dig up answers to these questions:

  • Are your customers price sensitive to your renewal license cost?
  • Is your product no longer performing as expected or no longer supporting new technologies in use by customers?
  • Has your product roadmap stopped short at satisfying your customers' changing needs?
  • Is the space being dominated by tougher competition?

The key things you need to know in order to arrive at a conclusion are:

  • Who is actively using your product?
    • Are your active users split into multiple user groups, each with their own set of needs and expectations?
    • Do you know what makes a loyal user versus a user who will dump your product before the first renewal?
    • Are you focusing your roadmap on the user group with the best ROI, or trying to kill too many birds with one stone?
    • Are you marketing to every user group or a totally different user group?
  • How is your product actually being used?
    • Are they using it for purposes for which it was not intended?
    • Does the product design and usage flow match how it is used in the real-world, or does it only fit the hypothetical scenarios you drew up on a whiteboard during an Engineering meeting?
  • What areas do your users (not you) believe need improvement?
    • Have user priorities changed?
    • Is it time to shift focus to areas previously regarded as low priority?

The information you already have at hand, such as your CRM solution or financial data, will, again, not answer any of the questions above. It will tell you who has purchased your product, but not whether they are actually using it, how they are using it, or which features they need to make their lives easier. All of these details have a direct impact on your renewal rate. To get insightful, actionable information like the above, you need to enable the product to collect telemetry on key aspects of its availability, functionality and usage, and then aggregate the data from your customer base to make decisions on your product’s future roadmap.
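As a sketch of what that aggregation might look like, imagine joining the CRM’s license records against usage telemetry (all account names and thresholds below are made up for illustration):

```python
# Hypothetical data: the CRM tells you who bought; telemetry tells you who uses.
licensed_accounts = {"acme", "beta", "gamma"}
runs_last_90_days = {"acme": 120, "beta": 2}  # gamma never phoned home

def renewal_risk(account, runs, min_runs=10):
    """Flag accounts whose usage suggests they may not renew."""
    n = runs.get(account, 0)
    if n == 0:
        return "silent"    # licensed but apparently not using the product
    return "healthy" if n >= min_runs else "at-risk"

report = {a: renewal_risk(a, runs_last_90_days) for a in sorted(licensed_accounts)}
# {'acme': 'healthy', 'beta': 'at-risk', 'gamma': 'silent'}
```

The “silent” and “at-risk” accounts are exactly the ones your CRM alone would never surface — and the ones worth a proactive call long before the renewal date.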


Ignore what YOU believe about your product (at least temporarily) and let the product speak for itself. Get to know your customers’ expectations and adjust your roadmap according to what your target audience really wants you to deliver. This will increase revenues without costly trial-and-error investments in R&D, marketing or sales.


Get Started with Usage Analytics

Register a free account and start touring analytics immediately. Then, simply integrate the SDK into your app to start your free trial. Start making data-driven decisions.


Post written by Keith Fenech

Keith is Revulytics’ VP, Software Analytics and was the co-founder and CEO of Trackerbird Software Analytics before the company was acquired by Revulytics in 2016. Following the acquisition, Keith joined the Revulytics team and is now responsible for the strategic direction and growth of the Usage Analytics business within the company. Prior to founding Trackerbird, Keith held senior product roles at GFI Software where he was responsible for the product roadmap and revenue growth for various security products in the company's portfolio. Keith also brings with him 10 years of IT consultancy experience in the SMB space. Keith has a Masters in Computer Science from the University of Malta, specializing in high performance computing.