Revulytics sponsors a series of Product Management Today webinars featuring innovative ideas from top software product management thought leaders. In these blog posts, we ask the presenters to share their insights - we encourage you to watch the full on-demand webinars for even more details.
In this webinar, leading product development consultant, coach, and trainer Sari Harrison shows how to go beyond conventional metrics to understand whether you’re delivering value to users by helping them make their lives better. Drawing on 20+ years of product management and R&D experience at Apple and Microsoft, Harrison introduces new metrics for doing just that, and helps you begin applying them in your software business.
My inspiration was the growing conversation about the negative impact technology is having on our lives - tech-related issues like digital addiction, anxiety and stress, fake news and the breakdown of truth, societal polarization, political manipulation, and superficiality. We all know these things existed before technology. The question is: how is technology contributing?
From being in the tech community for years, I know we’re not doing this on purpose. I think Mark Zuckerberg genuinely wants Facebook to increase meaningful interactions. But it’s not enough just for us to not intend these things to happen. I think we’re responsible for our products’ impact. And we are all really smart people: innovators at heart, who like to solve problems. So if we do take responsibility, we’re going to fix it, right?
I’ve been thinking a lot about why this is happening. I’m here to share my hypotheses, as well as some tools I’ve been using for many years that I think will help your business and our society.
My main message is to encourage a shift in the metrics conversations we have.
I talk with product managers around the world, and it’s really common to spend most of our time talking about what are sometimes called “vanity metrics.” You may have heard of AARRR – acquisition, activation, retention, referral, revenue.
There’s a lot of value in traditional metrics like these. But I propose we add impact metrics: measurements of the value we hope to provide our users. Things like: how are we trying to make them feel? Do we want them to be happier, calmer, more productive, have better experiences, more free time? What do we want for them?
Most of us are engineers at heart, so our logical, rational minds love things that are concrete and measurable, and it tends to be tougher to measure impact. So we assume that if vanity metrics are going in a positive direction, impact metrics are, too. But that’s not always the case. Often, we can move vanity metrics quite easily without increasing value to the customer.
More practically, though, I think adding impact metrics will help you achieve your vanity goals. I talk to a lot of product people who really struggle with what to do next. I think they’re too focused on trailing indicators, and vanity metrics are trailing indicators. You have to add value to the customer before you extract value through things like monthly active users or referrals.
When you’re only talking about trailing indicators, you end up with a gap that’s hard to bridge. You have your vision on one side, you hopefully have a strong value proposition, maybe you have a strategy. And on the other side of the stack, you have revenue, usage, KPIs, OKRs, and that gap isn’t being bridged.
This doesn’t just make it hard to measure whether you’re adding value. It also makes it harder to decide what to do. To inspire people. To make tough calls.
So today we’re going to close the gap, and we’re going to do that with impact metrics.
When I started at Apple in 1992, I had no idea how people used my products or how much money they were making. Maybe the finance guy would tell us once a year, but being data-driven really wasn’t a thing when I first became a product manager. So I had to figure out other ways to inspire people to come up with ideas, and I’ll be discussing some of the techniques I’ve been using.
Here’s what it looks like to use impact metrics to make decisions, and how you might make different decisions as a result. I was head of the product team for Apple Maps for four years, and our vision was probably fairly obvious: we wanted Apple ecosystem customers to be able to seamlessly explore and navigate the world. You could probably replace the words ‘Apple ecosystem’ with ‘everybody’ and you’d have Google Maps’ vision, but our strategy was where we differentiated.
If your strategy is just ‘reach this number of users or this amount of money,’ that’s not a strategy, it’s just business goals. A strategy should differentiate you. And much of our Apple Maps strategy – make it beautiful, with unmatched attention to detail; make it safe and relaxing; support the Apple ecosystem and be consistent with the Apple brand – consisted of things you probably wouldn’t see in Google Maps’ strategy.
We already had a lot of metrics, including every imaginable version of usage. I knew all there was to know about the quality and performance of our products – how long it took to fix a user-reported problem, how successful our searches were. But most beautiful, safest, glanceable, community friendly, most trusted – things like that weren’t being measured.
Often, when I told people I worked on Apple Maps, they told me: you should aggressively reroute around traffic like Waze does. We knew Waze was getting increasingly popular. If we’d only been looking at typical metrics like usage, that might have led us to do the same thing. But our answer was no.
For one thing, aggressive rerouting around traffic isn’t community friendly. You’ve probably heard of communities ganging up against Waze, saying: stop routing people through our neighborhoods. It’s also not the safest and most relaxing experience, because aggressive routing sends people to places they don’t know, so they have to pay more attention to their phones while they’re driving. For a company monetized through advertising, the more you’re looking at your phone, the more ad dollars it earns. But because we weren’t monetized through advertising, and because we had safety as a key strategic goal, we considered that the less you’re tempted to look at the screen, the better. Complex routes, therefore, were not strategically aligned.
Another example is this iOS widget screen.
Originally, if you swiped left on the home screen of your iPhone, you saw both of these widgets: Maps Destinations and Maps Nearby. If you were really focused on engagement and asked, should we do this Maps Destinations widget, the answer would be no. But from our standpoint, the whole point was to be able to swipe on your phone and immediately see where you’re going, how to get there, and how long it’ll take. You don’t need to launch Maps, and that’s what success looks like. Given our strategy of supporting the Apple ecosystem and designing for the least effort for the user, the answer was: yes.
We also originally shipped this with the Maps Nearby widget turned on. That dramatically increased the number of Apple Maps launches, because you’d hit this Dinner button and end up doing a search in Maps to see nearby restaurants. But when users launched from this widget, the most likely next thing they did was exit out of Maps. That search wasn’t what they expected or wanted. This didn’t improve over time. So we turned it off because we didn’t want people to have this negative reaction, even though it was significantly increasing launches.
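The pattern we saw – a launch followed almost immediately by an exit – is straightforward to quantify. Here’s a minimal sketch of that kind of measurement; the event names and schema are hypothetical illustrations, not Apple’s actual telemetry:

```python
from datetime import datetime, timedelta

def widget_bounce_rate(events, max_dwell_seconds=10):
    """Share of widget-driven launches that are abandoned almost immediately.

    `events` is a time-sorted list of (timestamp, event_name) tuples for a
    user's sessions. The event names ("launch_from_widget", "app_exit")
    are illustrative placeholders.
    """
    launches = bounces = 0
    for i, (ts, name) in enumerate(events):
        if name != "launch_from_widget":
            continue
        launches += 1
        # A "bounce" is an exit within max_dwell_seconds of the launch.
        for later_ts, later_name in events[i + 1:]:
            if later_name == "app_exit":
                if later_ts - ts <= timedelta(seconds=max_dwell_seconds):
                    bounces += 1
                break
    return bounces / launches if launches else 0.0
```

A rising launch count paired with a high bounce rate is exactly the signal described here: the vanity metric looks great while the impact metric says users didn’t get what they wanted.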
Let’s also talk about what each type of metric applies to and what we want to use them for – the conversations we can have around them.
Vanity metrics come in familiar buckets. The first is usage: monthly active users, time spent, launches, page views, churn, abandonment of workflows. The second is financial: things like revenue or subscriptions. Again, these are trailing indicators. They’re measurable, so they’re very tempting.
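For concreteness, here’s how two of these usage metrics might be computed from a raw event stream. This is a minimal sketch assuming a simple (user_id, timestamp) event schema, not any particular analytics product:

```python
from datetime import datetime

def monthly_active_users(events, year, month):
    """Count distinct users with at least one event in the given month.

    `events` is a list of (user_id, timestamp) pairs - a stand-in for a
    real analytics event stream."""
    return len({uid for uid, ts in events if (ts.year, ts.month) == (year, month)})

def churn_rate(events, year, month, next_year, next_month):
    """Share of one month's active users who are absent the following month."""
    active = {uid for uid, ts in events if (ts.year, ts.month) == (year, month)}
    still_active = {uid for uid, ts in events
                    if (ts.year, ts.month) == (next_year, next_month)}
    if not active:
        return 0.0
    return len(active - still_active) / len(active)
```

Both are trailing indicators in exactly the sense described above: by the time churn moves, the value (or lack of it) was delivered months earlier.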
But product is only one factor in usage, so focusing primarily on usage isn’t necessarily the best approach. If you have a big uptick in monthly active users, maybe it’s because you did a big marketing campaign. If you have a big downturn, maybe a cheaper competitor came on the scene. So, too, revenue can also be dramatically impacted by price, and number of subscriptions by what you decide to include in the free vs. paid product.
Then, there are quality metrics: things like performance, bug counts, reliability. I love these. They’re leading indicators: if you have poor quality, when you release you might have really strong usage and financial results, but eventually it’ll catch up to you and impact those vanity metrics negatively. Quality is also measurable, though it takes more time, and product is the primary factor here.
Now, finally, we come to impact metrics: the metrics that fill the gap between your vision and strategy and your revenue, usage, KPIs, and OKRs (Objectives & Key Results).
These will be somewhat unique to your business, product, or feature. So you can’t just superimpose one set of impact metrics onto another business and have them make sense. They’re not leading or trailing; they’re simply defining: setting the standard for what you’re trying to achieve. They might or might not be measurable quantitatively, and product is the only factor.
When you’re having a conversation about metrics, you’re always talking about some aspect of the business. You might be talking about the business itself: is it thriving, are we extracting value, is our business model working? You might be talking about the product: is it adding value, is it high quality? And you might be talking about a feature or feature area: Did we solve the problem we intended to solve? Did we do it well? Is our solution discoverable?
We should be having ‘understanding’ conversations where we don’t have a particular goal but we just want to try to understand the data. We want to use our metrics to inspire ideas; to prioritize our work; to generate and validate hypotheses. We use quality metrics to determine readiness for shipping. Finally, we want to use metrics to generate shared context on the team and alignment among members.
I’d like you to create impact metrics to complement your existing usage, financial, and quality metrics; and then as you go about your day, be thoughtful about which metrics you use for what conversation.
Think of this as a hierarchy that starts with vision and strategy. Below that, create goals – and many of those will be user goals. Out of your goals come metrics, features, and subfeatures.
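This hierarchy can be made concrete as a small data structure. The sketch below is purely illustrative – the goal and metric names are placeholders – but it captures the rule that user goals, not business goals, are the ones that feed impact metrics:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str
    is_user_goal: bool
    impact_metrics: list = field(default_factory=list)  # metric names
    features: list = field(default_factory=list)        # features serving the goal

@dataclass
class Strategy:
    vision: str
    goals: list = field(default_factory=list)

    def candidate_impact_metrics(self):
        # Only user goals inspire impact metrics; goals like
        # operational efficiency are filtered out here.
        return [m for g in self.goals if g.is_user_goal
                for m in g.impact_metrics]
```

Usage might look like building a Strategy from your vision statement, listing goals, and reviewing `candidate_impact_metrics()` as the starting list for the metrics conversation.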
Let me make this more concrete with some examples. Let’s start with Apple Maps again. Here are some of the goals built from our vision and strategy. You can see everything that starts with the user is a user goal, but we also have a few goals at the end – such as developmental and operational efficiency – that aren’t directly about the user.
The ones I haven’t crossed out – the user goals – can inspire impact metrics.
When I was at Microsoft, at Bing Ads, our user goals included reducing friction for advertisers, helping users discover new products and services, and doing that without negatively impacting user metrics. For the last of those, we identified the impact metric by drilling down. Why would ads negatively impact user metrics? Because ads might slow users down. So we can change that goal to: don’t slow the user down in finding relevant search results. Now we have something we can turn into an impact metric.
What are some other examples of user goals? At Facebook, it might be: more meaningful interactions. At Apple Maps, we asked: can we be right more often about our proactive suggestions, can we reduce your stress while navigating?
When I was working on Apple’s TV platform, our goal was to have users spend as little time in our interfaces as possible, because the more time they spent trying to find something to watch, the more frustrated they would get.
One of our webinar participants has a mission to start more conversations about people’s experiences with diversity and inclusion in the workplace. That would be a great impact measure.
You can see that some of these impact metrics are clearly quantitatively measurable, some are sort of measurable, and some don’t seem measurable at all. In-app feedback can help you bring in some more quantitative data: thumbs up/thumbs down, how did you feel during this journey? But how do you measure the impact metrics that don’t seem measurable at all? Qualitatively.
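As a sketch of turning thumbs-up/thumbs-down feedback into a quantitative impact signal (the journey names and vote schema here are hypothetical, not any specific product’s):

```python
from collections import defaultdict

def journey_sentiment(feedback):
    """Aggregate in-app thumbs votes into a per-journey score in [0, 1].

    `feedback` is a list of (journey_name, vote) pairs, where vote is
    "up" or "down" - an illustrative schema for in-app feedback prompts.
    """
    counts = defaultdict(lambda: [0, 0])  # journey -> [ups, total]
    for journey, vote in feedback:
        counts[journey][0] += (vote == "up")
        counts[journey][1] += 1
    return {j: ups / total for j, (ups, total) in counts.items()}
```

Tracked over releases, a score like this gives a rough quantitative proxy for questions such as “how did you feel during this journey?”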
You can often add these types of questions to usability interviews. Ask people to navigate a route with Waze and do it with Apple Maps, and see which was more relaxing. Focus groups are great to discover impact metrics; the idea that Apple Maps should be relaxing came up through a series of focus groups.
Often, personas can be helpful in determining who to interview or bring into your focus group. B2B can be easier, because you have a sales team that can give you direct access to the people you need – you might have some sort of product council, a set of users you can always tap to do a survey or get-together. With B2C focus groups, you’ll often feel like you’re way ahead of them. They’ll come up with feature ideas and you’re thinking: that’s already in the product. It can feel like you’re not getting value, but if you really listen, useful insights will slowly emerge.
Another really good technique that’s less commonly used is heuristic evaluation. You have a set of heuristics you want your user experience to achieve. Maybe if you’re Instagram, you want people to feel better about themselves instead of feeling worse. You could do heuristic evaluation with a small but significant set of individuals who look at your interface and share how they feel after using it. If you’re in a larger company, you may be able to contract out with a subject matter expert to help you tease out real heuristics from mere goals.
As you work day to day, stop and think: what’s the right metric to use here?
All metrics are appropriate for all-hands meetings: you need your team to understand as much context as possible. When you’re brainstorming, impact metrics are usually best, unless you’re brainstorming about something that isn’t at all user focused. Hypothesis generation and validation usually need quantitative data: if you do have quantitative impact metrics, those can work, but usually you’ll use quality or usage metrics. Feature validation is also a bit more difficult to do with impact metrics: you ship a feature, you want to know how it’s doing, so you’ll look at usage, abandonment rates, whether users complete its workflows. Then, hopefully over time, you can run a survey and validate your impact.
I also think impact metrics are best for prioritization discussions – and it’s ideal to separate prioritization discussions around impact from prioritization discussions around quality metrics. Prioritization tends to be complex, but if you’re not at all talking about impact metrics, find a way to bring them in.
I hear repeatedly that conversations about revenue and usage get hyper-focused on details like: should we move this button so we can get more of x? That can really frustrate teams, and we can surface this to leadership with a solution: we can reduce frustration by having a conversation about what we mean by value.
Help leaders see the cultural aspect of this: the more you’re talking about impact, the more inspired people will be, the more decisions will align. If the CEO has been asking how could you have made that decision, it’s because you don’t have enough common context, and the way you generate that is by talking about impact.
We try to innovate all day long. Usually we’re thinking about innovating on features, products, or business models. My request is: add innovation on metrics.
Here’s a typical innovation cycle:
How can you use that with metrics? Stop and think: Are we using the right metric in this conversation? Is there a better one we could be using? Is there one we should retire that isn’t serving us anymore? How can we measure this user goal? The more you think about how you can measure seemingly unmeasurable things, the more innovative ideas you’ll have.
Also ask: How do our metrics impact each other? As an organization matures, you’ll often find metrics you want to improve may conflict with each other. Uncovering those conflicts and thinking about the causes can help you generate valuable new metrics. Finally, ask: can we avoid the negative in addition to increasing the positive? So, for example, if you’re Instagram, can you measure decreasing anxiety instead of just increasing engagement?
Keith is Revulytics’ VP, Software Analytics and was the co-founder and CEO of Trackerbird Software Analytics before the company was acquired by Revulytics in 2016. Following the acquisition, Keith joined the Revulytics team and is now responsible for the strategic direction and growth of the Usage Analytics business within the company. Prior to founding Trackerbird, Keith held senior product roles at GFI Software where he was responsible for the product roadmap and revenue growth for various security products in the company's portfolio. Keith also brings with him 10 years of IT consultancy experience in the SMB space. Keith has a Masters in Computer Science from the University of Malta, specializing in high performance computing.