Revulytics Blog

Q&A with Product Discovery Coach Teresa Torres

July 1, 2019

Revulytics sponsors a series of Product Management Today webinars featuring innovative ideas from top software product management thought leaders. In these blog posts, we ask the presenters to share their insights; we encourage you to watch the full on-demand webinars for even more detail.

Teresa Torres, Product Discovery Coach at Product Talk, presented The Top 5 Most Common Rapid Prototyping Mistakes. She identified five common prototyping mistakes and showed how to avoid them, offering insights on what to prototype, when to prototype it, who to test your prototypes with, and how to get more reliable feedback.

Why is it important to prototype more often?

I preach continuous discovery. A lot of user research grew up in a project mindset, where we outsourced it to external firms, we’d interview several customers at once, we’d prototype a big project and get feedback on the whole thing. With continuous discovery, our goal is to have the cross-functional team – the product managers, designers, and software engineers – doing their own research.

Why? Every day we’re building software, we have to make decisions. They could be UX decisions: where should this button go? Or clarity decisions: should we label this “x” or “y”? Or bigger feature conversations: do we include this feature, and how should it work? These conversations don’t just happen at roadmap and sprint planning levels: they happen as we deliver. If we do the small research activities of interviewing and prototyping as frequently as possible, we can infuse our customers’ perspective throughout product development.

On the delivery side, we’re marching towards continuous deployment. Amazon releases every ten seconds. The same concept should apply to prototyping. With usertesting.com and other unmoderated testing tools, we can shorten feedback cycles. We’re not far from being able to push multiple designs a day and get feedback multiple times a day. That’s where the power of rapid prototyping will explode.

What’s the difference between prototyping and usability testing?

Some of the big design firms have done a great job of convincing the entire industry that usability testing is really important. And it is! But it’s not the only value of prototype testing.

With usability testing, we’re asking: can people use it? Do they understand it? Can they navigate the workflow? But there are a lot more questions to ask.

You’ve probably seen the Venn diagram: we want products that are desirable, viable, and feasible. We have risk in all three, and we can prototype in all three to mitigate that risk.

With desirability, we ask: does anybody want it? If not, it doesn’t matter if it’s usable. This is where a lot of startups fail. One classic simple test of desirability is the “smokescreen” test. Let’s just put a landing page up there and see if anyone’s excited about this feature. If they click, you tell them: we’re considering adding this feature, thanks for your vote of confidence, we’ll let you know when it’s available. As you move forward, you can prototype to learn: how would people use it? What problem is it solving? Do our customers really have that problem? How well are we solving it?

Viability is just as important: sometimes products get traction and then get shut down because companies can’t make them viable. Maybe the economics don’t work, or there’s some compliance or security issue. Earlier on, you might have built a prototype for your general counsel to determine: will they even let us build this?

With feasibility, you’re asking: can we build it? Is it something our company can actually do? Feasibility prototypes are often built by and for engineers to answer: do we have the technical capacity to build this?

Nowadays, I’d add a fourth category: you can use rapid prototyping to answer ethical questions – especially if you’re collecting a lot of data about your customers or users.

Why shouldn’t you prototype the entire solution?

When we do that, we overwhelm ourselves and our participants. If we run them through twenty screens, somewhere in that sequence we’re no longer getting reliable feedback. Participants get fatigued, and we’re not giving them enough time to really simulate the experience our customer would go through on their own.

It’s a lot easier when we prototype teeny-tiny pieces of the experience with lots of variations.

Here’s an example: Google search. I might be tempted to prototype the entire experience of going to the Google homepage, typing a keyword, getting search suggestions, going to the results page, seeing some ads, trying to decide what to click on, and clicking through. But if I prototype all that at once, the participant goes through it in about three minutes. Even if we ask them to think out loud, they’re moving too quickly for us to catch all the nuances of what they think on each screen.

So maybe we just test the search predictions. We ask them: tell me about the last time you were on Google, what did you search for? Then we show our prototype of the home page and ask: can you recreate that experience? We just want them to type in the keyword and think out loud, and by slowing it down we get them to tell us what they think of that page. It won’t take long, so we can try multiple variations in the same test.

Think about: how do we test microparts of our idea, and really get the details of each micropart right? We’ve all had the experience where you buy a product, you’re excited, conceptually it hit the mark, and you start using it, and it’s just a little off. It’s not hard to get the big concept right, but all the little details will make or break the experience.

How do I get more reliable feedback?

If questions like “What do you think?” “Imagine this scenario…” “Would you use this?” sound familiar, you’re probably gathering unreliable feedback.

If you say, “What do you think? Please think aloud,” we have no context; we don’t know if they have the need our solution is designed for. We’ll just get an opinion. We don’t care about an opinion. We care about: does it work? We want them to try to do something with the prototype.

We don’t want to ask them “would you use this,” because humans are terrible at speculating about their future behavior. There are cognitive biases at play. You’ll get an optimistic answer, and most people want to be nice, so they’ll say they’d use it. They’re not being nefarious; you’re asking the wrong question.

Maybe we know this, so we do task-based prototyping and say “Imagine this scenario.” But I don’t think you should ask that question, either. If someone doesn’t care about that scenario, they’ll run through the prototype at arm’s length. They’ll still have an opinion – humans always do – but it won’t be reliable.

Instead, start with an interview question. Tell me about the last time you… Get them to tell you a story, then use that story to test your prototype. Walk me through doing what you did. Now they’re using the prototype in the context of the specific real-world scenario that matters to them.

You might do some moderated tests to fine-tune this, and then you can jump to unmoderated testing.

What are the problems with last-minute testing?

Often, our product manager and designer are hustling to keep up with the delivery sprint, designing one sprint ahead of what the engineers are building. There’s no time to integrate our feedback. We just have time to put lipstick on the pig. If we hear something critically wrong with the workflow, it’s too late: we have engineers waiting and we have to give them something. If we learn we didn’t get the feature quite right, we’ve already got internal buy-in, it’s too late to redraw it.

We tend to have a validation mindset: we’re smart product folks, we’ll have all the ideas, we’ll design it all, and then try to pass the test: did we get it right? But we’re way too committed to our idea, don’t have time to integrate the feedback, so we might as well have skipped the test.

Instead, we want a co-creation mindset from day one. Yes, we’re smart product people, we know a lot about what’s possible with technology, and we’re good at designing solutions. But our customers have a lot of knowledge about their contexts, needs, pain points, wants, and desires, and we need that knowledge when designing solutions.

Product teams are getting good at running design studios: coming together, sketching a whole bunch of ideas, sharing and critiquing those ideas, converging on one or two promising ideas, and feeding them into development. I say: invite customers to participate.

In the sketching phase, before there are requirements, when everything’s erasable, it’s a lot easier to integrate customer feedback. Let them draw ideas, let them mark up yours.

We get way more wrong than we think. When we present a finished product to a customer, it’s really overwhelming for them to explain all the ways we got it wrong, so they just explain the surface-level stuff. We never get into the guts of what went wrong. Whereas if we co-create with them from the very beginning on a whiteboard, they can say no, it’s not like that at all, it’s like this instead, and we get much richer feedback.

I preach a weekly touchpoint with your customers, at minimum, because it helps to break that validation mindset and move into a co-creation mindset.

All this takes practice, because we’re all defensive. If you’re new to this, start with customers who love you, then work with more challenging customers.

What’s wrong with testing one idea at a time?

Often, we have a top solution, it’s ready for the next sprint, we prototype it. But that frames the question as: is this idea good or not? This sets us up for confirmation bias. We look for all the confirming evidence that says it’s good, and miss all the disconfirming evidence even if we’re trying to see it.

Instead, set up a compare and contrast decision: which of these three or four ideas looks best? That way, you’ll start to see conflicting data, pros and cons. It’ll help a ton.

Most people hear this and say, how will we have time to do this?

You only have time if you don’t prototype the whole idea. Go all the way back to: What’s the specific question we’re trying to answer? For any given solution, where’s the most risk?

Often, it’s desirability risk. What’s the one desirability question you need to answer for each of these ideas? How do we prototype that quickly and get feedback?

Not feedback over the next couple of weeks. I want to know: What can you learn by the end of this week? By the end of today? If you regularly ask those questions, you’ll bring the rapid back into rapid prototyping.


Post written by Keith Fenech

Keith is Revulytics’ VP, Software Analytics and was the co-founder and CEO of Trackerbird Software Analytics before the company was acquired by Revulytics in 2016. Following the acquisition, Keith joined the Revulytics team and is now responsible for the strategic direction and growth of the Usage Analytics business within the company. Prior to founding Trackerbird, Keith held senior product roles at GFI Software, where he was responsible for the product roadmap and revenue growth for various security products in the company's portfolio. Keith also brings with him 10 years of IT consultancy experience in the SMB space. Keith has a Master's in Computer Science from the University of Malta, specializing in high performance computing.