Douglas Ferguson
President - Voltage Control
Douglas Ferguson is an expert when it comes to making products better. An early advocate of the Design Sprint methodology, he and his workshop agency, Voltage Control, have taught dozens of organizations how to align stakeholders, build prototypes, and validate solutions. In addition to facilitating Design Sprints and innovation workshops, Douglas also lends his technological expertise to companies as a fractional CTO.
In our conversation, Douglas explains some of the underlying strategies he uses when running a Design Sprint, where Sprints fit into the larger research space, and the different roles of professional and amateur researchers.
Respondent: Thanks for joining us today, Douglas. Along with Jake Knapp, the creator of Design Sprints, you’re one of the leading advocates of this methodology. Could you talk about how you first discovered Design Sprints and your initial experiences practicing them?
Douglas: I first discovered Design Sprints about a year and a half or so before Jake Knapp’s book, “Sprint,” came out. I was fortunate enough to catch wind of some early blog posts, so I started dabbling. At the time, I was practicing them internally at my own company, Twyla. In fact, that’s one of the main reasons I became such an advocate for it, because of the transformative results that we got.
Initially, however, it was a learning experience. For instance, one mistake we made was to try to shorten the Design Sprint. That's a mistake everyone makes. While it's possible, you can't really compress it to fewer than four days, and I don't even recommend that for most teams. You have to be in a really specific spot for a four-day Sprint to work. Some teams can compress the work and pull it apart from the strict five-day structure, but again, it only works for early-stage ventures where the team is small and really focused on what they want to do.
That was one of the big early lessons we learned. We also didn’t include enough cross-functional stakeholders. We just involved the product team — product designers, product managers — but no one from the business side, no one from marketing. So we lost out on the alignment benefits. We could have learned a lot from folks in other parts of the organization, and they could have been included and had a sense of ownership over the work that we did. We learned how critical this component is.
So the first one was a total failure because we shortened it and didn’t include enough people. The next one was also a failure because we swung the pendulum in the other direction and brought in too many people. But we were getting better and better. A few Sprints later, we brought in the Google Ventures Design Team [with Jake Knapp] to lead a Design Sprint. What struck me about their process was what they weren’t willing to budge on. It was really telling. The seven-person rule was one of them. We gave them a list of 14 people and they were like, this won’t work. They were really stringent on this. As I’ve done more and more Sprints, I totally understand why.
Their team taught me other little subtle things too, like how they facilitated. They really watched out for the people and made sure to account for bathroom breaks. Some of it was just language. I remember Jake liked to say “let’s just pause” rather than “stop.” I love that one because “pause” sounds so nice, whereas “stop” is so rigid. There were just tons of things we picked up because they’d done so much of this facilitation. Watching them work was enlightening.
Respondent: So when you’re talking with clients or people who are unfamiliar with Design Sprints, how do you differentiate it from more traditional methods of doing product research? How do you describe it?
Douglas: The first thing people usually get hung up on is the interviews, so I’ll try to explain that Design Sprints don’t use focus groups. Instead, we do one-on-one interviews because we want unbiased and real reactions. The next thing people get hung up on is usability testing. They need to understand that Sprints are not a usability test. Instead, they’re meant to get a reaction to a solution so that we can understand desirability.
Some folks are already doing research and problem validation. They're talking to customers and really understanding the problem. But if they're not building prototypes and testing them, then what I tell them is missing from the equation is solution validation. I see a lot of folks who have learned about problem validation, who go out and talk to customers and learn how users think about the problem space, then synthesize it all down, say, “I know how to solve this,” and start their development process.
Instead, what we advocate as part of the Design Sprint is building high-fidelity prototypes that give the illusion of a working product. Then we put that in front of users and we have them react to it. If a team is already in that mode of doing research, knows a lot about the problem, and is jumping right into development, I tell them to hold on a second: development will always take a long time, it will be very expensive, and there will be a lot of uncertainty going into it, which means you'll accrue a lot of technical debt. Why not front-load that with a lot of validation around the solution instead?
If folks aren't doing research, then that's a different conversation. We may do a Design Sprint as a way to help them understand why research is important, but it's not going to be the same kind of outcome [as it would be if they had been doing research]. They're not really doing solution validation. Although we test a solution, it will probably be the wrong one, unless they're extremely lucky. But they're going to learn a ton, their eyes are going to be opened to the power of talking to customers, and they will begin to understand where the customer is coming from.
Respondent: That leads well into my next question. Why would you say research is necessary to effective product development in the first place?
Douglas: I would say that we can point to complexity theory. There's a concept called disintermediation. It basically means getting out of your seat and seeing what's on the ground, because the on-the-ground reality is the truth, and without the truth you can't solve for reality. If you're just pontificating about how you think the world is, then you're unlikely to build something that generalizes across a large audience.
Now the real beauty of design is when you can take that research and synthesize it into something new and amazing and innovative. What we don't want to do is talk to a bunch of people and just build whatever they told us. I think Steve Jobs is a good example of someone who really listened and paid attention to what's out there, but then crafted it into something brand new and unexpected. There's some real power in that.
But ultimately you have to own up to the fact that we're working in a complex domain where it is necessary, per complexity theory, to probe your environment. A Design Sprint is a great example of a probe. In probing your environment, you achieve disintermediation. Instead of making assumptions about your environment and thinking you know everything, you're probing and testing and understanding, and you have to do it constantly, because in a complex domain, things that work today aren't guaranteed to work tomorrow.
Respondent: I’d love to discuss your process for probing the environment. First of all, how do you select participants to interview?
Douglas: We'll start with a screener, which will have various criteria on it. We'll then look at these criteria from the perspective of whether we want to include or exclude someone. For instance, certain salary brackets or behaviors may make someone a poor candidate, and vice versa. It really depends on the project and what we need to learn.
Typically, we start with the jobs-to-be-done framework to understand who we're addressing. I'm a big fan of the jobs-to-be-done perspective. When we think about our users in this way, it's easier to define the buckets we're looking for. While personas are interesting, their attributes can be misleading, and they may make it difficult to understand how to properly bucket your users.
So then we make the screener, which we try to make abstract. We don't want it to be obvious what we are looking for, so we're typically asking a bunch of indirect and multiple-choice questions, all of which are still informed by the original criteria. In fact, since we're usually starting with those criteria, you could almost say the criteria are a screener in themselves, which we then rewrite to obfuscate our intentions.
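To make the include/exclude logic concrete, here is a minimal sketch in Python. The criteria, field names, and thresholds are entirely hypothetical, and a real screener would surface them as the indirect, multiple-choice questions Douglas describes rather than as direct fields:

```python
# Illustrative only: a toy screener filter with made-up criteria.
# Each rule would correspond to one or more indirect screener questions.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    salary_bracket: str          # hypothetical inclusion criterion
    collaborates_weekly: bool    # hypothetical behavioral criterion
    uses_competitor_tool: bool   # hypothetical exclusion criterion

def passes_screener(c: Candidate) -> bool:
    """Apply include/exclude criteria derived from what we need to learn."""
    if c.uses_competitor_tool:       # exclude: reactions would be biased
        return False
    if not c.collaborates_weekly:    # include only people with the target behavior
        return False
    return c.salary_bracket in {"50-75k", "75-100k"}  # target brackets

candidates = [
    Candidate("A", "50-75k", True, False),
    Candidate("B", "100k+", True, False),
]
print([c.name for c in candidates if passes_screener(c)])  # -> ['A']
```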
Respondent: What about when you’re actually interviewing users? What are your strategies?
Douglas: We use a strategy called the five-act interview. You start off with a welcome. You want to make sure you make people feel comfortable by building some rapport. It’s typically not that long.
Next, you move onto the contextual inquiry. This is where you try to understand a bit more about them with respect to what you’re studying. So let’s say you want to know how people collaborate at work, then you’ll be asking them questions about how they collaborated in the past and what sort of tools they used.
After you get through the context questions, you introduce the prototype. As you do this, explain things like: you didn't make the prototype, so they can't hurt your feelings; there are no right or wrong answers; they're not being tested; you're just trying to get their reactions; and so on.
Next, you move into the task section. It’s important to distinguish this from a usability test. You’re not asking them to go do something and then seeing how well they do it. Instead, you’re just putting the prototype in front of them and nudging them around. We want to observe their reactions to the prototype by asking them to speak out loud. To do this, I love to set up the scenario, then ask them what their first impressions are. If they don’t do something I think they should do, I might ask what this thing here is on the bottom right. Or if they just scrolled past a few things, I’ll get them to tell me why they did that. It’s important not to say, for example, “Why didn’t you click on the buy now button?” I try not to use words on the screen to prompt them. I want to know what they perceive, what they see.
The last part is a debrief. This is when I get them to tell me the pros and cons of all the solutions and prototypes they saw. I’ll ask them how they would describe it to a friend. Again, you don’t ask any leading questions. Now that they’ve seen it and been through it, I just get them to talk about it in retrospect. How do they feel about it and what do they have to say looking back?
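For readers who want the structure at a glance, here is the five-act flow Douglas just walked through, sketched as a simple Python outline. The prompt wording is illustrative, drawn from his descriptions above rather than from a verbatim script:

```python
# Illustrative only: the five-act interview as a script outline.
# Act names follow the transcript; prompts are example phrasings.

FIVE_ACT_INTERVIEW = [
    ("Welcome", [
        "Thanks for joining; this will be short and informal.",  # build rapport
    ]),
    ("Contextual inquiry", [
        "Tell me how you collaborate with your team today.",
        "What tools have you used, and how did that go?",
    ]),
    ("Introduce the prototype", [
        "I didn't make this, so you can't hurt my feelings.",
        "There are no right or wrong answers; I just want your reactions.",
    ]),
    ("Tasks", [
        "What are your first impressions?",
        "What do you make of this part down here?",  # point, don't name on-screen words
        "You scrolled past a few things there; tell me why.",
    ]),
    ("Debrief", [
        "What were the pros and cons of what you saw?",
        "How would you describe this to a friend?",
    ]),
]

for act, prompts in FIVE_ACT_INTERVIEW:
    print(f"--- {act} ---")
    for prompt in prompts:
        print(f"  * {prompt}")
```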
Respondent: This ability to test out multiple prototypes and solutions in a five-day span is one of the biggest innovations of a Design Sprint. You can get a lot done very quickly and in a very focused way. However, do you think this speed also means you are potentially missing out on good ideas that would have surfaced if people had had more time to digest the problem?
Douglas: If they are in a scenario where they are limiting their options, it means they haven't done enough upfront research. They haven't digested the problem space enough. If they have done their research, understand the problem space, and have had enough time to think it through, then the constraints are very powerful. They force you to take action and do something today instead of putting it off until you know enough to make the right decision. Sure, some ideas won't make it into the prototype, but it doesn't mean you'll never get to them. It just means that, given the time constraints, you're not going to do that this week. Often, people will come back and revisit sketches and ideas that came up during the Design Sprint but were set aside. Also, remember the Design Sprint is just five days reserved for the team to get alignment on a problem and figure out where to start. There will be plenty of work to do after the Sprint. In fact, you will usually know your next steps right after you get through the five-day process.
There are three types of Design Sprint outcomes. First, there's a total failure, where you have to shut it all down. That's actually great, because you just avoided a potential six- to twelve-month or longer project, and you only took five days to figure that out. That's a huge savings, which includes the opportunity cost avoided by not sending people off to work on a project destined for failure. You may have even learned some things from that failure that spark ideas about another solution that will work.
Then there's a flawed win, where things are gelling but there's still work to do, so you're going to go tweak the prototype. This is the most common outcome. You have a good idea, but it needs some work, so you tweak the prototype and do another round of interviews until you have something you're super confident about building. I would argue that, in that case, you have plenty of time for more nuanced thinking around the solution, because you're benefiting from the users' responses and the insights they generate. It's the best of both worlds. You're forced to move quickly, get results, put something out there, and understand it. Then you can feed that back into a cycle that grows over time.
The other, very unlikely outcome is a total success, where you just nail it and you go build it. In this case, the Design Sprint technically slowed you down, because your intuition was so good you should have just built it in the first place. [Laughs] I've never seen that happen. I'm sure it's out there, but not everyone can be perfect.
Respondent: One of the effects of Design Sprints, as people like you and Jake go into organizations and help them improve their UX/UI research process, is that you're helping empower non-researchers to do quality research. However, as we've discovered in some of our previous conversations, there's a debate in the research industry about non-researchers taking on work that was once the realm of professional researchers. People have different opinions about this. What do you think? Is this a good thing? A bad thing? Is there still a role for professional researchers?
Douglas: Just because you didn't go to school for something doesn't mean you can't be good at it. I'm thinking of a researcher right now who is a terrible interviewer. They're just not a good researcher, even though they went to school for it and it's their career. In contrast, I did not go to school for it. I'm very passionate about research, but I don't have a degree. Instead, I care about it and I'm constantly trying to hone my craft.
What we need are really dedicated individuals, and we should care less about someone's job title or where they went to school. What matters is that they care about this stuff. They should always be looking back at their last interview, thinking about how it went and what they can do better. We need more of that across the board: people honing their craft and caring. That's more important to me than anything.
Respondent: So you would discount the complaints of those professional researchers who say that non-researchers aren’t doing it right, despite using methodologies like Design Sprints? It’s less about their credentials and more about their passion?
Douglas: Professional researchers have a degree and a program they spent time on that they need to justify, and I totally get that. Those programs will set people up to do research right, but you don’t have to go through one of them to be successful. Instead, you have to care, you have to do your homework, and you have to hone your craft. So I agree with them, but I’m just presenting it a little more objectively. You have to work on your craft. You don’t have to have credentials.
Respondent: Thanks so much for speaking with us, Douglas.
Douglas: For sure. Thank you!