User Research Standards for Your Innovation Project
Does the first step have to be the hardest?
Have you ever been in an innovation project without knowing the right direction to take? How do you build something that people actually need? Yes, we’ve been there too. Hunkering down in board rooms with bored thoughts and the same people who are always part of the conversation often leads to tired ideas found on beaten paths. But what if I told you there is a fix for that? An approach that is both simple *and* yields quality results?
Now, I know what you might think: “We’ve done user research in the past. We hired someone to do some interviews and it didn’t work. Heck, how is asking the people what they want helpful at all? They often don’t know themselves!” And I can’t fault you for these objections. User research is often done poorly, with methods so simplified that they are no longer reliable. In this post, I want to give some ideas on how to do it right.
Too many user interviews take too many shortcuts
So, you do interviews and talk to some people, right? Here is a quote from a chat I recently had on that topic: “Afterwards, you just think: Okay, we’ve talked to someone now. It was nice. But where does that leave us?”
Looking at the industry, there seem to be many research practices that leave us with arbitrary results that don’t connect to any concrete action or strategy. That’s because researching people, especially doing interviews, is much more than just talking to someone. But preparing this is hard work and in business, we take shortcuts way too often. My personal goal was to resolve this paradox by doing the work and then building something manageable in a sprint.
For good interviews, we need to do the work and build a process first.
For good user research, we need a process that makes it relevant, useful and that ensures quality. We need to be explicit about goals and purpose, we need a structured briefing to kick off the process. We need a research design that enables us to ask the right questions, an evaluation method that filters important results and removes fluff from the conversations; and we need interviewers who know what they are doing. Also: Documentation, interviewer guides, databases, privacy setup, probably more. I will give you insights on what to build later.
First, though, some background info on why all of this is necessary.
Take a minute and look at the image below. Stop for a moment and think: Is this thermometer a good measuring tool? Does it have problems? What might they be?
User Research Standards? A Brief Overview on Quality Criteria From Social Sciences
A good thermometer shows everyone reliably what it is supposed to show them (Well, duh… but)
So, what makes for a good thermometer? This is a thought process I usually prompt in research trainings. For one thing, a good thermometer should be reliable, correct? If the temperature is 30°, it shouldn’t show 28 or 33, or even 40. This is the most obvious. But there are two other things that it should do.
On the one hand, a thermometer shouldn’t leave room for interpretation: if the scale is too imprecise, different people might read different temperatures; for instance, on the one in the image above, a hasty glance might make you read 24° instead of 28. Also, it might be hard to discern the difference between 29 and 30 in that case. On the other hand, it also needs to measure the actual thing it is supposed to measure. Meaning, it needs to measure the temperature and not, say, the wind speed or humidity. While these things are pretty obvious for a thermometer, they are standards that all measuring tools need to satisfy. Like the ones we use in user research. And this is where it gets a little complicated.
*All* good measuring tools need to be objective, reliable and provide valid results
I won’t make this too theoretical; still, feel free to skip this section if you want. In the social sciences (where qualitative research grew up) there are criteria that have to be met. That is, if you want to claim that your research is good and can credibly say something about the world. Let’s say that we conduct a big survey. For example, we’ll have a set of questions with answers on a scale of 1 to 5. Our goal is to measure how conservative people are.
We now methodically determine a sample of roughly a thousand people and appoint 20 interviewers to ask them our questions. We hand out an interview guide and off they go. In the end, we hope to have quantifiable data on the topic we research. For quantitative research like this, there are quality criteria that have to be met. Otherwise, our research will not have valid results (and is therefore useless for telling us anything about the world).
Careful: Open user interviews can’t meet the standards above
The criteria above refer to quantitative research; big surveys with many respondents and mathematical scales for answers. Therefore, we can use statistical means to see if we meet our research standards. In an innovation project, though, we will most probably not conduct quantitative research, as it is time-consuming and expensive. Instead, we will likely conduct qualitative user interviews. These are more open-ended; they are conversations with a guest who is invited to narrate at length. We only prepare questions insofar as to keep the conversation on topic.
So, how do we ensure objectivity or reliability in an interview? Well… we don’t, exactly. In this type of conversation, the interviewer will always have an active role. And different interviewers might have different styles or the same interviewer may have a bad day first, then a good day. Therefore, no interview is the same and we cannot rely on the criteria described above. We’ll have to ensure quality in some other way.
User Research Standards for Innovation Projects
How to translate quality criteria into user research standards
First off, how to best ensure standards for qualitative research is an ongoing debate. Thus, the tools I built can by no means aspire to be some sort of definitive answer. (Scientists will have to approach that one; we can just be smart and use their work afterwards.) However, I did build a useful way to do research in innovation projects, and it works.
Basically, the approach “translates” scientific research standards to a user interview approach and combines this approach with tools that streamline a process. Mind, most of this isn’t even new; it has been done in social research for a long time, although to my knowledge it has never been connected to design or innovation like this.
So let’s take a look!
1) “Objectivity”: Consistent replication of interview conditions
As I mentioned, conducting open interviews doesn’t allow for objectivity because the researcher will influence the conversation. Since we can’t take the interviewer out of the equation, we have to make sure that a) all of our interviews take place in the best possible conditions and b) all interviewers are equipped to ensure and maintain these conditions.
I would say that the goal in innovation projects should be to create a framework in which the interview guest feels comfortable to talk openly and in detail and to give honest insights into what is important to them. For the interview to be as intersubjective as possible, the purpose, goals, context, role of all participants, and legal considerations of the interview should always be explicit for both interviewer and guest. Also, a “nice” and enabling starting point for a good conversation needs to be created.
Here is what we did to get there:
Streamlined Preparation
We have a process in place that allows for a clear definition of requirements. A combination of a kickoff conversation with the client based on guiding questions, desktop research, and some canvasses to prepare interview topics helps to cleanly define the purpose and objectives from the start. Involving the interviewer from the get-go ensures focused conversations that stay on topic later on.
Interview Field Guide
We built a streamlined guide that helps interviewers steer the conversation. It contains a chronological, structured opening section that helps clarify the interview’s purpose, setting, and process and creates a collegial atmosphere; additionally, there are concise sections that help keep track of interview topics and questions. There are also detailed general instructions and tips for interviewers at the end of the guide.
Interviewer Trainings
With our in-house academy M1nd, we started to carry out practice projects to prepare colleagues for conducting user interviews and working with the results. These give participants the opportunity to get familiar with our field guide, to practice interview conversations, to understand the questions we ask and why, to evaluate interviews with our database and use the results in a project, to look at the required tech for recordings and the legal considerations that need to be satisfied… and more, probably.
2) “Reliability”: Dependable generation of reproducible results
With a repeatable interview structure in place, the next idea would be to build a process that enables us to reliably get useful results from our interviews; meaning, results that connect to the further innovation process and that can be reproduced based on how we do the interviews. For this, we built an infrastructure that allows us to remove all “background noise” from a conversation and track only content that helps us proceed later on.
Our solutions for this:
Statement Categories
We gathered the design methods we apply in innovation projects and then “reverse-engineered” them. For example, a user story consists of different parts such as user role, action, and benefit. Correspondingly, if you want to build user stories from a user interview, you’ll have to ask questions that make people talk about roles, activities, and benefits. Prepare questions that do this and you’ll probably get useful results. Then, rinse and repeat for other methods.
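To make the reverse-engineering idea concrete, here is a minimal sketch in Python. The category names, questions, and user-story template below are illustrative assumptions, not Iconstorm’s actual question set:

```python
# Hypothetical sketch: reverse-engineering the user-story method into
# statement categories and interview questions that feed each category.

USER_STORY_TEMPLATE = "As a {role}, I want to {action}, so that {benefit}."

# Each part of the method becomes a statement category, paired with
# questions designed to make guests talk about exactly that part.
CATEGORY_QUESTIONS = {
    "role":    ["What is your role when you do this?",
                "Who else is involved?"],
    "action":  ["Walk me through what you actually do.",
                "What steps does that involve?"],
    "benefit": ["What does a good outcome look like for you?",
                "Why does this step matter to you?"],
}

def build_user_story(statements: dict) -> str:
    """Assemble a user story once each category has a statement."""
    return USER_STORY_TEMPLATE.format(**statements)

story = build_user_story({
    "role": "warehouse supervisor",
    "action": "check incoming deliveries against the order list",
    "benefit": "errors are caught before goods reach the shelves",
})
print(story)
```

The point of the sketch is the direction of travel: you start from the method’s output format and work backwards to the questions, not the other way around.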
Syntax/Code
After we defined the statement categories, we also defined a grammar-based code that prescribes rules on how to write down a “correct” statement. We now record every interview on audio or video and then filter these statements from the narration. Thus, we don’t do transcriptions or direct quotes; rather, we look at the semantic or contextual meaning within the words and use a syntax to write it down. Doing this, we can remove all clutter from the conversation and get a streamlined assortment of statements that connect to our design playbook.
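A grammar like this can be enforced mechanically. The pipe-separated actor/activity/condition syntax below is a hypothetical stand-in (the post doesn’t spell out the actual rules), but it shows how a statement syntax filters clutter by simply rejecting anything that doesn’t fit:

```python
import re

# Illustrative grammar: a statement must name an actor and a present-tense
# activity, optionally followed by a condition, separated by " | ".
STATEMENT_RE = re.compile(
    r"^(?P<actor>[^|]+)\s\|\s(?P<activity>[^|]+?)(\s\|\s(?P<condition>[^|]+))?$"
)

def parse_statement(raw: str):
    """Return the statement's parts, or None if it breaks the grammar."""
    m = STATEMENT_RE.match(raw.strip())
    if not m:
        return None  # clutter that doesn't fit the syntax is dropped
    return {k: v.strip() for k, v in m.groupdict().items() if v}

print(parse_statement("nurse | documents medication | at end of shift"))
print(parse_statement("it was really nice talking about this"))  # → None
```

Conversational filler fails to parse and falls out of the dataset automatically, which is the whole appeal of writing statements in a fixed syntax rather than as free-form quotes.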
Database
Finally, we built an easy-to-use database that helps collect the interview statements; it distinguishes between categories and also allows us to compartmentalize answers from individual interviews. Thus, we can sort and analyze the data from a variety of perspectives. Also, we can directly copy & paste each statement category from the database onto color-coded sticky notes, and then get creative in a workshop! (The database also enabled us to build heuristics; we can now predict that every hour of interview conversation will result in 80 to 90 usable statements. This might make me a huge nerd, but in my opinion this is pretty awesome.)
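A minimal in-memory stand-in for such a database might look like the sketch below. The field names and sample statements are assumptions; the 80–90 statements-per-hour heuristic is the one quoted in the post:

```python
from dataclasses import dataclass

@dataclass
class Statement:
    interview_id: str
    category: str   # e.g. "role", "action", "benefit"
    text: str

class StatementDB:
    """Collects statements; supports the two slicing views from the post:
    by category (across interviews) and by individual interview."""

    def __init__(self):
        self.statements = []

    def add(self, stmt: Statement):
        self.statements.append(stmt)

    def by_category(self, category: str):
        return [s for s in self.statements if s.category == category]

    def by_interview(self, interview_id: str):
        return [s for s in self.statements if s.interview_id == interview_id]

    def yield_estimate(self, hours: float):
        """Heuristic from the post: 80-90 usable statements per hour."""
        return (80 * hours, 90 * hours)

db = StatementDB()
db.add(Statement("int-01", "action", "checks deliveries against order list"))
db.add(Statement("int-01", "benefit", "errors caught before shelving"))
db.add(Statement("int-02", "action", "documents medication at end of shift"))

print(len(db.by_category("action")))  # 2
print(db.yield_estimate(3))           # (240, 270)
```

The two query methods mirror the two workshop uses described above: pulling one category across all interviews onto sticky notes, or reviewing everything a single guest said.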
3) “Validity”: Standardization of questions and interview focus
Asking the user might seem really straightforward, but it is not. If we ask people directly what they want, we will get a wide variety of answers that aren’t useful at all. They might propose solutions or ideas which are not viable or feasible; they might speculate based on incomplete knowledge; and what do we do if one user is really excited about an idea that another one deeply hates? Thus, we can’t depend on ideas or opinions. They often aren’t useful in our project.
Instead, we need to focus on information that users can give us with authority. Stuff they actually know about. Which leaves us with the users themselves. Questions should therefore focus on user behavior in a context. What do people actually do? To what end? Which problems do they encounter? And so on. Things a user can observe or say with confidence, as there is no speculation or inductive reasoning involved.
Guiding interviews like this takes a lot of practice, but once it clicks an interviewer will get amazing results even from a single conversation. Equipped with simple questions that are easy to answer and that connect to innovation methods, we now will get answers that help us in the further project. The difference: We don’t get opinions on what the best solution might be—rather, we get the picture of a context and then we design solutions ourselves.
Some tips to build a user research process for design and innovation
Four steps to get started with user research
I won’t lie: Building something like this is hard work. At Iconstorm, we had very favorable conditions for this, as I originally started my career with a background in the social sciences before joining the design industry about six years ago. This combination might not be available to you, so a lot of legwork might be necessary to read up on empirical research methods and the like. Even so, it still took us more than a year to get this process off the ground (including tests and iterations, that is).
If this doesn’t deter you, though, here are some pragmatic tips to get started.
- Do an audit: Look at which methods you use in your projects and analyze how they might fit together. What types of ideas and content do they include? Is there maybe some overlap?
- Reverse engineer your methods: Think about how to ask questions that lead to the content you need. Make a list of examples for each content category you find. Don’t forget, the questions need to be simple and every guest needs to be able to answer them with confidence.
- Define a syntax: Define some rules on how to collect statements so you can remove all the clutter from your interviews. Test them after your first interviews; keep what works and iterate on what doesn’t.
- Write a field guide: Look up suitable practices on how to do this. For example, the university near you might have literature on this topic. Then it’s another matter of build, test, iterate.
With these four steps, you can start conducting interviews and get a lot of experience under your belt. Of course, if you actually want to get serious about this, the next project would be to build suitable infrastructure for your process. Depending on what you do, you might want a database, means to record voice or video (online or on site), legal consent forms for participants, or even project management tooling for bigger projects. Also, some means of bringing the results from the database into a workshop. (I’m actually working on another blog post on this topic.)
All of this is more or less something you need to tailor to your needs, and it isn’t strictly necessary for testing your process. So, you might want to start with the four points described above and see what works for you. And if all of this seems a little much, you can also just hit me up on LinkedIn.
With all that said: Good Luck out there!