How to define standards for user research and make them usable

User Research Standards for Your Innovation Project

Title Image: User Research Standards for Innovation Projects
In this blog post: Explorative user research can give your project scope and direction, and lead to fantastic ideas for your new product. But user interviews often appear arbitrary or downright dubious; after all, what makes for a useful result? How do we focus on what matters? And how do we do this reliably? I will give you detailed ideas on how to build tools that meet user research standards, ensure quality, and integrate with innovation projects. I will also give some hints on how to get started from scratch.

Does the first step have to be the hardest?

Have you ever been in an innovation project and not known what the right direction was? How to build something that people actually need? Yes, we have too. Hunkering down in board rooms with bored thoughts and the same people who are always part of the conversation often leads to tired ideas found on beaten paths. But what if I told you that there is a fix for that? An approach that is both simple *and* yields quality results?

Now, I know what you might think: “We’ve done user research in the past. We hired someone to do some interviews and it didn’t work. Heck, how is asking the people what they want helpful at all? They often don’t know themselves!” And I can’t fault you for these objections. User research is often done poorly, with methods so simplified that they are no longer reliable. In this post, I want to give some ideas on how to do it right.

The Value of User Research at the Outset of a Venture
At the beginning of most innovation projects, it is not entirely certain where the journey will take you. But a good research process will yield results that give you great ideas for radical innovation and connect them seamlessly to the rest of the innovation project. It will also build knowledge that helps your organization in the long run. Finally, it untangles the complexity at the beginning of your project; and, thanks to the trajectory it establishes, the initial investment saves time, money, and nerves.

Too many user interviews take too many shortcuts

So, you do interviews and talk to some people, right? Here is a quote from a chat I recently had on that topic: “Afterwards, you just think: Okay, we’ve talked to someone now. It was nice. But where does that leave us?”

Looking at the industry, there seem to be many research practices that leave us with arbitrary results that don’t connect to any concrete action or strategy. That’s because researching people, especially through interviews, is much more than just talking to someone. Preparing it properly is hard work, and in business we take shortcuts far too often. My personal goal was to resolve this tension by doing the work once and then condensing it into something manageable within a sprint.

For good interviews, we need to do the work and build a process first.

For good user research, we need a process that makes it relevant and useful, and that ensures quality. We need to be explicit about goals and purpose, and we need a structured briefing to kick off the process. We need a research design that enables us to ask the right questions, an evaluation method that filters important results and removes fluff from the conversations, and interviewers who know what they are doing. Also: documentation, interviewer guides, databases, a privacy setup, and probably more. I will give you insights on what to build later.

First, though, some background info on why all of this is necessary.

Take a minute and look at the image below. Stop for a moment and think: Is this thermometer a good measuring tool? Does it have problems? What might they be?

This is a nice thermometer. We can use it to measure the temperature; probably inside some boiler, by the looks of it. Pretty cool, right? Or is it? What makes for a good thermometer?
(Image by Artur Solarz on Unsplash)

User Research Standards? A Brief Overview of Quality Criteria from the Social Sciences

A good thermometer shows everyone reliably what it is supposed to show them (Well, duh… but)

So, what makes for a good thermometer? This is a thought process I usually prompt in research trainings. For one thing, a good thermometer should be reliable, correct? If the temperature is 30°, it shouldn’t show 28°, 33°, or even 40°. This is the most obvious criterion. But there are two other things it should do.

On the one hand, a thermometer shouldn’t leave room for interpretation: if the scale is too imprecise, different people might read different temperatures. On the one in the image above, for instance, a hasty glance might make you read 24° instead of 28°, and it might be hard to discern the difference between 29° and 30°. On the other hand, it also needs to measure the actual thing it is supposed to measure; that is, the temperature and not, say, the wind speed or the humidity. While these things are pretty obvious for a thermometer, they are standards that all measuring tools need to satisfy, including the ones we use in user research. And this is where it gets a little complicated.

*All* good measuring tools need to be objective, reliable and provide valid results

I won’t make this too theoretical, and you can skip this section if you want. In the social sciences (where qualitative research grew up), there are criteria that have to be met if you want to claim that your research is good and can credibly say something about the world. Let’s say that we conduct a big survey: a set of questions with answers on a scale of 1 to 5. Our goal is to measure how conservative people are.

We now methodically determine a sample of roughly a thousand people and appoint 20 interviewers to ask them our questions. We hand out an interview guide and off they go. In the end, we hope to have quantifiable data on the topic we are researching. For quantitative research like this, there are quality criteria that have to be met; otherwise, our research will not produce valid results (and is therefore useless for telling us something about the world).
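To make this tangible: for a survey like the one above, some of these criteria can actually be computed. Reliability, for instance, is commonly estimated with Cronbach’s alpha, which checks whether the items of a scale measure the same underlying construct. The sketch below is a minimal illustration in Python; the data is synthetic and the function is my own example, not part of any particular study:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Estimate the internal consistency (reliability) of a survey scale.

    scores: a respondents x items matrix of answers, e.g. on a 1-5 scale.
    """
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: 1000 respondents, 8 items on a 1-5 scale.
# Purely random answers yield an alpha near 0; a well-constructed scale
# with correlated items should score noticeably higher.
rng = np.random.default_rng(seed=42)
answers = rng.integers(1, 6, size=(1000, 8))
print(f"Cronbach's alpha: {cronbach_alpha(answers):.2f}")
```

The point is not the formula itself, but that quantitative research offers this kind of numerical check at all. As we will see next, open interviews do not.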

Quality Criteria: A Quick Look at Theory

Careful: Open user interviews can’t meet the standards above

The criteria above refer to quantitative research: big surveys with many respondents and numerical answer scales. Therefore, we can use statistical means to see whether we meet our research standards. In an innovation project, though, we will most probably not conduct quantitative research, as it is time-consuming and expensive. Instead, we will likely conduct qualitative user interviews. These are more open-ended; they are conversations with a guest who is invited to narrate at length. We only prepare questions insofar as to keep the conversation on topic.

So, how do we ensure objectivity or reliability in an interview? Well… we don’t, exactly. In this type of conversation, the interviewer will always play an active role. Different interviewers might have different styles, and the same interviewer may have a bad day first and a good day later. No two interviews are the same, so we cannot rely on the criteria described above. We’ll have to ensure quality in some other way.

How do we ensure objectivity or reliability in an interview? Well… we don’t, exactly.

Christoph Erle, Iconstorm

User Research Standards for Innovation Projects

How to translate quality criteria into user research standards

First off, how best to ensure standards for qualitative research is an ongoing debate. Thus, the tools I built can by no means aspire to be some sort of definitive answer. (Scientists will have to approach that one; we can just be smart and use their work afterwards.) However, I did build a useful way to do research in innovation projects, and it works.

Basically, the approach “translates” scientific research standards into a user interview approach and combines it with tools that streamline the process. Mind you, most of this isn’t even new; it has been done in social research for a long time, although to my knowledge it has never been connected to design and innovation like this.

So let’s take a look!

1) “Objectivity”: Consistent replication of interview conditions

As I mentioned, conducting open interviews doesn’t allow for objectivity, because the researcher will influence the conversation. Since we can’t take the interviewer out of the equation, we have to make sure that a) all of our interviews take place under the best possible conditions and b) all interviewers are equipped to establish and maintain these conditions.

I would say that the goal in innovation projects should be to create a framework in which the interview guest feels comfortable talking openly and in detail, and giving honest insights into what is important to them. For the interview to be as intersubjective as possible, the purpose, goals, context, roles of all participants, and legal considerations of the interview should always be explicit to both interviewer and guest. Also, a “nice”, enabling starting point for a good conversation needs to be created.

Here is what we did to get there:

2) “Reliability”: Dependable generation of reproducible results

With a repeatable interview structure in place, the next step is to build a process that enables us to reliably get useful results from our interviews; meaning, results that connect to the rest of the innovation process and that can be reproduced based on how we conduct the interviews. For this, we built an infrastructure that allows us to remove all “background noise” from a conversation and only track content that helps us proceed later on.

Our solutions for this:

The difference: We don’t get opinions on what the best solution might be—rather, we get the picture of a context and then we design solutions ourselves.  

Christoph Erle, Iconstorm

3) “Validity”: Standardization of questions and interview focus

Asking the user might seem really straightforward, but it is not. If we ask people directly what they want, we will get a wide variety of answers that aren’t useful at all. They might propose solutions or ideas that are not viable or feasible; they might speculate based on incomplete knowledge; and what do we do if one user is really excited about an idea that another one deeply hates? Thus, we can’t depend on ideas or opinions. They often aren’t useful in our project.

Instead, we need to focus on information that users can give us with authority, stuff they actually know about. Which leaves us with the users themselves. Questions should therefore focus on user behavior in a context. What do people actually do? To what end? Which problems do they encounter? And so on. These are things a user can observe or state with confidence, as there is no speculation or inductive reasoning involved.

Guiding interviews like this takes a lot of practice, but once it clicks, an interviewer will get amazing results even from a single conversation. Equipped with simple questions that are easy to answer and that connect to innovation methods, we will now get answers that help us later in the project. The difference: We don’t get opinions on what the best solution might be—rather, we get the picture of a context and then we design solutions ourselves.

Interviewer Support:
These are some of the tools we built to streamline the process; they can now be used in any research project: a complete interview field guide, canvases for research design and preparation, handouts that help ask the right questions, and so on.

Some tips to build a user research process for design and innovation

Four steps to get started with user research

I won’t lie: Building something like this is hard work. At Iconstorm, we had very favorable conditions for it, as I originally started my career with a background in the social sciences before joining the design industry about six years ago. This combination might not be available to you, so a lot of legwork might be necessary to read up on empirical research methods and the like. Even without that extra legwork, it still took us more than a year to get this process off the ground (tests and iterations included).

If this doesn’t deter you, though, here are some pragmatic tips to get started.

  1. Do an audit: Look at which methods you use in your projects and analyze how they might fit together. What types of ideas and content do they include? Is there maybe some overlap?
  2. Reverse engineer your methods: Think about how to ask questions that lead to the content you need. Make a list of examples for each content category you find. Don’t forget, the questions need to be simple and every guest needs to be able to answer them with confidence.
  3. Define a syntax: Define some rules for how to collect statements so you can remove all the clutter from your interviews (see the sketch after this list). Test the rules after your first interviews; keep what works and iterate on what doesn’t.
  4. Write a field guide: Look up suitable practices on how to do this; a university near you might have literature on the topic. Then it’s another matter of build, test, iterate.
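To illustrate step 3: a statement “syntax” can be as simple as a fixed shape that every recorded insight has to fit. The following is a minimal sketch in Python; the fields are hypothetical examples, not our actual schema, so adapt them to the content categories from your audit:

```python
from dataclasses import dataclass

# Hypothetical statement syntax: every insight from an interview is captured
# in the same shape, so small talk and clutter never enter the data.
@dataclass
class Statement:
    guest_id: str       # pseudonymized reference to the interview guest
    context: str        # the situation the guest described
    behavior: str       # what the guest actually does in that situation
    goal: str           # what they are trying to achieve by doing it
    problem: str = ""   # friction they encounter, if any
    verbatim: str = ""  # optional supporting quote from the transcript

# An invented example of a collected statement:
statements = [
    Statement(
        guest_id="P-07",
        context="monthly reporting",
        behavior="copies figures by hand from three tools into a spreadsheet",
        goal="have one set of numbers the whole team trusts",
        problem="the figures are outdated by the time the report is done",
    )
]
```

The point of the fixed shape is that anything that doesn’t fit it, such as opinions, speculation, or pleasantries, simply has nowhere to go.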

With these four steps, you can start conducting interviews and get a lot of experience under your belt. Of course, if you actually want to get serious about this, the next project would be to build suitable infrastructure for your process. Depending on what you do, you might want a database, means to record voice or video (online or on site), legal consent forms for participants, or even project management tooling for bigger projects. Also, some means of bringing the results from the database into a workshop. (I’m actually working on another blog post on this topic.)
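As a rough idea of where to start with such a database: the statement syntax sketched above maps naturally onto a single table. Here is a sketch using SQLite, chosen only because it needs no server; any storage that preserves your syntax will do:

```python
import sqlite3

# Minimal research database, mirroring the hypothetical Statement shape above.
conn = sqlite3.connect("research.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS statements (
        id        INTEGER PRIMARY KEY,
        guest_id  TEXT NOT NULL,  -- pseudonymized guest reference
        context   TEXT NOT NULL,  -- situation the guest described
        behavior  TEXT NOT NULL,  -- what they actually do
        goal      TEXT NOT NULL,  -- what they try to achieve
        problem   TEXT,           -- friction encountered, if any
        verbatim  TEXT            -- optional supporting quote
    )
""")
conn.commit()
conn.close()
```

From here, pulling statements into a workshop is a single query away, and consent records or recording links can live in sibling tables keyed to guest_id.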

All of this is something you will more or less need to tailor to your needs, and it is not strictly required to test your process. So, you might want to start with the four steps described above and see what works for you. And if all of this seems a little much, you can also just hit me up on LinkedIn.

With all that said: Good Luck out there!