TL;DR: user interviews.
Directly ask users about their experiences with a product to understand their thoughts, feelings, and problems
✅ Provides detailed insights that surveys may miss ❌ May not represent the wider user base; depends on the user’s memory and honesty
User interviews are a qualitative research method that involves having open-ended and guided discussions with users to gather in-depth insights about their experiences, needs, motivations, and behaviors.
Typically, you would ask a few questions on a specific topic during a user interview and analyze participants' answers. The results you get will depend on how well you form and ask questions, as well as follow up on participants’ answers.
“As a researcher, it's our responsibility to drive the user to their actual problems,” says Yuliya Martinavichene, User Experience Researcher at Zinio. She adds, “The narration of incidents can help you analyze a lot of hidden details with regard to user behavior.”
That’s why you should:
Tanya Nativ, Design Researcher at Sketch, recommends defining the goals and assumptions internally. “Our beliefs about our users’ behavior really help to structure good questions and get to the root of the problem and its solution,” she explains.
It's easy to be misunderstood if you don't have experience writing interview questions. You can get someone to review them for you or use our Question Bank of 350+ research questions.
This method is typically used at the start and end of your project. At the start of a project, you can establish a strong understanding of your target users, their perspectives, and the context in which they’ll interact with your product. By the end of your project, new user interviews—often with a different set of individuals—offer a litmus test for your product's usability and appeal, providing firsthand accounts of experiences, perceived strengths, and potential areas for refinement.
TL;DR: field studies.
Observe users in their natural environment to inform design decisions with real-world context
✅ Provides contextual insights into user behavior in real-world situations ✅ Helps identify external factors and conditions that influence user experience ❌ Can be time-consuming and resource-intensive to conduct ❌ Participants may behave differently when they know they are being observed (Hawthorne effect)
Field studies—also known as ethnographic research—are research activities that take place in the user’s environment rather than in your lab or office. They’re a great method for uncovering context, unknown motivations, or constraints that affect the user experience.
An advantage of field studies is observing people in their natural environment, giving you a glimpse at the context in which your product is used. It’s useful to understand the context in which users complete tasks, learn about their needs, and collect in-depth user stories.
This method can be used at all stages of your project—two key times you may want to conduct field studies are:
TL;DR: focus groups.
Gather qualitative data from a group of users discussing their experiences and opinions about a product
✅ Allows for diverse perspectives to be shared and discussed ❌ Group dynamics may influence individual opinions
A focus group is a qualitative research method that includes the study of a group of people, their beliefs, and opinions. It’s typically used for market research or gathering feedback on products and messaging.
Focus groups can help you better grasp:
As with any qualitative research method, the quality of the data collected through focus groups is only as robust as the preparation. So, it’s important to prepare a UX research plan you can refer to during the discussion.
Here are some things to consider:
It’s easier to use this research technique when you're still formulating your concept, product, or service—to explore user preferences, gather initial reactions, and generate ideas. This is because, in the early stages, you have flexibility and can make significant changes without incurring high costs.
Another way some researchers employ focus groups is post-launch to gather feedback and identify potential improvements. However, you can also use other methods here which may be more effective for identifying usability issues. For example, a platform like Maze can provide detailed, actionable data about how users interact with your product. These quantitative results are a great accompaniment to the qualitative data gathered from your focus group.
TL;DR: diary studies.
Get deep insights into user thoughts and feelings by having them keep a product-related diary over a set period of time, typically a couple of weeks
✅ Gives you a peek into how users interact with your product in their day-to-day ❌ Depends on how motivated and dedicated the users are
Diary studies involve asking users to track their usage and thoughts on your product by keeping logs or diaries, taking photos, explaining their activities, and highlighting things that stood out to them.
“Diary studies are one of the few ways you can get a peek into how users interact with our product in a real-world scenario,” says Tanya.
A diary study helps you tell the story of how products and services fit into people’s daily lives, and the touch-points and channels they choose to complete their tasks.
There are several key questions to consider before conducting diary research, from what kind of diary you want—freeform or structured, and digital or paper—to how often you want participants to log their thoughts.
Remember to determine the trigger: a signal that lets the participants know when they should log their feedback. Tanya breaks these triggers down into the following:
Diary studies are often valuable when you need to deeply understand users' behaviors, routines, and pain points in real-life contexts. This could be when you're:
TL;DR: surveys.
Collect quantitative data from a large sample of users about their experiences, preferences, and satisfaction with a product
✅ Provides a broad overview of user opinions and trends ❌ May lack in-depth insights and context behind user responses
Although surveys are primarily used for quantitative research, they can also provide qualitative data, depending on whether you use closed or open-ended questions:
Matthieu Dixte, Product Researcher at Maze, explains the benefit of surveys: “With open-ended questions, researchers get insight into respondents' opinions, experiences, and explanations in their own words. This helps explore nuances that quantitative data alone may not capture.”
So, how do you make sure you’re asking the right survey questions? Gregg Bernstein, UX Researcher at Signal, says that when planning online surveys, it’s best to avoid questions that begin with “How likely are you to…?” Instead, Gregg says asking questions that start with “Have you ever…?” will prompt users to give more specific and measurable answers.
Make sure your questions:
To learn more about survey design, check out this guide.
While surveys can be used at all stages of project development, and are ideal for continuous product discovery, the specific timing and purpose may vary depending on the research goals. For example, you can run surveys at:
TL;DR: card sorting.
Understand how users categorize and prioritize information within a product or service to structure your information in line with user expectations
✅ Helps create intuitive information architecture and navigation ❌ May not accurately reflect real-world user behavior and decision-making
Card sorting is an important step in creating an intuitive information architecture (IA) and user experience. It’s also a great technique to generate ideas, naming conventions, or simply see how users understand topics.
In this UX research method, participants are presented with cards featuring different topics or information, and tasked with grouping the cards into categories that make sense to them.
There are three types of card sorting:
Card sorting type comparison table
You can run a card sorting session using physical index cards or digitally with a UX research tool like Maze to simulate the drag-and-drop activity of dividing cards into groups. Running digital card sorting is ideal for any type of card sort, and moderated or unmoderated sessions.
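However you run the session, a common first analysis step is a co-occurrence (similarity) count: how often participants placed each pair of cards in the same group. The sketch below shows the idea with made-up card names and participant data (it's an illustration of the general technique, not a Maze feature):

```python
from itertools import combinations
from collections import Counter

def cooccurrence(sorts):
    """Count how often each pair of cards lands in the same group.

    `sorts` has one entry per participant: a list of groups,
    where each group is a set of card names.
    """
    pairs = Counter()
    for groups in sorts:
        for group in groups:
            # Sort names so each pair has one canonical key
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

# Two participants sorting four hypothetical cards
sorts = [
    [{"Pricing", "Billing"}, {"Profile", "Settings"}],
    [{"Pricing", "Billing", "Settings"}, {"Profile"}],
]
agreement = cooccurrence(sorts)
print(agreement[("Billing", "Pricing")])  # → 2 (grouped together by both participants)
```

High co-occurrence counts suggest cards that belong under the same category in your information architecture; low counts flag content users don't associate.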
Read more about card sorting and learn how to run a card sorting session here.
Card sorting isn’t limited to a single stage of design or development—it can be employed anytime you need to explore how users categorize or perceive information. For example, you may want to use card sorting if you need to:
TL;DR: tree testing.
Evaluate the findability of existing information within a product's hierarchical structure or navigation
✅ Identifies potential issues in the information architecture ❌ Focuses on navigation structure, not visual design or content
During tree testing, a text-only version of the site is given to your participants, who are asked to complete a series of tasks requiring them to locate items on the app or website.
The data collected from a tree test helps you understand where users intuitively navigate first, and is an effective way to assess the findability, labeling, and information architecture of a product.
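Two metrics commonly reported per tree-testing task are the success rate (participants who reached the correct node) and directness (those who got there without backtracking). A minimal sketch with hypothetical per-participant results:

```python
# Hypothetical results for one tree-testing task:
# 'success' = reached the correct node; 'direct' = no backtracking on the way
results = [
    {"success": True,  "direct": True},
    {"success": True,  "direct": False},
    {"success": False, "direct": False},
    {"success": True,  "direct": True},
]

n = len(results)
success_rate = sum(r["success"] for r in results) / n
directness = sum(r["direct"] for r in results) / n  # share of direct successes

print(f"Success: {success_rate:.0%}, direct: {directness:.0%}")  # Success: 75%, direct: 50%
```

A high success rate with low directness often means the label users want is eventually findable, but first-click behavior points somewhere else in the tree.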
We recommend keeping these sessions short, ranging from 15 to 20 minutes, and asking participants to complete no more than ten tasks. This helps ensure participants remain focused and engaged, leading to more reliable and accurate data, and avoiding fatigue.
If you’re using a platform like Maze to run remote testing, you can easily recruit participants based on various demographic filters, including industry and country. This way, you can uncover a broader range of user preferences, ensuring a more comprehensive understanding of your target audience.
To learn more about tree testing, check out this chapter.
Tree testing is often done at an early stage in the design or redesign process. That’s because it’s more cost-effective to address errors at the start of a project—rather than making changes later in the development process or after launch.
However, it can be helpful to employ tree testing as a method when adding new features, particularly alongside card sorting.
While tree testing and card sorting can both help you with categorizing the content on a website, it’s important to note that they each approach this from a different angle and are used at different stages during the research process. Ideally, you should use the two in tandem: card sorting is recommended when defining and testing a new website architecture, while tree testing is meant to help you test how the navigation performs with users.
TL;DR: usability testing.
Observe users completing specific tasks with a product to identify usability issues and potential improvements
✅ Provides direct insights into user behavior and reveals pain points ❌ Conducted in a controlled environment, may not fully represent real-world usage
Usability testing evaluates your product with people by getting them to complete tasks while you observe and note their interactions (either during or after the test). The goal of conducting usability testing is to understand if your design is intuitive and easy to use. A sign of success is if users can easily accomplish their goals and complete tasks with your product.
There are various usability testing methods that you can use, such as moderated vs. unmoderated or qualitative vs. quantitative—and selecting the right one depends on your research goals, resources, and timeline.
Usability testing is usually performed with functional mid- or hi-fi prototypes. If you have a Figma, InVision, or Sketch prototype ready, you can import it into a platform like Maze and start testing your design with users immediately.
The tasks you create for usability tests should be:
Be mindful of using leading words such as ‘click here’ or ‘go to that page’ in your tasks. These instructions bias the results by helping users complete their tasks—something that doesn’t happen in real life.
With Maze, you can test your prototype and live website with real users to filter out cognitive biases, and gather actionable insights that fuel product decisions.
To inform your design decisions, you should do usability testing early and often in the process. Here are some guidelines to help you decide when to do usability testing:
To learn more about usability testing, check out our complete guide to usability testing .
TL;DR: five-second testing.
Gauge users' first impressions and understanding of a design or layout
✅ Provides insights into the instant clarity and effectiveness of visual communication ❌ Limited to first impressions, does not assess full user experience or interaction
In five-second testing, participants are (unsurprisingly) given five seconds to view an image like a design or web page, and then they’re asked questions about the design to gauge their first impressions.
Why five seconds? According to data, 55% of visitors spend less than 15 seconds on a website, so it’s essential to grab someone’s attention in the first few seconds of their visit. With a five-second test, you can quickly determine what information users perceive and their impressions during the first five seconds of viewing a design.
And if you’re using Maze, you can simply upload an image of the screen you want to test, or browse your prototype and select a screen. Plus, you can star individual comments and automatically add them to your report to share with stakeholders.
Five-second testing is typically conducted in the early stages of the design process, specifically during initial concept testing or prototype development. This way, you can evaluate your design's initial impact and make early refinements or adjustments to ensure its effectiveness before the design moves into development.
To learn more, check out our chapter on five-second testing .
TL;DR: A/B testing.
Compare two versions of a design or feature to determine which performs better based on user engagement
✅ Provides data-driven insights to guide design decisions and optimize user experience ❌ Requires a large sample size and may not account for long-term effects or complex interactions
A/B testing, also known as split testing, compares two or more versions of a webpage, interface, or feature to determine which performs better regarding engagement, conversions, or other predefined metrics.
It involves randomly dividing users into different groups and giving each group a different version of the design element being tested. For example, let's say the primary call-to-action on the page is a button that says ‘buy now’.
You're considering making changes to its design to see if it can lead to higher conversions, so you create two versions:
Over a planned period, you measure metrics like click-through rates, add-to-cart rates, and actual purchases to assess the performance of each variation. You find that Group B had significantly higher click-through and conversion rates than Group A. This indicates that showing the button above the product description drove higher user engagement and conversions.
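To judge whether a difference like Group B's is real rather than noise, A/B results are usually run through a significance test. The sketch below applies a standard two-proportion z-test to made-up conversion counts (the numbers and function are illustrative, not Maze functionality):

```python
from math import erf, sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical: 1,000 visitors per group, 50 vs. 72 purchases
z, p = two_proportion_z(conv_a=50, n_a=1000, conv_b=72, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so B's lift is unlikely to be chance
```

In practice, you'd fix the sample size and significance threshold before the experiment starts, rather than peeking at results as they come in.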
Check out our A/B testing guide for more in-depth examples and guidance on how to run these tests.
A/B testing can be used at all stages of the design and development process—whenever you want to collect direct, quantitative data and confirm a suspicion, or settle a design debate. This iterative testing approach allows you to continually improve your website's performance and user experience based on data-driven insights.
TL;DR: concept testing.
Evaluate users' reception and understanding of a new product, feature, or design idea before moving on to development
✅ Helps validate and refine concepts based on user feedback ❌ Relies on users' perception and imagination, may not reflect actual use
Concept testing is a type of research that evaluates the feasibility, appeal, and potential success of a new product before you build it. It centers the user in the ideation process, using UX research methods like A/B testing, surveys, and customer interviews.
There’s no one way to run a concept test—you can opt for concept testing surveys, interviews, focus groups, or any other method that gets qualitative data on your concept.
*Dive into our complete guide to concept testing for more tips and tricks on getting started.*
Concept testing helps gauge your audience’s interest, understanding, and likelihood-to-purchase, before committing time and resources to a concept. However, it can also be useful further down the product development line—such as when defining marketing messaging or just before launching.
The best research type varies depending on your project, your objectives, and the stage you’re in. Ultimately, the ideal type of research is the one that provides the insights you need with the resources you have available.
For example, if you're at the early ideation or product discovery stage, generative research methods can help you generate new ideas, understand user needs, and explore possibilities. As you move to the design and development phase, evaluative research methods and quantitative data become crucial.
Discover the UX research trends shaping the future of the industry and why the best results come from a combination of different research methods.
In an ideal world, a combination of all the insights you gain from multiple types of user research methods would guide every design decision. In practice, this can be hard to execute due to resources.
Sometimes the right methodology is the one you can get buy-in, budget, and time for.
Gregg Bernstein, UX Researcher at Signal
UX research tools can help streamline the research process, making regular testing and application of diverse methods more accessible—so you always keep the user at the center of your design process. Some other key tips to remember when choosing your method are:
A good way to inform your choice of user experience research method is to start by considering your goals. You might want to browse UX research templates or read about examples of research.
Michael Margolis, UX Research Partner at Google Ventures, recommends answering questions like:
If your team is very early in product development, generative research—like field studies—makes sense. If you need to test design mockups or a prototype, evaluative research methods—such as usability testing—will work best.
This is something they’re big on at Sketch, as we heard from Design Researcher, Tanya Nativ. She says, “In the discovery phase, we focus on user interviews and contextual inquiries. The testing phase is more about dogfooding, concept testing, and usability testing. Once a feature has been launched, it’s about ongoing listening.”
If you're looking for rich, qualitative data that delves into user behaviors, motivations, and emotions, then methods like user interviews or field studies are ideal. They’ll help you uncover the ‘why’ behind user actions.
On the other hand, if you need to gather quantitative data to measure user satisfaction or compare different design variations, methods like surveys or A/B testing are more suitable. These methods will help you get hard numbers and concrete data on preferences and behavior.
Think of UX research methods as building blocks that work together to create a well-rounded understanding of your users. Each method brings its own unique strengths, whether it's human empathy from user interviews or the vast data from surveys.
But it's not just about choosing the right UX research methods; the research platform you use is equally important. You need a platform that empowers your team to collect data, analyze, and collaborate seamlessly.
Product research is simple with Maze. From tree testing to card sorting, prototype testing to user interview analysis—Maze makes getting actionable insights easy, whatever method you opt for.
Meanwhile, if you want to know more about testing methods, head on to the next chapter all about tree testing.
Conduct impactful UX research with Maze and improve your product experience and customer satisfaction.
How do you choose the right UX research method?
Choosing the right research method depends on your goals. Some key things to consider are:
What is the best UX research method?
The best research method is the one you have the time, resources, and budget for that meets your specific needs and goals. Most research tools, like Maze, will accommodate a variety of UX research and testing techniques.
When to use which user experience research method?
Selecting which user research method to use—if budget and resources aren’t a factor—depends on your goals. UX research methods provide different types of data:
Identify your goals, then choose a research method that gathers the user data you need.
What results can I expect from UX research?
Here are some of the key results you can expect from actioning the insights uncovered during UX research:
User interviews 101.
September 17, 2023
A user interview is a popular UX research method often used in the discovery phase.
User interview: A research method where the interviewer asks participants questions about a topic, listens to their responses, and follows up with further questions to learn more.
The term “user interview” is unique to the UX field. In other areas, like market research or social science, the same method is called an in-depth interview, a semi-structured interview, or a qualitative interview.
In this article: why conduct user interviews, user interviews vs. usability tests, how to do a user interview, whether interviews can be combined with other methods, and the limitations of interviews.
When performed well, user interviews provide in-depth insight into who your users are, their lives, experiences, and challenges . Learning these things helps teams identify solutions to make users’ lives easier. As a result, user interviews are an excellent tool to use in discovery.
Here are some of the many things you can learn by interviewing your users:
User interviews help teams build empathy for their users . When teams watch interviews, they can put themselves in their users’ shoes.
Data gathered from user interviews can be used to construct various UX artifacts, including
Interviews are versatile; they can be used to learn about human experiences or about a customer's experience with one of your existing products. For example, imagine you are working on an app used to track calorie intake. You could interview people about their experiences using the app or about their journey to become healthier (or both).
User interviews are often confused with usability tests. While they sound similar and are both typically one-on-one, these two methods are very different and should be used for different purposes.
| | User interview | Usability test |
|---|---|---|
| Purpose | Learn about your users, their experiences, needs, and pain points | Evaluate a design: how it performs, what’s not working, and why |
| Research type | Attitudinal: we collect participants’ reported behaviors, thoughts, and feelings | Behavioral: we directly observe how users interact with a design |
| When used | Empathize stage of the design-thinking model (or in discovery) | Test stage of the design-thinking model (or when a team is working on a product in alpha or beta) |
| Design | No design needed: participants don’t review or try a design | Design needed: participants interact with a design |
| Facilitator–participant interaction | More natural: regular eye contact; facilitators are warmer and more approachable | More rigid: sporadic eye contact; facilitators avoid being too friendly |
Think of an interview as a research study, not a sales session or an informal conversation. Like any research study, an interview should have research goals (or research questions). Goals that are too broad (like learning about users) can result in interviews that fail to produce useful or actionable insights. Examples of reasonable research goals include:
A concise, concrete set of goals that relate to your target users’ specific behaviors or attitudes can help you gather helpful and actionable insights. These goals will influence the questions you’ll ask in your interviews.
An interview guide is used to direct the conversation and help you answer your research goals. An interview guide should contain a few well-designed, open-ended questions that get participants talking and sharing their experiences.
Examples of good open-ended questions include:
Jog the memory by asking about specific events rather than about general processes. Remembering a particular incident will nudge the user’s memory and enable them to give you meaningful details.
An interview will also contain followup questions to gather more-detailed information . When constructing the guide, these questions are often nested underneath the main questions. Construct followup questions based on your research goals. Anticipating different responses can help you identify what followup questions to ask. Examples include:
An interview guide can be used flexibly: interviewers don’t need to move through questions linearly. They can skip questions, spend longer on some questions, or ask questions not in the guide. (See our interview-guide article for an example guide.)
Even the best guide may need to be tweaked after the first interview. Piloting allows you to identify what tweaks are needed before running all your interviews. You can pilot your guide with a friend or colleague if the interview topic isn’t too specialized. Or, you can recruit a target user (or two).
Piloting helps you learn:
Since interviews are a qualitative research method, it’s okay to continue making minor tweaks to your guide as you complete your interviews. However, avoid changing your research goals mid-study: if the goals shift partway through, you may finish without enough relevant data to answer any of them.
Some participants can feel nervous at the beginning of the interview, especially if they’re not sure what to expect. Start by talking through the purpose of the interview, what kinds of questions will be asked, and how the information will be used. Slow down your pace of speech. Talking slowly has a calming effect and indicates that you are not anxious and have time to listen.
Start with questions that are easy to answer, such as Tell me a bit about yourself or What do you like to do in your spare time? These questions are easy to answer and can get participants comfortable talking. Avoid asking questions likely to be interpreted as personal or judgmental, such as What was the last book you read? This question assumes the user read a book recently; if they didn’t, they might feel stupid.
People are more likely to remember, talk, and let their guard down if they feel relaxed and trust the interviewer and the process. Keep in mind that there’s a big difference between rapport and friendship. The user does not have to like you, think you’re funny, or want to invite you out for a cup of coffee in order to trust you enough to be interviewed.
Build rapport by showing you’re listening and by asking related questions. You can show you’re listening by using verbal and nonverbal cues. Verbal cues include:
Nonverbal cues include:
Avoid interrupting or rushing participants as these behaviors will harm your ability to build rapport.
Ask your prepared followup questions if the participant did not cover them when sharing their experiences. Additionally, ask further questions that probe into your participant’s responses. These questions help you to uncover those important motivations, mental models, perceptions, and attitudes.
Probing questions include:
Since you won’t know when you might use them, probing questions are not usually prepared in advance. However, if you know you might forget to use them, write them at the top of your guide or on an index card.
Yes. You can mix interviews with other methods, such as:
Since interviews are an attitudinal method, they collect reported behaviors (rather than observed behavior). Some limitations of self-reported data include:
If we want to know what users actually do, we need to observe them or collect data about their behavior (such as through analytics and other behavioral metrics).
Another limitation of interviews is that the quality of the data collected is very much dependent on the interviewer's skill. If the interviewer asks many leading questions, the validity of the data will be compromised.
Interviews are a popular method to learn about users: what they think, do, and even need. Treat user interviews like a research study, not an informal chat. Compose research goals before crafting a guide. During your interviews be careful not to lead participants and make sure to follow up with further questions. Finally, complement interviews with observation-based research to attain an accurate and richer picture of users' experiences.
Watch the video: https://www.youtube.com/watch?v=jy-QGuWE7PQ
User interviews are a crucial part of the product design process because they allow us to gain insight into the needs, behaviors, and motivations of the product users.
The main goal of user interviews is to understand the user’s pain points and needs. These key insights will help us to make decisions about the UX design and ensure that the final product meets the user’s needs.
The purpose of this article is to explain the technique from every angle: why we use it during UX research, what kind of data you can collect, how to prepare your questions, what you can do with the data, and plenty of tips to help you run a great user interview process.
In this article: quantitative and qualitative research; what user interviews are, and what they are not; when and why you should conduct user interviews; the limitations of user interviews; and preparing for a user interview, running it, and following up afterward.
We can collect two kinds of data during user research: qualitative and quantitative data.
Quantitative data comes from analytics tools that tell us what happens. For instance, they can show how many users add products to their cart and leave the page without buying anything.
Qualitative data comes from interviews, observations, and focus groups to understand why certain things happen. For example, if we saw that 35 percent of people abandoned their shopping carts with items in them, we could interview them to find out why.
Quantitative and qualitative data are valuable information that help us understand our users. Quantitative data tells us what happens, and qualitative data tells us why.
We can indeed gain insights from every conversation we have with a user. But rather than a simple conversation we have along the way, an interview is a structured process we do during the UX research process with a clear objective: to know the user and their needs better. Let’s take a closer look at what a user interview is not.
User interviews aren’t sales meetings. During a user interview, we can’t push our product. Product designers don’t make sales, but sometimes a sales rep is invited along; we should explain to them that in that meeting we are listening to the user, not selling them anything.
User interviews are not the place to discuss upcoming features or potential solutions to the user’s problems.
First, users can’t account for the product’s architecture or the company’s technical dependencies.
Aside from that, users tend to focus more on their existing solutions than on imagining new ones. As Henry Ford reportedly said, “If I had asked people what they wanted, they would have said faster horses.”
When product designers run a usability test, they check whether the user understands the designer’s solution. The objective of a usability test is to determine whether the user can use the interface to perform certain actions and whether the flow is smooth.
Therefore, in the user interview, we ask the user about their needs and pain points, and in the usability test, we want to see if they understand the solution.
Interviewing users is a very effective way to build products that solve people’s problems, and it can be used in different ways during a product’s lifecycle.
First, we can use user interviews to find a problem, for example, before we start building a startup company. We can focus on one topic, like buying online, and ask people what they don’t like about it. Maybe we’ll find a big problem many people suffer from, and we’ll figure out that there’s much to fix.
The same applies if we already have a product and want to grow the business and solve more problems: we can conduct interviews and collect valuable information.
As a result of user interviews, you will have lots of data that you can use to create a user persona and user journey. That way, all the team members will be on the same page about the user’s needs and what the team wants to solve. Also, making decisions will be easier since everyone knows what the user needs.
First, people often can’t remember exactly what they did, or they perform tasks automatically, so they forget to share critical details. In addition, since everyone is different, the interviewer needs to be professional and know how to interact with each person so that they open up and share their experience.
It is important to follow certain steps before conducting a user interview, so let’s go over them.
You need to write down your objectives for the interview. Three to five goals is typical; if you have more than five, your objectives probably need to be more focused.
For instance, if you have a website that sells shoes, your goals could be:
Take the time to build this part well because if it’s done right, all the other parts will be easier.
It’s critical to consider your research goals and choose individuals who are relevant to those goals. For example, if you have an online store that sells running shoes, you should recruit people who actually run; you won’t get accurate information from users who don’t.
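The screening step above amounts to a simple filter over your candidate pool. A minimal sketch (the field names and data are invented for illustration):

```python
# Hypothetical recruitment pool for a running-shoe study.
candidates = [
    {"name": "Ana",  "runs_per_week": 3},
    {"name": "Ben",  "runs_per_week": 0},   # doesn't run -> screened out
    {"name": "Caro", "runs_per_week": 5},
]

# Keep only candidates whose habits match the research goal.
relevant = [c["name"] for c in candidates if c["runs_per_week"] > 0]
print(relevant)  # ['Ana', 'Caro']
```

In practice this screening usually happens via a short survey before scheduling, but the principle is the same: filter on behavior relevant to your goals, not on availability alone.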
The number of people you interview can vary, but generally, you should speak with between 3 and 8 people. If you interview fewer than three people, you might not get a representative sample, and interviewing more than eight people might take a lot of time and give you no additional insight. Aiming for a total of 5 interviews is often a good balance, as it allows for identifying patterns while still being manageable.
Finding people to interview can be tricky, but here are some places you can find them:
It is best to recruit users for an interview through personal connections or introductions rather than cold emails. For example, a sales team member can introduce you to a potential candidate, which makes candidates far more receptive to being interviewed.
Preparing email templates will help you work faster. When you make first contact with a user, send an email explaining who you are and why you are conducting the interview. Follow up with another email explaining how the interview will proceed and setting up a time, and finish with a final email thanking the candidate.
To maintain a successful relationship with users, you must be honest with them and quickly respond to their emails.
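The three emails described above can be kept as simple fill-in templates. Here is an illustrative sketch using Python's standard-library `string.Template`; all names, wording, and placeholders are invented:

```python
from string import Template

# Three reusable templates: first contact, logistics, and thanks.
templates = {
    "intro": Template(
        "Hi $name,\n\nI'm $researcher, a designer at $company. We're "
        "running short interviews to learn how people shop for running "
        "shoes. Would you be open to a 30-minute chat?"
    ),
    "logistics": Template(
        "Hi $name,\n\nThanks for agreeing to talk! The interview is a "
        "casual conversation, and there's nothing to prepare. "
        "Does $slot work for you?"
    ),
    "thanks": Template(
        "Hi $name,\n\nThank you for your time today. Your input will "
        "directly shape what we build next."
    ),
}

msg = templates["intro"].substitute(
    name="Dana", researcher="Alex", company="Acme Shoes"
)
print(msg.splitlines()[0])  # -> "Hi Dana,"
```

Even if you never script this, writing the three messages once and reusing them keeps your outreach consistent and fast.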
You can observe the user’s body language better during an onsite interview. On the other hand, remote interviews have many advantages, such as interviewing people in different time zones and locations. In addition, it is much simpler since the other person doesn’t need to leave their comfort zone.
I prefer the remote option, but if circumstances require the interview to be done onsite, that works too.
The following tips will help you prepare for an onsite or online interview.
Interview onsite
Interview online
A clear script with questions is essential to a successful interview. The script has four sections.
First, we would like to introduce ourselves and the notetaker (a person who helps us take notes, such as a UX designer, a product manager, or a developer), explain why we are conducting the interview, how it will proceed, and that we are not testing the candidates or their knowledge. This will allow them to feel comfortable during the interview, so they will be more willing to share information.
If you want to record the meeting, now is the time to ask for their permission (I suggest you ask in writing ahead of time). Lastly, ask if they have any more questions before continuing.
In this section, you will ask 3–5 questions to break the ice with your interviewee. For example, you might ask:
Here you ask the main research questions. Asking 10–12 questions will give you good results, but you should prepare 20–25 questions since some people are difficult to get information from, so more questions will help. To make it more structured, think about what you want to ask at the beginning and what is at the end. Here are some tips for writing them well:
Ask questions that align with your research goals. Don’t ask questions that are out of the scope of what you’re looking for.
Avoid leading questions because they can lead to biased or inaccurate responses and make the participant feel uncomfortable or pressured to give a certain answer. Let’s say we do user research for an online store that sells running shoes.
If you ask, “Can you tell me which websites you bought running shoes from?” you assume that the user bought the shoe online, but you don’t know that.
A better question would be: “Where do you buy your running shoes?”
Avoid vague questions because they can lead to unclear or confusing responses, making the participant feel frustrated. Instead, ask clear and specific questions.
An example of a vague question is: What do you think of the shopping experience on the website?
Better questions can be:
Open questions are better than closed questions. Open questions are better in user interviews because they allow the participants to express their thoughts and give you more insight. The answer to a closed-ended question can only be yes or no, so it won’t help us find the information we need.
An example of a closed question is: “Do you buy your running shoes online?”
An open question will be: “Can you tell me how you buy running shoes?”
Sometimes we want to ask a closed question as a stepping stone to follow-up questions. It’s fine to ask a closed question and then an open one so you can better understand how the user behaves. For example, you can ask the user, “Do you buy your running shoes online?”
If the answer is YES, you may ask:
If the answer is NO, you may ask:
Try to start with a big question. Starting with a big question and then asking related questions will enable you to dig more profoundly step-by-step. You can ask, for example:
“Can you tell me when was the last time you bought new running shoes?”
Then ask subquestions like:
The last question: At the end of this part, you can ask a very open question that will give you more information about the user. For example, “If you had a magic wand that could help you choose your next running shoes, what would it do?”
At this point, we’d like to thank the interviewee and explain what we plan to do with the data. Before you end the interview, ask the interviewee if they have any questions. In that way, you give them the final say.
To get quality information from the interviewees, there are some key things you need to do during the interview.
This is the time to organize all the information you collected. You can summarize each interview and make the main points from it. Once you have summarized all the interviews, you can create a report that you can share with the product manager and the developers. In this way, the team can prioritize the following solutions it wishes to develop.
Further, if you are working on a new product, you can create a user persona and a user journey. As a result, all team members will be more knowledgeable about users and their needs.
All reports should be concise and to the point. Writing a long document is easy, but to effectively communicate the results, you must focus on the key points rather than getting bogged down in unnecessary detail. You can include links to the interview summaries if the team members need more explanation.
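One lightweight way to turn interview summaries into report-ready key points is to tag each summary with the themes it mentions and tally how often each theme recurs, so the report can lead with the most common pain points. A hypothetical sketch (the theme tags and data are invented):

```python
# Themes tagged per interview summary during synthesis.
from collections import Counter

interview_themes = {
    "interview_1": ["sizing_confusion", "slow_checkout"],
    "interview_2": ["sizing_confusion", "shipping_cost"],
    "interview_3": ["slow_checkout", "sizing_confusion"],
    "interview_4": ["shipping_cost"],
}

# Count how many interviews mention each theme.
counts = Counter(
    theme for themes in interview_themes.values() for theme in themes
)

for theme, n in counts.most_common():
    print(f"{theme}: mentioned in {n} of {len(interview_themes)} interviews")
```

A tally like this is not a substitute for careful affinity mapping, but it gives the team an at-a-glance ranking of pain points to prioritize.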
User interviews are a crucial part of the product design process, as they allow designers to gain insight into the users’ needs, behaviors, and motivations. This article reviewed all the details you need to know to perform user interviews effectively.
We started by discussing what qualitative and quantitative research is and how user interviews fit into these categories. We also looked at when to conduct user interviews and how to prepare for them. This included setting clear goals, identifying users to interview, and deciding between onsite and online interviews. We also discussed how to create a script for the interview.
Next, we saw tips for conducting a smooth interview, and finally, we discussed what to do with the data you collected.
Featured image source: IconScout