User Research - Nikki Anderson

6 min read

Based on Qualitative Researcher Dr Kriukow's video on YouTube. If you like this content, support the original creators by watching, liking and subscribing to their content.

TL;DR

User research centers on understanding users’ pain points, motivations, and behaviors, then translating those insights into product decisions.

Briefing

User research is a practical, mostly qualitative discipline focused on understanding real people’s pain points, motivations, and behaviors so product teams can make better decisions. In practice, it means going to the target users of a website, app, or service and gathering evidence through methods like interviews and usability testing—then translating what’s learned into actionable guidance for product managers, designers, developers, and business stakeholders.

A key point is that user research isn’t limited to “small-sample” insights or purely qualitative work. While the core work often relies on qualitative methods (especially generative/discovery interviews), it can also incorporate quantitative approaches when the business needs them—such as surveys, A/B testing, and analytics tools like Google Analytics and Firebase. The most effective teams increasingly blend both through “mixed methods,” rather than treating qualitative and quantitative as separate camps.

Common methods span the spectrum from discovery to evaluation. Interviews—often grouped under labels like generative research, discovery research, or listening sessions—help uncover what users are trying to achieve and why they behave the way they do. Usability testing evaluates prototypes (early skeletons of interfaces) to see where people struggle. Card sorting helps map how users mentally organize content by having participants sort labeled cards into categories that “make sense” to them; the goal is to align navigation and information architecture with user expectations. Journey mapping and concept testing show up as additional tools, especially when teams need to reason about experiences over time or validate ideas before building.

User research also extends beyond digital products. While the focus here is often on apps and websites, research can apply to physical spaces and services. Service design research examines how people move through environments—such as museums—where signage, information placement, and pathways shape whether the experience feels intuitive. Ethnography and contextual inquiry can capture how people behave in real settings, not just in controlled testing.

The conversation draws a sharp contrast between user research and academic research. Academic-style research can be slow and overly process-heavy for fast-moving product development, where decisions must fit existing company workflows. There’s also a perceived cultural gap: user research often isn’t taught in academia, even though many practitioners come from social science backgrounds like psychology or anthropology. Within industry, some researchers advocate for stricter, academic-like protocols and statistical rigor, while others argue user research should not mimic academia because the goals and constraints differ.

Finally, the path into user research is framed as practice-first. People often enter through adjacent roles such as research assistant, recruiter, or internship positions that build hands-on skills like recruiting participants, note-taking, and synthesizing findings. For career changers, transferable research skills matter, but hiring teams also expect familiarity with product development realities—agile vs. waterfall, design thinking, and the ability to produce work products like a portfolio. One practical strategy described is building portfolio case studies by personally testing an existing app and documenting what was learned, since theory alone can be hard to apply in interviews. The work itself is presented as highly collaborative and iterative—proposal discussions, budgeting, recruiting, and then sharing insights through meetings and workshops—aimed at reducing user stress and improving how products help people get from point A to point B.

Cornell Notes

User research is a people-centered discipline that uses qualitative methods—especially interviews and usability testing—to understand users’ motivations, pain points, and behaviors, then turns those findings into decisions product teams can act on. It’s not restricted to small qualitative samples: surveys, A/B testing, and analytics (including Google Analytics and Firebase) can be used, and mixed methods are increasingly common. User research applies beyond apps and websites, including service design and contextual inquiry in physical environments like museums. The field also differs from academia: industry research must fit product development timelines and workflows, so academic-style rigor and pacing don’t always translate directly. Getting into user research often requires practice and proof of applied skills, frequently via assistant/recruiter roles or a portfolio case study.

If user research is “qualitative,” why do teams still use surveys, A/B testing, and analytics?

Qualitative work is central because it uncovers motivations, pain points, and behaviors through direct engagement with users. But companies also need measurable evidence for decisions. Surveys can gather input from large groups (the transcript cites “over a thousand” as good practice), A/B testing compares two versions of an interface to see which performs better using metrics like conversion rate, and analytics tools such as Google Analytics and Firebase track behavior at scale. The newer trend is mixed methods—combining qualitative insight with quantitative validation—rather than choosing one side exclusively.
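
To make the conversion-rate comparison concrete, here is a minimal sketch (not from the transcript) with made-up visitor and conversion counts for two variants; a simple two-proportion z-test stands in for whatever significance check a team or experimentation platform actually uses.

```python
from math import sqrt

def conversion_rate(conversions: int, visitors: int) -> float:
    """Fraction of visitors who completed the target action."""
    return conversions / visitors

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: variant A (current page) vs. variant B (redesign).
print(conversion_rate(120, 2400))              # A: 0.05  -> 5.0% converted
print(conversion_rate(156, 2400))              # B: 0.065 -> 6.5% converted
print(two_proportion_z(120, 2400, 156, 2400))  # ~2.23, roughly p < 0.05 two-sided
```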

What does “generative research” mean in this context, and how is it used?

Generative research is essentially discovery-focused interviewing. It includes related labels like discovery research, listening sessions, and discovery interviews. The purpose is to understand what users are trying to achieve and why they do things—capturing motivations and behaviors—so teams can identify what needs to change to make users happier and help them accomplish goals more easily.

How does card sorting help design information architecture?

Card sorting tests how users mentally categorize content. Participants receive note cards labeled with keywords representing navigation elements (for example, in a food delivery app: restaurants, login/settings, filters, search). They then sort the cards into groups that “make sense” to them. The output guides where items should appear in menus and how navigation should be structured so users can find what they expect.
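
The transcript stops at the sorting itself; as an assumed illustration (not described in the video), one common way to turn those piles into navigation decisions is to count how often participants place two cards in the same group. A rough Python sketch using the hypothetical food-delivery labels above:

```python
from collections import Counter
from itertools import combinations

# Hypothetical sort results: each participant's groups of cards,
# using the food-delivery labels from the example above.
sessions = [
    [{"restaurants", "search", "filters"}, {"login", "settings"}],
    [{"restaurants", "search"}, {"filters", "login", "settings"}],
    [{"restaurants", "search", "filters"}, {"login", "settings"}],
]

# Tally how often each pair of cards ends up in the same group.
together = Counter()
for groups in sessions:
    for group in groups:
        for pair in combinations(sorted(group), 2):
            together[pair] += 1

# Pairs grouped together most often are candidates to share a menu section.
for pair, count in together.most_common():
    print(f"{pair}: together in {count} of {len(sessions)} sessions")
```

Dedicated card-sorting tools build similar similarity matrices (and dendrograms) automatically; the point is simply that groupings shared across participants suggest which items belong together in the navigation.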

Why does user research extend into physical spaces and services?

Because the same underlying goal—making experiences intuitive and reducing friction—applies outside software. Service design research looks at how people navigate environments like museums: where they walk first, how signage directs them, and how information placement tells a coherent story. Ethnography and contextual inquiry can capture real-world behavior patterns, not just what users say in interviews or what they do in a prototype test.

What’s the main reason user research doesn’t map neatly onto academic research?

Industry constraints differ. Academic research often follows slow, step-by-step processes and can require approvals and timelines that don’t fit product development. User research also aims at practical outcomes—informing product decisions—so it may use an adapted scientific method rather than replicating academic protocols. In addition, user research is often not taught in academia, creating a cultural split between researchers who want strict academic-style rigor and those who prioritize methods that work within company workflows.

How can someone with an academic research background break into user research?

Transferable skills help, but hiring teams also look for product-development fluency (e.g., agile vs. waterfall, design thinking) and applied evidence. The transcript describes updating a CV to emphasize relevant skills like qualitative interviewing, synthesis to insights, empathy building, and note-taking. It also stresses building a portfolio: one approach was creating a usability-testing case study by testing an existing app personally (using a World of Warcraft–related app as an example), gathering feedback from friends, and documenting findings to demonstrate practical ability even without prior industry experience.

Review Questions

  1. Which user research methods best support discovery versus evaluation, and what decisions do they typically inform?
  2. How do mixed methods change the way teams validate user insights, and what roles do surveys, A/B testing, and analytics play?
  3. What practical differences between academia and industry research affect how user research is planned, approved, and communicated?

Key Points

  1. User research centers on understanding users’ pain points, motivations, and behaviors, then translating those insights into product decisions.

  2. Qualitative methods (especially interviews and usability testing) are foundational, but quantitative tools like surveys, A/B testing, and analytics can be added when needed.

  3. Mixed methods are increasingly preferred because qualitative insight and quantitative validation complement each other.

  4. Card sorting is a practical way to align navigation and information architecture with how users naturally categorize content.

  5. User research applies to more than digital products, including service design and contextual inquiry in physical environments.

  6. Academic-style research processes often don’t fit product development timelines, so user research uses adapted methods that match company workflows.

  7. Breaking into user research typically requires hands-on practice and proof of applied skills, often via assistant/recruiter roles and a portfolio case study.

Highlights

User research is described as qualitative-first empathy work—talking to users to uncover motivations and pain points—so teams can make products that feel easier to use.
A/B testing is framed as a conversion-rate comparison: two versions of a page or interface are shown to different groups to see which performs better.
Card sorting turns users’ mental models into design structure by having participants group labeled content cards in ways that “make sense” to them.
User research can extend into service design, such as studying how museum visitors navigate pathways and interpret signage.
For career changers, transferable research skills aren’t enough without product-development context and a portfolio demonstrating applied usability work.

Topics

Mentioned

  • Nikki Anderson
  • Kriukow
  • UX
  • A/B testing
  • Google Analytics
  • Firebase