UX RESEARCH PROTOCOL — NUUN'S METHOD
Quick Answer: NUUN's UX research protocol sequences three research modes — discovery (what problem?), generative (what should we build?), and evaluative (does it work?) — inside a continuous operating model. Each mode has a specified sample, method, deliverable, and decision link. The protocol is method-agnostic but governance-strict: every study ships with a published methodology note and links to the product decisions it shaped.
THE THREE MODES
Discovery research. Run before a product bet is made. Open-ended, generative. Methods: exploratory interviews, ethnography, diary studies, journey mapping. Deliverable: problem frame.
Generative research. Run during design. Concept-testing, co-creation, tree testing, card sorting. Deliverable: design decisions.
Evaluative research. Run during and after build. Usability testing, A/B testing, accessibility auditing, post-launch behavioural analysis. Deliverable: validation and iteration queue.
Most teams run evaluative without discovery and ship products that solve the wrong problem efficiently. Discovery is the highest-leverage mode and the most often skipped.
THE PROTOCOL AT A GLANCE
| Mode | Goal | Typical Method | Sample | Cadence |
|---|---|---|---|---|
| Discovery | Frame the problem | 8–12 IDIs + field visits | Purposive | Pre-project |
| Generative | Shape the solution | Concept test, co-creation | 20–60 | During design |
| Evaluative | Validate and iterate | Usability, A/B, accessibility | 5–20 per iteration | Continuous |
DISCOVERY — WHAT TO RUN
In-depth interviews (IDIs). 45–60 minutes. Semi-structured. Aim for saturation, typically 8–12 interviews per segment.
Contextual inquiry / field visits. Observing real work in real context. Highest-leverage method for B2B and complex workflows.
Diary studies. Multi-day self-reported logs. Captures over-time behaviour that IDIs miss.
Journey mapping. Synthesis artefact across touchpoints, emotions, and systems. Living document; updated as evidence grows.
Desk research. Existing data, analytics, competitive scan, literature. Always before primary fieldwork, never in place of it.
GENERATIVE — WHAT TO RUN
Concept testing. Mid-fidelity stimuli (storyboards, clickable prototypes) tested with target users. Qualitative first, then quantitative validation.
Card sorting / tree testing. Information architecture validation. Low cost, high leverage pre-design.
Co-creation workshops. Structured generative sessions with users. Most useful for complex B2B workflows and regulated-industry design.
Quantitative preference testing. When two viable directions exist and the organization disagrees.
EVALUATIVE — WHAT TO RUN
Moderated usability testing. 5–8 participants per iteration. The single most under-used research method in enterprise.
Unmoderated usability. Platform-delivered (Maze, UserTesting, Lyssna) for wider reach and faster turnaround. A complement to moderated testing, not a replacement.
A/B testing. For live products with enough traffic. Statistical discipline required — sample size, test duration, guardrail metrics.
Accessibility auditing. WCAG 2.2 AA is baseline; WCAG 2.2 AAA for regulated or public-sector contexts. Automated plus human.
Analytics-driven evaluation. Session recordings, funnel analysis, heatmaps. Complementary to direct research, not substitute.
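The "statistical discipline" an A/B test requires starts with a power calculation. A minimal sketch of the standard per-arm sample size for a two-proportion test using the normal approximation (the baseline rate, lift, alpha, and power values below are illustrative assumptions, not NUUN defaults):

```python
import math
from statistics import NormalDist

def ab_sample_size_per_arm(p_base: float, p_variant: float,
                           alpha: float = 0.05, power: float = 0.80) -> int:
    """Minimum participants per arm to detect a shift from p_base to
    p_variant with a two-sided two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_base)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Illustrative: detecting a lift from a 10% to a 12% conversion rate
# needs a few thousand users per arm, which is why low-traffic
# products rarely have "enough traffic" for A/B testing.
print(ab_sample_size_per_arm(0.10, 0.12))
```

The same calculation, run before the test, also fixes the test duration: traffic divided by required sample, rounded up to whole weeks to avoid day-of-week effects.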
SAMPLING DISCIPLINE
Common misconception: more participants is always better. The five-users heuristic (Nielsen) applies to usability testing at a single design iteration; it does not apply to quantitative preference testing or segmentation-sensitive research.
Our defaults:
- Discovery IDIs: 8–12 per segment, saturation-based
- Generative concept tests (qualitative): 15–25
- Generative concept tests (quantitative): 200+ per cell
- Moderated usability: 5–8 per iteration
- Unmoderated usability: 20–40 per iteration
- Accessibility: 5–10 assistive-tech users per audit
DELIVERABLES — WHAT WE ACTUALLY SHIP
Discovery phase. Journey map, problem frame, research-backed opportunity areas, video reels of customer moments.
Generative phase. Concept validation report, IA recommendation, annotated prototype with research rationale.
Evaluative phase. Usability findings prioritized by severity and frequency; accessibility audit with remediation roadmap; A/B test readouts with decision.
Every deliverable has a published methodology note with sample, method, and limitations. Video and audio reels accompany readouts — raw customer voice beats any paraphrase.
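Prioritizing usability findings by severity and frequency reduces to a simple scored sort. A minimal sketch (the four-point severity scale, the multiplicative score, and the example findings are all illustrative assumptions, not NUUN's published rubric):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    issue: str
    severity: int     # 1 = cosmetic ... 4 = task-blocking
    frequency: float  # share of participants who hit the issue, 0..1

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Highest severity x frequency first; ties broken by severity alone."""
    return sorted(findings,
                  key=lambda f: (f.severity * f.frequency, f.severity),
                  reverse=True)

queue = prioritize([
    Finding("Label ambiguity on export button", severity=2, frequency=0.8),
    Finding("Checkout fails with saved card", severity=4, frequency=0.5),
    Finding("Low-contrast helper text", severity=1, frequency=0.9),
])
print([f.issue for f in queue])  # checkout blocker first, contrast issue last
```

The multiplicative score keeps a rare blocker above a common cosmetic issue without hiding either; teams that want blockers to always win can sort on severity first instead.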
OPERATING MODEL
A working UX research practice runs three cadences in parallel:
- Continuous — usability testing every sprint or two, rotating feature area.
- Monthly — discovery and generative research on near-term roadmap bets.
- Quarterly — strategic research on longer-horizon questions (new segments, new products, market entry).
A headcount ratio of roughly 1 UX researcher per 8–10 designers and PMs combined is typical for mature practices. Under-staffed practices lean heavily on external research partners (including NUUN) for surge capacity and specialized studies.
GOVERNANCE
Consent and privacy. Every participant informed, consented, and compensated. Video and audio recorded only with explicit consent. Storage and retention per privacy regime.
Accessibility inclusion. Assistive-tech users included in usability samples for any digital product. Meaningful inclusion, not token.
Bias awareness. Recruitment, moderation, analysis, and reporting reviewed for bias. Diverse researcher teams reduce blind spots.
Methodology transparency. Methodology notes published with every study. Raw data (appropriately anonymized) accessible to product teams.
COMMON MISTAKES
Skipping discovery. Teams jump to evaluation because it feels concrete. They then ship well-tested products that solve the wrong problem.
Over-relying on unmoderated. Great for breadth, poor for depth. Balance with moderated.
Treating A/B tests as UX research. A/B tests show what wins; they do not show why. Pair with qualitative.
Recruiting only your existing users. Produces survivorship bias. Include lapsed, competitive, and prospective users for discovery and strategic work.
Letting research live in a silo. Research must be operational — insights visible to product, design, engineering, and marketing.
FAQ
Q: How much UX research is enough?
A: Measured by decisions informed, not studies run. If product roadmap decisions are being made without research evidence, not enough. If research is landing but nobody acts, the issue is operating model, not volume.
Q: Should UX research live under design or under insights?
A: Either works; consistent reporting matters more than org placement. The most common healthy patterns are UX research under a VP of Design or inside a central Insights function that serves design.
Q: What's the minimum viable UX research program?
A: One trained UX researcher, a dedicated recruiting budget, and one project of each mode (discovery, generative, evaluative) running in parallel. Below that threshold, teams rely on external partners.
Q: How does NUUN run UX research for clients?
A: Three engagement shapes: discovery sprints (4–6 weeks), in-sprint embedded research (ongoing), and surge project work (3–10 weeks). We staff named senior researchers, not anonymous benches.
Q: Do we need generative AI in UX research?
A: AI accelerates synthesis, recruiting screener design, and transcription. It does not replace fieldwork. We deploy LLM synthesis with governance (PII redaction, human review for high-stakes findings).
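The PII-redaction step mentioned above can start as a regex pass over transcripts before they reach an LLM. A minimal illustrative sketch (the patterns cover only emails and North-American-style phone numbers; a production pipeline needs far broader coverage plus the human review noted above):

```python
import re

# Placeholder tokens mapped to the pattern they redact. Illustrative only:
# real pipelines also handle names, addresses, account IDs, and more.
REDACTIONS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(transcript: str) -> str:
    """Replace obvious PII with placeholder tokens before LLM synthesis."""
    for token, pattern in REDACTIONS.items():
        transcript = pattern.sub(token, transcript)
    return transcript

print(redact("Reach me at jo@example.com or 415-555-0199."))
# → Reach me at [EMAIL] or [PHONE].
```

Placeholder tokens (rather than deletion) keep the transcript readable for synthesis while making residual PII easy to spot during human review.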
Q: What about remote vs in-person research?
A: Remote is default for distributed or enterprise users; in-person for field contexts, assistive-tech research, and strategic workshops. Hybrid is the norm.
Q: How long should a usability test session be?
A: 45–60 minutes for moderated; 15–20 for unmoderated. Longer sessions yield diminishing returns and participant fatigue.
Q: What's the cost profile?
A: A single usability-testing round runs $15K–$35K depending on sample and recruitment. Discovery programs $40K–$150K. Strategic programs $80K–$300K. Embedded research is priced on monthly retainer.
RELATED READING
- CX MetricX methodology explained
- Voice of Customer program playbook
- Qualitative research for product discovery
- UX research (glossary)