Users Own the Present. You Own the Future (6 minute read)
Smart users, especially executives, will confidently tell you exactly what to build, but effective user research means discovering underlying needs rather than accepting stated solutions at face value.
What: An essay arguing that user research should focus on understanding the problems people need to solve rather than the solutions they request, with examples from B2B and premium products where highly trained executives provide convincing but wrong answers because they're conditioned to always have solutions.
Why it matters: Teams that accept user-provided solutions without digging deeper into underlying needs will build incremental improvements instead of innovative products, and this problem is amplified when interviewing intelligent users who provide precise, articulate, but misguided recommendations.
Takeaway: Frame research questions around observing behavior and understanding context (what breaks, where money gets spent, what their day looks like) rather than asking users what features to build.
Deep dive
- The article opens with an example from Moonfare (private equity platform) where a C-level client confidently provided a detailed roadmap that was completely wrong, not due to lack of intelligence but because he was trained to always provide answers
- Distinguishes between wants and needs using the ice cream example: someone saying they want ice cream actually needs to cool down, which opens up many more solution possibilities (popsicle, cold drink, air conditioning, swimming)
- The want is one solution; the need is the territory that contains many possible solutions
- Jobs-to-be-done framework is frequently misused, with PMs writing features they want to build in user-voice format rather than identifying actual underlying needs
- B2B and premium markets have an inverted problem compared to consumer markets: the challenge is getting users to stop talking about solutions rather than getting them to talk at all
- Executives from consulting or finance backgrounds (like Bain's "answer-first" or A1 approach) are explicitly trained to lead with answers and work backwards, making them produce confident but misguided solutions in research sessions
- The clarity and precision of executive answers actually mask that they're answering the wrong question - regular users saying "I dunno, maybe?" provide better signal because the ambiguity reveals you're asking the wrong question
- Analytics suffers from the same problem as bad interviews: at Moonfare, tracking logins looked like engagement but for a 5-10 year private equity product, the right metric was being present when decisions are made, not frequency of access
- Five well-timed touchpoints beat fifty random ones, but you can't determine timing from platform data alone - it requires understanding life context like bonus season or portfolio gaps
- Proposes a division of labor: users own the present (what their day looks like, what breaks, where they've spent money) while you own the future (synthesis, patterns, products that don't exist yet)
- Research depth should scale with question specificity: start with understanding the shape of life (territory-level context like how late invoices affect a small business owner's week), then zoom into behavior (what they do today, what tools failed)
- New designs often test badly in evaluative research due to unfamiliarity rather than actual poor design - Snapchat's navigation was nearly unusable at first but became muscle memory within a week
- Teams that only trust first-session feedback will never ship anything that requires learning, which describes most worthwhile products
- Research is intake for decision-making, not a verdict or a way to avoid deciding - continuous discovery and the product trio concept can degrade into three biases averaged into a consensus that nobody owns
- Someone must own the interpretation and the decision that follows, accepting the risk of being wrong, otherwise research becomes a stalling mechanism that produces carefully informed but mediocre products
Decoder
- Answer-first (A1) approach: A consulting methodology (used at firms like Bain) where you lead with a hypothesis answer, then gather evidence to confirm or deny it, rather than starting from open exploration
- Jobs-to-be-done: A product framework for understanding user needs through the format "when [situation], I want to [action], so I can [outcome]"
- Continuous discovery: An ongoing research approach with frequent behavioral touchpoints rather than periodic large studies
- Product trio: A collaborative product development model (popularized by Teresa Torres) where a product manager, designer, and engineer work together on discovery
Original article
Smart users often provide convincing but wrong solutions because they're trained to always have answers, especially executives from consulting or finance backgrounds. User research should focus on understanding underlying needs rather than stated wants - when someone says they want ice cream, they actually need to cool down, which opens up many more solution possibilities. Analytics alone can't protect teams from bad user research, as the same problems that affect interviews also impact how metrics are interpreted.