What People Say They’ll Buy vs. What They Actually Buy
Stated purchase intent overstates real demand by 3–5x. Understanding the gap between what consumers say and what they do is essential for making sound product decisions.
Ask someone whether they would buy a healthier snack and most will say yes. Watch what they put in their trolley and the picture changes. This gap between what people say they will do and what they actually do is the oldest problem in consumer research, and it is the reason raw survey-based purchase intent data routinely overstates real demand.
Why Surveys Over-Report Intent
When a survey asks “Would you buy this product?” it creates an artificial decision context. The respondent faces no budget constraint, no time pressure, no competing options, and no consequences. Saying “yes” is easy. It is aspirational rather than predictive.
Several biases compound the effect. Social desirability pushes respondents toward answers that make them look good: they over-report intent to buy sustainable products and under-report impulse purchases. Hypothetical bias inflates willingness to pay when no real money is at stake. Acquiescence bias makes people agree with positively framed questions: “Would you be interested in a product that saves you time?” will always score high, because agreement is the path of least cognitive resistance. And surveys strip away the context that shapes real purchase decisions. In a survey, your product is the only thing the respondent is thinking about. In a shop, it is one of thousands of items competing for attention.
None of this is controversial. The academic literature on the stated-revealed preference gap is extensive. Meta-analyses of contingent valuation studies have found that hypothetical willingness-to-pay averages roughly 2–3 times higher than actual willingness-to-pay, though the ratio varies widely by category. Top-box purchase intent (“definitely would buy”) typically converts at a fraction of the stated rate in FMCG categories. Any team making launch decisions from raw stated intent without applying deflators is building on unreliable foundations.
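The deflator logic described above is simple arithmetic, but it is worth making explicit. The sketch below is illustrative only: the deflator values are assumptions chosen for the example, not published benchmarks, and should be calibrated against historical stated-vs-actual conversion data for your own category.

```python
# Illustrative sketch: deflator values below are assumptions,
# not published benchmarks. Calibrate against your own category's
# historical stated-vs-actual conversion data.

def deflate_intent(top_box_share: float, deflator: float) -> float:
    """Convert a stated top-box intent share ("definitely would buy")
    into an estimated real-world trial rate via a category deflator."""
    if not 0.0 <= top_box_share <= 1.0:
        raise ValueError("top_box_share must be a proportion in [0, 1]")
    return top_box_share * deflator

# Hypothetical category deflators: the fraction of stated top-box
# intent that historically converts to an actual purchase.
DEFLATORS = {"fmcg_snacks": 0.25, "premium_skincare": 0.15}

stated = 0.40  # 40% of respondents said "definitely would buy"
estimated_trial = deflate_intent(stated, DEFLATORS["fmcg_snacks"])
print(f"Estimated trial rate: {estimated_trial:.0%}")  # prints "Estimated trial rate: 10%"
```

The point of wrapping this in a function rather than doing it ad hoc is that the deflator becomes an explicit, reviewable assumption instead of a silent adjustment buried in a spreadsheet.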
What Revealed Preference Tells You Instead
Revealed preference is an economist’s term for a simple idea: instead of asking people what they would do, observe what they actually do. In consumer research, this means transaction records, subscription histories, and category spending patterns rather than survey responses.
Every purchase represents a real decision with real trade-offs. When someone spends £35 on a bottle of wine, they have implicitly rejected every other use of that £35. When someone subscribes to a £12/month streaming service, they have decided it is worth more than the alternatives, including doing nothing. A purchase is not an opinion. It is an action that cost something, and actions are far more predictive of future behaviour than words.
The Grounding Problem in Synthetic Research
This is where the stated-revealed distinction becomes critical for AI-generated consumer panels. If synthetic personas answer purchase intent questions, the obvious concern is that they reproduce the same biases as human respondents, or worse.
That concern is justified when the AI relies only on general knowledge. A language model generating consumer responses from its training data will likely replicate stated preference biases, because it has been trained on the same aspirational, socially desirable language that humans produce in surveys. Ask an ungrounded model whether a consumer who “cares about skincare” would pay a premium, and you get a plausible-sounding opinion. It is not grounded in anything.
A synthetic persona calibrated against actual purchase data works differently. When the model knows that a consumer profile “regularly buys mid-range skincare at £20–£35 and has never purchased premium products above £50,” its price sensitivity response is anchored in revealed behaviour. The persona is not generating opinions. It is reasoning from a documented spending pattern.
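One way to picture this anchoring is as a constraint derived from transaction records. The sketch below is a minimal illustration of the idea, not any vendor's implementation; the persona fields, the `plausible_price_point` helper, and the 20% headroom factor are all assumptions made for the example.

```python
# Minimal sketch of grounding a persona in revealed behaviour.
# All field names and the 1.2x headroom factor are illustrative
# assumptions, not a real synthetic-panel API.
from dataclasses import dataclass

@dataclass
class GroundedPersona:
    profile_id: str
    category: str
    observed_price_range: tuple[float, float]  # from transaction records
    max_observed_spend: float                  # highest price ever paid

    def plausible_price_point(self, proposed_price: float) -> bool:
        """Anchor a price-sensitivity answer in documented spending:
        a price far above anything this consumer has ever paid is
        flagged as unsupported by revealed behaviour."""
        return proposed_price <= self.max_observed_spend * 1.2  # small headroom

persona = GroundedPersona(
    profile_id="c-1042",
    category="skincare",
    observed_price_range=(20.0, 35.0),  # regularly buys mid-range
    max_observed_spend=35.0,            # has never purchased above this
)
print(persona.plausible_price_point(28.0))  # True: inside the documented pattern
print(persona.plausible_price_point(55.0))  # False: no revealed support
```

The model can still reason and generate language freely, but its answers about price are bounded by what the underlying transactions actually show.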
This is the distinction that determines whether synthetic research inherits the stated preference problem or sidesteps it. Grounding in purchase data does not eliminate all bias (the model still introduces its own reasoning artefacts), but it replaces the specific biases (social desirability, hypothetical inflation) that most distort purchase intent research.
Practical Implications
Whether you use traditional surveys, synthetic panels, or both, the stated-revealed gap has direct consequences for how you interpret data.
If you are running traditional surveys, never take top-box intent at face value. Apply category-specific deflators; the ratio between stated and actual conversion varies by category, but raw intent numbers almost always need discounting. If you are using synthetic research, the value depends entirely on what the model is grounded in. Demographic profiles alone are not enough. Purchase history is what separates a synthetic persona that produces useful signal from one that produces articulate noise.
Regardless of method, relative measures (“which of these three options would you choose?”) produce more realistic data than absolute ones (“would you buy this?”). Forced trade-offs are harder to inflate. And the most credible intent signals come from respondents, real or synthetic, whose stated preferences align with their observed behaviour. If someone says they would pay a premium for sustainability and their purchase history shows they already do, that signal is worth acting on. If the claim is not backed by behaviour, discount it.
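The credibility check at the end of that paragraph can be stated as a simple rule. This is a sketch of the reasoning, not a standard formula: the function name, field names, and the 10% premium threshold are assumptions chosen for illustration.

```python
# Sketch of the stated-vs-revealed credibility check described above.
# The 10% premium threshold is an illustrative assumption.

def claim_is_credible(says_pays_premium: bool,
                      avg_paid: float,
                      category_avg_price: float,
                      premium_threshold: float = 1.10) -> bool:
    """A stated sustainability-premium claim counts as credible only
    if the respondent's observed average spend already exceeds the
    category average by the premium threshold."""
    already_pays_premium = avg_paid >= category_avg_price * premium_threshold
    return says_pays_premium and already_pays_premium

# Stated preference backed by behaviour: worth acting on.
print(claim_is_credible(True, avg_paid=5.80, category_avg_price=5.00))  # True
# Stated preference with no behavioural support: discount it.
print(claim_is_credible(True, avg_paid=4.60, category_avg_price=5.00))  # False
```

The same gate applies to synthetic respondents: a persona's stated willingness to pay is only as trustworthy as the purchase history it is grounded in.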
Why This Matters Now
The stated-revealed preference gap is not a reason to abandon consumer research. It is a reason to ground it in behaviour rather than aspiration. For decades, the cost and difficulty of collecting real purchase data meant that stated preference surveys were the practical default, imperfect but available. That constraint is loosening. Large-scale transaction datasets now exist, and synthetic research built on top of them can anchor consumer insights in what people actually buy rather than what they say they would.
The gap between stated and revealed preference does not go away. But the research methods available to account for it are materially better than they were five years ago. Product teams that understand this, and choose their research tools accordingly, will make fewer expensive mistakes.