
The Problem With Focus Groups

Focus groups have been the default qualitative method for decades. Groupthink, social desirability bias, and tiny samples make them less reliable than most teams assume.

Focus groups have been a staple of market research since the 1940s. Eight to twelve people in a room, a moderator with a discussion guide, two hours of conversation, and a report that shapes product decisions worth millions. The method persists because it feels thorough: you hear real people talk about your product in their own words. But the feeling of insight and the reality of useful data are not the same thing. Focus groups suffer from structural problems that no amount of skilled moderation can fully overcome.

Groupthink and Social Desirability Bias

The fundamental problem with putting people in a room together is that they stop behaving like individuals. Social dynamics take over immediately. Participants adjust their responses based on what others have said. If the first person to speak is enthusiastic about a concept, the group shifts positive. If they are sceptical, the group shifts negative. This is not a subtle effect; it is one of the most replicated findings in social psychology.

Social desirability bias compounds this. People want to appear thoughtful, reasonable, and agreeable. They will say they care about sustainability, that they read ingredient labels, that they would pay more for quality. In a group setting, these tendencies are amplified. Nobody wants to be the person who admits they buy the cheapest option without reading the label. The result is data that reflects what people want to be seen believing, not what actually drives their purchase decisions.

The Dominant Respondent Problem

In any group of eight to twelve people, one or two will do most of the talking. They are more confident, more articulate, or simply louder. Their opinions carry disproportionate weight, not because they are more representative, but because they are more vocal. The quiet participants, who may hold very different views, contribute less and are underrepresented in the findings.

Skilled moderators try to manage this by drawing out quieter participants, but the structural incentive remains. The dominant voice sets the frame for the discussion, and subsequent responses are anchored to it. A single articulate critic can shift the group’s apparent consensus on a product concept, even if their view is an outlier.

Small Samples That Do Not Generalise

A typical focus group study involves two to four groups of eight to ten participants. That is sixteen to forty people forming the evidence base for decisions that affect thousands or millions of potential customers. This sample size is too small for any quantitative analysis. You cannot calculate meaningful purchase intent percentages, identify statistically significant segment differences, or draw reliable conclusions about price sensitivity from forty people.
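To make the sample-size point concrete, here is a minimal sketch (my illustration, not from the original article) of the standard normal-approximation margin of error for a sample proportion, comparing a forty-person focus group study with a modest four-hundred-respondent survey:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a forty-participant focus group study
moe_40 = margin_of_error(0.5, 40)
# Compare with a modest quantitative survey of 400 respondents
moe_400 = margin_of_error(0.5, 400)

print(f"n=40:  ±{moe_40:.1%}")   # roughly ±15.5 percentage points
print(f"n=400: ±{moe_400:.1%}")  # roughly ±4.9 percentage points
```

With forty participants, a measured "purchase intent" of 50% is compatible with anything from about 35% to 65%, which is the difference between a failed launch and a category leader. That uncertainty is why treating focus group headcounts as measurement is unsafe.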

Proponents argue that focus groups are qualitative, not quantitative, and should be used for exploration rather than measurement. This is fair in theory, but in practice, focus group findings are routinely used to make quantitative decisions: go or no-go on a product, which of three concepts to develop, whether a price point is viable. The method does not match the decision it is being used to inform.

High Cost and Slow Timelines

A single focus group session costs £4,000–£8,000 when you include facility hire, recruitment, incentives, moderation, and analysis. A standard study with three to four groups runs £15,000–£30,000. Recruitment takes two to four weeks for mainstream audiences and longer for specialist ones. Analysis and reporting add another week or two. The total timeline from brief to findings is typically six to ten weeks.

For large organisations with dedicated research budgets and long planning horizons, this is manageable. For startups and product teams working in sprint cycles, it is incompatible with how decisions actually get made. By the time the focus group report arrives, the team has often already moved on, building features and setting prices based on the internal opinions that filled the research vacuum.

The Moderator Effect

The moderator is simultaneously the greatest strength and the greatest weakness of the focus group method. A skilled moderator can probe beneath surface responses, follow unexpected threads, and create an environment where participants feel comfortable sharing honest opinions. But the moderator also shapes the conversation in ways that are difficult to control or even detect.

The questions they emphasise, the responses they follow up on, the body language they display when participants answer: all of these influence what gets said. Two different moderators running the same discussion guide with similar participants will produce noticeably different conversations. This is not a failure of skill; it is an inherent property of human-led group discussion.

Confirmation bias is particularly insidious. Research commissioners often brief moderators on what they expect to find or what decisions hinge on the research. Even with the best intentions, this framing affects which threads get pursued and which get dropped.

Why Focus Groups Persist

Given these limitations, why does anyone still run focus groups? Three reasons. First, tradition and familiarity. Decision-makers have used them for decades and trust the format. Second, the compelling narrative. Focus group reports include vivid quotes and anecdotes that are persuasive in boardroom presentations, even when the underlying data is weak. Third, the feeling of direct connection with consumers. Watching from behind the one-way mirror feels like understanding your customer, even when the dynamics described above are distorting what you hear.

When Focus Groups Still Make Sense

Focus groups are not entirely without value. They can be useful for genuine exploration: understanding how consumers talk about a category, discovering problems you did not know existed, generating hypotheses to test with quantitative methods. When used as the starting point of a research programme rather than the endpoint, they have a legitimate role.

But for concept evaluation, pricing decisions, and go/no-go calls, the method is poorly suited to the task. Structured quantitative approaches, whether traditional surveys or synthetic panels, provide more reliable, more actionable data at lower cost and faster speed. The question is not whether focus groups produce any useful information. It is whether they produce the right information for the decision you need to make, and for most product decisions, the answer is no.