MaxDiff Analysis
MaxDiff analysis (Maximum Difference Scaling) is a survey-based research technique that forces respondents to choose the best and worst options from a set, producing a clear ranking of preferences that avoids the bias problems of traditional rating scales. It is one of the most reliable methods for understanding what consumers truly value most and least.
What is MaxDiff Analysis?
In a MaxDiff exercise, respondents see a subset of items (typically 4 to 5 at a time) drawn from a larger list and must select the “most important” and “least important” option in each set. By rotating items across multiple sets, the analysis covers all possible comparisons without requiring respondents to evaluate every item against every other item directly.
The underlying statistical model is a multinomial logit that converts choice data into ratio-scaled preference scores. These scores indicate not just rank order but relative magnitude of preference: an item scoring 20 is twice as preferred as one scoring 10.
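To make the scoring idea concrete, here is a minimal sketch in Python using simple best-minus-worst counting, a common lightweight approximation of the multinomial logit estimation that professional tools (typically via hierarchical Bayes) perform. The response data is entirely hypothetical, and the rescaling to a 0-to-100 scale is illustrative rather than a true ratio-scaled utility:

```python
from collections import Counter

# Hypothetical choice data: each record is (items shown in one set,
# the item picked as "most important", the item picked as "least important").
responses = [
    (["price", "quality", "speed", "support"], "quality", "support"),
    (["price", "quality", "brand", "speed"], "quality", "brand"),
    (["brand", "support", "speed", "price"], "speed", "brand"),
    (["quality", "brand", "support", "price"], "price", "support"),
]

best, worst, shown = Counter(), Counter(), Counter()
for items, b, w in responses:
    shown.update(items)
    best[b] += 1
    worst[w] += 1

# Count-based score: (best picks - worst picks) / times shown,
# then shifted and rescaled so all scores are >= 0 and sum to 100.
raw = {item: (best[item] - worst[item]) / shown[item] for item in shown}
shift = min(raw.values())
adjusted = {item: score - shift for item, score in raw.items()}
total = sum(adjusted.values())
scores = {item: 100 * score / total for item, score in adjusted.items()}

for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item:8s} {score:5.1f}")
```

In this toy data, "quality" is chosen as best most often and never as worst, so it tops the hierarchy, while "support" and "brand" fall to the bottom; real studies would estimate scores from hundreds of respondents rather than four sets.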
The standard design parameters are:
- Items per set: 4 to 5 (balances cognitive load with data quality)
- Total items tested: 10 to 30 (most common is 12 to 20)
- Sets per respondent: (total items × 3) ÷ items per set, so each item appears about three times per respondent
- Sample size: 200 to 500 respondents for stable estimates
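The sets-per-respondent rule above can be sketched as a small helper. The function name and the default of three appearances per item are illustrative assumptions, not a standard API:

```python
def sets_per_respondent(n_items: int, items_per_set: int, appearances: int = 3) -> int:
    """Number of MaxDiff sets each respondent should see, assuming each
    item should appear about `appearances` times per respondent."""
    # Ceiling division: round up so no item falls short of its target.
    return -(-n_items * appearances // items_per_set)

# A 16-item study showing 4 items per set needs 16 * 3 / 4 = 12 sets:
print(sets_per_respondent(16, 4))  # 12
```

For a 20-item list shown 5 at a time, the same rule yields 12 sets, which stays within the cognitive-load limits the parameters above describe.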
MaxDiff eliminates common rating scale problems: response style bias (some people rate everything high, others rate everything low), scale usage differences across cultures, and the tendency for respondents to mark most items as “important” when given a 1-to-10 scale.
MaxDiff Analysis in Practice
Southwest Airlines used MaxDiff analysis to prioritize 18 potential service improvements across a sample of 3,500 frequent flyers in 2018. The study revealed that Wi-Fi reliability ranked 4x higher than seat-back entertainment, directly influencing the airline’s decision to invest $2 billion in fleet-wide satellite Wi-Fi rather than installing personal screens.
Spotify ran MaxDiff studies across 12 markets when designing its 2019 Premium feature set. With 22 potential features tested among 8,000 respondents, offline downloads and ad-free listening scored 3x higher than podcast integration and social sharing. This prioritization shaped the Premium value proposition that helped grow paid subscribers from 100 million to 205 million by 2023.
Kellogg’s applied MaxDiff to test 15 product claims for a new cereal line across 2,000 U.S. consumers. “High protein” scored 2.5x higher than “organic” and “non-GMO,” reversing internal assumptions about health-conscious positioning. The protein-first messaging drove a 28% higher trial rate compared to test markets that used the organic-first message.
Why MaxDiff Analysis Matters for Marketers
MaxDiff solves the prioritization problem. When every feature, benefit, or message tests as “important” on a rating scale, marketers have no basis for making trade-off decisions. MaxDiff forces discrimination, producing a clear hierarchy.
The technique is especially valuable for pricing and packaging decisions. When a company must choose which 5 features to include in a base plan versus a premium tier, MaxDiff scores directly map to willingness-to-pay differences.
Results are also easy to communicate to non-research stakeholders. A bar chart showing that Feature A is 3x more valued than Feature B is immediately actionable, unlike a table of mean scores where everything falls between 3.8 and 4.3 on a 5-point scale.
Related Terms
- Quantitative Research
- Survey Methodology
- Conjoint Analysis
- Segmentation Study
- Price Sensitivity Meter
FAQ
What is the difference between MaxDiff and conjoint analysis?
MaxDiff ranks the importance of individual items (features, benefits, messages) in isolation. Conjoint analysis evaluates items as bundled profiles, measuring how combinations of attributes influence choice. Use MaxDiff when you need to prioritize a list. Use conjoint when you need to understand trade-offs between multi-attribute products or services.
How many items can MaxDiff effectively test?
MaxDiff works best with 10 to 30 items. Below 10, simpler ranking exercises are sufficient. Above 30, the number of sets per respondent becomes burdensome and response quality degrades. For lists exceeding 30 items, split them into themed subsets and run separate MaxDiff exercises.
Can MaxDiff be used for pricing research?
MaxDiff is not a direct pricing tool, but it informs pricing strategy by identifying which features or benefits consumers value most. These preference scores can be used alongside willingness-to-pay studies to determine which features justify premium pricing. For direct price optimization, the Price Sensitivity Meter or conjoint analysis is more appropriate.
What sample size does MaxDiff require?
A minimum of 200 respondents produces stable aggregate-level results. For segment-level analysis (comparing preferences across demographics or behavioral groups), 300 to 500 respondents per segment is recommended. Hierarchical Bayesian estimation allows individual-level scores with samples as small as 150, though larger samples improve precision.
