



Monadic Price Testing vs. Price Laddering


By Rajan Sambandam, PhD, Chief Research Officer

Which should I use, a Monadic design or a Price Laddering technique?

It’s a practical question that comes up often in pricing research. With Monadic price testing, respondents are shown (or read) a single concept at a single price and are asked about their intention to purchase, or some similar attitude. When more prices need to be measured, more cells are added.

The Price Laddering technique is different. Respondents are initially exposed to a single concept with a single price and intention to purchase is measured, as in a Monadic design. From there the two techniques diverge. Those who express lower-than-desired purchase intent are asked to reconsider their view at successively lower prices. The number of levels tested can in theory be unlimited, but in practice the process rarely extends beyond three price points.

Monadic designs are generally accepted as the purest means of measuring price sensitivity, as each respondent sees only one price and is not given the opportunity to reevaluate his or her purchase interest. When more prices need to be measured, however, more sample cells must be added. Each new cell must be of sufficient size so that price-demand curves are not muddied by margin of error concerns. Practically, then, it can be difficult to fund and execute Monadic studies when several price points are in play.
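The margin-of-error constraint on cell size can be illustrated with a quick calculation. This is a minimal sketch, not part of the original study; the 95% confidence z-value and the example figures (20% take rate, 200 respondents per cell) are assumptions chosen for illustration:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed take rate p
    in a sample cell of n respondents (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# With a 20% take rate and 200 respondents per cell, the estimate
# carries roughly +/- 5.5 percentage points of sampling error;
# quadrupling the cell to 800 roughly halves that error.
moe_200 = margin_of_error(0.20, 200)
moe_800 = margin_of_error(0.20, 800)
```

Because each additional price point in a Monadic design demands another full cell of this size, costs grow quickly as more prices are tested.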

Price Laddering offers an obvious practical advantage over a Monadic design with its smaller sample size requirements, but what about the quality of responses? Are we getting biased answers, since the respondent has already been exposed to one price before the next is displayed? And if there were a bias, in which direction would it run? That is, would respondents in a laddering task exhibit a higher or a lower take rate?

A test to examine differences between the two techniques

Research on reference prices indicates that people who are shown a higher (reference) price are more likely to pick a lower price than those who are shown only the lower price. For example, when a toaster is displayed by itself, the number of people who would buy it is lower than when it is displayed next to a more expensive toaster. If a similar phenomenon operates in the Price Laddering scenario, then a comparable overestimation effect may be expected. To test this idea we conducted two experiments.

The first experiment used a cell phone concept description; the second used a credit card concept description. The cell phone example tested three price levels ($139.99, $119.99, $99.99); the credit card example tested three levels of annual fee ($50, $25, $0).

In both cases, four distinct sample cells were used – three for a Monadic test (with exposure to one price per cell) and one for a Laddering test, with exposure to up to three prices. Intention to purchase was measured on a 10-point scale anchored by "Very Likely" and "Not at all Likely." Respondents in the Laddering cell who gave a score below an "8" were shown a lower price and this exercise was repeated one more time. The data were collected online using TRC’s proprietary web panel, USA Panel.

Note that results must be measured with percentages rather than means because of the way "take rate" is typically captured in Price Laddering studies. Consider the following example.

If 20% (of 100 respondents) give an answer of 9 or 10, then the remaining 80% are shown the next lower price. If 10% of that group (8 of the 80 respondents) give an answer of 9 or 10, then the take rate for the first price is 20% and that for the second price is 28% (28/100).

This is because we can assume that those who would take the product at a higher price would surely take it at a lower price. Similarly, if 20% of the 72 people who are shown the third price give a score of 9 or 10, the take rate for the third price is 42% (42/100).
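The cumulative "take rate" roll-up described above can be expressed as a short function. This is a sketch of the arithmetic in the worked example, not code from the study itself:

```python
def cumulative_take_rates(n_start, takers_per_price):
    """Roll Price Laddering results up into cumulative take rates.

    takers_per_price[i] is the number of respondents giving a
    top-2-box score (9 or 10) at the i-th, successively lower, price.
    Anyone who would take a higher price is assumed to also take
    every lower price, so counts accumulate down the ladder.
    """
    rates, taken = [], 0.0
    for takers in takers_per_price:
        taken += takers
        rates.append(taken / n_start)
    return rates

# The worked example: 20 of 100 take the first price, 8 of the
# remaining 80 take the second, and 20% of the remaining 72
# (about 14 people) take the third.
rates = cumulative_take_rates(100, [20, 8, 14.4])
```

Run on the example, this returns take rates of 20%, 28%, and (rounding down) 42% for the three successive prices.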

Expressing this with means would be difficult because we would have to make assumptions about the exact lower price. Now let’s take a look at the actual results of the two experiments to see how Monadic and Price Laddering approaches compared.

Cell Phone Results – Top 2 Box Scores

Price      Monadic   Laddering
$139.99      10%        9%
$119.99      12%       15%
$99.99       19%       25%

Credit Card Results – Top 2 Box Scores

Annual Fee   Monadic   Laddering
$50             7%        6%
$25             6%        9%
$0             24%       27%

In both examples the initial Monadic and Price Laddering cells ought to have provided quite similar results, since they involved essentially the same exposure. As the tables show, that is what happened. In the remaining four comparisons, Laddering take rates were directionally higher than Monadic take rates in every instance, though only the last difference was statistically significant (Cell Phone example: 19% vs. 25%).

Interpreting the results, and deciding on the most appropriate technique

More examples with additional statistically significant results would further strengthen the case, but there is evidence that a reference price effect operates in the Laddering exercise, and this is likely to lead to inflated take rates. A Monadic design, then, is preferable so long as it is practical to field a split-sample (i.e., larger case count) study.
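Significance of a difference like the 19% vs. 25% comparison can be checked with a standard two-proportion z-test. The sketch below is illustrative only: the paper does not report its cell sizes, so the figure of 400 respondents per cell used here is an assumption.

```python
import math

def two_prop_z_test(p1, n1, p2, n2):
    """Two-sided, pooled two-proportion z-test.

    p1, p2 are observed proportions (take rates); n1, n2 are cell
    sizes. Returns the z statistic and two-sided p-value computed
    from the normal CDF via the error function.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# With the assumed cell sizes of 400 each, the 19% vs. 25%
# difference clears the conventional 5% significance threshold.
z, p = two_prop_z_test(0.19, 400, 0.25, 400)
```

Smaller cells would widen the standard error and could leave the same percentage gap non-significant, which is why cell size matters so much in split-sample pricing work.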

Quite often, though, a Monadic approach won’t be practical, so it is important to understand the implications of these results beyond which technique is "better" and which is "worse." While Price Laddering is likely to yield elevated purchase interest estimates, the findings above are not so far off from the Monadic results as to be discredited. Rather, they suggest the need for a second correction (and with it some added uncertainty) when attempting to pinpoint take rate. The first correction is necessary for any "intent to purchase" question and accounts for the reality that not everyone who strongly intends to buy something actually will. The second correction should adjust results downward for the rise in interest likely attributable to Price Laddering.

