Transcript of Understanding MAXDIFF
Also known as "best-worst scaling"
A variation of the method of paired comparisons, and similar in some respects to conjoint analysis

Why Use MaxDiff?
Can be used to test a variety of things: brands, product features, brand claims, advertising messages, product benefits
An antidote to standard rating scales and their associated biases (e.g., scale-use, language, cultural, and educational biases)
Delivers scores showing the relative importance of the elements being ranked
Demonstrates greater discrimination among items
Easy to use and interpret, for both respondents and researchers

How Is MaxDiff Different from Conjoint?
Similar in that they both force a trade-off, but...
MaxDiff focuses on preference for specific items within a single variable (ex: messages, features)
Conjoint focuses on the interaction across multiple variables (ex: the feature-price mix)

How Does MaxDiff Work?
Each respondent is asked a single question about one larger idea (ex: reasons for choosing a particular airline) with many potential options. They are shown a card with several of those options (typically 3-5 at a time).
They are asked to give answers at two extremes - MOST and LEAST preferred - from the list.
Typically, a respondent repeats this task 8 to 12 times, depending on the total number of items. The technique can handle testing 8-50 items.

What Do We Learn?
Based on the selections above, we now know the following about this traveler's preferences:
1. Legroom is MORE preferred than...
Check Bags, Earning Miles, Free On-Board Meals, In-Flight Entertainment, and Seat Location
2. No Bag Fees is LESS preferred than...
Earning Miles, Free On-Board Meals, In-Flight Entertainment, and Seat Location
But there are other features we wish to ask about, so the respondent must repeat the exercise until we have a full preference profile.

Design Matrix
The order of items shown is NOT random.
A specifically designed MATRIX indicates which elements each individual respondent will see.
Respondents are randomly assigned to a specific path (called a VERSION) and then see multiple cards to evaluate in order (each called a SET).
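As a rough illustration of the version/set idea, the sketch below builds one version by shuffling and chunking the item list. Note this is a naive sketch, not a true balanced incomplete block design (real MaxDiff software also balances how often items appear together); the airline feature names are illustrative.

```python
import random

def build_version(items, set_size=4, appearances=3, seed=0):
    """Build one design VERSION: a list of SETs (cards) in which each
    item appears `appearances` times. Naive shuffle-and-chunk sketch,
    not a proper balanced incomplete block design."""
    rng = random.Random(seed)
    sets = []
    for _ in range(appearances):
        pool = items[:]
        rng.shuffle(pool)
        # chunk the shuffled pool into cards of `set_size` items
        for i in range(0, len(pool), set_size):
            sets.append(pool[i:i + set_size])
    return sets

# Hypothetical airline features (names are illustrative)
items = ["Legroom", "Checked Bags", "Earning Miles", "On-Board Meals",
         "In-Flight Entertainment", "Seat Location", "No Bag Fees",
         "On-Time Record"]

version = build_version(items, set_size=4, appearances=3)
# 8 items / 4 per card, repeated 3 times = 6 cards in this version
print(len(version))  # 6
counts = {it: sum(it in s for s in version) for it in items}
print(all(c == 3 for c in counts.values()))  # True: each item is seen 3 times
```

Each respondent assigned to this version would evaluate these 6 cards in order, picking MOST and LEAST on each.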
This design matrix ensures that all items are seen and that a usable preference profile can be derived for each respondent and each item tested.

But Don't Take My Word For It...

Interpreting MaxDiff Results
MaxDiff results are expressed as a share of preference.
Example: If I had 10 items and each was equally preferred by respondents, the preference score for each would be 10% (note: all preference scores sum to 100%).
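One simple way to turn MOST/LEAST picks into preference shares is counting analysis: tally best-minus-worst picks per item and normalize. Real studies typically estimate scores with a logit or hierarchical Bayes model; the tallies below are made up, and the shift-based normalization (which pins the least-preferred item at 0%) is just one simple option.

```python
# Hypothetical tallies across all respondents and sets:
# times each item was picked MOST (best) and LEAST (worst)
best  = {"ATM locations": 90, "Free checking": 70, "Service": 55,
         "Branch hours": 30, "Mobile app": 25, "Low fees": 20,
         "Online bill pay": 10}
worst = {"ATM locations": 10, "Free checking": 20, "Service": 30,
         "Branch hours": 45, "Mobile app": 50, "Low fees": 60,
         "Online bill pay": 85}

# Best-minus-worst counts, shifted to be non-negative, then normalized
# so the shares sum to 100% (this pins the lowest item at 0%)
raw = {k: best[k] - worst[k] for k in best}
shift = min(raw.values())
pos = {k: v - shift for k, v in raw.items()}
total = sum(pos.values())
shares = {k: 100 * v / total for k, v in pos.items()}

print(round(sum(shares.values())))            # 100: shares sum to 100%
print(max(shares, key=shares.get))            # ATM locations
```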
In reality this rarely happens, and there is more variation - consider the banking example below. "Has the most ATM locations" is the most preferred option, while "free online bill pay" is the least preferred.
In fact, "Has the most ATM locations" is preferred more than 4X as strongly as "free online bill pay".

A Note About Rescaling Preference Share
Raw preference shares can be hard to interpret because it is not obvious where the baseline of equal preference sits for that set of items.
Often they are rescaled so that equal preference share is set at 100 - anything above 100 is MORE preferred and anything below 100 is LESS preferred, by that degree relative to the baseline.
Using the last example, equal preference share for the item set would have been 14.28% (100% / 7 items). But it is hard to intuit from a raw score that the baseline sits at 14.28%.

Example: Rescaling Preference Share
With rescaled values, it is easy to see that "Locations", "Free Checking", and "Service" are more preferred than they would be if all items were preferred equally, because their values exceed 100.
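The index-to-100 rescaling is just each raw share divided by the equal-share baseline. In the sketch below the raw shares are illustrative values chosen to reproduce the 168 and 133 figures discussed in this section; they are not actual study results.

```python
# Hypothetical raw preference shares for 7 banking items (sum to 100%)
shares = {"Most ATM locations": 24.0, "Free checking": 19.0,
          "Customer service": 16.0, "Branch hours": 13.0,
          "Mobile app": 11.0, "Low fees": 11.0,
          "Free online bill pay": 6.0}

baseline = 100 / len(shares)  # 14.28...% = the equal-preference share
rescaled = {k: round(100 * v / baseline) for k, v in shares.items()}

print(rescaled["Most ATM locations"])  # 168: 68% above equal preference
print(rescaled["Free checking"])       # 133: 35 points below the leader
```

Dividing by the baseline is equivalent to multiplying each share by the number of items, which is why a 24% share of 7 items indexes to 168.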
We can also see HOW MUCH MORE each item is preferred, because the individual scores are relative to a baseline of 100. "Most ATM locations" is 68% MORE preferred than if all items were equal.
"Most ATM locations" is also 35 points higher, or ~26% more preferred, than the next-highest item, "Free Checking".

Comparing Results from a Likert Scale vs. MaxDiff

Things to Consider with MaxDiff
You CAN compare across markets and samples, though only IF you use the same number of items and the same phrasing for each.
You CANNOT compare MaxDiff results when the inputs are different because all of the results are relative to each other
You CAN conduct MaxDiff analysis on more than just text - images work too
Overall MaxDiff results can still be misleading because of averaging...

Subgroup Analysis Is Important with MaxDiff
Sometimes, small groups within your sample can skew the overall results.
Example: If 20% of my sample always chooses one particular item as their MOST preferred while the remaining 80% chooses it only some of the time, that 20% can skew the overall score for that item higher (and the other items lower, since everything is relative).
Running cluster analysis on your MaxDiff results can help provide a check on your overall scores to see if one group has too much influence.
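As an illustrative sketch of that check, respondent-level preference shares can be clustered to expose a skewing subgroup. The data below are synthetic (a 20% "ATM fan" group versus a balanced 80% majority), and the k-means here is a tiny stdlib-only toy rather than a production clustering routine.

```python
def kmeans(points, k, iters=10):
    """Tiny k-means sketch (stdlib only) for grouping respondents by
    their individual-level MaxDiff preference shares."""
    # deterministic init: k evenly spaced respondents as starting centroids
    centroids = [points[i * len(points) // k] for i in range(k)]
    assign = [0] * len(points)
    for _ in range(iters):
        assign = []
        for pt in points:  # assign each respondent to the nearest centroid
            d = [sum((p - q) ** 2 for p, q in zip(pt, cen)) for cen in centroids]
            assign.append(d.index(min(d)))
        for c in range(k):  # move each centroid to the mean of its members
            members = [pt for pt, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members)
                                     for dim in zip(*members))
    return assign

# Synthetic individual shares over (ATM locations, Free checking, Service):
# a 20% subgroup that heavily favors ATMs, and a balanced 80% majority
atm_fans = [(0.70, 0.20, 0.10)] * 20
majority = [(0.30, 0.40, 0.30)] * 80
labels = kmeans(atm_fans + majority, k=2)

# The overall mean ATM share (0.38) overstates the majority's 0.30;
# clustering separates the two groups and exposes the skew.
print(labels[0] != labels[-1])  # True: the groups land in different clusters
```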
Cluster analysis can also tell you more about the preference-based subgroups that exist (ex: Subgroup 1 = Free Checking, Subgroup 2 = Customer Service, Subgroup 3 = ATM Convenience).

But don't just take my word for it...
With a Likert scale, everything seems to be important.
With MaxDiff, true differentiation emerges.