A paradigm for developing better measures of marketing constructs
Transcript of "A paradigm for developing better measures of marketing constructs"
Rules for assigning numbers to objects to represent quantities of attributes
The definition involves two key notions:
- it is the attributes of objects that are measured, not the objects themselves
- the definition does not specify the rules by which the numbers are assigned
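The definition can be made concrete with a small sketch. The scale wording and values below are illustrative, not from the source: a rule maps an attribute of the object (a respondent's level of agreement) to a number.

```python
# Illustrative measurement rule: numbers are assigned to an attribute
# of the object (the respondent's agreement level), not to the
# respondent as such.  The mapping is a hypothetical 5-point scale.
rule = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "neutral", "strongly agree"]  # hypothetical data
scores = [rule[r] for r in responses]
print(scores)  # [4, 3, 5]
```

Note that the definition leaves the rule itself open: a different scaling convention would be an equally valid assignment rule.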
Problem and approach
A Paradigm for Developing Better Measures of Marketing Constructs
- Assess reliability with new data
- Assess construct validity
- Correlation with other measures
Uugi - 610133027
Sarah - 610133022
Does the Measure Behave as Expected? (Churchill, 1979)
Four separate propositions (Nunnally, 1967, p. 93)
1. The constructs job satisfaction (A) and likelihood of quitting (B) are related.
2. The scale X provides a measure of A.
3. Y provides a measure of B.
4. X and Y correlate positively.
- Only the fourth proposition is directly examined with empirical data.
- To establish that X truly measures A, one must assume that propositions 1 and 3 are correct.
- One must have a good measure for B, and the theory relating A and B must be true.
- The analyst tries to establish the construct validity of a measure by relating it to a number of other constructs, not simply one.
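A minimal sketch of these propositions, on fully simulated data (nothing below comes from the source): latent constructs A and B are related by assumption, scales X and Y measure them with error, and only the X-Y correlation is directly observable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated illustration of the four propositions: only r(X, Y) can
# actually be computed; that A and B are related (prop. 1) and that
# Y measures B (prop. 3) must be assumed.
n = 200
a = rng.normal(size=n)                                 # latent construct A
b = 0.6 * a + rng.normal(scale=0.8, size=n)            # latent B, related to A by assumption

x = a + rng.normal(scale=0.5, size=n)                  # scale X = A plus measurement error
y = b + rng.normal(scale=0.5, size=n)                  # scale Y = B plus measurement error

r_xy = np.corrcoef(x, y)[0, 1]                         # the only directly observable quantity
print(f"observed correlation r(X, Y) = {r_xy:.2f}")
```

Even with a true A-B relationship built in, measurement error in both scales attenuates the observed correlation below the latent one.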
A better way of assessing the position of the individual on the characteristic is to compare the person’s score with the score achieved by other people.
Norm quality is a function of both the number of cases on which the average is based and their representativeness.
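The norm-referenced comparison can be sketched as a percentile rank. The norm group below is hypothetical and far too small to be a good norm; per the point above, a real norm would need many representative cases.

```python
import numpy as np

# Hypothetical sketch: interpret an individual's raw score by comparing
# it with scores achieved by other people (a norm group), rather than
# in isolation.  The scores here are invented for illustration.
norm_scores = np.array([12, 15, 18, 18, 20, 21, 23, 25, 27, 30])
person_score = 23

# Percentile rank: share of the norm group scoring below this person.
percentile = np.mean(norm_scores < person_score) * 100
print(f"score {person_score} exceeds {percentile:.0f}% of the norm group")
```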
Summary of Suggested Procedure for Developing Better Measures (Churchill, 1979)
- Marketers certainly need to pay more attention to measure development
- Many measures with which marketers now work are woefully inadequate, as the many literature reviews suggest.
- At a minimum, the execution of Steps 1-4 should reduce the prevalent tendency to apply extremely…
- Researchers doing applied work and practitioners could at least be expected to complete the process through Step 4.
- Marketing researchers are already collecting data relevant to Steps 5-8.
Factor Analysis:
- When factor analysis is done before the purification steps suggested heretofore, there seems to be a tendency to produce many more dimensions than can be conceptually identified.
- Factor analysis can then be used to confirm whether the number of dimensions conceptualized can be verified empirically.
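The dimensionality check can be sketched with simulated data. The example below (all values invented) generates six items driven by two latent factors and counts eigenvalues of the item correlation matrix above 1.0, one common rule of thumb (the Kaiser criterion) for the number of factors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate 6 items driven by 2 latent factors, then check whether the
# conceptualized dimensionality (2) is recovered empirically via the
# eigenvalues-greater-than-one rule on the item correlation matrix.
n = 500
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
items = np.column_stack([
    f1 + rng.normal(scale=0.5, size=n),   # items 1-3 load on factor 1
    f1 + rng.normal(scale=0.5, size=n),
    f1 + rng.normal(scale=0.5, size=n),
    f2 + rng.normal(scale=0.5, size=n),   # items 4-6 load on factor 2
    f2 + rng.normal(scale=0.5, size=n),
    f2 + rng.normal(scale=0.5, size=n),
])

corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)
n_factors = int(np.sum(eigenvalues > 1.0))
print(f"eigenvalues > 1: {n_factors}")
```

When the items are purified first, the empirical factor count should match the conceptual one, as it does in this constructed case.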
- Construct validity, which lies at the very heart of the scientific process, is most directly concerned with what the instrument is in fact measuring: what construct, trait, or concept underlies a person's performance or score on the measure.
- All the sources of error occurring within a measurement will tend to lower the average inter-item correlation, which, together with the number of items, is all that is needed to estimate internal-consistency reliability.
Construct validity reflects:
- the extent to which the measure correlates with other measures designed to measure the same thing
- whether the measure behaves as expected
Factor analysis has been the prime statistical technique for the development of structural theories in social science, such as the hierarchical factor model of human cognitive abilities.
What is Factor Analysis?
The General Factor of Personality (GFP), going from the Big Two to the Big Five, using the medians from Digman's (1997) 14 samples.
From Rushton and Irwing (2008)
In the example below, the least desirable outcome occurs when the alpha coefficient is too low and restructuring of the items forming each dimension is unproductive.
In this case, the appropriate strategy is to loop back --->
What might have gone wrong?
What is Reliability?
Reliability is a necessary contributor to validity but is not a sufficient condition for validity.
A measure is reliable to the degree that it supplies consistent results.
How to improve reliability:
- Minimize external sources of variation
- Standardize the conditions under which measurement occurs
- Improve investigator consistency by using only well-trained, supervised, and motivated persons to conduct the research
True differences in other relatively stable characteristics
Differences due to transient personal factors
Differences due to situational factors
Differences due to variations in administration
Differences due to sampling of items
Differences due to lack of clarity of measuring instruments
Differences due to mechanical factors
A measure is valid when the differences in observed scores reflect true differences on the characteristic one is attempting to measure and nothing else; that is, XO = XT.
Reliability depends on how much of the variation in scores is attributable to random or chance errors.
A measure is perfectly reliable if XR = 0.
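The true-score decomposition can be simulated directly. The sketch below (synthetic data, variances chosen for illustration) treats an observed score as true score plus random error and estimates reliability as the share of observed-score variance attributable to true scores.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of the true-score model XO = XT + XR: observed score equals
# true score plus random error.  Reliability is the proportion of
# observed-score variance due to true scores, so it reaches 1 when
# the random-error component XR is zero.
n = 10_000
x_true = rng.normal(size=n)                 # XT
x_random = rng.normal(scale=0.5, size=n)    # XR (random or chance errors)
x_observed = x_true + x_random              # XO

reliability = np.var(x_true) / np.var(x_observed)
print(f"estimated reliability ≈ {reliability:.2f}")   # ≈ 1 / (1 + 0.25) = 0.8

x_perfect = x_true + np.zeros(n)            # XR = 0 everywhere
print(np.var(x_true) / np.var(x_perfect))   # exactly 1.0: perfectly reliable
```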
The researcher must be exacting in delineating what is included in the definition and what is excluded.
It should indicate how the variable has been defined previously and how many dimensions or components it has.
A judgment sample of persons who can offer some ideas and insights into the phenomenon.
Critical incidents and focus groups: they can be used to advantage at the item-generation stage to help establish content validity.
- Internal consistency or internal homogeneity
- A large alpha indicates that the k-item test correlates well with true scores.
Purify the Measure
- Coefficient alpha does not, however, adequately estimate errors caused by factors external to the instrument, such as differences in testing situations and respondents over time.
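Coefficient alpha itself is straightforward to compute from an item-score matrix. The function below is a minimal sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score); the respondent data are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)     # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1) # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-respondent, 3-item example (rows = respondents).
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
    [1, 2, 1],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

A low alpha would signal the loop-back strategy described above: restructure or delete items, or revisit the construct's domain.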
Many forms of validity are mentioned in the research literature, and the number grows as we expand the concern for more scientific measurement.
Convergent validity tests that constructs that are expected to be related are, in fact, related.
Discriminant validity (or divergent validity) tests that constructs that should have no relationship do, in fact, not have any relationship.
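Both tests reduce to inspecting correlations. In the simulated sketch below (all data invented), two scales built to measure the same construct should correlate highly, while a scale for an unrelated construct should correlate near zero.

```python
import numpy as np

rng = np.random.default_rng(3)

# Convergent validity: scales meant to measure the same construct
# should be related.  Discriminant validity: a scale for an unrelated
# construct should show no relationship.
n = 1000
construct = rng.normal(size=n)
scale_a = construct + rng.normal(scale=0.4, size=n)   # measure of the construct
scale_b = construct + rng.normal(scale=0.4, size=n)   # alternative measure, same construct
unrelated = rng.normal(size=n)                        # scale for an unrelated construct

r_convergent = np.corrcoef(scale_a, scale_b)[0, 1]
r_discriminant = np.corrcoef(scale_a, unrelated)[0, 1]
print(f"convergent r = {r_convergent:.2f}, discriminant r = {r_discriminant:.2f}")
```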
Discriminant validity is a parameter often used in other behavioral sciences as well.
Campbell and Fiske (1959) introduced the concept of discriminant validity within their discussion on evaluating test validity.