A test of the measurement equivalence of the revised Job Diagnostic Survey: past problems and current solutions
Article Abstract:
The measurement equivalence of the revised Job Diagnostic Survey (JDS) was studied across samples from five worker populations. Samples included workers at a printing plant; engineers; nurses and nurses' aides; dairy employees; and part-time workers. Data were analyzed according to Joreskog's model for simultaneous factor analysis in several populations (SIFASP), revealing the five factors contained in Hackman and Oldham's theory of job characteristics. A sixth factor also appeared that apparently resulted from the two different formats used on the instrument. When the data from each group were analyzed separately by principal axes factor analysis, three-, four-, and five-factor solutions appeared. To explain these inconsistencies, a Monte Carlo simulation was conducted. Matrices representing the a priori JDS factor loadings and a hypothetical, lengthened JDS with twice the number of items per factor were used in the simulation with three sample sizes (Ns = 75, 150, and 900). Results suggested that for scales like the JDS, which has only a few items per factor, sample sizes larger than those typically recommended are needed to consistently recover the true underlying structure. The simulation results support our conclusions that the SIFASP solution is preferable to the principal axes solution and that the JDS provides measurement equivalence across worker populations. (Reprinted by permission of the publisher.)
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1988
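A minimal Monte Carlo sketch of the kind of check described in the abstract above is given below: data are generated from a known simple-structure loading matrix for a short, JDS-like scale and for a hypothetical lengthened scale, and the number of factors retained is tallied at several sample sizes. The loading value of .6, the eigenvalue-greater-than-one retention rule, and all function names are illustrative assumptions, not the authors' procedure.

```python
# Monte Carlo sketch: how the number of retained factors can drift with
# sample size when a scale has only a few items per factor.
# All parameter choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def make_loadings(n_factors=5, items_per_factor=3, loading=0.6):
    """Simple-structure loading matrix: each item loads on exactly one factor."""
    n_items = n_factors * items_per_factor
    lam = np.zeros((n_items, n_factors))
    for f in range(n_factors):
        lam[f * items_per_factor:(f + 1) * items_per_factor, f] = loading
    return lam

def mean_factors_retained(lam, n, n_reps=200):
    """Average count of sample correlation-matrix eigenvalues > 1 (a crude
    retention rule standing in for a full factor-extraction decision)."""
    n_items = lam.shape[0]
    uniqueness = 1.0 - np.sum(lam ** 2, axis=1)      # item-level error variance
    pop_cov = lam @ lam.T + np.diag(uniqueness)      # population covariance (unit variances)
    counts = []
    for _ in range(n_reps):
        x = rng.multivariate_normal(np.zeros(n_items), pop_cov, size=n)
        r = np.corrcoef(x, rowvar=False)
        counts.append(int(np.sum(np.linalg.eigvalsh(r) > 1.0)))
    return float(np.mean(counts))

short_scale = make_loadings(items_per_factor=3)      # JDS-like: few items per factor
long_scale = make_loadings(items_per_factor=6)       # hypothetical lengthened scale

for n in (75, 150, 900):
    print(f"N={n:4d}  short scale: {mean_factors_retained(short_scale, n):.2f}  "
          f"lengthened scale: {mean_factors_retained(long_scale, n):.2f}")
```

The printed averages simply show how stably the five-factor structure is recovered as N grows under these assumptions; they are not the article's reported results.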
Study of the measurement bias of two standardized psychological tests
Article Abstract:
Psychological tests are subject to two distinct forms of bias. The first form, measurement bias, occurs when individuals with equal standing on the trait measured by the test, but sampled from different subpopulations, have different expected test scores. Relational bias, the second type of bias, exists with respect to a second variable if a measure of bivariate association differs across groups. Empirical studies have found little evidence of relational bias. Two recent court cases, however, seem to have been more influenced by considerations of measurement bias than by the literature concerning relational bias. Unfortunately, a consequence of both court cases is that the respective test makers must select items for future tests on the basis of a statistic (proportion correct) that is inappropriate for evaluating measurement bias. More sophisticated approaches may also suffer from methodological difficulties unless special precautions are taken. In this article, tests of English and Mathematics Usage are analyzed by measurement bias methods in which several steps are taken to reduce methodological artifacts. Many items are found to be biased. Nonetheless, the sizes of these effects are very small, and no cumulative bias across items is found. (Reprinted by permission of the publisher.)
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1987
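The distinction drawn in the abstract above between expected scores at equal trait levels and raw group statistics can be illustrated with a small simulation. In the sketch below, two groups answer an item governed by the same item response function (so there is no measurement bias) but differ in mean trait level; the item parameters, group means, and trait band are illustrative assumptions, not the article's data or analysis.

```python
# Why raw proportion correct is a poor test of measurement bias:
# an unbiased item still shows a group difference in overall proportion
# correct whenever the groups differ on the underlying trait.
# All parameter choices below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def p_correct(theta, a=1.0, b=0.0):
    """Two-parameter logistic item response function: P(correct | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

n = 100_000
theta_ref = rng.normal(0.0, 1.0, n)    # reference group trait distribution
theta_foc = rng.normal(-0.5, 1.0, n)   # focal group: lower mean, same item function

resp_ref = rng.random(n) < p_correct(theta_ref)
resp_foc = rng.random(n) < p_correct(theta_foc)
print("raw proportion correct:        ", resp_ref.mean(), resp_foc.mean())

# Conditioning on (approximately) equal trait levels removes the difference,
# as expected when there is no measurement bias.
in_band_ref = np.abs(theta_ref) < 0.1
in_band_foc = np.abs(theta_foc) < 0.1
print("proportion correct, theta near 0:", resp_ref[in_band_ref].mean(),
      resp_foc[in_band_foc].mean())
```

Under these assumptions the raw proportions differ noticeably while the conditional proportions are essentially equal, which is the sense in which a proportion-correct statistic is inappropriate for evaluating measurement bias.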
A decision-theoretic approach to the use of appropriateness measurement for detecting invalid test and scale scores
Article Abstract:
In psychological measurement, it is important to determine when a particular examinee's test or scale score provides an invalid measure of the trait or attitude being assessed. In this article, we present several quantitative indices that have been found to effectively identify some types of inappropriate scores. These measures, termed appropriateness indices, are all derived from item response theory. They are computed directly from the item responses that are combined to form the test or scale of interest; information from other scales or tests is not needed. A decision-theoretic approach to the use of appropriateness indices in selection decisions and theoretical research is introduced. An example is then presented to illustrate how researchers can use appropriateness indices. Finally, we discuss policy options that are available for dealing with individuals who are identified as having inappropriate scores. (Reprinted by permission of the publisher.)
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1987
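One representative index from this family is the standardized log-likelihood person-fit statistic l_z, which can be computed directly from an examinee's item responses under a two-parameter logistic IRT model with known item parameters. The sketch below is a minimal illustration under assumed item parameters, trait value, and response patterns; the abstract does not prescribe this particular index or any specific cutoff.

```python
# Sketch of the standardized log-likelihood person-fit index l_z under a
# two-parameter logistic (2PL) IRT model with known item parameters.
# All parameter choices below are illustrative assumptions.
import numpy as np

def lz_index(responses, theta, a, b):
    """Standardized log-likelihood of a 0/1 response pattern at trait level theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(correct) for each item
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    e_l0 = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))       # expected value
    v_l0 = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)        # variance
    return (l0 - e_l0) / np.sqrt(v_l0)

# Ten items ordered from easy to hard, equal discriminations.
a = np.ones(10)
b = np.linspace(-2.0, 2.0, 10)

consistent = (b < 0.5).astype(int)   # passes the easier items: plausible at theta = 0.5
aberrant = consistent[::-1]          # passes hard items, misses easy ones: implausible

print("consistent pattern l_z:", lz_index(consistent, 0.5, a, b))
print("aberrant pattern l_z:  ", lz_index(aberrant, 0.5, a, b))
```

Large negative values of such an index flag response patterns that are unlikely under the model; in the decision-theoretic framing of the abstract, the cutoff applied to the index is what trades off the cost of acting on an invalid score against the cost of flagging a valid one.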
- Abstracts: A metacognitive model of attitudes. Whence univalent ambivalence? From the anticipation of conflicting reactions
- Abstracts: Effects of application blanks and employment equity on applicant reactions and job pursuit intentions. Effects of an absenteeism feedback intervention on employee absence behavior
- Abstracts: Acts of remembrance, cherished possessions, and living memorials. Value that marketing cannot manufacture: cherished possessions as links to identity and wisdom
- Abstracts: The effect of gender and organizational level on how survivors appraise and cope with organizational downsizing
- Abstracts: Assessing the influence of Journal of Consumer Research: a citation analysis. The role of price in multi-attribute product evaluations