An exercise design approach to understanding assessment center dimension and exercise constructs
Article Abstract:
A robust finding in assessment center research is that ratings cluster according to exercises rather than dimensions. Seeking further understanding of exercise effects, we proposed two exercise-based factors, exercise form and exercise content, as sources of variance in assessment center ratings. Exercise designs were manipulated so that two levels of form (leaderless group discussions and role-play exercises) were crossed with two levels of content (cooperative and competitive task designs). Analysis of variance and confirmatory factor analysis procedures applied to multitrait-multimethod ratings of 89 high school student assessees revealed that most of the variance in the ratings was explained by exercises rather than dimensions. Exercise form accounted for 16% of method variance, whereas exercise content had a near-zero effect. The form effect was primarily due to higher correlations of dimension ratings across role-play exercises. Practical implications and future directions for exercise design research are discussed. (Reprinted by permission of the publisher.)
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1992
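The variance partitioning described in this abstract lends itself to a brief illustration. Below is a minimal Python sketch, assuming a persons x dimensions x exercises rating design like the one described: it simulates toy ratings in which exercise effects dominate dimension effects, then partitions the variance with an ANOVA. The dimension names, exercise labels, effect sizes, and data are invented for illustration and are not the study's actual design or results.

```python
# Hypothetical sketch: partitioning rating variance into dimension vs.
# exercise components, loosely mirroring the MTMM design described above.
# All labels and effect sizes below are invented.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
dimensions = ["leadership", "communication", "problem_solving"]  # illustrative
exercises = ["LGD_coop", "LGD_comp", "RP_coop", "RP_comp"]       # form x content

rows = []
for a in range(89):                        # 89 assessees, as in the study
    person = rng.normal()                  # overall assessee effect
    dim_eff = {d: 0.2 * rng.normal() for d in dimensions}  # weak dimension signal
    for ex in exercises:
        ex_eff = rng.normal()              # strong within-person exercise signal
        for d in dimensions:
            rows.append({"assessee": a, "exercise": ex, "dimension": d,
                         "rating": 3 + person + ex_eff + dim_eff[d]
                                   + 0.3 * rng.normal()})
df = pd.DataFrame(rows)

# The assessee x exercise interaction carries the exercise (method) variance;
# the assessee x dimension interaction carries the dimension (trait) variance.
model = smf.ols("rating ~ C(assessee) + C(dimension) + C(exercise)"
                " + C(assessee):C(exercise) + C(assessee):C(dimension)",
                data=df).fit()
anova = sm.stats.anova_lm(model)
eta_sq = anova["sum_sq"] / anova["sum_sq"].sum()  # share of variance per term
print(eta_sq.round(3))  # exercise terms dominate in this simulated data
```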
Models of supervisory job performance ratings
Article Abstract:
Proposed and evaluated in this research were causal models that included measures of cognitive ability, job knowledge, task proficiency, two temperament constructs (achievement and dependability), awards, problem behavior, and supervisory ratings. The models were tested on a sample of 4,362 U.S. Army enlisted personnel in nine different jobs. Results of LISREL analyses showed partial confirmation of Hunter's (1983) earlier model, which included cognitive ability, job knowledge, task proficiency, and ratings. In an expanded model of supervisory ratings that included the other variables mentioned, technical proficiency and ratee problem behavior had substantial direct effects on supervisory ratings. Ratee ability, job knowledge, and dependability played strong indirect roles in this rating model. The expanded model accounted for more than twice as much variance in ratings as Hunter's variables alone. (Reprinted by permission of the publisher.)
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1991
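As a rough illustration of the path-model logic in this abstract, the sketch below estimates a simplified Hunter (1983)-style chain (ability to job knowledge to task proficiency to ratings) with piecewise OLS rather than LISREL, and multiplies path coefficients to obtain an indirect effect. The data are simulated standardized scores; all coefficients are invented and are not the study's estimates.

```python
# Hypothetical sketch of a Hunter (1983)-style path model:
# ability -> job knowledge -> task proficiency -> supervisory ratings.
# Piecewise OLS stands in for LISREL; data and coefficients are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4362                                   # sample size reported in the abstract
ability = rng.normal(size=n)
knowledge = 0.5 * ability + rng.normal(scale=0.87, size=n)
proficiency = 0.4 * knowledge + 0.2 * ability + rng.normal(scale=0.88, size=n)
ratings = 0.35 * proficiency + 0.15 * knowledge + rng.normal(scale=0.92, size=n)
df = pd.DataFrame(dict(ability=ability, knowledge=knowledge,
                       proficiency=proficiency, ratings=ratings))

# Estimate each structural equation separately with OLS.
p1 = smf.ols("knowledge ~ ability", df).fit()
p2 = smf.ols("proficiency ~ knowledge + ability", df).fit()
p3 = smf.ols("ratings ~ proficiency + knowledge", df).fit()

# One indirect path of ability on ratings: through knowledge, then proficiency.
indirect = (p1.params["ability"] * p2.params["knowledge"]
            * p3.params["proficiency"])
print(f"indirect effect via knowledge -> proficiency: {indirect:.3f}")
```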
A warning about the use of a standard deviation across dimensions within ratees to measure halo
Article Abstract:
Many practitioners measure the halo effect, the tendency of some evaluators to produce higher intercorrelations among performance dimensions than the true correlations warrant, by averaging the standard deviation of ratings across dimensions within ratees. This measure, however, does not accurately reflect the observed intercorrelations, and its past use may have distorted conclusions in published studies of halo. In the future, psychologists should instead use each rater's average observed intercorrelation among dimensions to assess the halo effect.
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1986
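The two halo indices at issue are easy to compute side by side. The sketch below, using invented ratings for a single rater with a built-in halo, contrasts the criticized index (the mean within-ratee standard deviation across dimensions) with the recommended one (the rater's average observed intercorrelation among dimensions); the matrix shape, halo strength, and data are assumptions for illustration only.

```python
# Hypothetical sketch contrasting the two halo indices discussed above.
# Rows are ratees, columns are performance dimensions, for a single rater.
import numpy as np

rng = np.random.default_rng(2)
n_ratees, n_dims = 30, 5
true_score = rng.normal(size=(n_ratees, 1))
halo_strength = 0.8                        # strong halo: dimensions move together
ratings = (halo_strength * true_score
           + (1 - halo_strength) * rng.normal(size=(n_ratees, n_dims)))

# Criticized index: average SD across dimensions within each ratee
# (a smaller value is read as more halo).
sd_index = ratings.std(axis=1, ddof=1).mean()

# Recommended index: mean of the off-diagonal dimension intercorrelations.
corr = np.corrcoef(ratings, rowvar=False)            # n_dims x n_dims matrix
mean_intercorr = corr[np.triu_indices(n_dims, k=1)].mean()

print(f"mean within-ratee SD:            {sd_index:.3f}")
print(f"mean dimension intercorrelation: {mean_intercorr:.3f}")
```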
- Abstracts: Age differences in information processing: understanding deficits in young and elderly consumers. Age differences in product categorization
- Abstracts: The role of television in the construction of consumer reality. The influence of involvement on disaggregate attribute choice models
- Abstracts: Templates of original innovation: projecting original incremental innovations from intrinsic information. Using cellular automata modeling of the emergence of innovations
- Abstracts: More than meets the eye: the effect of missing information on purchase evaluations
- Abstracts: On the relationship between cognitive and affective processes: a critique of Zajonc and Markus. A meta-analysis of effect sizes in consumer behavior experiments