Meta-analysis for integrating study outcomes: a Monte Carlo study of its susceptibility to Type I and Type II errors
Article Abstract:
A Monte Carlo study was conducted to determine the Type I and Type II error rates of the Schmidt and Hunter (S & H) meta-analysis method and of the U statistic for assessing homogeneity within a set of correlations. One thousand samples of correlations were generated randomly to fill each of the 450 cells of an 18 by 5 by 5 (Underlying Population Correlations by Number of Correlations Compared by Sample Size per Correlation) design. To assess Type I error rates, correlations were drawn from the same population; to assess power, correlations were drawn from two different populations. Compared with U, which was uniformly robust, the Type I error rate of the S & H method was unacceptably high in many cells, particularly when the criterion for determining homogeneity was set at a highly conservative level. Power for the S & H method increased with increasing size of the population difference, sample size per correlation, and, in some cases, number of correlations compared. In most conditions, however, the U statistic offered poorer protection against Type II errors. (Reprinted by permission of the publisher.)
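The abstract does not reproduce the simulation details, but the general procedure can be illustrated. The Python sketch below is not the authors' code: it assumes the U statistic takes the standard Fisher-z chi-square form for testing homogeneity of correlations, and the values of rho, k, and n are illustrative rather than cells from the 18 by 5 by 5 design. It estimates the Type I error rate of such a homogeneity test by repeatedly drawing sets of correlations from a single bivariate normal population and counting false rejections.

import numpy as np
from scipy import stats

def sample_correlation(rho, n, rng):
    """Draw one sample correlation of size n from a bivariate normal population with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    return np.corrcoef(x[:, 0], x[:, 1])[0, 1]

def u_statistic(rs, ns):
    """Assumed form of the U test: U = sum((n_i - 3) * (z_i - z_bar)^2) on Fisher z transforms,
    referred to a chi-square distribution with k - 1 degrees of freedom."""
    z = np.arctanh(rs)
    w = ns - 3.0
    z_bar = np.sum(w * z) / np.sum(w)
    return np.sum(w * (z - z_bar) ** 2)

def type1_error_rate(rho=0.25, k=10, n=50, reps=1000, alpha=0.05, seed=0):
    """Proportion of replications in which homogeneity is (falsely) rejected."""
    rng = np.random.default_rng(seed)
    crit = stats.chi2.ppf(1 - alpha, df=k - 1)
    ns = np.full(k, n)
    rejections = 0
    for _ in range(reps):
        rs = np.array([sample_correlation(rho, n, rng) for _ in range(k)])
        if u_statistic(rs, ns) > crit:
            rejections += 1
    return rejections / reps

print(type1_error_rate())  # close to alpha if the test is robust under homogeneity

With correlations drawn from two different populations instead of one, the same loop yields an estimate of power rather than of the Type I error rate.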
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1987
Guidelines for clean data: detection of common mistakes
Article Abstract:
Ways to avoid errors when collecting, analyzing, and compiling data for psychological research studies are discussed. Thirteen rules for avoiding such errors are identified; these include: (1) not changing standard scales, or noting any such changes necessitated by the research, (2) designing possible questionnaire responses with future response coding in mind, (3) participating in data-gathering activities, (4) ensuring research instruments are properly completed, (5) familiarizing oneself with the measuring instruments, (6) using weighted scales in accordance with standard scoring, (7) verifying the order of data input, (8) ensuring data sets are comparable before combining them into a single sample, and (9) labeling all aspects of the research sample.
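As a purely hypothetical illustration, not taken from the article and using made-up column names and codes, a few of these rules can be expressed as simple checks run before two coded samples are pooled:

import pandas as pd

def pool_samples(df_a, df_b, required_columns, valid_codes):
    """Combine two coded samples after basic clean-data checks (rules 4, 2/6, 8, and 9)."""
    # Rule (4): instruments properly completed -- no missing responses on required items.
    for col in required_columns:
        assert df_a[col].notna().all() and df_b[col].notna().all(), f"missing values in {col}"
    # Rules (2) and (6): responses fall within the planned coding scheme.
    for col, codes in valid_codes.items():
        assert set(df_a[col].dropna().unique()) <= codes, f"unexpected codes in {col} (sample A)"
        assert set(df_b[col].dropna().unique()) <= codes, f"unexpected codes in {col} (sample B)"
    # Rule (8): the two data sets carry the same variables before they are combined.
    assert list(df_a.columns) == list(df_b.columns), "samples are not comparable"
    # Rule (9): label the pooled sample so each record's origin stays traceable.
    return pd.concat([df_a.assign(sample="A"), df_b.assign(sample="B")], ignore_index=True)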
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1986
The validity of validity: an analysis of validation study designs
Article Abstract:
An analysis of eleven validation study designs indicates that the selection procedure used is a major design property that can affect the validity of a validation study. Validity and validation are distinguished: validity concerns the logic of the inferences drawn, whereas validation is the research process carried out under a given design. Evaluations of validity consider statistical conclusion validity, internal validity, construct validity, and external validity. No single research design is identified as the most valid form of research; consequently, programs of validation research should continue to be used. Differences between concurrent and predictive validation are also discussed.
Publication Name: Journal of Applied Psychology
Subject: Social sciences
ISSN: 0021-9010
Year: 1986