A comparison of data reduction methods for job analysis

Abstract

Since the enactment of the 1964 Civil Rights Act and the creation of the Uniform Guidelines on Employee Selection Procedures (EEOC, 1978), the public and private sectors have made notable efforts to avoid both prima facie and actual discrimination in job development, training, assessment, career planning, and many other HR disciplines. The common denominator in improving the reliability, validity, and overall fairness of all of these areas is job analysis (SIOP, 1987). Job analysis is defined as "a procedure undertaken to understand job duties and behaviors and performance standards" (SIOP, 1987, p. 38). Job analysis "... serves a host of organizational purposes related to human resources management, including such purposes as job evaluation, personnel selection, performance appraisal, and the like" (Ash & Levine, 1980, p. 53). The specific purpose for which job analysis data are collected (i.e., the application) can have a significant impact on how the data are collected, where the data are collected, and, ultimately, the sufficiency of the job analysis data themselves. Similarly, the quality of the job analysis data collected has a powerful effect on the utility (i.e., validity, reliability, cost, acceptance, utilization rates, etc.) of the application.

Despite its value, job analysis is often ignored as a first step in the development of human resources activities. Ash (1988) points out that "job analysis has served as a developmental base for performance appraisal systems in organizations, but as with the personnel selection function, job analysis information has typically been underutilized for this purpose" (p. 9). This underutilization by practitioners is mirrored by, and may in fact result from, a lack of empirical research.

An important job analysis activity that has attracted limited study is data reduction. Hughes and Prien (1989) and Ash (1988) have pointed out that relatively little research has been conducted on the best method for translating KSAs into a selection test. This is a key procedural issue: how should large data sets (task and KSA items) be combined into a smaller number of factors or dimensions? Should dimensions be determined qualitatively or quantitatively (rational methods vs. statistical methods)? Which methods predict performance better and thus make better use of the testing budget? The current study addresses these questions and, in turn, an important empirical issue: what is the best way to reduce job analysis data into a manageable form?
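
To make the statistical route concrete, the sketch below is a minimal, hypothetical illustration of quantitative data reduction; it is not the procedure used in this thesis. The simulated ratings matrix, the item count, and the number of retained components are all assumptions made for demonstration only.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical data: 120 incumbents rate 40 KSA items (importance, 1-5 scale).
# Real job analysis data would come from task/KSA survey instruments.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(120, 40)).astype(float)

# Statistical reduction: extract a small number of components (dimensions)
# from the item intercorrelations. The number retained (here 5) would normally
# be chosen with a scree plot or an eigenvalue-greater-than-one rule.
pca = PCA(n_components=5)
scores = pca.fit_transform(ratings)   # incumbent scores on each dimension
loadings = pca.components_.T          # item loadings on each dimension

print("Variance explained:", pca.explained_variance_ratio_.round(2))
# Items loading highly on the same component would be grouped into one
# dimension; a rational approach would instead have subject-matter experts
# sort the items into dimensions by judgment.
```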

Graduation Date

1995

Semester

Fall

Advisor

Wooten, William

Degree

Master of Science (M.S.)

College

College of Arts and Sciences

Department

Psychology

Format

PDF

Pages

80 p.

Language

English

Length of Campus-only Access

None

Access Status

Masters Thesis (Open Access)

Identifier

DP0029482

Subjects

Arts and Sciences -- Dissertations, Academic; Dissertations, Academic -- Arts and Sciences
