Abstract

This article interrogates prevailing student evaluation instruments in higher education, assessing their construct validity through an exploratory factor analysis of a campus-wide rating scale. The analysis reveals a single, ambiguously defined factor that explains negligible variance, suggesting the ratings measure students' attraction to the instructor rather than pedagogical competence. Situating these findings within accountability discourse and psychometric theory, the article urges replacing item-aggregation strategies with a theory-driven model of teaching effectiveness. It outlines an iterative procedure for item generation, confirmatory analysis, and context-sensitive deployment, distinguishing global from specific prompts and descriptive from evaluative ones, and recognizing curricular variation across service, undergraduate, and graduate courses. By charting these methodological considerations, the article offers administrators and researchers a blueprint for constructing valid, reliable instruments that support equitable faculty assessment and meaningful instructional improvement.
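For readers unfamiliar with the method, the sketch below is a minimal, hypothetical illustration of the kind of check the abstract describes; it is not the article's analysis or data. It simulates responses to a rating scale driven by one weak latent trait, fits a single-factor model with scikit-learn's FactorAnalysis, and reports the share of total variance that factor explains. The item count, loading range, and sample size are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical data: 500 students rating 12 items, where responses are
# driven by a single weak latent trait (e.g., general "likability").
# All parameters below are illustrative assumptions, not study values.
rng = np.random.default_rng(0)
n_students, n_items = 500, 12
latent = rng.normal(size=(n_students, 1))             # one latent trait
loadings = rng.uniform(0.2, 0.5, size=(1, n_items))   # weak item loadings
noise = rng.normal(size=(n_students, n_items))        # unit-variance error
responses = latent @ loadings + noise

# Fit a one-factor exploratory model.
fa = FactorAnalysis(n_components=1, random_state=0)
fa.fit(responses)

# Approximate the proportion of total variance the single factor
# explains: sum of squared estimated loadings over total item variance.
ssl = (fa.components_ ** 2).sum()
total_var = responses.var(axis=0, ddof=1).sum()
print(f"Variance explained by one factor: {ssl / total_var:.1%}")
```

With loadings this weak, the extracted factor accounts for only a small fraction of the variance, which is the pattern the abstract characterizes as a single ambiguous factor explaining negligible variance. A full analysis would also examine rotation, parallel analysis, and model fit, which this sketch omits.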
