Beyond the Basics: A Structured Faculty Development Model for Evaluating GenAI Research Applications
Location
Sun & Surf I/II
Start Date
May 29, 2025, 2:00 PM
End Date
May 29, 2025, 2:25 PM
Description
As generative AI transforms academic work, faculty development initiatives often struggle to bridge the gap between generic AI overviews and discipline-specific needs. Rush University implemented an innovative three-part workshop series that paired disciplinary expertise with AI-focused educational guidance to help faculty critically evaluate GenAI tools for academic research. Moving beyond theoretical presentations, the series demonstrated AI's potential through mini-experiments in three areas: AI-powered literature reviews, data analysis and visualization, and AI as a peer reviewer and evaluator. This evidence-based approach to systematically testing AI systems advances faculty development beyond general awareness toward rigorous, discipline-specific evaluation.
Recommended Citation
Rush, Emily and Wilson, Adam, "Beyond the Basics: A Structured Faculty Development Model for Evaluating GenAI Research Applications" (2025). Teaching and Learning with AI Conference Presentations. 105.
https://stars.library.ucf.edu/teachwithai/2025/thursday/105