Using Fine-tuned LLMs to Grade Homework

Alternative Title

Using Fine-tuned Large Language Models (LLMs) to Grade Homework

Contributor

University of Central Florida. Faculty Center for Teaching and Learning; University of Central Florida. Division of Digital Learning; Teaching and Learning with AI Conference (2025 : Orlando, Fla.)

Location

Gold Coast I/II

Start Date

28-5-2025 4:00 PM

End Date

28-5-2025 4:25 PM

Publisher

University of Central Florida Libraries

Keywords

Fine-tuning; LLMs; Homework grading; Prompt engineering; Educational technology

Subjects

Grading and marking (Students)--Computer-assisted instruction; Machine learning--Study and teaching; Grading and marking (Students)--Computer programs; Artificial intelligence--Educational applications; Natural language generation (Computer science)

Description

LLMs have the potential to improve education by automatically grading homework and giving students hints. However, because of hallucinations and general capability limitations, using LLMs for grading and hint generation has produced mixed results. LLM performance can be improved with methods such as prompt engineering, Retrieval-Augmented Generation (RAG), and fine-tuning. In this talk, we explore the possibility of using fine-tuned LLMs to grade and give hints. Fine-tuning allows one to take a pretrained LLM, such as those built by OpenAI, and adapt it to a highly specific purpose.
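
As a rough illustration of the workflow the abstract describes, the sketch below uses the OpenAI Python SDK to upload graded examples and start a fine-tuning job. The file name, rubric prompt, and model name are hypothetical placeholders, not details from the presentation.

# Minimal sketch: fine-tuning an OpenAI model on graded homework examples.
# Assumes a hypothetical graded_homework.jsonl in the chat fine-tuning format,
# one example per line, e.g.:
# {"messages": [{"role": "system", "content": "You are a grader. Apply the rubric."},
#               {"role": "user", "content": "<assignment prompt + student answer>"},
#               {"role": "assistant", "content": "<score and hint>"}]}
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the graded-homework training data for fine-tuning.
training_file = client.files.create(
    file=open("graded_homework.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch the fine-tuning job on a base model that supports fine-tuning.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",  # illustrative; any fine-tunable model works
)
print(job.id, job.status)

Once the job finishes, the resulting model ID can be passed to the usual chat-completions call in place of the base model, so the grader is invoked exactly like any other model.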

Language

eng

Type

Presentation

Format

application/vnd.openxmlformats-officedocument.presentationml.presentation

Rights Statement

All Rights Reserved

Audience

Faculty, Students, Instructional designers
