Keywords

Neuro-symbolic AI, Deep Reinforcement Learning, Symbolic Regression, Explainability

Abstract

In the past decade, reinforcement learning (RL) has achieved breakthroughs across various domains, from surpassing human performance in strategy games to enhancing the training of large language models (LLMs) with human feedback. However, RL has yet to gain widespread adoption in mission-critical fields such as healthcare and autonomous vehicles. This is primarily due to the limited explainability and generalizability of the neural networks at the core of deep reinforcement learning (DRL) agents, and the resulting lack of trust in their decisions. While neural DRL agents leverage the power of neural networks to solve specific tasks robustly and efficiently, this often comes at the cost of explainability and generalizability. In contrast, purely symbolic agents remain explainable and trustworthy but often underperform on high-dimensional inputs. In this work, we developed a method to distill explainable and trustworthy agents using neuro-symbolic AI. Neuro-symbolic distillation combines the strengths of symbolic reasoning and neural networks, creating a hybrid framework that leverages the structured knowledge representation of symbolic systems alongside the learning capabilities of neural networks. The key steps of neuro-symbolic distillation are training traditional DRL agents, then extracting, selecting, and distilling their learned policies into symbolic forms using symbolic regression and tree-based models. These symbolic representations are then employed in place of the neural agents to make interpretable decisions with comparable accuracy. The approach is validated through experiments on Lunar Lander and Pong, demonstrating that symbolic representations can effectively replace neural agents while enhancing transparency and trustworthiness. Our findings suggest that this approach mitigates the black-box nature of neural networks, providing a pathway toward more transparent and trustworthy AI systems. The implications of this research are significant for fields requiring both high performance and explainability, such as autonomous systems, healthcare, and financial modeling.
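
The sketch below illustrates one way the distillation pipeline described in the abstract could look in practice: train a neural DRL teacher, roll it out to collect (state, action) pairs, and fit an interpretable decision-tree surrogate on those pairs. The library choices (gymnasium, stable-baselines3, scikit-learn), hyperparameters, and feature names are illustrative assumptions, not the thesis code.

import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO
from sklearn.tree import DecisionTreeClassifier, export_text

# 1) Train a neural DRL teacher on Lunar Lander.
#    (Environment ID is an assumption; newer gymnasium releases use "LunarLander-v3".)
env = gym.make("LunarLander-v2")
teacher = PPO("MlpPolicy", env, verbose=0)
teacher.learn(total_timesteps=200_000)

# 2) Extract (state, action) pairs by rolling out the trained teacher.
states, actions = [], []
obs, _ = env.reset(seed=0)
for _ in range(20_000):
    action, _ = teacher.predict(obs, deterministic=True)
    states.append(obs)
    actions.append(int(action))
    obs, _, terminated, truncated, _ = env.step(int(action))
    if terminated or truncated:
        obs, _ = env.reset()

# 3) Distill the policy into a shallow, human-readable decision tree
#    (a tree-based stand-in for the symbolic student).
student = DecisionTreeClassifier(max_depth=5)
student.fit(np.array(states), np.array(actions))
print(export_text(student, feature_names=[
    "x", "y", "vx", "vy", "angle", "angular_vel", "left_leg", "right_leg"]))

The printed tree can then be read and audited directly, and used in place of the neural agent at decision time; a symbolic-regression tool could be substituted for the tree in step 3 to obtain closed-form expressions instead.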

Completion Date

2024

Semester

Summer

Committee Chair

Ewetz, Rickard

Degree

Master of Science (M.S.)

College

College of Engineering and Computer Science

Department

Electrical and Computer Engineering

Degree Program

Computer Engineering

Format

application/pdf

Release Date

8-15-2027

Length of Campus-only Access

3 years

Access Status

Masters Thesis (Campus-only Access)

Campus Location

Orlando (Main) Campus

Accessibility Status

Meets minimum standards for ETDs/HUTs

Pong_Decision_Tree.mp4 (100 kB)
Farhan Fuad Abir

LunarLander_Symbolic_Regression.mp4 (45 kB)
Farhan Fuad Abir

Restricted to the UCF community until 8-15-2027; it will then be open access.
