The Random Word Trick: Using Unsophisticated Code to Make an LLM Less Boring

Submission Type

Paper

Start Date/Time (EDT)

20-7-2024 10:30 AM

End Date/Time (EDT)

20-7-2024 11:30 AM

Location

Algorithms & Imaginaries

Abstract

On the one hand, the mighty Large Language Model (LLM).

On the other, all those earlier, less “sophisticated” modes of computationally generating text---for instance, through simpler or smaller language models (e.g., n-gram models) or aleatoric operations (e.g., randomly choosing words).

The point of my talk is that the former does not make the latter obsolete; on the contrary, LLMs have certain limitations that can be addressed via more rudimentary generative techniques. I theorize the “LLM-” (LLM minus): a hybrid of an LLM with some other, simpler, less-clever code that intentionally thwarts or degrades the LLM’s sense-making ability.

I consider the “LLM-” in the context of my book-length creative project, Salon des Fantômes (published by Inside the Castle in 2024). In this project, completed in the fall and winter of 2022, I used GPT-3.5 to create a literary/philosophical salon of which I would be the only human attendee, the others being characters (a Maoist, a Freudian, etc.) fabricated via prompts. While developing this project, I grew frustrated with just how boring the responses of the GPT-fabricated characters were.

To try to overcome this, I introduced what I call the “random word trick” for prompt engineering. I wrote some Python code to add a simple aleatoric element to the prompts I issued to GPT-3.5. The simplest version added a demand that the LLM respond to me using a randomly chosen word: instead of the prompt


“Respond to me in the voice of a devotee of Freud.”

a slightly more complicated one:

“Respond to me in the voice of a devotee of Freud. Use the words ‘salamander’ and ‘symptom.’”
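The core of the trick can be sketched in a few lines of Python. The code below is a minimal illustration, not the project's actual implementation (which is not published); the word list and function name are hypothetical stand-ins.

```python
import random

# Hypothetical stand-in vocabulary; the word list actually used in
# Salon des Fantômes is not published.
WORDS = ["salamander", "symptom", "umbrella", "thistle", "gramophone", "ledger"]

def random_word_prompt(base_prompt: str, n_words: int = 2) -> str:
    """Append a demand that the LLM use n randomly chosen words."""
    chosen = random.sample(WORDS, n_words)
    quoted = " and ".join(f"'{w}'" for w in chosen)
    return f"{base_prompt} Use the words {quoted}."

# For example, random_word_prompt("Respond to me in the voice of a
# devotee of Freud.") might yield:
# "Respond to me in the voice of a devotee of Freud.
#  Use the words 'thistle' and 'ledger'."
```

Because the injected words are sampled without regard to the conversational context, the model must work to accommodate them, which is precisely the sense-making friction the “LLM-” hybrid aims to produce.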

I will demonstrate how this and other aleatoric techniques stimulated the LLM to produce more interesting responses, and I will suggest other ways that practitioners might experiment with “LLM-” hybrids.

Bio

Kyle Booten is an assistant professor of English at the University of Connecticut, Storrs. His research explores the ways that small-scale, personalized algorithmic systems may be designed to care for one's own mind. He is the author of Salon des Fantômes (Inside the Castle 2024), a book that documents a philosophical salon he attended with a cast of AI-fabricated characters, and the creator of Nightingale, a web extension that re-distracts the user with contextually-relevant excerpts from the poetry of John Keats (available in the Chrome Web Store). His poetry written with algorithmic feedback and interference has been published in Fence, Lana Turner, and Blackbox Manifold; his scholarly writing has recently appeared or is forthcoming in electronic book review, Critical AI, and xCoAx '23.
