Submission Type
Virtual Engagement Session
Start Date
17 July 2020, 2:00 PM
End Date
17 July 2020, 2:30 PM
Abstract
This is the debut show of Singling, a new text sonification software we have developed to analyze and otherwise perform string data, choosing from a wide variety of linguistic and musical parameters and multilevel parsing of text. Virtual attendees of the performance will be invited to share their thoughts, reactions, poems, and codework with us through the chat function of the video conferencing platform; our program can translate all manner of keyboard symbols, numbers, and letters of the English alphabet, as well as various classes of words, into MIDI code. We will perform the output of this textual data live, using a variety of MIDI-enabled instruments and digital interfaces. Each “song” will take a new portion of text to demonstrate particular settings and parameters for the transmediation of linguistic input to musical output, and will feature different combinations of presenter-performers and instruments performing the MIDI code. Through screen sharing, we will show and narrate the choice of text and the settings and combinations of instruments used for each song, and then perform it remotely. Attendees are encouraged to use headphones or a sound system to hear the full range of audio frequencies.
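Singling's actual parsing rules and parameter settings are not detailed in this abstract. Purely as an illustration of the general idea, the following sketch shows one hypothetical way a string could be transmediated into MIDI note numbers, mapping each letter onto a scale degree (the scale, base note, and mapping are assumptions, not the software's real scheme):

```python
# Illustrative sketch only: Singling's real text-to-MIDI mapping is not
# described in the abstract. Here, letters walk up a C major scale starting
# at middle C (MIDI note 60); non-letter characters are skipped.
from typing import List, Optional

C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the scale degrees

def char_to_midi_note(ch: str, base: int = 60) -> Optional[int]:
    """Map a single letter to a MIDI note number; return None if unmapped."""
    if not ch.isalpha():
        return None
    idx = ord(ch.lower()) - ord("a")            # position in the alphabet, 0..25
    octave, degree = divmod(idx, len(C_MAJOR))  # wrap up an octave every 7 letters
    return base + 12 * octave + C_MAJOR[degree]

def text_to_notes(text: str) -> List[int]:
    """Sonify a string: one note per letter, other symbols dropped."""
    notes = (char_to_midi_note(c) for c in text)
    return [n for n in notes if n is not None]
```

A performance system would then send these note numbers to a MIDI-enabled instrument; this sketch stops at the numeric output (e.g. `text_to_notes("ab c")` yields `[60, 62, 64]`).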
Singling and the Earful Yearning: A remote, digital, hyper-interactive text-to-MIDI literacoustic jam
Bio
This performance / virtual engagement session is presented by members of the Digital Literacy Centre of the University of British Columbia. Dr. Kedrick James is Director of the Digital Literacy Centre, Deputy Head and Associate Professor of Teaching in the Department of Language and Literacy Education. He specializes in digital literacy, automation of literacy, and arts-based research. He is joined by Rachel Horst, Yuya Takeda, and Esteban Morales, PhD students in the Department of Language and Literacy Education, along with Effiam Yung, Communications Specialist for the Department and programmer for this sonification project.