The Future of Text in XR: Phase 1 of the Project
Submission Type
Paper
Start Date/Time (EDT)
July 19, 2024, 4:45 PM
End Date/Time (EDT)
July 19, 2024, 5:45 PM
Location
Hypertexts & Fictions
Abstract
This proposal is for the "Experimental Track," which is not showing up as a choice in the "Submission Type" above.
This demonstration derives from research currently being undertaken by an international team of artists, scientists, and digital humanities scholars on the future of text in XR. Our work, supported by a grant from The Alfred P. Sloan Foundation awarded in 2023, looks at ways to harness the potential of Virtual and Augmented Reality (VR/AR), collectively referred to as Extended Reality, or ‘XR’, to expand academic communication through the development of open-source software that lets users read, manipulate, navigate, and create in three-dimensional space. At the heart of our project is the question of how those using XR for research and creative activities can get the most out of their data without proprietary systems stripping core data “on export” (as is the case with authoring systems exporting to PDF today). The demonstration shows the progress of our work toward making a personal library of academic articles (in the form of PDFs) accessible from one’s computer, reading and interacting with those documents in XR, and exporting them in useful formats.
Specifically, we will demonstrate:
- Full document interactions: The user will be able to directly interact with a document to move it, scale it, and set a preferred reading angle, and to lock the document to a table or to the headset (the user’s head). The document can be read as a single page, a two-page spread, a multiple-page spread, or with all pages laid out in a large rectangle (a placement sketch follows this list).
- Document component interactions: The user will be able to place elements of the document, including images, the table of contents, glossary, graphs, and references, at 3D spatial positions, either manually or at pre-determined locations.
- Multi-document interactions (connections): The user will be able to interact with citations in one document and see how they connect to other documents in their Library and beyond.
- External document interactions: Documents not in the user’s Library will be presented as ‘tokens’ in citation trees and will be quick and easy to retrieve.
- Headset/traditional computer transition: The user will be able to take off their headset at any time. Because this approach to moving between headset and traditional computer uses Visual-Meta, any document presented in XR will carry an additional, temporary appendix recording full spatial information for use the next time the user chooses to interact with the document in XR (see the serialization sketch after this list).
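To make the full-document interactions above concrete, here is a minimal sketch of how per-document placement state and the large-rectangle page layout could be modeled. The type and function names (DocumentPlacement, gridLayout, and so on) are hypothetical, chosen for this illustration rather than taken from the software's actual API:

    type Vec3 = { x: number; y: number; z: number };

    // What the document is locked to: free-floating in the room,
    // a table surface, or the headset (the user's head).
    type AnchorMode = "world" | "table" | "head";

    type LayoutMode = "single" | "two-page" | "multi-page" | "grid";

    interface DocumentPlacement {
      position: Vec3;          // position in the anchor's coordinate frame (metres)
      scale: number;           // uniform scale factor
      readingAngleDeg: number; // tilt of the page toward the reader
      anchor: AnchorMode;
      layout: LayoutMode;
    }

    // Lay all pages out "in a large rectangle": a near-square grid of page quads.
    function gridLayout(
      pageCount: number,
      pageWidth: number,
      pageHeight: number,
      gap: number
    ): Vec3[] {
      const cols = Math.ceil(Math.sqrt(pageCount));
      const positions: Vec3[] = [];
      for (let i = 0; i < pageCount; i++) {
        const col = i % cols;
        const row = Math.floor(i / cols);
        positions.push({
          x: col * (pageWidth + gap),
          y: -row * (pageHeight + gap), // rows grow downward from the top-left page
          z: 0,
        });
      }
      return positions;
    }

Switching layout modes then reduces to recomputing page positions while the placement record (anchor, scale, reading angle) stays constant.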
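The headset/traditional computer transition relies on Visual-Meta, which appends metadata to a document as plain, BibTeX-style text. The sketch below shows one way the temporary spatial appendix could be serialized; the wrapper markers, entry name, and field names here are illustrative assumptions, not a published Visual-Meta schema:

    interface SpatialRecord {
      documentId: string;                  // identifier for the document in the Library
      anchor: "world" | "table" | "head";  // what the document was locked to
      position: [number, number, number];  // last position in XR (metres)
      scale: number;
      readingAngleDeg: number;
    }

    // Serialize spatial state as a BibTeX-style block appended to the document.
    // Because the appendix is plain text, it survives intact on a traditional
    // computer and can be parsed to restore the placement in the next XR session.
    function toVisualMetaAppendix(rec: SpatialRecord): string {
      return [
        "@{visual-meta-start}",
        `@spatial{${rec.documentId},`,
        `  anchor = {${rec.anchor}},`,
        `  position = {${rec.position.join(", ")}},`,
        `  scale = {${rec.scale}},`,
        `  reading_angle = {${rec.readingAngleDeg}},`,
        "}",
        "@{visual-meta-end}",
      ].join("\n");
    }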
For our demonstration we will use the Apple Vision Pro and Meta Quest 3 to show the advances we have made during Year 1 of our project. Participants who have access to a VR headset will be invited to access our files on GitHub and follow the demonstration via Zoom, allowing them to set up their own library of documents to explore.
What’s at stake is simply this: XR extends our potential. How we choose to extend ourselves defines who we are and who we want to be. We are at a pivotal point in our co-evolution with the tools and information environments we use, and how we design the way we work in XR will have repercussions for generations. It is important for academics with the capacity to do so to take the lead in developing this powerful technology and forming the practices associated with it.
Recommended Citation
Grigar, Dene and Thompson, Andrew, "The Future of Text in XR: Phase 1 of the Project" (2024). ELO (Un)linked 2024. 12.
https://stars.library.ucf.edu/elo2024/hypertextsandfictions/schedule/12
Bio
Dene Grigar is Founder and Director of the Electronic Literature Lab. She also serves as Director of the Creative Media & Digital Technology Program at Washington State University Vancouver, with research focusing on the creation, curation, preservation, and criticism of Electronic Literature, specifically the building of multimedial environments and experiences. She has authored or co-authored 14 media works, including Curlew (with Greg Philbrook, 2014), “A Villager’s Tale” (with Brett Oppegaard, 2011), and the “24-Hour Micro-Elit Project” (2009), as well as six books and over 60 articles. She curates exhibits of electronic literature and media art, mounting shows at the Library of Congress and for the Modern Language Association, among other venues. She serves as Associate Editor for Leonardo Reviews. For the Electronic Literature Organization she served as President from 2013 to 2019 and currently serves as its Managing Director and Curator of The NEXT. Her website is at http://nouspace.net/dene.
Andrew Thompson is the XR programmer on "The Future of Text in XR" project. He also works on Flash preservation and born-digital reconstruction projects for the Electronic Literature Lab. He served as Project Manager for Amnesia: Restored and as a core developer on many other reconstructions, including David Kolb's Caged Texts, Stuart Moulthrop's Victory Garden, John McDaid's Uncle Buddy's Phantom Funhouse, and Rob Swigart's Data Entry: Portal. Outside of ELL, he works as a game developer using Unreal Engine 5 and Unity, and he is a founder of and serves on the executive team of CMDC Studios.