Towards Sound Accessibility in Virtual Reality ICMI ’21, October 18–22, 2021, Montréal, QC, Canada
[20]
Steven Goodman, Susanne Kirchner, Rose Guttman, Dhruv Jain, Jon Froehlich,
and Leah Findlater. 2020. Evaluating Smartwatch-based Sound Feedback for Deaf and
Hard-of-hearing Users Across Contexts. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems, 1–13.
[21]
Ru Guo, Yiru Yang, Johnson Kuang, Xue Bin, Dhruv Jain, Steven Goodman,
Leah Findlater, and Jon Froehlich. 2020. HoloSound: Combining Speech and
Sound Identification for Deaf or Hard of Hearing Users on a Head-mounted
Display. In The 22nd International ACM SIGACCESS Conference on Computers and
Accessibility, 1–4.
[22]
Sander Huiberts. 2010. Captivating sound: the role of audio for immersion in
computer games. University of Portsmouth.
[23]
John W Hunt, Marcel Arditi, and F Stuart Foster. 1983. Ultrasound transducers
for pulse-echo medical imaging. IEEE Transactions on Biomedical Engineering, 8:
453–481.
[24]
Dhruv Jain, Bonnie Chinh, Leah Findlater, Raja Kushalnagar, and Jon Froehlich.
2018. Exploring Augmented Reality Approaches to Real-Time Captioning: A
Preliminary Autoethnographic Study. In Proceedings of the 2018 ACM Conference
Companion Publication on Designing Interactive Systems, 7–11.
[25]
Dhruv Jain, Brendon Chiu, Steven Goodman, Chris Schmandt, Leah Findlater,
and Jon E Froehlich. 2020. Field study of a tactile sound awareness device for deaf
users. In Proceedings of the 2020 International Symposium on Wearable Computers,
55–57.
[26]
Dhruv Jain, Leah Findlater, Christian Vogler, Dmitry Zotkin, Ramani Duraiswami,
and Jon Froehlich. 2015. Head-Mounted Display Visualizations to Support Sound
Awareness for the Deaf and Hard of Hearing. In Proceedings of the 33rd Annual
ACM Conference on Human Factors in Computing Systems, 241–250.
[27]
Dhruv Jain, Rachel Franz, Leah Findlater, Jackson Cannon, Raja Kushalnagar, and
Jon Froehlich. 2018. Towards Accessible Conversations in a Mobile Context for
People Who are Deaf and Hard of Hearing. In ASSETS 2018 - Proceedings of the
20th International ACM SIGACCESS Conference on Computers and Accessibility,
81–92.
[28]
Dhruv Jain, Sasa Junuzovic, Eyal Ofek, Mike Sinclair, John Porter, Chris Yoon,
Swetha Machanavajhala, and Meredith Ringel Morris. 2021. A Taxonomy of
Sounds in Virtual Reality. In Designing Interactive Systems (DIS) 2021, 160–170.
[29]
Dhruv Jain, Angela Carey Lin, Marcus Amalachandran, Aileen Zeng, Rose
Guttman, Leah Findlater, and Jon Froehlich. 2019. Exploring Sound Awareness in
the Home for People who are Deaf or Hard of Hearing. In Proceedings of the 2019
CHI Conference on Human Factors in Computing Systems, 94:1–94:13.
[30]
Dhruv Jain, Kelly Mack, Akli Amrous, Matt Wright, Steven Goodman, Leah
Findlater, and Jon E Froehlich. 2020. HomeSound: An Iterative Field Deployment
of an In-Home Sound Awareness System for Deaf or Hard of Hearing Users. In
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems
(CHI ’20), 1–12.
[31]
Dhruv Jain, Hung Ngo, Pratyush Patel, Steven Goodman, Leah Findlater, and
Jon Froehlich. 2020. SoundWatch: Exploring Smartwatch-based Deep Learning
Approaches to Support Sound Awareness for Deaf and Hard of Hearing Users. In
ACM SIGACCESS Conference on Computers and Accessibility, 1–13.
[32]
Maria Karam, Carmen Branje, Gabriel Nespoli, Norma Thompson, Frank Russo,
and Deborah Fels. 2010. The emoti-chair: An interactive tactile music exhibit.
3069–3074.
[33]
Klaus Krippendorff. 2018. Content analysis: An introduction to its methodology.
Sage Publications.
[34]
Arpi Mardirossian and Elaine Chew. 2007. Visualizing Music: Tonal Progressions
and Distributions. In ISMIR, 189–194.
[35]
Tiago A Marques, Len Thomas, Stephen W Martin, David K Mellinger, Jessica A
Ward, David J Moretti, Danielle Harris, and Peter L Tyack. 2013. Estimating animal
population density using passive acoustics. Biological Reviews 88, 2: 287–309.
[36]
Martyn Reding. Designing haptic responses. Retrieved September 6, 2020 from
https://medium.com/@martynreding/basics-of-designing-haptic-responses-
63dc6b52e010
[37]
Tara Matthews, Janette Fong, F. Wai-Ling Ho-Ching, and Jennifer Mankoff. 2006.
Evaluating non-speech sound visualizations for the deaf. Behaviour & Information
Technology 25, 4: 333–351. https://doi.org/10.1080/01449290600636488
[38]
Matthias Mielke and Rainer Brueck. 2015. Design and evaluation of a smartphone
application for non-speech sound awareness for people with hearing loss. In En-
gineering in Medicine and Biology Society (EMBC), 2015 37th Annual International
Conference of the IEEE, 5008–5011.
[39]
Mohammadreza Mirzaei, Peter Kán, and Hannes Kaufmann. 2020. EarVR: Using
ear haptics in virtual reality for deaf and hard-of-hearing people. IEEE Transac-
tions on Visualization and Computer Graphics 26, 5: 2084–2093.
[40]
Reiko Miyazaki, Issei Fujishiro, and Rumi Hiraga. 2003. Exploring MIDI datasets.
In ACM SIGGRAPH 2003 Sketches & Applications. 1.
[41]
Martez Mott, Edward Cutrell, Mar Gonzalez Franco, Christian Holz, Eyal Ofek,
Richard Stoakley, and Meredith Ringel Morris. 2019. Accessible by design: An
opportunity for virtual reality. In 2019 IEEE International Symposium on Mixed
and Augmented Reality Adjunct (ISMAR-Adjunct), 451–454.
[42]
Suranga Chandima Nanayakkara, Lonce Wyse, S. H. Ong, and Elizabeth A. Taylor.
2013. Enhancing Musical Experience for the Hearing-Impaired Using Visual and
Haptic Displays. Human–Computer Interaction 28, 2: 115–160.
[43]
Suranga Nanayakkara, Elizabeth Taylor, Lonce Wyse, and S H Ong. 2009. An
enhanced musical experience for the deaf: design and evaluation of a music
display and a haptic chair. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, 337–346.
[44]
S C W Ong and S Ranganath. 2005. Automatic sign language analysis: a survey
and the future beyond lexical meaning. IEEE Transactions on Pattern Analysis
and Machine Intelligence 27, 6: 873–891.
[45]
Shanmugam Muruga Palaniappan, Ting Zhang, and Bradley S Duerstock. 2019.
Identifying Comfort Areas in 3D Space for Persons with Upper Extremity Mobility
Impairments Using Virtual Reality. In The 21st International ACM SIGACCESS
Conference on Computers and Accessibility, 495–499.
[46]
Phil Parette and Marcia Scherer. 2004. Assistive Technology Use and Stigma.
Education and Training in Developmental Disabilities (September 2004): 217–226.
[47]
David Passig and Sigal Eden. 2001. Virtual reality as a tool for improving spatial
rotation among deaf and hard-of-hearing children. CyberPsychology & Behavior
4, 6: 681–686.
[48]
Mark Paterson. 2017. On haptic media and the possibilities of a more inclusive
interactivity. New Media & Society 19, 10: 1541–1562.
[49]
Yi-Hao Peng, Ming-Wei Hsu, Paul Taele, Ting-Yu Lin, Po-En Lai, Leon Hsu,
Tzu-chuan Chen, Te-Yen Wu, Yu-An Chen, Hsien-Hui Tang, and Mike Y. Chen.
2018. SpeechBubbles: Enhancing Captioning Experiences for Deaf and Hard-of-
Hearing People in Group Conversations. In SIGCHI Conference on Human Factors
in Computing Systems (CHI), Paper No. 293.
[50]
A J Phillips, A R D Thornton, S Worsfold, A Downie, and J Milligan. 1994. Expe-
rience of using vibrotactile aids with the profoundly deafened. European journal
of disorders of communication 29, 1: 17–26.
[51]
Martin Pielot, Benjamin Poppinga, Wilko Heuten, and Susanne Boll. 2011. A
tactile compass for eyes-free pedestrian navigation. In IFIP Conference on Human-
Computer Interaction, 640–656.
[52]
Ilyas Potamitis, Stavros Ntalampiras, Olaf Jahn, and Klaus Riede. 2014. Automatic
bird sound detection in long real-field recordings: Applications and tools. Applied
Acoustics 80: 1–9.
[53]
Marti L Riemer-Reiss and Robbyn R Wacker. 2000. Factors associated with assis-
tive technology discontinuance among individuals with disabilities. Journal of
Rehabilitation 66, 3.
[54]
Tom Ritchey. 2011. General morphological analysis (GMA). In Wicked problems–
Social messes. Springer, 7–18.
[55]
Frank A Saunders, William A Hill, and Barbara Franklin. 1981. A wearable tactile
sensory aid for profoundly deaf children. Journal of Medical Systems 5, 4: 265–270.
[56]
Andrew Sears, Min Lin, Julie Jacko, and Yan Xiao. 2003. When computers fade:
Pervasive computing and situationally-induced impairments and disabilities. In
HCI international, 1298–1302.
[57]
Kristen Shinohara and JO Wobbrock. 2011. In the shadow of misperception:
assistive technology use and social interactions. In SIGCHI Conference on Human
Factors in Computing Systems (CHI), 705–714.
[58]
Liu Sicong, Zhou Zimu, Du Junzhao, Shangguan Longfei, Jun Han, and Xin Wang.
2017. UbiEar: Bringing Location-independent Sound Awareness to the Hard-of-
hearing People with Smartphones. Proceedings of the ACM on Interactive, Mobile,
Wearable and Ubiquitous Technologies 1, 2: 17.
[59]
Alexa F Siu, Mike Sinclair, Robert Kovacs, Eyal Ofek, Christian Holz, and Edward
Cutrell. 2020. Virtual Reality Without Vision: A Haptic and Auditory White Cane
to Navigate Complex Virtual Worlds. In Proceedings of the 2020 CHI Conference
on Human Factors in Computing Systems, 1–13.
[60]
Sean M Smith and Glen N Williams. 1997. A visualization of music. In Proceedings.
Visualization’97 (Cat. No. 97CB36155), 499–503.
[61]
Axel Stockburger. 2003. The game environment from an auditive perspective.
Level Up: 4–6.
[62]
I R Summers, M A Peake, and M C Martin. 1981. Field trials of a tactile acoustic
monitor for the profoundly deaf. British journal of audiology 15, 3: 195–199.
[63]
Mauro Teófilo, Alvaro Lourenço, Juliana Postal, and Vicente F Lucena. 2018.
Exploring virtual reality to enable deaf or hard of hearing accessibility in live
theaters: A case study. In International Conference on Universal Access in Human-
Computer Interaction, 132–148.
[64]
Ryan Wedoff, Lindsay Ball, Amelia Wang, Yi Xuan Khoo, Lauren Lieberman,
and Kyle Rector. 2019. Virtual showdown: An accessible virtual reality game
with scaffolds for youth with visual impairments. In Proceedings of the 2019 CHI
Conference on Human Factors in Computing Systems, 1–15.
[65]
Janet M Weisenberger, Susan M Broadstone, and Frank A Saunders. 1989. Evalu-
ation of two multichannel tactile aids for the hearing impaired. The Journal of
the Acoustical Society of America 86, 5: 1764–1775.
[66]
Jacob O Wobbrock, Shaun K Kane, Krzysztof Z Gajos, Susumu Harada, and Jon
Froehlich. 2011. Ability-Based Design: Concept, Principles and Examples. ACM
Trans. Access. Comput. 3, 3: 9:1–9:27. https://doi.org/10.1145/1952383.1952384
[67]
Lining Yao, Yan Shi, Hengfeng Chi, Xiaoyu Ji, and Fangtian Ying. 2010. Music-
touch Shoes: Vibrotactile Interface for Hearing Impaired Dancers. In Proceedings
of the Fourth International Conference on Tangible, Embedded, and Embodied
Interaction (TEI ’10), 275–276.