Daniel Killough, Justin Feng, Rithvik Dyava, ZX Ching, Yapeng Tian, Yuhang Zhao
Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people.
Ruijia Chen*, Daniel Killough*, Leo Cui, Victor Suciu, Bilge Mutlu
Evaluating how mixed reality's tendency to drift virtual objects affects users' task performance and perceived task difficulty.
Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao
Analyzing developer challenges in integrating accessibility (a11y) features into their XR apps. Covering a11y features for people with visual, cognitive, motor, and speech & hearing impairments.
Ru Wang, Zach Potter, Yun Ho, Daniel Killough, Linda Zeng, Sanbrita Mondal, Yuhang Zhao
System using eye tracking to augment passages of text, addressing low vision people's reading challenges (e.g., line switching and difficult word recognition).
Daniel Killough, Amy Pavel
Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. Crowdsourced descriptions with 18 sighted community experts and evaluated with 9 blind participants.
Arman Farsad*, Daniel Killough*, Sahar Ali, Neha Momin, Sajani Patel, Thushani Herath, Lucy Atkinson, Erin Reilly
Two-part system leveraging advertising, game theory, and augmented reality to encourage young adults in Singapore and Texas to protect themselves against skin cancer. Image shows realistic cosmetic skin cancer effects rendered on users' faces over time, as informed by collaborators in public health.
Daniel Killough
Senior thesis investigating the feasibility of virtual reality technologies for oral placement therapy training (speech therapy largely for kids with Down syndrome). Developed and evaluated a system to convert existing 2D recordings to stereoscopic 3D for use with mobile VR. Recorded additional visuals to "bring" therapists into the presenter's context.