
I'm a third-year PhD student in the University of Wisconsin-Madison Computer Sciences department working with Dr. Yuhang Zhao and the madAbility Lab. I study human-computer interaction (HCI), particularly extended reality (AR/VR/XR), accessibility (a11y), and AI-powered interactive systems for people with disabilities. I am also interested in how immersive video technologies (livestreams, 360° cameras, projection mapping) and computer graphics concepts (ray tracing, real-time rendering) can be applied to fields like education, communication, esports, and healthcare.
Prior to Wisconsin, I received my BS in Computer Science from The University of Texas at Austin with certificates in Digital Arts & Media and immersive technologies. At UT, I worked with Amy Pavel on livestream accessibility and with Erin Reilly on AR for skin cancer prevention.
Outside of research, I test new products for HP HyperX and Dell Alienware. I also enjoy longboarding, running, backpacking, language learning, moderating online communities, and tracking the music I listen to on last.fm.
Daniel Killough, Justin Feng*, Zheng Xue "ZX" Ching*, Daniel Wang*, Rithvik Dyava*, Yapeng Tian, Yuhang Zhao
Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low-vision people. *Authors 2-5 contributed equally to this work.
Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li
AI-powered cooking assistant using wearable cameras to help blind and low-vision users by integrating non-visual cues (touch, smell) with video recipe content. The system proactively offers alerts and guidance, helping users understand their cooking state by aligning their physical environment with recipe instructions.
Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu
Evaluating how mixed reality's tendency to drift virtual objects affects users' perception of task difficulty and their task performance.
Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao
Analyzing the challenges developers face when integrating a11y features into their XR apps, covering a11y features for people with visual, cognitive, motor, and speech & hearing impairments.
Daniel Killough, Amy Pavel
Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. Crowdsourced descriptions from 18 sighted community experts and evaluated the system with 9 blind participants.