I'm a PhD student in the Computer Sciences department at UW-Madison, working with Dr. Yuhang Zhao in the madAbility Lab. I study human-computer interaction (HCI), particularly extended reality (AR/VR/MR/XR), accessibility (a11y), and AI-powered interactive systems for people with disabilities. I am also interested in how immersive video technologies (livestreams, 360° cameras, projection mapping) and computer graphics concepts (raytracing, real-time rendering) can be applied to fields like education, communication, esports, and healthcare.
Before Wisconsin, I earned my BS in CS from The University of Texas at Austin, alongside certifications in Digital Arts & Media and immersive technologies. There, I worked with Amy Pavel on live video accessibility for screen reader users and with Erin Reilly on using augmented reality for skin cancer prevention among young adults.
Outside of research, I test new products for HP HyperX and Dell Alienware. I also enjoy longboarding, backpacking, language learning, achievement hunting, moderating online communities, and tracking the music I listen to on last.fm.
Daniel Killough, Justin Feng*, Zheng Xue "ZX" Ching*, Daniel Wang*, Rithvik Dyava*, Yapeng Tian, Yuhang Zhao
Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people. *Authors 2-5 contributed equally to this work.
Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li
Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu
Evaluating how mixed reality's tendency to drift virtual objects affects users' perception of task difficulty and their task performance.
Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao
Analyzing the challenges developers face when integrating a11y features into their XR apps, covering a11y features for people with visual, cognitive, motor, and speech & hearing impairments.
Daniel Killough, Amy Pavel
Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. Descriptions were crowdsourced from 18 sighted community experts, and the system was evaluated with 9 blind participants.