
I'm a third-year PhD student in the Computer Sciences department at the University of Wisconsin-Madison, working with Dr. Yuhang Zhao (Assistant Professor, HCI & Accessibility) and her madAbility Lab. I study human-computer interaction (HCI), particularly extended reality (AR/VR/XR), accessibility (a11y), and AI-powered interactive systems, to improve how people with disabilities access novel technology. I am also interested in how immersive video technologies (livestreams, 360° cameras, projection mapping) and computer graphics concepts (ray tracing, real-time rendering) can be applied to fields like education, communication, esports, and healthcare.
Prior to Wisconsin, I received my Bachelor of Science in Computer Science from The University of Texas at Austin, along with certificates in Digital Arts & Media (undergraduate, 19 credits including a capstone project) and Immersive Technologies (graduate, inaugural cohort). At UT I worked with Amy Pavel (now at UC Berkeley) on livestream accessibility, and with Erin Reilly (Immersive Media and Storytelling) and Lucy Atkinson (Environmental Communication) on multiple projects, including AR for skin cancer prevention.
Outside of research, I test new products for HP HyperX and Dell Alienware. I also enjoy longboarding, running, backpacking, language learning, moderating online communities, and tracking the music I listen to on last.fm.

Daniel Killough, Justin Feng*, Zheng Xue "ZX" Ching*, Daniel Wang*, Rithvik Dyava*, Yapeng Tian, Yuhang Zhao
Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people. *Authors 2-5 contributed equally to this work.
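For flavor, here is a rough sketch of how such a pipeline could be wired together with off-the-shelf models. It is illustrative only, not the published system; the model choices (DETR for detection, Depth Anything for zero-shot depth) and the describe_scene() helper are my own placeholders.

```python
# Illustrative sketch only, not the published system: chain off-the-shelf
# object detection and zero-shot depth estimation over a VR frame, then hand
# the structured results to a multimodal LLM for narration.
from PIL import Image
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")
depth = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

frame = Image.open("vr_screenshot.png")          # frame captured from the VR app
detections = detector(frame)                     # [{'label', 'score', 'box'}, ...]
depth_map = depth(frame)["depth"]                # relative depth as a PIL image

objects = []
for det in detections:
    box = det["box"]
    cx = (box["xmin"] + box["xmax"]) // 2        # center of the bounding box
    cy = (box["ymin"] + box["ymax"]) // 2
    objects.append({
        "label": det["label"],
        "confidence": round(det["score"], 2),
        "relative_depth": depth_map.getpixel((cx, cy)),  # nearer vs. farther
    })

# A multimodal LLM would turn this list (plus the frame) into a spoken scene
# description; describe_scene() is a hypothetical placeholder for that call.
# narration = describe_scene(frame, objects)
print(objects)
```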

Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao
Analyzing the challenges developers face when integrating a11y features into their XR apps, covering features for people with visual, cognitive, motor, and speech & hearing impairments.

Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li
AI-powered cooking assistant using wearable cameras to help blind and low-vision users by integrating non-visual cues (touch, smell) with video recipe content. The system proactively offers alerts and guidance, helping users understand their cooking state by aligning their physical environment with recipe instructions.
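As a rough illustration of the proactive monitoring idea (not the actual system), the loop below samples frames, estimates the current cooking state, and speaks alerts; capture_frame(), estimate_state(), and speak() are placeholder stubs for the wearable camera, the vision-language model, and the audio output channel.

```python
# Rough sketch of a proactive monitoring loop, not the actual system.
import time

RECIPE_STEPS = [
    "Dice the onion into small pieces.",
    "Saute the onion until translucent.",
    "Add the garlic and stir for one minute.",
]

def capture_frame():
    """Placeholder: grab the latest frame from the wearable camera."""
    return None

def estimate_state(frame, steps):
    """Placeholder: map a frame to the detected recipe step and any warnings."""
    return {"step": 0, "warning": None}

def speak(text):
    """Placeholder: read text aloud through a non-visual channel."""
    print(text)

def monitor(poll_seconds=5.0):
    current_step = 0
    speak(f"Current step: {RECIPE_STEPS[current_step]}")
    while current_step < len(RECIPE_STEPS) - 1:
        state = estimate_state(capture_frame(), RECIPE_STEPS)
        if state.get("warning"):                  # proactive alert, e.g. food burning
            speak(f"Heads up: {state['warning']}")
        if state["step"] > current_step:          # user moved on; announce next step
            current_step = state["step"]
            speak(f"Next step: {RECIPE_STEPS[current_step]}")
        time.sleep(poll_seconds)
```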

Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu
Evaluating how mixed reality's tendency to drift objects affects users' perception of task difficulty and their task performance.

Daniel Killough, Amy Pavel
Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. We crowdsourced descriptions from 18 sighted community experts and evaluated the system with 9 blind participants.
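A minimal sketch of the playback side of this idea, not the deployed system: timestamped descriptions authored by sighted describers are queued and spoken to the viewer as the livestream reaches each offset. The speak() function stands in for a screen reader or text-to-speech engine.

```python
# Minimal sketch: play back crowdsourced descriptions at their timestamps.
import heapq
import time

# (offset_seconds, description) pairs arriving from describers
pending = [
    (12.0, "The streamer opens the map screen."),
    (20.5, "A red warning banner appears at the top."),
]
heapq.heapify(pending)

def speak(text):
    print(f"[description] {text}")   # placeholder for text-to-speech output

start = time.monotonic()
while pending:
    offset, description = pending[0]
    elapsed = time.monotonic() - start
    if elapsed >= offset:
        heapq.heappop(pending)
        speak(description)
    else:
        time.sleep(min(0.25, offset - elapsed))   # wait until the next cue
```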