Very excited to announce: (1) I'll be working at Google this summer as a student researcher; (2) I have a first-author paper accepted to CHI 2026; and (3) our AI-driven screen reader for VR and MR environments is open source at github.com/MadisonAbilityLab/VRSight!

Daniel Killough

CS PhD Student, UW-Madison

contact [at] dkillough.com

Profile of Daniel Killough, a long-haired person slightly smiling at the camera, wearing a black hoodie and standing in front of a lake with rocks.

Hi, I'm Daniel!

I'm a 3rd-year PhD student in the Computer Sciences department at the University of Wisconsin-Madison, working with Dr. Yuhang Zhao (Assistant Professor, HCI & Accessibility) and her madAbility Lab, an accessibility research lab at UW-Madison. I study human-computer interaction (HCI), particularly extended reality (AR/VR/XR), accessibility (a11y), and AI-powered interactive systems, to improve how people with disabilities access novel technology. I am also interested in how immersive video technologies (livestreams, 360° cameras, projection mapping) and computer graphics concepts (ray tracing, real-time rendering) can be applied to fields like education, communication, esports, and healthcare.

Prior to Wisconsin, I earned my Bachelor of Science in Computer Science from The University of Texas at Austin, with certificates in Digital Arts & Media (an undergraduate intersectional program; 19 credits including a capstone project) and Immersive Technologies (a graduate program in interactive media and storytelling; inaugural cohort). At Texas, I worked with Amy Pavel (Researcher and Professor, HCI & Accessibility; now at UC Berkeley) on livestream accessibility, and with Erin Reilly (Professor, Immersive Media and Storytelling) and Lucy Atkinson (Professor, Environmental Communication) on multiple projects, including AR for skin cancer prevention.

Outside of research, I test new products for HP HyperX and Dell Alienware. I also enjoy longboarding, running, backpacking, language learning, moderating online communities, and tracking the music I listen to on last.fm.

News

May 2026
Moving to San Jose to start working with the Google Maps team, researching how to make navigation more accessible using XR :)
Apr 14, 2026
I'm also giving an invited talk at the CHI Human-AI-UI Interactions Across Modalities workshop on my position paper on AI-driven assistive technologies in XR. Come join us Tuesday afternoon!
Apr 12, 2026
Attending CHI 2026 in Barcelona, Spain, to present our work evaluating 3D accessibility guidelines with XR practitioners to improve accessibility features in their apps! Please check out my talk Monday morning and say hi if you're here :)
Mar 16, 2026
Excited to be giving an invited talk at the Carnegie Mellon Human-Computer Interaction Institute on my research improving the accessibility of XR technologies!
Jan 24, 2026
Please note that I just had shoulder surgery, so responses may be delayed over the next few weeks.

Featured Research

Examples of developers' working environments for XR accessibility

How Well Can 3D Accessibility Guidelines Support XR Development? An Interview Study with XR Practitioners in Industry

Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao


Evaluating existing 3D accessibility guidelines with XR practitioners across different levels of industry, and examining why these guidelines don't fully support XR development. Also see 'XR for All' for an extended version of this work, which includes additional perspectives on a11y development as a whole.

VRSight: AI-Driven Real-Time Scene Descriptions to Improve Virtual Reality Accessibility for Blind People

Daniel Killough, Justin Feng*, Zheng Xue "ZX" Ching*, Daniel Wang*, Rithvik Dyava*, Yapeng Tian, Yuhang Zhao


Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low-vision people. *Authors 2, 3, 4, & 5 contributed equally to this work.

AROMA system showing a wearable camera monitoring a cooking environment

AROMA: Mixed-Initiative AI Assistance for Non-Visual Cooking by Grounding Multi-modal Information Between Reality and Videos

Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li


AI-powered cooking assistant using wearable cameras to help blind and low-vision users by integrating non-visual cues (touch, smell) with video recipe content. The system proactively offers alerts and guidance, helping users understand their cooking state by aligning their physical environment with recipe instructions.

Mixed Reality Drift project image showing a person writing on a virtual page

Understanding Mixed Reality Drift Tolerance

Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu


Evaluating how mixed reality's tendency to drift virtual objects affects users' perception of task difficulty and their task performance.

Livestreaming Accessibility project image

Exploring Community-Driven Descriptions for Making Livestreams Accessible

Daniel Killough, Amy Pavel


Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. We crowdsourced descriptions with 18 sighted community experts and evaluated the system with 9 blind participants.