VRSight is now open-source!
Check out the project page at github.com/MadisonAbilityLab/VRSight!

Daniel Killough

CS PhD Student, UW-Madison

contact [at] dkillough.com

Profile photo of Daniel Killough, a long-haired guy in a black hoodie, slightly smiling at the camera. Daniel stands in front of a lake with rocks in the background.

Hi, I'm Daniel!

I'm a third-year PhD student in the University of Wisconsin-Madison Computer Sciences department, working with Dr. Yuhang Zhao and the madAbility Lab. I study human-computer interaction (HCI), particularly extended reality (AR/VR/XR), accessibility (a11y), and AI-powered interactive systems for people with disabilities. I am also interested in how immersive video technologies (livestreams, 360° cameras, projection mapping) and computer graphics concepts (raytracing, real-time rendering) can be applied to fields like education, communication, esports, and healthcare.

Prior to Wisconsin, I earned my BS in CS from The University of Texas at Austin with certificates in Digital Arts & Media and immersive technologies. At UT, I worked with Amy Pavel on livestream accessibility and with Erin Reilly on AR for skin cancer prevention.


Outside of research, I test new products for HP HyperX and Dell Alienware. I also enjoy longboarding, running, backpacking, language learning, moderating online communities, and tracking the music I listen to on last.fm.

News

Oct 7, 2025
My UIST 2025 paper VRSight has been officially open-sourced on GitHub, including the full system, the DISCOVR dataset, and our object detection model weights! Check it out here!
Sept 27, 2025
I'm attending UIST 2025 to present my first-author paper VRSight in Busan, South Korea! Check out our demo on Monday and the full paper talk on Tuesday. It's also my first time student volunteering, so please feel free to say hi! Read the full VRSight paper here.
Aug 18, 2025
I have two papers accepted to UIST 2025! Excited to present VRSight, an AI-powered scene description system that helps blind and low vision (BLV) people use VR, and to support AROMA, which helps BLV people follow cooking how-to videos by combining visual and non-visual input, in Busan this September. UIST 2025
July 16, 2025
Excited to be giving an invited talk at XR Access on my research improving the accessibility of XR technologies! Check out the talk here.
May 2, 2025
Presenting a lightning talk alongside fellow accessibility researchers at Miraikan in Tokyo!

Featured Research

VRSight: AI-Driven Real-Time Scene Descriptions to Improve Virtual Reality Accessibility for Blind People

Daniel Killough, Justin Feng*, Zheng Xue "ZX" Ching*, Daniel Wang*, Rithvik Dyava*, Yapeng Tian, Yuhang Zhao


Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people. *Authors 2–5 contributed equally to this work.
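For a rough sense of how detection and depth cues can become spoken descriptions, below is a minimal, hypothetical Python sketch. It is not VRSight's actual code (see the GitHub repo for the real system); the Detection fields and describe_scene helper are illustrative stand-ins for real detector, depth-estimation, and language-model outputs.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Detection:
        label: str         # e.g., "avatar", "menu panel" (hypothetical class names)
        confidence: float  # detector confidence in [0, 1]
        depth_m: float     # estimated distance from the user, in meters

    def describe_scene(detections: List[Detection], min_confidence: float = 0.5) -> str:
        """Turn one frame's detections into a short spoken-style summary, nearest objects first."""
        kept = sorted(
            (d for d in detections if d.confidence >= min_confidence),
            key=lambda d: d.depth_m,
        )
        if not kept:
            return "No objects detected nearby."
        parts = [f"{d.label} about {d.depth_m:.1f} meters away" for d in kept]
        return "In front of you: " + "; ".join(parts) + "."

    if __name__ == "__main__":
        frame = [
            Detection("menu panel", 0.81, 0.6),
            Detection("avatar", 0.92, 1.4),
            Detection("doorway", 0.40, 5.0),  # dropped: below the confidence threshold
        ]
        print(describe_scene(frame))

In the full system, a summary like this could then be refined by a multimodal large language model and read aloud; the sketch simply prints it.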

AROMA: Mixed-Initiative AI Assistance for Non-Visual Cooking by Grounding Multi-modal Information Between Reality and Videos

Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li


AI-powered cooking assistant using wearable cameras to help blind and low-vision users by integrating non-visual cues (touch, smell) with video recipe content. The system proactively offers alerts and guidance, helping users understand their cooking state by aligning their physical environment with recipe instructions.

Understanding Mixed Reality Drift Tolerance

Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu


Evaluating how mixed reality's tendency to let virtual objects drift out of alignment affects users' task performance and perceived task difficulty.

XR for All: Understanding Developer Perspectives on Accessibility Integration in Extended Reality

Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao


Analyzing developer challenges in integrating a11y features into their XR apps. Covering a11y features for people with visual, cognitive, motor, and speech & hearing impairments.

Exploring Community-Driven Descriptions for Making Livestreams Accessible

Daniel Killough, Amy Pavel


Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. Descriptions were crowdsourced from 18 sighted community experts and evaluated with 9 blind participants.