Excited to announce I've had two papers accepted to UIST 2025!
Come see VRSight and AROMA in Busan this September!

Daniel Killough

CS PhD Student

contact [at] dkillough.com

Profile photo of Daniel Killough, a long-haired guy slightly smiling at the camera and wearing a black hoodie. Daniel stands in front of a lake with rocks in the background.

Hi, I'm Daniel!

I'm a PhD student in the UW-Madison Computer Sciences department, working in Dr. Yuhang Zhao's madAbility Lab. I study human-computer interaction (HCI), particularly extended reality (AR/VR/MR/XR), accessibility (a11y), and AI-powered interactive systems for people with disabilities. I am also interested in how immersive video technologies (livestreams, 360° cameras, projection mapping) and computer graphics concepts (raytracing, real-time rendering) can be applied to fields like education, communication, esports, and healthcare.

Before Wisconsin, I earned my BS in CS from The University of Texas at Austin, alongside certifications in Digital Arts & Media and immersive technologies. There, I worked with Amy Pavel on live video accessibility for screen reader users and with Erin Reilly on using augmented reality for young adult skin cancer prevention.


Outside of research, I test new products for HP HyperX and Dell Alienware. I also enjoy longboarding, backpacking, language learning, achievement hunting, moderating online communities, and tracking the music I listen to on last.fm.

News

Aug 18, 2025
I have two papers accepted to UIST 2025! Excited to present VRSight, an AI-powered scene description system that helps blind and low vision (BLV) people use VR, and to support AROMA, which helps BLV people follow cooking how-to videos using sighted and non-sighted input, in Busan this September. UIST 2025
July 16, 2025
Excited to be presenting XR for All and VRSight with XR Access! RSVP here!
May 2, 2025
Presenting a lightning talk alongside fellow accessibility researchers at Miraikan in Tokyo! Link (deprecated)
Apr 27, 2025
Very excited to announce that we'll be presenting a demonstration of VRSight and our MR drift tolerance poster at CHI 2025 in Yokohama! CHI 2025
Dec 20, 2024
XR for All, our work understanding XR practitioners' experiences developing (or intentionally not developing!) accessibility features for their apps, is now live on arXiv!

Featured Projects

VRSight: AI-Driven Real-Time Scene Descriptions to Improve Virtual Reality Accessibility for Blind People

Daniel Killough, Justin Feng*, Zheng Xue "ZX" Ching*, Daniel Wang*, Rithvik Dyava*, Yapeng Tian, Yuhang Zhao


Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people. *Authors 2-5 contributed equally to this work.

AROMA: Mixed-Initiative AI Assistance for Non-Visual Cooking by Grounding Multi-modal Information Between Reality and Videos

Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li

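A mixed-initiative AI assistant that helps blind and low vision people follow cooking how-to videos by grounding multi-modal information between reality and the videos.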

Understanding Mixed Reality Drift Tolerance

Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu


Evaluating how mixed reality's tendency to drift virtual objects affects user performance and perceived task difficulty.

XR for All: Understanding Developer Perspectives on Accessibility Integration in Extended Reality

Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao


Analyzing developer challenges in integrating a11y features into their XR apps, covering a11y features for people with visual, cognitive, motor, and speech & hearing impairments.

Exploring Community-Driven Descriptions for Making Livestreams Accessible

Daniel Killough, Amy Pavel


Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. We crowdsourced descriptions with 18 sighted community experts and evaluated them with 9 blind participants.