Very excited to announce: (1) I'll be working at Google this summer as a student researcher; (2) I have a first-author paper accepted to CHI 2026; and (3) our AI-driven screen reader for VR and MR environments is open source at github.com/MadisonAbilityLab/VRSight!

Daniel Killough

CS PhD Student, UW-Madison

contact [at] dkillough.com

Profile of Daniel Killough, a long-haired person slightly smiling at the camera, wearing a black hoodie and standing in front of a lake with rocks.

Hi, I'm Daniel!

I'm a 3rd-year PhD student in the University of Wisconsin-Madison Computer Sciences department, working with Dr. Yuhang Zhao and her madAbility Lab. I study human-computer interaction (HCI), particularly extended reality (AR/VR/XR), accessibility (a11y), and AI-powered interactive systems, to improve how people with disabilities access novel technology. I am also interested in how immersive video technologies (livestreams, 360° cameras, projection mapping) and computer graphics concepts (ray tracing, real-time rendering) can be applied to fields like education, communication, esports, and healthcare.

Prior to Wisconsin, I earned my Bachelor of Science in Computer Science from The University of Texas at Austin, with certifications in Digital Arts & Media and immersive technologies. At UT I worked with Amy Pavel on livestream accessibility, and with Erin Reilly and Lucy Atkinson on multiple projects including AR for skin cancer prevention.

Outside of research, I test new products for HP HyperX and Dell Alienware. I also enjoy longboarding, running, backpacking, language learning, moderating online communities, and tracking the music I listen to on last.fm.

News

Jan 24, 2026
Please note that I just had shoulder surgery, so responses may be delayed over the next few weeks.
Jan 15, 2026
Very relieved that my paper on accessibility guidelines for XR practitioners has been conditionally accepted to CHI 2026! We've been working on this project since before I started my PhD, so it's great to see it finally in.
Jan 6, 2026
Very excited to announce that I have accepted a student researcher position at Google for Summer 2026! Looking forward to working with the Google Maps team on accessibility initiatives.
Dec 22, 2025
Completed some independent contracting work for Nonfiction Research on a healthcare initiative with fellow creatives. Nonfiction's site
Dec 19, 2025
Made some changes to this site, including a dark mode theme! Check it out with the sun/moon button in the header :)

Featured Research

Animated demonstration of a custom object detection model finding objects in Rec Room VR

VRSight: AI-Driven Real-Time Scene Descriptions to Improve Virtual Reality Accessibility for Blind People

Daniel Killough, Justin Feng*, Zheng Xue "ZX" Ching*, Daniel Wang*, Rithvik Dyava*, Yapeng Tian, Yuhang Zhao


Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people. *Authors 2-5 contributed equally to this work.
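VRSight's code is open source at the GitHub link above. As a rough, hypothetical sketch of the kind of pipeline described here (detect objects in a frame, attach depth estimates, then have a multimodal model phrase a spoken description), the Python loop below uses placeholder helpers; detect_objects, estimate_depth, and describe_scene are illustrative stand-ins, not the actual VRSight code.

    import time

    def detect_objects(frame):
        # Hypothetical stand-in for a fine-tuned VR object detector.
        return [{"label": "door", "box": (120, 40, 300, 400)}]

    def estimate_depth(frame, box):
        # Hypothetical stand-in for zero-shot monocular depth estimation.
        return 2.5  # meters

    def describe_scene(detections):
        # Hypothetical stand-in for a multimodal LLM call that turns
        # detections into a short spoken description.
        parts = [f"{d['label']} about {d['depth']:.1f} m ahead" for d in detections]
        return "; ".join(parts)

    def scene_description_loop(capture_frame, speak, interval=1.0):
        # Periodically grab a frame, detect objects, attach depth, and speak the result.
        while True:
            frame = capture_frame()
            detections = detect_objects(frame)
            for d in detections:
                d["depth"] = estimate_depth(frame, d["box"])
            speak(describe_scene(detections))
            time.sleep(interval)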

Examples of development environments developers use for XR accessibility

XR for All: Understanding Developer Perspectives on Accessibility Integration in Extended Reality

Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao


Analyzing developers' challenges in integrating a11y features into their XR apps, covering features for people with visual, cognitive, motor, and speech & hearing impairments.

AROMA system showing a wearable camera monitoring cooking environment

AROMA: Mixed-Initiative AI Assistance for Non-Visual Cooking by Grounding Multi-modal Information Between Reality and Videos

Zheng Ning, Leyang Li, Daniel Killough, JooYoung Seo, Patrick Carrington, Yapeng Tian, Yuhang Zhao, Franklin Mingzhe Li, Toby Jia-Jun Li


AI-powered cooking assistant using wearable cameras to help blind and low-vision users by integrating non-visual cues (touch, smell) with video recipe content. The system proactively offers alerts and guidance, helping users understand their cooking state by aligning their physical environment with recipe instructions.

Mixed Reality Drift project image showing a person writing on a virtual page

Understanding Mixed Reality Drift Tolerance

Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu


Evaluating how mixed reality's tendency to drift objects affects user perception of task difficulty and task performance.

Livestreaming Accessibility project image

Exploring Community-Driven Descriptions for Making Livestreams Accessible

Daniel Killough, Amy Pavel


Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. We crowdsourced descriptions from 18 sighted community experts and evaluated them with 9 blind participants.