Daniel Killough

CS PhD Student

contact [at] dkillough.com

Hi, I'm Daniel!

I'm a PhD student at UW-Madison working with Yuhang Zhao in the madAbility Lab, where I study human-computer interaction (HCI). My primary research interests include HCI, extended reality (AR/VR/MR/XR), accessibility (a11y), and intelligent systems. My current work uses AI techniques to make extended reality platforms more accessible to people with disabilities. My secondary research interests include computer graphics and immersive video (livestreams, 360° cameras, projection mapping), particularly how they can be applied to education, AI-Assisted Learning (AIAL), communications (from advertising to foreign languages), esports, and healthcare.

Before Wisconsin, I earned my BS in CS from The University of Texas at Austin, along with certificates in Digital Arts & Media and immersive technologies. There, I worked with Amy Pavel on live video accessibility for screen reader users and with Erin Reilly on using augmented reality for young adult skin cancer prevention.


Outside of work, I track all the music I listen to on last.fm. I also enjoy longboarding, backpacking, language learning, achievement hunting, moderating online communities, and playing games with my friends.

Featured Projects

Projects in submission have had their titles changed to comply with the double-blind review process.
Names with an asterisk (*) denote equal contribution.

Real-Time Virtual Reality Scene Descriptions

Daniel Killough, Justin Feng, Rithvik Dyava, Zheng Xue "ZX" Ching, Daniel Wang, Yapeng Tian, Yuhang Zhao


Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people.

Evaluating Mixed Reality Drift Effects on End-Users

Ruijia Chen*, Daniel Killough*, Leo Cui, Victor Suciu, Bilge Mutlu


Evaluating how mixed reality's tendency to drift virtual objects affects users' task performance and perceived difficulty.

XR Accessibility from a Developer's Point of View

Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao


Analyzing developer challenges in integrating a11y features into their XR apps, covering features for people with visual, cognitive, motor, and speech & hearing impairments.

Exploring Community-Driven Descriptions for Making Livestreams Accessible

Daniel Killough, Amy Pavel


Making live video more accessible to blind users by crowdsourcing audio descriptions for real-time playback. Crowdsourced descriptions from 18 sighted community experts and evaluated the system with 9 blind participants.

GazePrompt: Enhancing Low Vision People's Reading Experience with Gaze-Aware Augmentations

Ru Wang, Zach Potter, Yun Ho, Daniel Killough, Linda Zeng, Sanbrita Mondal, Yuhang Zhao


A system using eye tracking to augment passages of text, addressing low vision people's reading challenges (e.g., line switching and difficult word recognition).


Get in Touch

contact@dkillough.com


© Daniel Killough 2024. Last modified 2024-10-14.
Made from scratch and hosted on GitHub Pages :)