I’m a PhD student at UW-Madison studying human-computer interaction (HCI) with Yuhang Zhao in the madAbility Lab.
My primary research interests include extended reality (AR / VR / MR / XR), accessibility (a11y), and AI-powered interactive systems.
I am also interested in how computer graphics and immersive video technologies (livestreams, 360° cameras,
projection mapping) can be applied to education, communication, esports, and healthcare.
I am currently working on a major project that uses fine-tuned computer vision and large language models to make virtual reality devices more accessible to blind people; a short animated GIF showing one part of this system can be found below.
Before Wisconsin, I earned my BS
in CS from The University of Texas at Austin alongside
certifications in Digital Arts & Media and immersive technologies. There, I worked with Amy Pavel on live video accessibility for screen reader users and with Erin Reilly on using augmented reality for young adult skin cancer
prevention.
Outside of work, I track all the music I listen to on last.fm. I also enjoy longboarding, backpacking, language learning, achievement hunting, moderating online communities, and playing games with my friends.
Daniel Killough, Justin Feng, Rithvik Dyava, Zheng Xue "ZX" Ching, Daniel Wang, Yapeng Tian, Yuhang Zhao
Using state-of-the-art object detection, zero-shot depth estimation, and multimodal large language models to identify virtual objects in social VR applications for blind and low vision people.
Daniel Killough*, Ruijia Chen*, Yuhang Zhao, Bilge Mutlu
Evaluating how mixed reality's tendency to drift objects affects users' task performance and perceived task difficulty.
Daniel Killough, Tiger F. Ji, Kexin Zhang, Yaxin Hu, Yu Huang, Ruofei Du, Yuhang Zhao
Analyzing developer challenges in integrating a11y features into their XR apps, covering features for people with visual, cognitive, motor, and speech & hearing impairments.
Daniel Killough, Amy Pavel
Making live video more accessible to blind users by crowdsourcing audio descriptions
for real-time playback. Crowdsourced descriptions from 18 sighted community experts and
evaluated the system with 9 blind participants.
Ru Wang, Zach Potter, Yun Ho, Daniel Killough, Linda Zeng, Sanbrita Mondal, Yuhang Zhao
System using eye tracking to augment passages of text, addressing low vision people's
reading challenges (e.g., line switching and difficult word recognition).