Hello 👋! My name is Xiaofan Ma (马霄凡), and I am a Master’s student in Interaction Design at Sun Yat-sen University, based in the School of Journalism and Communication.

My academic journey bridges Human-Computer Interaction (HCI) and Communication Studies, enabling me to investigate how AI technologies can empower human understanding, expression, and collaboration in complex real-world settings.

My current research focuses on AI-mediated communication in interdisciplinary teams, where I explore how AI agents can reduce cognitive barriers, enhance mutual understanding, and foster alignment through persuasive strategies and knowledge boundary negotiation. I draw on theories from HCI, communication, and social cognition to design systems that amplify human capabilities and support smoother collaboration across disciplines.

Beyond collaborative AI, I am broadly interested in designing human-centered AI systems that support:

  • 🎨 cultural exploration and creative expression
  • 📊 multimodal storytelling and data visualization
  • 👵 inclusive interaction for older adults
  • ❤️ AI-driven narrative games for self-reflection and future thinking

Through my work, I strive to build empathetic, adaptive, and empowering AI systems—technologies that help people understand the world, collaborate meaningfully, and reflect more deeply on themselves 🌏.



🔥 News



📝 Publications

IEEE PacificVis 2025
More Than Beautiful

More Than Beautiful: Exploring Design Features, Practical Perspectives, and Implications of Artistic Data Visualization

  • Xingyu Lan, Yifan Wang, Lingyu Peng, Xiaofan Ma
  • Conference paper

  • This study provides the first systematic characterization of artistic data visualization through 220 data artworks and 12 interviews with artists.


ACM Multimedia 2025 (under review)
Speech-Rich Video Navigation

VisAug: Speech-rich Video Navigation System Based on AI-Generated Visual Augmentations

  • Baoquan Zhao, Xiaofan Ma, Qianshi Pang, Ruomei Wang, Fan Zhou, Shujin Lin*
  • [User Study]

  • This paper presents an AI-powered interface that enhances speech-rich video navigation via semantic keyword-based visual augmentations.


Zhuangshi (under review)
Huashang Aesthetic

Layered Exploration and Experience of Chinese Ethnic Costume from a Cultural Gene Perspective

  • Xiaofan Ma, Lirong Yan, Weiping Zeng, Weijia Zhao, Huiyue Wu*
  • Submitted to Zhuangshi (《装饰》, Journal of Design)

  • [Project Website]

  • We present a layered cultural gene model and design an AI-enhanced system, Huashang Aesthetic, to support progressive cultural exploration of and engagement with Chinese ethnic costume.



📖 Education



💻 Internships

  • Conducted user research and system evaluation on digital participation mechanisms among older adults in online communities.
  • Designed and deployed an AI agent for older adults' group chats, integrated into messaging platforms for real-time conversational support.
  • Investigated cross-cultural differences in older adults’ perceptions and emotional responses to VR-based experiences.



✨ Honors and Awards

  • May 2025: Gold Winner, 2025 MUSE Design Award
  • Dec 2024: Silver Winner, 2024 LONDON DESIGN AWARDS
  • Jun 2024: Shanghai Outstanding Graduate (Top 1%)
  • Dec 2022: National Scholarship for Undergraduate Students (Top 1%)
  • 2021 – 2024: First Prize, SISU Outstanding Student Scholarship