Haoqian Wu
Email: wuhaoqian@sync-xyz.com
Google Scholar / GitHub
I am the CEO & Co-founder of Sync, building the next generation of AI-native companionship products designed to enhance emotional connection, creativity, and intelligence in everyday life.
Previously, I founded the MR game studio D-Fissure, leading a 10-person team that shipped four indie games; one of them generated over ¥1M in revenue and was acquired by a major tech company.
Before founding Sync, I worked as an AI Research Scientist at NetEase Fuxi Lab, where I developed AI avatar systems deployed in Justice Mobile and Naraka: Bladepoint Mobile, serving tens of millions of users. Prior to that, I worked at Kuaishou on 0-to-1 AI digital human initiatives.
I received my Master’s degree from Zhejiang University, advised by Prof. Xi Li, and my Bachelor’s degree in Computer Science from the same institution. I have published 5 papers at top-tier venues in multimodal AI, 3D vision, and neural rendering.
Projects
Shira – AI-native virtual companion
Website
A deeply emotional AI feline companion: not just a pet, but a virtual being who grows with you, remembers you, cares about you, and builds a lasting bond. Designed as an AI-native companion game with persistent memory and an evolving personality.
PatPat – Mixed-Reality rhythm music game
Website
A creative rhythm game where players tap on their real desk to generate beats in sync with dynamic music.
A physical + digital rhythm playground that blends tactile interaction with musical immersion.
Hey Mosquito! – Mixed-Reality action game
Website
The ultimate mosquito-swatting MR experience: put on a headset and use your bare hands to swat incoming mosquitoes from all directions.
Research
[5]  ICE: Interactive 3D Game Character Editing via Dialogue
Haoqian Wu, Minda Zhao, Zhipeng Hu, Lincheng Li, Weijie Chen, Rui Zhao, Changjie Fan, Xin Yu
TMM, 2024
paper / project page
We propose an Interactive Character Editing framework (ICE) that refines 3D game characters through a multi-round, dialogue-based editing process.
[4]  Text-Guided 3D Face Synthesis -- From Generation to Editing
Yunjie Wu, Yapeng Meng, Zhipeng Hu, Lincheng Li, Haoqian Wu, Kun Zhou, Weiwei Xu, Xin Yu
CVPR, 2024
paper / project page
We propose a unified text-guided framework that spans 3D face generation and editing.
[3]  NeFII: Inverse Rendering for Reflectance Decomposition with Near-Field Indirect Illumination
Haoqian Wu, Zhipeng Hu, Lincheng Li, Yongqiang Zhang, Changjie Fan, Xin Yu
CVPR, 2023
paper / project page
We introduce Monte Carlo path tracing into inverse rendering and cache indirect illumination as neural radiance, enabling a physically faithful yet easy-to-optimize method for reflectance decomposition.
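A minimal sketch of the idea in my own notation (the paper's exact formulation may differ): incident radiance in the rendering equation is split into a direct term and an indirect term, the indirect term is read from a cached neural radiance field rather than recursively traced, and the integral is estimated with Monte Carlo samples:
\[
L_o(\mathbf{x}, \omega_o) = \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,\big[L_{\mathrm{dir}}(\mathbf{x}, \omega_i) + L_{\mathrm{ind}}(\mathbf{x}, \omega_i)\big]\,(\omega_i \cdot \mathbf{n})\, d\omega_i
\approx \frac{1}{N} \sum_{k=1}^{N} \frac{f_r(\mathbf{x}, \omega_k, \omega_o)\,\big[L_{\mathrm{dir}}(\mathbf{x}, \omega_k) + \hat{L}_{\theta}(\mathbf{x}, \omega_k)\big]\,(\omega_k \cdot \mathbf{n})}{p(\omega_k)},
\]
where \(\hat{L}_{\theta}\) is the cached neural radiance supplying near-field indirect illumination and \(\omega_k \sim p\) are sampled directions. Caching the indirect term avoids recursive path tracing while keeping the estimator physically based.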
[2]  Towards Unbiased Volume Rendering of Neural Implicit Surfaces with Geometry Priors
Yongqiang Zhang, Zhipeng Hu, Haoqian Wu, Minda Zhao, Lincheng Li, Zhengxia Zou, Changjie Fan
CVPR, 2023
project page
We revisit unbiased volume rendering of neural implicit surfaces and derive an additional condition required for unbiasedness. Following this analysis, we propose a new rendering method that scales the SDF field by the angle between the viewing direction and the surface normal.
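Roughly, and in my own notation rather than the paper's: assuming a NeuS-style mapping from SDF values to rendering weights, the SDF is rescaled by the cosine of the angle between the viewing direction and the surface normal so that it better approximates distance along the ray:
\[
\tilde{f}(\mathbf{x}) = \frac{f(\mathbf{x})}{\lvert \mathbf{v} \cdot \mathbf{n}(\mathbf{x}) \rvert},
\]
where \(f\) is the SDF, \(\mathbf{v}\) the viewing direction, and \(\mathbf{n}\) the surface normal; \(\tilde{f}\) replaces \(f\) in the weight function, keeping the rendering weights peaked at the surface even for oblique views.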
[1]  Condition-Aware Comparison Scheme for Gait Recognition
Haoqian Wu, Tian Jian, Yongjian Fu, Bin Li, Xi Li
TIP, 2020
paper
We propose a condition-aware comparison scheme that measures the similarity of gait pairs via a novel module named Instructor. We also present a geometry-guided data augmentation approach (Dresser) to enrich dressing conditions, and we model temporal local information from coarse to fine to strengthen the gait representation.
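As a loose illustration in my own notation (not the paper's formulation): instead of a fixed distance between gait features, the Instructor predicts per-dimension weights from the compared pair, and those weights reshape the comparison:
\[
s(\mathbf{x}_a, \mathbf{x}_b) = -\big\lVert \mathbf{w} \odot (\mathbf{x}_a - \mathbf{x}_b) \big\rVert_2^2, \qquad \mathbf{w} = \mathrm{Instructor}(\mathbf{x}_a, \mathbf{x}_b),
\]
where \(\odot\) is element-wise multiplication and a larger \(s\) indicates a more likely match, so the comparison adapts to the conditions (e.g., clothing) under which the pair was captured.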