
Xiangyue Liu

I'm a first-year Ph.D. student at HKUST, advised by Prof. Ping Tan. Before that, I was a visiting student (research assistant) at Tsinghua University, advised by Prof. Li Yi and Prof. Yang Gao. Earlier, I received my Master's degree from Beihang University.

My research focuses on deep learning and computer vision, with an emphasis on 3D/4D editing, generation, and reconstruction using deep generative models. My earlier interests were visual SLAM and multi-view stereo (MVS).

Google Scholar  ·  Github  ·  Twitter  ·  Email

🔥 News

  • NEW [2024/02] One paper accepted to CVPR 2024 🎉.
  • NEW [2023/09] I started my Ph.D. at HKUST.
  • [2022/06] Two papers accepted to ECCV 2022.
  • [2022/03] One paper accepted to CVPR 2022.
📑 Publications
GenN2N: Generative NeRF2NeRF Translation
Xiangyue Liu, Han Xue, Kunming Luo, Ping Tan, Li Yi
CVPR 2024  [Project Page]  [arXiv]  [Code]

We present GenN2N, a unified NeRF-to-NeRF translation framework for various NeRF translation tasks such as text-driven NeRF editing, colorization, super-resolution, inpainting, etc. GenN2N achieves all these NeRF editing tasks by employing a plug-and-play image-to-image translator to perform editing in the 2D domain and lifting 2D edits into the 3D NeRF space.

Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives
Wentao Yuan, Qingtian Zhu, Xiangyue Liu, Yikang Ding, Haotian Zhang, Chi Zhang
ECCV 2022  [Paper]  [Code]

We propose a training paradigm for Implicit Neural Representations (INRs) that encodes image derivatives in addition to image values in the neural network.

KD-MVS: Knowledge Distillation Based Self-supervised Learning for Multi-view Stereo
Yikang Ding, Qingtian Zhu, Xiangyue Liu, Wentao Yuan, Haotian Zhang, Chi Zhang
ECCV 2022  [Paper]  [Code]

We propose a novel self-supervised training pipeline for MVS based on knowledge distillation, termed KD-MVS, which mainly consists of self-supervised teacher training and distillation-based student training.

TransMVSNet: Global Context-aware Multi-view Stereo Network with Transformers
Yikang Ding*, Wentao Yuan*, Qingtian Zhu, Haotian Zhang, Xiangyue Liu, Yuanjiang Wang, Xiao Liu
CVPR 2022  [Paper]  [Code]

We cast MVS as a feature matching task and propose a powerful Feature Matching Transformer that leverages self- and cross-attention to aggregate long-range context information within and across images.

Structure Reconstruction Using Ray-Point-Ray Features: Representation and Camera Pose Estimation
Yijia He*, Xiangyue Liu*, Xiao Liu, Ji Zhao
ICRA 2021  [Paper]

Straight-line features have been increasingly utilized in visual SLAM and 3D reconstruction systems, and recent works have studied their parameterization as well as parallel and coplanar constraints. In this paper, we explore a novel intersection constraint on straight lines for structure reconstruction.

🌏 Challenge
2nd Place Solution to Instance Segmentation of IJCAI 3D AI Challenge 2020 (2/559)
Kai Jiang*, Xiangyue Liu*, Zhang Ju*, Xiang Luo
IJCAI 2020 workshop  [Paper]

πŸ† Awards

  • [2021/06] Excellent Graduate in Beijing
  • [2020/08] 2nd place in the IJCAI 3D AI Challenge: Instance Segmentation (2/559)  [Website]
  • [2016/10] National College Students Innovation and Entrepreneurship Training Program

🎓 Education

  • 2023.09 - present: Ph.D. in Electronic and Computer Engineering, Hong Kong University of Science and Technology
  • 2018.09 - 2021.06: M.Sc. in Software Engineering, Beihang University
  • 2014.09 - 2018.06: B.Eng. in Software Engineering, Northeast Normal University

πŸ‘©πŸ»β€πŸ’» Internships

  • 2021.08 - 2023.08: Tsinghua University, IIIS, supervised by Prof. Li Yi and Prof. Yang Gao
  • 2022.04 - 2023.08: MEGVII Research, AIC, supervised by Weixin Xu, Yi Yang, and Dr. Shuchang Zhou
  • 2021.06 - 2022.04: MEGVII Research, 3D Vision, supervised by Haotian Zhang and Xiao Liu
  • 2019.08 - 2020.04: MEGVII Research, SLAM&AR, supervised by Dr. Yijia He and Xiao Liu

Last update: Apr. 2024