I received my Master’s degree in Artificial Intelligence from Fudan University (Sep. 2021 - Jun. 2024), advised by Prof. Tao Chen. I was fortunate to work closely with Dr. Hongyuan Zhu from A*STAR, Singapore, and Dr. Gang Yu, Dr. Xin Chen, and Dr. Chi Zhang from Tencent. Before this, I obtained my Bachelor’s degree in Data Science and Big Data Technology, also from Fudan University (Sep. 2017 - Jun. 2021).
My long-term research goal is to develop vision-language systems that can comprehend, reason, and envision the physical world.
📣 I am actively looking for researcher / Ph.D. opportunities. Please check out my resume here.
🔥 News
- Jul. 2024. 🎉🎉 Our M3DBench, a dataset querying 3D LLMs with multi-modal prompts, is accepted to ECCV 2024.
- May. 2024. 🎉🎉 We release MeshXL, a family of generative 3D foundation models for 3D meshes.
- May. 2024. 🎉🎉 I successfully defended my master’s thesis! [defense slides]
- Apr. 2024. 🎉🎉 Our state-of-the-art 3D dense captioning method, Vote2Cap-DETR++, is accepted to T-PAMI 2024.
- Feb. 2024. 🎉🎉 Our Large Language 3D Assistant, LL3DA, is accepted to CVPR 2024.
- Jan. 2024. 🐧🐧 Joined Tencent as a research intern, working on 3D generation.
- Oct. 2023. 🥇🥇 Won the Scan2Cap Challenge at ICCV 2023.
- Feb. 2023. 🎉🎉 Our Vote2Cap-DETR paper is accepted to CVPR 2023.
📝 Selected Publications
I started my research by exploring how to use language for better 3D scene understanding (Vote2Cap-DETR and Vote2Cap-DETR++). Then, as large language models exhibited tremendous generalist potential, I explored whether LLMs can understand 3D (LL3DA and M3DBench). After that, I spent a wonderful half year exploring whether LLMs can speak 3D (MeshXL). Currently, I am working on both embodied AI and AIGC.
![sym](images/meshxl.png)
MeshXL: Neural Coordinate Field for Generative 3D Foundation Models
pre-print |
Sijin Chen, Xin Chen$^{\dagger}$, Anqi Pang, Xianfang Zeng, Wei Cheng, Yijun Fu, Fukun Yin, Yanru Wang, Zhibin Wang, Chi Zhang, Jingyi Yu, Gang Yu, Bin Fu, Tao Chen$^{\ddagger}$
- Propose a family of auto-regressive, generatively pre-trained foundation models for 3D mesh generation.
![sym](images/LL3DA.gif)
LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning
CVPR 2024 |
Sijin Chen, Xin Chen$^{\dagger}$, Chi Zhang, Mingsheng Li, Gang Yu, Hao Fei, Hongyuan Zhu, Jiayuan Fan, Tao Chen$^{\ddagger}$
paper | project | arXiv | github | youtube
- Propose a Large Language 3D Assistant that responds to both visual interactions and textual instructions in complex 3D environments.
![sym](images/vote2cap-detr++ arXiv.png)
Vote2Cap-DETR++: Decoupling Localization and Describing for End-to-End 3D Dense Captioning
T-PAMI 2024 |
Sijin Chen, Hongyuan Zhu, Mingsheng Li, Xin Chen, Peng Guo, Yinjie Lei, Gang Yu, Taihao Li, Tao Chen$^{\dagger}$
- Decouple feature extraction and task decoding for end-to-end 3D dense captioning.
![sym](images/vote2cap-detr cvpr2023.png)
End-to-End 3D Dense Captioning with Vote2Cap-DETR
CVPR 2023 |
Sijin Chen, Hongyuan Zhu, Xin Chen, Yinjie Lei, Gang Yu, Tao Chen$^{\dagger}$
paper | arXiv | github | youtube
- We address 3D dense captioning as a set prediction problem with parallel decoding.
- The first non-“detect-then-describe” framework for 3D dense captioning.
- 🥇 Winner of the Scan2Cap Challenge in the 3rd Language for 3D Scene Workshop at ICCV 2023. [talk]
🥇 Awards and Scholarships
- Apr. 2024. Award for Outstanding Graduate Student (rank 1/24).
- Oct. 2023. 1st place of the Scan2Cap Challenge in the 3rd Language for 3D Scene Workshop at ICCV 2023.
- Sep. 2023. National Scholarship (rank 1/46).
- Sep. 2022. 2nd prize of the Scholarship for Outstanding Students of Master’s Degrees.
- Sep. 2021. Award for the Scholarship for Outstanding Students of Master’s Degrees.
- Jun. 2021. 2nd prize of the Scholarship for Outstanding Students.
📖 Education
- Sep. 2021 - Jun. 2024. Master’s student at Fudan University.
- Sep. 2017 - Jun. 2021. Bachelor’s student at Fudan University.