JJ (Jeong Joon) Park

jjparkcv (at) umich (dot) edu

I'm an assistant professor at the University of Michigan CSE (Office: BBB 2717).
I'm broadly interested in computer vision, graphics, and artificial intelligence. My current research focuses on 3D/4D reconstruction and generative modeling, and on their applications to robotics, medical imaging, and scientific problems.

I'm looking for students and postdoc applicants!
Please refer to the note below for details. I encourage interested students to apply to the UMichigan CSE PhD program and mention my name in the application.

Prospective Students and Postdocs

I'm mainly looking for PhD and postdoc applicants who fit at least one of the profiles below:
  • Physics background with deep learning experience
  • Robot learning background
  • Computational neuroscience background with deep learning experience
  • Machine learning background
  • 3D/4D vision background (including medical imaging)

Postdocs: I am also looking for postdocs through the (1) MICDE Research Scholars and (2) Schmidt AI in Science programs.

UMichigan undergraduates and master's students: please send an email with your resume -- note that I expect a significant time commitment (>15 hrs/week). Unfortunately, I will not be able to respond to all emails.
Current Students

Liam Wang (NSF GRFP Fellow)
Zichen Wang
Paul Yoo
Lixuan Chen (co-advised with Liyue Shen)
Ang Cao (co-advised with Justin Johnson)
Teaching

Computer Vision (EECS 442), Winter 2024
Computer Graphics and Generative Models, Fall 2024

Publications


DiffusionPDE: Generative PDE-Solving Under Partial Observation

Jiahe Huang, Guandao Yang, Zichen Wang, Jeong Joon Park
To appear in NeurIPS 2024



This&That: Language-Gesture Controlled Video Generation for Robot Planning

Boyang Wang, Nikhil Sridhar, Chao Feng, Mark Van der Merwe, Adam Fishman, Nima Fazeli, Jeong Joon Park
In Submission



TC4D: Trajectory-Conditioned Text-to-4D Generation

S. Bahmani, X. Liu, Y. Wang, I. Skorokhodov, V. Rong, Z. Liu, X. Liu,
J.J. Park, S. Tulyakov, G. Wetzstein, A. Tagliasacchi, D. Lindell
ECCV 2024



4D-fy: Text-to-4D Generation Using Hybrid Score Distillation Sampling

S. Bahmani, I. Skorokhodov, V. Rong, G. Wetzstein, L. Guibas, P. Wonka, S. Tulyakov, J.J. Park, A. Tagliasacchi, D.B. Lindell
CVPR 2024



FAR: Flexible, Accurate and Robust 6DoF Relative Camera Pose Estimation

Chris Rockwell, Nilesh Kulkarni, Linyi Jin,
Jeong Joon Park, Justin Johnson, David F. Fouhey
CVPR 2024 (Highlight)



CurveCloudNet: Processing Point Clouds with 1D Structure

C. Stearns, D. Rempe, A. Fu, J. Liu, S. Mascha, JJ Park, D. Paschalidou, L. Guibas
CVPR 2024

Summary: We introduce a new point-cloud processing scheme that takes advantage of the curve-like structure inherent to modern depth sensors. While existing backbones discard these rich 1D traversal patterns, CurveCloudNet parameterizes the point cloud as a collection of polylines (a "curve cloud"), establishing a local, surface-aware ordering on the points.



GeNVS: Generative Novel View Synthesis with 3D-Aware Diffusion Models

E. Chan*, K. Nagano*, M. Chan*, A. Bergman*, J.J. Park*,
A. Levy, M. Aittala, S. De Mello, T. Karras, G. Wetzstein
ICCV 2023 (Oral)

Summary: We present a diffusion model for 3D-aware generative novel view synthesis from as few as a single input image. Our model samples from the distribution of possible renderings consistent with the input and is capable of rendering plausible novel views of unbounded regions.



CC3D: Layout-Conditioned Generation of Compositional 3D Scenes

Sherwin Bahmani*, Jeong Joon Park*, Despoina Paschalidou, Xingguang Yan, Gordon Wetzstein, Leonidas Guibas, Andrea Tagliasacchi
ICCV 2023

Summary: We introduce a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts. Unlike most existing 3D GANs, which operate on aligned single objects, we focus on generating complex 3D scenes with multiple objects by modeling the compositional nature of 3D scenes.



LEGO-Net: Learning Regular Rearrangements of Objects in Rooms

Qiuhong Anna Wei, Sijie Ding*, Jeong Joon Park*, Rahul Sajnani, Adrien Poulenard, Srinath Sridhar, Leonidas Guibas
CVPR 2023




SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene

Minjung Son*, Jeong Joon Park*, Leonidas Guibas, Gordon Wetzstein
CVPR 2023




ALTO: Alternating Latent Topologies for Implicit 3D Reconstruction

Zhen Wang*, Shijie Zhou*, Jeong Joon Park, Despoina Paschalidou, Suya You, Gordon Wetzstein, Leonidas Guibas, Achuta Kadambi
CVPR 2023



Generating Part-Aware Editable 3D Shapes without 3D Supervision

Konstantinos Tertikas, Despoina Paschalidou, Boxiao Pan, Jeong Joon Park, Mikaela Angelina Uy, Ioannis Emiris, Yannis Avrithis, Leonidas Guibas
CVPR 2023




3D-Aware Video Generation

S. Bahmani, J. J. Park, D. Paschalidou, H. Tang,
G. Wetzstein, L. Guibas, L. Van Gool, R. Timofte
Transactions on Machine Learning Research (TMLR) 2023



StyleSDF: High-Resolution 3D-Consistent Image and Geometry Generation

R. Or-El, X. Luo, M. Shan, E. Shechtman, J. J. Park, I. Kemelmacher-Shlizerman
CVPR 2022 (Oral)



BACON: Band-limited Coordinate Networks for Multiscale Scene Representation

David Lindell, Dave Van Veen, Jeong Joon Park, and Gordon Wetzstein
CVPR 2022 (Oral)



Seeing the World in a Bag of Chips

Jeong Joon Park, Aleksander Holynski, Steve Seitz
CVPR 2020 (Oral)



Press: WIRED, Scientific American


DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, Steven Lovegrove
CVPR 2019 (Oral, Best Paper Award Finalist)

Surface Light Field Fusion

Jeong Joon Park, Richard Newcombe, Steve Seitz
3DV 2018 (Oral)