4D World Models:
Bridging Generation and Reconstruction

CVPR 2026 Workshop

100-300 Attendees
Half-Day Workshop

📋 About the Workshop

This workshop focuses on 4D reconstruction and generation, bridging the gap between reconstructing and generating dynamic virtual worlds. It brings together researchers working on neural representations, dynamic scene understanding, and efficient rendering for AR/VR and robotics applications.

Workshop Topics:

Generative and Editable World: Text/image-driven 3D synthesis, structured representations, and semantic editing for interactive modeling
Efficient and Scalable World Rendering: Real-time rendering, feed-forward reconstruction, and efficient Gaussian Splatting for AR/VR and robotics
Dynamic Scene Understanding and Data Capture: Spatio-temporal modeling with Radiance Fields, large-scale 4D datasets, and temporal coherence
Virtual Humans as Embodied Agents: Photo-realistic animatable avatars, immersive volumetric humans, and deformable interaction within virtual worlds
Neural Representations: Neural Radiance Fields, 3D Gaussian Splatting, and hybrid methods for dynamic world models
Applications: XR experiences, robotics, autonomous driving, digital twins, and creative content creation

Broader Impact:

This workshop is highly relevant to the computer vision, graphics, and AI communities, as it unites several of the most active and rapidly evolving research directions: Radiance Fields, 3D Gaussian Splatting, dynamic scene reconstruction, and world models for perception and generation. These threads converge on a shared ambition: endowing AI systems with the capacity to understand, reconstruct, and generate coherent representations of dynamic worlds. By bridging the traditionally distinct domains of reconstruction and generation, the workshop promotes a unified view of scene modeling that directly supports progress in embodied intelligence, simulation, and creative content creation.

📢 Call for Papers

We invite submissions on:

  • 4D Reconstruction and Generation: Methods for reconstructing or generating dynamic 3D scenes from multi-view videos, single videos, or text/image prompts
  • Neural Representations: Novel neural scene representations including NeRFs, 3D Gaussian Splatting, and hybrid approaches for dynamic scenes
  • Temporal Coherence: Techniques for maintaining consistency across time in dynamic 3D reconstructions and generations
  • Efficient Rendering: Real-time and low-latency rendering methods for 4D content in AR/VR and robotics
  • Human Avatars: Photo-realistic, animatable avatars and human performance capture
  • Applications: Novel applications in XR, robotics, autonomous driving, digital twins, and creative industries
  • Datasets and Benchmarks: New datasets, evaluation metrics, and benchmarks for 4D vision tasks
  • Other Relevant Topics: We also welcome submissions on related topics not explicitly listed above.

Submission Types:

  • Full Papers: Novel research contributions (up to 8 pages + references).
  • Dataset Papers: Novel datasets with benchmarks and baselines (up to 8 pages + references).

Important Dates:

  • Submission Opens: February 20, 2026
  • Submission Deadline: March 17, 2026 (11:59 PM AoE)
  • Author Notification: March 31, 2026
  • Camera-Ready Deadline: April 7, 2026
  • Workshop Date: CVPR 2026 (June 2026)

Submission Portal: Papers will be reviewed through CMT (link will be available shortly).

All submissions will undergo peer review. Accepted papers will be presented as spotlight talks at the workshop and will be published in the CVPR 2026 proceedings.

🎤 Invited Speakers

Distinguished researchers in 4D vision, neural rendering, and dynamic scene understanding

Peter Kontschieder
Meta Reality Labs
Research Director
Sara Fridovich-Keil
Georgia Tech
Assistant Professor
Youngjoong Kwon
Emory University
Assistant Professor
Qianqian Wang
Harvard University
Assistant Professor
Jiahui Lei
UC Berkeley
Post-Doc
Pratul Srinivasan
Google DeepMind
Research Scientist

👥 Senior Organizers

Leading researchers organizing the workshop

Fernando De la Torre
CMU
Research Professor
Angela Dai
TU Munich
Associate Professor
Srinath Sridhar
Brown University
Assistant Professor
Xiaowei Zhou
Zhejiang University
Professor
Lingjie Liu
University of Pennsylvania
Assistant Professor
Aayush Prakash
Meta Reality Labs
Research Manager
Nikolaos Sarafianos
Meta Reality Labs
Research Scientist
Jonathon Luiten
Meta Reality Labs
Research Scientist

Junior Organizers

Graduate students contributing to workshop organization

Chaerin Min
Brown University
PhD Student
Aashish Rai
Brown University
PhD Student
Angela Xing
Brown University
Research Assistant
Xiaoyan Cong
Brown University
PhD Student
Zekun Li
Brown University
PhD Student
Rao Fu
Brown University
PhD Student
Yiqing Liang
Brown University
PhD Student