This workshop focuses on 4D reconstruction and generation, bridging the gap between methods that reconstruct dynamic worlds and methods that generate them.
The workshop brings together researchers working on neural representations, dynamic scene understanding, and efficient rendering for AR/VR and robotics applications.
Workshop Topics:
Generative and Editable World: Text/image-driven 3D synthesis, structured representations, and semantic editing for interactive modeling
Efficient and Scalable World Rendering: Real-time rendering, feed-forward reconstruction, and efficient Gaussian Splatting for AR/VR and robotics
Dynamic Scene Understanding and Data Capture: Spatio-temporal modeling with Radiance Fields, large-scale 4D datasets, and temporal coherence
Virtual Humans as Embodied Agents: Photo-realistic animatable avatars, immersive volumetric humans, and deformable interaction in dynamic worlds
Neural Representations: Neural Radiance Fields, 3D Gaussian Splatting, and hybrid methods for dynamic world models
Applications: XR experiences, robotics, autonomous driving, digital twins, and creative content creation
Broader Impact:
This workshop is highly relevant to the computer vision, graphics, and AI communities, as it unites several of the most active and rapidly
evolving research directions: Radiance Fields, 3D Gaussian Splatting, dynamic scene reconstruction, and world models for perception and generation.
These threads converge on a shared ambition to endow AI systems with the capacity to understand, reconstruct, and generate coherent representations
of dynamic worlds. By bridging the traditionally distinct domains of reconstruction and generation, the workshop promotes a unified view of scene
modeling that directly supports progress in embodied intelligence, simulation, and creative content creation.