
GaussianMorphing:
Mesh-Guided 3D Gaussians for Semantic-Aware Object Morphing

Mengtian Li1,5, Yunshu Bai1, Yimin Chu1, Yijun Shen1,
Zhongmei Li3, Weifeng Ge4, Zhifeng Xie1,5, Chaofeng Chen2
1Shanghai University, 2Wuhan University, 3East China University of Science and Technology,
4Fudan University, 5Shanghai Engineering Research Center of Motion Picture Special Effects

Abstract

We introduce GaussianMorphing, a novel framework for semantic-aware 3D shape and texture morphing from multiview images. Unlike conventional approaches constrained to point clouds or correspondence-aligned untextured data, our approach leverages mesh-guided 3D Gaussian Splatting (3DGS) to achieve high-fidelity appearance and geometry representation. On the one hand, our unified mesh-guided Gaussian deformation strategy ensures geometrically consistent deformation by binding 3DGS points to reconstructed mesh patches while preserving texture fidelity through topology-aware constraints. On the other hand, the framework establishes unsupervised semantic correspondence by exploiting mesh topology as a geometric prior, while maintaining structural integrity through physically plausible point trajectory constraints. This integrated approach maintains both local geometric details and global semantic coherence throughout the morphing process without requiring labeled data. Experimental results show that GaussianMorphing outperforms prior 2D/3D morphing methods, with a color consistency (∆E) reduction of 22.2% and an EI reduction of 26.2% on our proposed TexMorph benchmark.

Method Overview


We propose GaussianMorphing, a novel framework for textured 3D morphing. The input consists of image sequences for the source object X and the target object Y. To achieve edge-continuous and smooth results, we adopt a mesh-guided strategy: we extract surface meshes from the 3D representations obtained by 3D Gaussian Splatting and bind the Gaussian points to the mesh. Geometric features derived from the mesh are used to compute a correspondence matrix ΠXY, while texture features are extracted from the Gaussian points. A series of time steps t ∈ [0, 1] is fed into the interpolator Ω, generating intermediate shapes X(t) and Y(t). The output is a high-quality, textured 3D morphing result that seamlessly blends geometry and appearance. (Top: 3D morphing results rendered with Blender. Bottom: geometric correspondence visualized with Matplotlib.)
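The interpolation step described above can be sketched as a linear blend of corresponding mesh vertices. The snippet below is illustrative only: it assumes a row-stochastic soft correspondence matrix `Pi_xy` (standing in for the paper's ΠXY) and simple linear vertex interpolation, and is not the paper's actual implementation of the interpolator Ω.

```python
import numpy as np

def interpolate_shapes(V_x, V_y, Pi_xy, t):
    """Linearly interpolate source vertices toward their corresponding
    target vertices under a soft correspondence (illustrative sketch).

    V_x:   (N, 3) source mesh vertices
    V_y:   (M, 3) target mesh vertices
    Pi_xy: (N, M) row-stochastic correspondence matrix
    t:     scalar time step in [0, 1]
    """
    # Each source vertex is matched to a convex combination of target vertices.
    V_y_matched = Pi_xy @ V_y
    return (1.0 - t) * V_x + t * V_y_matched

# Toy example: two triangles related by an identity correspondence.
V_x = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
V_y = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
Pi = np.eye(3)
mid = interpolate_shapes(V_x, V_y, Pi, 0.5)  # halfway shape
```

At t = 0 the sketch reproduces the source shape and at t = 1 the matched target; Gaussian points bound to the mesh would follow the interpolated vertices.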

TexMorph Benchmark


Our new morphing benchmark, TexMorph, combines high-precision synthetic object models crafted by artists with 3D object models captured from real scenes, forming a diverse dataset that spans multiple object categories. The dataset includes over ten categories of objects, such as synthetic and real-world (scanned and photographed) fruits, animals, furniture, vehicles, and more. We use this benchmark to conduct both qualitative and quantitative evaluations of the baselines listed below, demonstrating the advantages of our method.
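The abstract reports color consistency in terms of ∆E. As a reference point, the classic CIE76 variant of ∆E is simply the Euclidean distance between two colors in CIELAB space; the sketch below shows that formula (the paper does not specify which ∆E variant it uses, so treat this as an assumption).

```python
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.

    lab1, lab2: array-like of shape (..., 3) holding L*, a*, b* values.
    Returns the per-pixel ∆E; averaging over an image gives a single
    color-consistency score.
    """
    diff = np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)
    return np.linalg.norm(diff, axis=-1)

# Two colors differing by (0, 3, 4) in Lab -> ∆E = 5.
de = delta_e_cie76([50.0, 0.0, 0.0], [50.0, 3.0, 4.0])
```

Lower mean ∆E between corresponding frames indicates more stable texture appearance across the morph.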

Result Gallery

Experiment


Qualitative comparison of morphing methods on the benchmark dataset. The benchmark evaluates five methods: (1) DiffMorpher (Zhang et al. 2024b) and (2) FreeMorph (Cao et al. 2025) for image morphing; (3) NeuroMorph (Eisenberger et al. 2021) for untextured 3D shape morphing; (4) MorphFlow (Tsai, Sun, and Chen 2022), which produces textured multi-view results but lacks true geometric information; and (5) our method, which generates textured 3D morphing with geometric details from image inputs.