MaPa: Text-driven Photorealistic Material Painting for 3D Shapes

• Author(s): Shangzhan Zhang, Sida Peng, Tao Xu, Yuanbo Yang, Tianrun Chen, Nan Xue, Yujun Shen, Hujun Bao, Ruizhen Hu, Xiaowei Zhou

This paper presents a method for generating materials for 3D meshes from text descriptions. Unlike traditional methods that synthesize texture maps, the proposed approach generates segment-wise procedural material graphs, which render at high quality and remain flexible to edit. The key contribution is bypassing the need for extensive paired training data by using a pre-trained 2D diffusion model as a bridge between text descriptions and material graphs.
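To make the idea of a segment-wise procedural material graph concrete, here is a minimal sketch of one possible representation. The `MaterialNode`/`MaterialGraph` classes, the operator names, and the parameter names are illustrative assumptions, not the paper's actual data structures; the point is that a material is a small graph of parameterized operators rather than a fixed-resolution texture.

```python
from dataclasses import dataclass, field

@dataclass
class MaterialNode:
    """One node in a procedural material graph, e.g. a noise or blend operator."""
    op: str                                        # operator name, e.g. "perlin_noise"
    params: dict = field(default_factory=dict)     # continuous, tunable parameters
    inputs: list = field(default_factory=list)     # indices of upstream nodes

@dataclass
class MaterialGraph:
    """A procedural material for one mesh segment: nodes evaluated in
    topological order to produce PBR maps (albedo, roughness, etc.)."""
    nodes: list
    output_node: int

# Hypothetical example: a wood-like material assigned to one mesh segment.
wood = MaterialGraph(
    nodes=[
        MaterialNode(op="perlin_noise", params={"scale": 4.0, "octaves": 3}),
        MaterialNode(op="colorize",
                     params={"color_a": (0.45, 0.30, 0.15),
                             "color_b": (0.25, 0.15, 0.08)},
                     inputs=[0]),
        MaterialNode(op="roughness", params={"value": 0.6}, inputs=[1]),
    ],
    output_node=2,
)

# Editing means changing node parameters; the graph re-renders at any resolution.
wood.nodes[0].params["scale"] = 8.0  # finer wood grain
```

This kind of representation is what gives the method its claimed editability and resolution advantages: parameters stay semantically meaningful, and nothing is baked into pixels.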

The pipeline first decomposes a 3D shape into segments and uses a segment-controlled diffusion model to synthesize 2D images aligned with the mesh parts. These generated images are used to initialize the parameters of each segment's material graph, which are then fine-tuned through a differentiable rendering module so that the resulting materials match the textual description. Extensive experiments show that the framework outperforms existing methods in photorealism, resolution, and editability.
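The fine-tuning step can be pictured as gradient descent on the graph's continuous parameters through a differentiable renderer. The PyTorch sketch below is a stand-in under stated assumptions: the `render` function is a made-up differentiable function (not the paper's procedural-material renderer), and a random `target` image substitutes for the segment-aligned image produced by the diffusion model. Only the optimize-through-rendering pattern reflects the described method.

```python
import torch

def render(params: torch.Tensor) -> torch.Tensor:
    # Toy differentiable "renderer": maps two continuous material parameters
    # to a 64x64 image. A stand-in for evaluating a material graph and
    # rendering it onto the mesh segment.
    h = torch.linspace(0.0, 1.0, 64)
    grid = h[None, :] * h[:, None]
    return params[0] * grid + params[1] * (1.0 - grid)

# Stand-in for the segment-aligned image synthesized by the diffusion model.
target = torch.rand(64, 64)

# Graph parameters, initialized from the image-based step in the paper,
# then refined by gradients flowing through the differentiable renderer.
params = torch.tensor([0.5, 0.5], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.05)

for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(render(params), target)
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.4f}, params: {params.detach().tolist()}")
```

A design note, hedged: this style of optimization only handles continuous parameters; the discrete structure of the graph itself presumably comes from the initialization stage, which is why a good image-based initialization matters.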

In summary, the paper offers a novel and effective solution for generating materials for 3D meshes from text, with improved visual quality and editing capabilities. Because it relies on pre-trained models and differentiable rendering rather than large amounts of paired data, the approach is comparatively efficient and flexible. This research advances 3D content creation and opens up promising directions for text-driven material generation across a range of applications.