Toward a Universal Model for Shape from Texture


Dor Verbin    Todd Zickler

Harvard University


[Teaser figure. Panels, left to right: input image, output surface normals, novel view of input image, sample from output texture process.]

Shape and texture from a single image (left). The output is a 2.5D shape (two middle columns) and a flat-texture generative process (right). The approach succeeds for a wide variety of textures, including those for which previous methods break down.

Abstract: We consider the shape from texture problem, where the input is a single image of a curved, textured surface, and the texture and shape are both a priori unknown. We formulate this task as a three-player game between a shape process, a texture process, and a discriminator. The discriminator adapts a set of non-linear filters to try to distinguish image patches created by the texture process from those created by the shape process, while the shape and texture processes try to create image patches that are indistinguishable from those of the other. An equilibrium of this game yields two things: an estimate of the 2.5D surface from the shape process, and a stochastic texture synthesis model from the texture process. Experiments show that this approach is robust to common non-idealities such as shading, gloss, and clutter. We also find that it succeeds for a wide variety of texture types, including both periodic textures and those composed of isolated textons, which have previously required distinct and specialized processing.
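To make the formulation concrete, below is a minimal, self-contained PyTorch sketch of the three-player loop. Everything in it, including the names (texture_gen, disc, warp), the toy warp-based shape process, and all architecture and hyperparameter choices, is our own simplified illustration rather than the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    P, B, H = 32, 16, 128                  # patch size, batch size, image size
    image = torch.rand(1, 3, H, H)         # stand-in for the input photograph

    # Texture process: maps noise to flat 32x32 texture patches.
    texture_gen = nn.Sequential(
        nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    # Shape process: here reduced to a learnable deformation field that
    # "unwarps" the input image; the paper's shape process is richer.
    warp = nn.Parameter(torch.zeros(1, H, H, 2))
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, H), indexing="ij")
    identity_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)

    def shape_patches(n):
        flat = F.grid_sample(image, identity_grid + warp, align_corners=False)
        crops = []
        for _ in range(n):                 # random crops of the unwarped image
            i = torch.randint(0, H - P + 1, (1,)).item()
            j = torch.randint(0, H - P + 1, (1,)).item()
            crops.append(flat[:, :, i:i + P, j:j + P])
        return torch.cat(crops, dim=0)

    # Discriminator: a stack of adaptive non-linear filters scoring patches.
    disc = nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2), nn.LeakyReLU(0.2),   # 32 -> 14
        nn.Conv2d(32, 64, 5, stride=2), nn.LeakyReLU(0.2),  # 14 -> 5
        nn.Flatten(), nn.Linear(64 * 5 * 5, 1))

    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    opt_g = torch.optim.Adam(list(texture_gen.parameters()) + [warp], lr=2e-4)
    ones, zeros = torch.ones(B, 1), torch.zeros(B, 1)

    for step in range(200):
        tex = texture_gen(torch.randn(B, 64, 8, 8))
        shp = shape_patches(B)

        # Discriminator tries to tell the two patch sources apart.
        d_loss = F.binary_cross_entropy_with_logits(disc(tex.detach()), ones) \
               + F.binary_cross_entropy_with_logits(disc(shp.detach()), zeros)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Texture and shape processes each try to look like the other.
        g_loss = F.binary_cross_entropy_with_logits(disc(tex), zeros) \
               + F.binary_cross_entropy_with_logits(disc(shp), ones)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

In this toy version, the learnable warp field stands in for the 2.5D shape estimate and texture_gen for the flat-texture generative process; at an equilibrium of the game, the discriminator can no longer tell the two patch sources apart.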

Publication

Dor Verbin and Todd Zickler, "Toward a Universal Model for Shape from Texture", IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.

[paper]

[supplement]

[code]


Data

Our synthetic dataset was generated in Blender using its cloth simulation. Each texture image was mapped onto a square mesh, which was then dropped onto a surface; once the simulation finishes, the result is rendered. Blender can also export the ground-truth surface normals by saving the deformed mesh to an .stl file (go to Export > Stl and make sure "Selection Only" is checked).
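For reference, here is a minimal sketch of such a render-and-export step using Blender's Python API (bpy, 2.8x-era). The .blend path and the object name "Cloth" are hypothetical; the embedded scripts in the provided files may differ.

    import bpy

    # Open one of the provided scene files (path is hypothetical).
    bpy.ops.wm.open_mainfile(filepath="sphere/sphere.blend")

    # Step through the frames so the cloth simulation is evaluated.
    scene = bpy.context.scene
    for frame in range(scene.frame_start, scene.frame_end + 1):
        scene.frame_set(frame)

    # Render the settled cloth at the final frame.
    scene.render.filepath = "//render.png"
    bpy.ops.render.render(write_still=True)

    # Export only the deformed cloth mesh as .stl; use_selection=True is
    # the scripted equivalent of checking "Selection Only" in the dialog.
    bpy.ops.object.select_all(action="DESELECT")
    cloth = bpy.data.objects["Cloth"]      # hypothetical object name
    cloth.select_set(True)
    bpy.ops.export_mesh.stl(filepath="sphere.stl", use_selection=True)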

We provide two files below:

The file containing all images has five directories: one for each of the four shapes in the paper, plus one containing the original (flat) texture images.

The file containing the Blender models has four directories, one per shape. Each directory holds a Blender file with an embedded Python script that can be run to automatically render all images used in the paper, including versions with shading and specular highlights, along with an .stl file exported from Blender that stores the true shape. The sphere directory additionally contains the Blender files and embedded Python scripts used to generate the images for Figures 7 and 8 of the paper.

Note: to use the Blender files, the two zip files must be unzipped into the same directory (only the flat directory is used by the Blender files).
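As a usage example, here is a short sketch of reading the ground-truth shape from one of the .stl files. It assumes the third-party trimesh package, and the file path is hypothetical.

    import trimesh

    # Load the ground-truth shape exported from Blender (path is hypothetical).
    mesh = trimesh.load("sphere/sphere.stl")

    print(mesh.vertices.shape)       # (V, 3) vertex positions
    print(mesh.face_normals.shape)   # (F, 3) ground-truth per-face normals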

Citation

@InProceedings{verbin2020sft,
  author    = {Verbin, Dor and Zickler, Todd},
  title     = {Toward a Universal Model for Shape From Texture},
  booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2020}
}