In this paper, we introduce Point2Mesh, a technique for reconstructing a surface mesh from an input point cloud. Instead of explicitly specifying a prior that encodes the expected shape properties, the prior is defined automatically from the input point cloud, which we refer to as a self-prior. The self-prior encapsulates recurring geometric repetitions from a single shape within the weights of a deep neural network. We optimize the network weights to deform an initial mesh so that it shrink-wraps the input point cloud. Because the shared local convolutional kernels are optimized globally across the entire shape, the reconstruction inherently encourages local-scale geometric self-similarity across the shape surface. We show that shrink-wrapping a point cloud with a self-prior converges to a desirable solution, whereas a prescribed smoothness prior often becomes trapped in undesirable local minima. While the performance of traditional reconstruction approaches degrades under the non-ideal conditions often present in real-world scanning, i.e., unoriented normals, noise, and missing (low-density) parts, Point2Mesh remains robust to such conditions. We demonstrate the performance of Point2Mesh on a large variety of shapes of varying complexity.
Point2Mesh is a technique for reconstructing a surface mesh from an input point cloud. This approach "learns" from a single object by optimizing the weights of a CNN to deform an initial mesh until it shrink-wraps the input point cloud.
The optimized CNN weights act as a prior that encodes the expected shape properties, which we refer to as a self-prior. The premise is that shapes are not random: they contain strong self-correlation across multiple scales.
Central to the self-prior is the weight-sharing structure of a CNN, which inherently models recurring and correlated structures and is therefore weak at modeling noise and outliers, whose geometry does not recur.
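The shrink-wrapping optimization described above can be illustrated with a minimal sketch. This is not the paper's actual mesh CNN: as a toy assumption, a single shared parameter (standing in for shared convolutional kernels) radially deforms every vertex of a coarse initial "mesh" (a circle in 2-D), and is optimized by gradient descent on the chamfer distance to a target point cloud.

```python
# Toy 2-D illustration of shrink-wrapping with a shared parameter.
# Assumptions (not from the paper): a circle stands in for the mesh, one
# shared radial scale stands in for shared CNN kernels, and the loss is
# a symmetric chamfer distance minimized by numeric gradient descent.
import math
import random

random.seed(0)

# Target point cloud: noisy samples on a unit circle.
cloud = []
for _ in range(100):
    t = random.uniform(0, 2 * math.pi)
    cloud.append((math.cos(t) + random.gauss(0, 0.01),
                  math.sin(t) + random.gauss(0, 0.01)))

# Initial "mesh" vertices: a larger circle (radius 2) to be shrink-wrapped.
base = [(math.cos(2 * math.pi * k / 48), math.sin(2 * math.pi * k / 48))
        for k in range(48)]


def chamfer(p, q):
    """Symmetric chamfer distance between two 2-D point sets."""
    def one_way(a, b):
        return sum(min(math.dist(x, y) for y in b) for x in a) / len(a)
    return one_way(p, q) + one_way(q, p)


def deform(scale):
    # One shared parameter displaces every vertex radially -- a stand-in
    # for kernels shared across the whole shape.
    return [(2.0 * scale * x, 2.0 * scale * y) for x, y in base]


scale, eps = 1.0, 1e-4
for i in range(120):
    lr = 0.05 * 0.96 ** i  # decaying step size
    # Central-difference gradient of the chamfer loss w.r.t. the parameter.
    g = (chamfer(deform(scale + eps), cloud)
         - chamfer(deform(scale - eps), cloud)) / (2 * eps)
    scale -= lr * g

final_loss = chamfer(deform(scale), cloud)
```

After optimization, the shared parameter settles near 0.5, shrinking the radius-2 circle onto the unit-radius point cloud; because every vertex is driven by the same parameter, the fit is global rather than per-point, echoing how shared kernels fit the overall object.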
We thank Daniele Panozzo for his helpful suggestions. We are also thankful for help from Shihao Wu, Francis Williams, Teseo Schneider, Noa Fish and Yifan Wang. We are grateful for the 3D scans provided by Tom Pierce and Pierce Design. This work is supported by the NSF-BSF grant (No. 2017729), the European Research Council (ERC-StG 757497, PI Giryes), ISF grant 2366/16, and the Israel Science Foundation ISF-NSFC joint program grant number 2472/17.