SEGCloud: Semantic Segmentation of 3D Point Clouds
Lyne P. Tchapmi
Christopher B. Choy
Iro Armeni
JunYoung Gwak
Silvio Savarese

International Conference on 3D Vision (3DV) 2017
(Spotlight)



Abstract


3D semantic scene labeling is fundamental to agents operating in the real world. In particular, labeling raw 3D point sets from sensors provides fine-grained semantics. Recent works leverage the capabilities of Neural Networks (NNs), but are limited to coarse voxel predictions and do not explicitly enforce global consistency. We present SEGCloud, an end-to-end framework for 3D point-level segmentation that combines the advantages of NNs, trilinear interpolation (TI), and fully connected Conditional Random Fields (FC-CRF). Coarse voxel predictions from a 3D Fully Convolutional NN are transferred back to the raw 3D points via trilinear interpolation. The FC-CRF then enforces global consistency and provides fine-grained semantics on the points. We implement the latter as a differentiable Recurrent NN to allow joint optimization. We evaluate the framework on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state of the art on all datasets.
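The abstract's last design point, implementing the FC-CRF as a differentiable recurrent network, can be made concrete with a short mean-field sketch. The code below is a minimal illustration, not the authors' implementation: the names `meanfield_crf`, `nbr_idx`, `nbr_w`, and `compat` are hypothetical, and the fully connected Gaussian pairwise term is approximated here by a precomputed k-nearest-neighbor graph for brevity.

```python
# A minimal mean-field sketch (assumption: NOT the authors' code).
# `unary` holds per-point class scores from the interpolation step.
# `nbr_idx` (N, K) and `nbr_w` (N, K) are a hypothetical precomputed
# neighbor graph with Gaussian edge weights standing in for the fully
# connected pairwise term; `compat` (C, C) is a label compatibility
# matrix, e.g. Potts: 1 - identity.
import torch
import torch.nn.functional as F

def meanfield_crf(unary, nbr_idx, nbr_w, compat, n_iters=5):
    """unary: (N, C) scores; returns refined marginals (N, C)."""
    q = F.softmax(unary, dim=1)                 # initialize with unaries
    for _ in range(n_iters):                    # unrolled recurrence
        # Message passing: weighted average of neighbors' marginals.
        msg = (nbr_w.unsqueeze(2) * q[nbr_idx]).sum(dim=1)   # (N, C)
        pen = msg @ compat                      # compatibility transform
        q = F.softmax(unary - pen, dim=1)       # local update + renormalize
    return q
```

Every step is a differentiable tensor operation, so unrolling a fixed number of iterations yields a recurrent module whose parameters (e.g., `compat`) receive gradients and can be optimized jointly with the upstream network.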


Method Overview



SEGCloud: A 3D point cloud is voxelized and fed through a 3D fully convolutional neural network to produce coarse, downsampled voxel labels. A trilinear interpolation layer transfers this coarse output from the voxels back to the original 3D points. The resulting 3D point scores are used for inference in the 3D fully connected CRF to produce the final per-point labels. Our framework is trained end-to-end.
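The voxel-to-point transfer can be sketched in a few lines. The following is a minimal illustration under stated assumptions, not the authors' code: the function name and signature are hypothetical, the score volume is assumed dense, and the grid is assumed regular.

```python
# A minimal sketch of the voxel-to-point transfer (assumption: NOT the
# authors' code; names and signature are illustrative). `voxel_scores`
# is a dense (C, D, H, W) score volume from the 3D FCNN; `origin` is
# the world position of voxel (0, 0, 0)'s center; `voxel_size` is the
# edge length of a voxel.
import torch

def trilinear_interpolate(voxel_scores, points, origin, voxel_size):
    """points: (N, 3) xyz world coordinates -> per-point scores (N, C)."""
    C, D, H, W = voxel_scores.shape
    coords = (points - origin) / voxel_size   # continuous grid coords (x, y, z)
    coords = coords[:, [2, 1, 0]]             # reorder to (z, y, x)
    lo = coords.floor().long()                # lower-corner voxel index
    frac = coords - lo.to(coords.dtype)       # fractional offset in [0, 1)
    scores = points.new_zeros(points.shape[0], C)
    # Blend the 8 surrounding voxels with trilinear weights.
    for dz in (0, 1):
        for dy in (0, 1):
            for dx in (0, 1):
                z = (lo[:, 0] + dz).clamp(0, D - 1)   # clamping keeps
                y = (lo[:, 1] + dy).clamp(0, H - 1)   # border points
                x = (lo[:, 2] + dx).clamp(0, W - 1)   # inside the grid
                w = ((frac[:, 0] if dz else 1 - frac[:, 0])
                     * (frac[:, 1] if dy else 1 - frac[:, 1])
                     * (frac[:, 2] if dx else 1 - frac[:, 2]))
                scores += w.unsqueeze(1) * voxel_scores[:, z, y, x].t()
    return scores
```

Note that the interpolation weights depend only on the point positions, so this layer is linear in the voxel scores; gradients therefore pass straight through to the 3D FCNN, which is what makes end-to-end training of the whole pipeline possible.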


Evaluation


We evaluate SEGCloud on two indoor and two outdoor 3D datasets (NYU V2, S3DIS, KITTI, Semantic3D.net), and show performance comparable or superior to the state of the art on all datasets.



Semantic3D.net Results



[Paper] [Supplementary] [arXiv] [BibTeX]
3DV 2017 Spotlight.