Jens Garstka, Gabriele Peters
3D scene understanding is necessary for interaction with the environment. For this purpose, the classification of 3D objects is essential. The detection of keypoints in 3D point clouds often constitutes the first step towards an efficient and reliable classification of objects.
We proposed a fast and robust method for the automatic identification of keypoints in unstructured 3D point clouds and compared it with other state-of-the-art keypoint detection algorithms on a number of 3D objects. Our approach yielded considerably faster computation and outperformed the other methods in terms of relative repeatability of keypoints under rotation. Further advantages are its stable behavior in the presence of noise and a moderate scale invariance, which is particularly useful if the keypoints are to be used with arbitrary local descriptors to classify point clouds at different scales.
Possible fields of application besides autonomous systems are 3D object retrieval and appropriate descriptions of 3D data in general.
1. Convert point cloud to a solid voxelized volume
This step requires neither additional surface information nor normals, nor does it need to approximate them.
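A minimal sketch of this kind of voxelization, assuming a user-chosen voxel size and a simple morphological hole filling to obtain a solid volume; it is an illustration, not the exact implementation:

```python
# Illustrative sketch only: voxel_size is a free parameter, and
# binary_fill_holes is one simple way to make the occupied volume solid.
import numpy as np
from scipy.ndimage import binary_fill_holes

def voxelize_solid(points, voxel_size):
    """Map an N x 3 point cloud onto a binary voxel grid and fill
    enclosed cavities so that the volume becomes solid."""
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True     # occupied voxels
    solid = binary_fill_holes(grid)                  # fill interior cavities
    return solid, idx, origin                        # idx: voxel index per point
```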
2. Convolve voxelized volume with a spherical kernel
Two examples of convolution results. The voxels shown contain 3D points from the initial point cloud. Convolution values are color-coded: red: small; green: medium; blue: large.
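A sketch of this convolution, assuming a binary spherical kernel whose radius (in voxels) is a free parameter; the response of each voxel then measures how much of the surrounding sphere lies inside the solid volume:

```python
# Illustrative sketch: the kernel radius is an assumed parameter.
import numpy as np
from scipy.ndimage import convolve

def spherical_kernel(radius):
    """Binary ball of the given radius, in voxel units."""
    r = int(np.ceil(radius))
    z, y, x = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1]
    return (x**2 + y**2 + z**2 <= radius**2).astype(np.float32)

def convolve_volume(solid, radius=5.0):
    """Convolve the solid voxel volume with the spherical kernel; the
    result counts occupied voxels inside the sphere around each voxel."""
    return convolve(solid.astype(np.float32), spherical_kernel(radius),
                    mode='constant', cval=0.0)
```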
3. From the initial point cloud, choose points representing rare convolution responses as keypoints
This is done by building histograms of the convolution results, clustering them, and finding the nearest neighbours to the cluster centroids.
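A sketch of this selection under some simplifying assumptions: the convolution responses of the points' voxels (reusing the per-point voxel indices from the voxelization sketch above) are clustered directly with k-means instead of via an explicit histogram, "rare" is interpreted as the smallest clusters, and one representative point is chosen per rare cluster. The concrete criteria of the method may differ.

```python
# Illustrative sketch: k, n_rare, and the rarity criterion are assumptions.
import numpy as np
from scipy.cluster.vq import kmeans2

def select_keypoints(points, point_voxel_idx, responses, k=10, n_rare=3):
    """points: N x 3 cloud; point_voxel_idx: voxel index per point;
    responses: convolution volume; returns candidate keypoints."""
    vals = responses[point_voxel_idx[:, 0],
                     point_voxel_idx[:, 1],
                     point_voxel_idx[:, 2]].astype(float)
    centroids, labels = kmeans2(vals.reshape(-1, 1), k, minit='++')
    sizes = np.bincount(labels, minlength=k)
    rare = np.argsort(sizes)[:n_rare]          # smallest clusters = rare responses
    keypoints = []
    for c in rare:
        members = np.where(labels == c)[0]
        if members.size == 0:
            continue
        # Representative: point whose response is nearest to the cluster centroid.
        rep = members[np.argmin(np.abs(vals[members] - centroids[c, 0]))]
        keypoints.append(points[rep])
    return np.asarray(keypoints)
```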
Two further 3D models from the Stanford Computer Graphics Laboratory, Dragon and Happy Buddha. Middle and right: point clouds with superimposed keypoint candidates. From red to green: stronger to weaker candidates.
We compared our approach with four other state-of-the-art keypoint detection algorithms in terms of repeatability under rotation and computation time. We picked a total of 50 point clouds from 5 object classes of the object database of [Lai et al., 2011]; each point cloud was rotated by 4 different angles, and we ran our algorithm on each resulting point cloud.
Examples of objects from the chosen classes "cap", "greens", "kleenex", "scissors", and "stapler" from [Lai et al., 2011]
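To illustrate the rotation step of the setup above, a small helper that rotates a cloud about a coordinate axis; the axes and angles used in the actual experiments are not reproduced here.

```python
# Illustrative only: axis and angles are placeholders, not the experimental ones.
import numpy as np

def rotate_cloud(points, angle_deg, axis='z'):
    """Rotate an N x 3 point cloud about one coordinate axis."""
    a = np.deg2rad(angle_deg)
    c, s = np.cos(a), np.sin(a)
    if axis == 'z':
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    elif axis == 'y':
        R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    else:  # axis == 'x'
        R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    return points @ R.T, R

# e.g. cloud_rot, R = rotate_cloud(cloud, 90)
```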
Repeatability under Rotation
Relative repeatability (vertical axis), i.e., the portion of keypoints detected in the rotated cloud that falls within a given neighbourhood radius (horizontal axis) of a keypoint in the unrotated cloud
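As a concrete reading of this measure, a sketch assuming Euclidean neighbourhoods: a keypoint of the rotated cloud counts as repeated if, after undoing the rotation, it lies within the given radius of some keypoint of the unrotated cloud.

```python
# Illustrative sketch of the relative repeatability measure described above.
import numpy as np
from scipy.spatial import cKDTree

def relative_repeatability(keypoints_ref, keypoints_rot, rotation, radius):
    """keypoints_ref: keypoints of the unrotated cloud (N x 3);
    keypoints_rot: keypoints of the rotated cloud (M x 3);
    rotation: the 3 x 3 matrix that was applied to the cloud."""
    # Bring the rotated keypoints back into the reference frame (R^-1 = R^T).
    back = keypoints_rot @ rotation
    dists, _ = cKDTree(keypoints_ref).query(back, k=1)
    return np.mean(dists <= radius)   # fraction of repeated keypoints
```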
Computation Time
The presented algorithm is designed to be highly parallelizable and can thus be executed quickly. For example, on a standard system configuration we obtained the following average execution times for the computation of the keypoints of one 3D point cloud:
The proposed approach is robust to noise. We added noise to the point clouds at a level of 0.5 times the cloud resolution.
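A sketch of such a perturbation under two assumptions: "cloud resolution" is taken as the mean nearest-neighbour distance, and the noise is zero-mean Gaussian; the actual noise model may differ.

```python
# Illustrative sketch: resolution estimate and Gaussian noise are assumptions.
import numpy as np
from scipy.spatial import cKDTree

def cloud_resolution(points):
    """Mean distance from each point to its nearest neighbour."""
    dists, _ = cKDTree(points).query(points, k=2)
    return dists[:, 1].mean()          # column 0 is the zero self-distance

def add_noise(points, factor=0.5, rng=None):
    """Perturb each point with zero-mean Gaussian noise whose standard
    deviation is `factor` times the cloud resolution."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = factor * cloud_resolution(points)
    return points + rng.normal(scale=sigma, size=points.shape)
```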