Photogrammetry mesh vs. laser scan cloud comparison
Posted: Wed Feb 03, 2021 12:18 pm
Hi,
We are researchers currently working on a project in which we would like to compare a point cloud captured with a Leica laser scanner against a photogrammetry model created in RealityCapture. Our goal is to establish how much the photogrammetry model deviates from the laser-scanned point cloud and to get solid metrics and insight into the differences. We are fairly new to CloudCompare, so we have a few questions that we were unable to answer from the tutorials and prior forum posts.
So far we have imported both models and computed normals with a preferred orientation. We then align the laser-scanned point cloud with the photogrammetry mesh (first manually, then with the fine registration tool), and we perform the segmentation with both the cloud and the mesh selected, to make sure that the same relevant part is cut from both models.
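For a reproducible run, these same steps can also be scripted with CloudCompare's command-line mode. This is only a sketch: the file names are placeholders, and the flags (taken from the CloudCompare command-line wiki) should be verified against your installed version.

```shell
# Placeholder file names; adjust formats and paths to your data.
# By default -ICP moves the first loaded entity ("data") onto the
# second one ("model"/reference); -C2M_DIST then compares the first
# loaded cloud against the first loaded mesh. Check the wiki page
# for your CloudCompare version before relying on these defaults.
CloudCompare -SILENT -AUTO_SAVE OFF \
  -O laser_scan.e57 \
  -O photogrammetry_model.obj \
  -ICP \
  -C2M_DIST \
  -SAVE_CLOUDS FILE laser_with_c2m_distances.bin
```

Scripting the pipeline this way makes it easier to rerun the comparison after changing one variable (e.g. which entity is the reference) without redoing the GUI steps.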
Then we use the cloud-to-mesh distance tool to compute the differences. However, we get different results depending on:
- which model we select as "aligned" and as "reference" during registration;
- which model we select as "compared" and as "reference" during comparison;
- whether we create a point cloud via the "sample points on a mesh" function and compare that sampled cloud with the laser-scanned cloud, instead of comparing against the mesh directly.
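The direction-dependence in the last two bullets is expected: a nearest-neighbour distance is computed from each point of the "compared" entity to the "reference" entity, so swapping the roles changes which points receive distance values. A minimal SciPy sketch (toy coordinates, purely illustrative; not CloudCompare's actual implementation) shows the asymmetry:

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical stand-ins for the two datasets (not real scan data):
laser = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
photo = np.array([[0.0, 0.1, 0.0], [5.0, 0.0, 0.0]])

def c2c_distances(compared, reference):
    """Unsigned nearest-neighbour distance from each point of
    'compared' to the 'reference' cloud (one value per compared point)."""
    d, _ = cKDTree(reference).query(compared)
    return d

d_laser_to_photo = c2c_distances(laser, photo)  # 3 values
d_photo_to_laser = c2c_distances(photo, laser)  # 2 values

# The two directions generally disagree: the isolated photo point at
# x=5 is far from every laser point, but no laser point is far from
# the photo cloud, so the mean distances differ.
print(d_laser_to_photo.mean(), d_photo_to_laser.mean())
```

The same logic applies in CloudCompare: pick the denser or more trusted entity (here, typically the laser scan or the mesh) as the reference, and the entity whose per-point deviations you want to visualize as the compared one.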
It is also our understanding that we should compute unsigned distances in order to get a good overview of the absolute differences between the compared mesh and cloud.
Also, our meshes are not closed, so after computing the normals with the preferred orientation, the back of the mesh turns completely black. We assume this means that the blacked-out portion is disregarded during the distance computation.
We would like to know whether our workflow is correct, and to gain a better understanding of how picking and swapping the "reference" and "aligned"/"compared" roles during registration and comparison affects the results. We are also curious whether we should instead create a point cloud by sampling the mesh and compare that to the laser-scanned cloud, and whether our assumptions above are correct.
We appreciate any help you can provide.