M3C2 workflow questions

jaschoen
Posts: 7
Joined: Wed Jun 14, 2023 5:06 pm

M3C2 workflow questions

Post by jaschoen »

Hi Daniel,

I am currently writing my master's thesis on change detection of a local cliff in Northern Germany at the Baltic Sea, using the M3C2 algorithm. Ideally, I want to end up with data from which I can do some rough volume calculation. I have read the papers on M3C2 and the wiki, and scoured the forum for information regarding the whole workflow. I feel kinda overwhelmed by it all and have some key questions which I hope you can help me with.

For an overview here is what I did so far. I have -
  1. Created georeferenced point clouds with normals using SfM-MVS in Metashape, which I visually cleaned of noise
  2. Loaded them into CloudCompare, visually confirming that they overlap more or less completely
  3. Used ICP to register the clouds
  4. Did some testing with the M3C2 algorithm and have gotten some results
My main issues revolve around quality testing and interpreting my data. I have had the following questions so far:
  • Regarding normals: in some threads you talk about normals being “good”. What do you mean by that? What are “good” normals, and how can I assess that?
  • Do I need to do vegetation removal? The cliff is rather active and at the time of surveying sparsely populated by mainly grass. Does it make sense to remove that with the CANUPO plugin?
  • Regarding ICP:
    1. Is ICP necessary when I already have georeferenced PCs? Visually, the two clouds already align more or less exactly.
    2. If it is necessary, how do I check the robustness and quality of the results? In the summer school on point clouds and change detection in the geosciences (I’m so bummed I missed that!), Dimitri Lague referenced SDDS as a tool to check exactly this, as well as the quality of M3C2. I’m afraid I didn’t understand SDDS too well; are you familiar with it, and does it help me in this case? (https://clouds2022.sciencesconf.org/dat ... ne2022.pdf , page 14)
  • Regarding M3C2:
    1. Again, how can I assess the robustness of results using CloudCompare?
    2. Should I create a core point cloud to be used for M3C2?
    3. Reg. the parameter normals: since the PC from Metashape already has normals, I can just use those and not bother with changing the values, right?
    4. Reg. the parameter projection: how do I get the ideal value for this? Is it an educated guess? Do I calculate it? In the aforementioned presentation, Mr. Lague says to have around 20 points on both clouds to compute on. Do I derive it from the point density of the PCs?
    5. Reg. the parameter max depth: Is this an educated guess? Does it simply reflect the maximum possible/expected change?
    6. Reg. the parameter registration error: this is discussed in some threads on this forum. It is still unclear to me, though, whether I should input here the total error from the GCPs or, when applicable, the RMS from the ICP. Should those values be the same/similar?
This is quite a long message. I hope it’s clear enough and not too overwhelming.

Looking forward to a reply and thank you so much already for all the patient support you have provided thus far on this forum.

Best,
Janto
Attachments
m3c2_d0pt5_max10.jpeg: m3c2 results with d = 0.5 and max depth = 10
m3c2_d0pt5_max2.jpeg: m3c2 results with d = 0.5 and max depth = 2
daniel
Site Admin
Posts: 7388
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: M3C2 workflow questions

Post by daniel »

Wow, that's a long post! It will take me some time to go through it and attempt to reply. Don't hesitate to ping me again if I haven't replied after a few days.
Daniel, CloudCompare admin
jaschoen
Posts: 7
Joined: Wed Jun 14, 2023 5:06 pm

Re: M3C2 workflow questions

Post by jaschoen »

Will do! :) And yes, it's a rather big post.
jaschoen
Posts: 7
Joined: Wed Jun 14, 2023 5:06 pm

Re: M3C2 workflow questions

Post by jaschoen »

ping
daniel
Site Admin
Posts: 7388
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: M3C2 workflow questions

Post by daniel »

Ah, still not the right time, sorry... Let's try again (or send me an email at admin[at]cloudcompare.org; it will be easier for me to keep track of it).
Daniel, CloudCompare admin
jaschoen
Posts: 7
Joined: Wed Jun 14, 2023 5:06 pm

Re: M3C2 workflow questions

Post by jaschoen »

No worries, sent you an email :)
jaschoen
Posts: 7
Joined: Wed Jun 14, 2023 5:06 pm

Re: M3C2 workflow questions

Post by jaschoen »

Heya, last try :)
daniel
Site Admin
Posts: 7388
Joined: Wed Oct 13, 2010 7:34 am
Location: Grenoble, France
Contact:

Re: M3C2 workflow questions

Post by daniel »

Ouch... I was on vacation. Sorry about that.
Daniel, CloudCompare admin
paul.leroy
Posts: 22
Joined: Tue Dec 01, 2020 1:21 pm

Re: M3C2 workflow questions

Post by paul.leroy »

Hi Janto,

It is almost a full M3C2 study that you are requesting! The answers depend strongly on your specific datasets. You can read [Bernard et al. 2021] https://esurf.copernicus.org/articles/9/1013/2021/ for a full study of how to set the M3C2 parameters in a specific case.


Normals:
If you have normals, you can try to use them. There is no way in CloudCompare to visualize the normals as arrows for a quick check, but you can do that with Open3D and Python, for instance (export your file to PCD with CloudCompare first). If the results are not good, you can let M3C2 compute the normals itself, as in [Bernard et al. 2021]. In that article, the author uses the same data with different sampling methods to set the M3C2 parameters. He has two clouds with different densities, which may not be your case, but the example is inspiring and the description is very precise.
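
As a quick illustration, here is a minimal Open3D sketch of that check; the file name "cliff.pcd" is a placeholder for whatever you exported from CloudCompare:

    import open3d as o3d

    # Load the PCD file exported from CloudCompare (PCD preserves normals).
    pcd = o3d.io.read_point_cloud("cliff.pcd")
    print(pcd.has_normals())  # sanity check: should print True

    # Render the points with their normals drawn as short line segments.
    o3d.visualization.draw_geometries([pcd], point_show_normal=True)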

Vegetation removal: if you can remove the vegetation without removing too many points, it could be OK, but if the ground is covered by grass it may be a problem. M3C2 takes the local roughness into account when computing the distance uncertainty and the change significance, so you can use these metrics to filter out irrelevant measurements.
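
As a minimal sketch of that filtering step, assuming you export the M3C2 result from CloudCompare as a delimited text file (the file name, separator, and column names below are assumptions; check them against your own export, they should match CloudCompare's M3C2 scalar field names):

    import pandas as pd

    # Adjust the file name and separator to match your CloudCompare ASCII export.
    df = pd.read_csv("m3c2_result.csv", sep=";")

    # "significant change" is 1 where the M3C2 distance exceeds the local
    # level of detection, 0 elsewhere.
    significant = df[df["significant change"] == 1]
    print(f"{len(significant)} of {len(df)} core points show significant change")
    print("mean M3C2 distance (significant only):",
          significant["M3C2 distance"].mean())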

ICP:
Be careful when using ICP to register two clouds between which you want to measure differences. ICP will try to match both clouds, and it may remove, or at least modify, the differences between them. But you are right, the first step is to have a good registration. To check the registration, you can measure the distances between the two clouds and have a closer look at areas where you are not expecting to see changes. You can also look for large trends, which may indicate a residual rotation or translation component not compensated by the registration. If you can isolate non-changing areas manually with the segment tool, you can perform the ICP on them and then apply the resulting transformation to the whole cloud.
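
That last step can also be scripted. Here is a minimal Open3D sketch, assuming you have segmented the stable areas in CloudCompare and exported everything as PCD; all file names and the 0.5 m correspondence distance are placeholder assumptions to adapt:

    import open3d as o3d

    src_stable = o3d.io.read_point_cloud("epoch2_stable.pcd")  # moving cloud, stable part
    tgt_stable = o3d.io.read_point_cloud("epoch1_stable.pcd")  # reference cloud, stable part
    src_full = o3d.io.read_point_cloud("epoch2_full.pcd")      # complete moving cloud

    # Point-to-point ICP on the stable areas only.
    result = o3d.pipelines.registration.registration_icp(
        src_stable, tgt_stable, 0.5,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
    print("fitness:", result.fitness, "inlier RMSE:", result.inlier_rmse)

    # Apply the transformation estimated on stable ground to the whole cloud.
    src_full.transform(result.transformation)
    o3d.io.write_point_cloud("epoch2_registered.pcd", src_full)

The inlier RMSE printed here is one candidate for the registration error parameter you asked about, though see [Bernard et al. 2021] for a more rigorous treatment.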

Core points:
Practical to have a regular measurement of the distances between the clouds, and to compute volumes in the case of vertical normals. Mandatory if you have very large clouds with many points, as it avoids computing the distances at every point. If you want to visualize the normals with Python, I recommend building a set of regularly spaced core points first.
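
You can build such a set in CloudCompare with its subsampling tool, or with a small Open3D sketch like this one (the file names and the 0.5 m spacing are assumed values to adapt to your clouds):

    import open3d as o3d

    pcd = o3d.io.read_point_cloud("epoch1_full.pcd")

    # Voxel downsampling yields roughly regularly spaced points: one point
    # per occupied 0.5 m voxel, at the centroid of that voxel's points.
    core_points = pcd.voxel_down_sample(voxel_size=0.5)
    o3d.io.write_point_cloud("core_points.pcd", core_points)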

Projection scale:
You can derive it by measuring the density, or simply by launching M3C2 and looking at the point counts computed by M3C2 during processing (N1 and N2 are created as scalar fields). This is also explained in [Bernard et al. 2021].
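
If you prefer to check the density beforehand, here is a minimal sketch that counts neighbours within a candidate projection radius. M3C2's d is a diameter, so the search radius is d/2; the file name and the d = 0.5 value are assumptions, and a sphere is only a rough stand-in for the M3C2 cylinder:

    import numpy as np
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("epoch1_full.pcd")
    tree = o3d.geometry.KDTreeFlann(pcd)
    pts = np.asarray(pcd.points)

    d = 0.5           # candidate projection diameter
    radius = d / 2.0

    # Sample a subset of points to keep this quick on large clouds.
    rng = np.random.default_rng(0)
    sample = rng.choice(len(pts), size=min(1000, len(pts)), replace=False)
    counts = [tree.search_radius_vector_3d(pts[i], radius)[0] for i in sample]
    print("median neighbours within d/2:", np.median(counts))

If the median is well below the ~20 points Dimitri Lague recommends, increase d.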

max depth:
Yes, it reflects the maximum possible/expected change (use the smallest possible value; it speeds up the calculations).

registration error:
See also [Bernard et al. 2021].

Hope this helps,

Paul
jaschoen
Posts: 7
Joined: Wed Jun 14, 2023 5:06 pm

Re: M3C2 workflow questions

Post by jaschoen »

Hi Paul,
thank you very much for taking the time to write such a thorough reply, and for the pointer to the paper by Bernard et al. 2021. It has already helped me a great deal!
Best,
Janto