For further processing, we selected the image pair with the highest sum of the two images’ Shannon entropies. PhotoScan’s GCP detection algorithm was then run on these two images, marking the GCPs in them, and the images were aligned using the alignCameras() function of PhotoScan’s Python API. This produced a sparse point cloud containing the camera positions and the detected feature points.

In the next step, the image alignment was to be optimized using the real-world GCP coordinates measured with the DGPS. Unfortunately, the GCP detection algorithm did not always detect all GCPs correctly. GCPs that were missed or misdetected (for most image pairs, the three GCPs placed farthest from the cameras) therefore had to be selected manually in the PhotoScan software. This break in the automated chain required splitting the processing script in two, with the missed GCPs for all selected image pairs and all acquisition dates selected by hand between the two script runs. The second script then optimized the image alignment as described above.

In the final part of the PhotoScan script, the dense point cloud was built at the three quality settings medium, high, and ultra. At the “ultra” setting, the original input images were used, whereas “high” and “medium” downscaled the images to half and quarter size, respectively. Finally, the point cloud was exported to a comma-separated values (.csv) file.
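The entropy-based pair selection described above can be sketched as follows. This is a minimal illustration rather than the processing script itself; the function names `shannon_entropy` and `best_pair` are hypothetical, and 8-bit grayscale input arrays are assumed:

```python
import numpy as np

def shannon_entropy(image):
    """Shannon entropy (in bits) of an 8-bit grayscale image,
    computed from the normalized 256-bin intensity histogram."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))

def best_pair(pairs):
    """Return the image pair whose summed entropy is highest."""
    return max(pairs, key=lambda pair: shannon_entropy(pair[0])
                                       + shannon_entropy(pair[1]))
```

A constant image has entropy 0 bits, while an image using all 256 gray levels equally often reaches the maximum of 8 bits, so `best_pair` favors the pair with the richest intensity content.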