Abstract:
Deep semantic segmentation has been widely applied to land remote sensing monitoring and remote sensing interpretation production. However, existing quality evaluation methods for semantic segmentation results cannot objectively reflect how well those results preserve spatial geometric features. Starting from the practical demands of remote sensing interpretation and surveying and mapping production, this study proposes a performance evaluation method for remote sensing image semantic segmentation models that accounts for geoscience features: the connectivity similarity index (CSIM). By measuring the connectivity similarity of ground-object patches in remote sensing images, CSIM embeds geoscience features into the performance evaluation system of semantic segmentation models. The method quantitatively evaluates the patch-connectivity similarity between the semantic segmentation results of remote sensing images and the actual sample labels, and thereby accurately describes how well patch integrity is preserved in the predicted classification results, allowing a more objective judgment of whether a pre-trained model is suitable for remote sensing interpretation in surveying and mapping production. Extensive practice has confirmed that the method can monitor and control model training in real time, effectively guide the selection of the best-performing model from a set of pre-trained models, and accurately assess the true quality of remote sensing image prediction results with respect to the geometric features of ground objects. The CSIM method therefore plays an important role in deep-learning-enabled remote sensing interpretation and surveying and mapping production.
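The abstract does not state the CSIM formula itself, so the following is only a minimal Python sketch of the underlying idea it describes: comparing the connectivity of patches between a predicted mask and its reference label. The function name `connectivity_similarity` and the component-count ratio used as the score are hypothetical stand-ins for illustration, not the paper's actual index.

```python
import numpy as np
from scipy import ndimage

def connectivity_similarity(pred_mask: np.ndarray, label_mask: np.ndarray) -> float:
    """Hypothetical sketch, not the paper's CSIM definition.

    Compares how well the connected patches of a predicted binary mask
    match those of the reference label mask. Returns a value in (0, 1];
    1.0 means the prediction has the same number of connected patches
    as the label.
    """
    # Count 8-connected components (patches) in each binary mask.
    structure = np.ones((3, 3), dtype=int)  # 8-connectivity
    _, n_pred = ndimage.label(pred_mask, structure=structure)
    _, n_label = ndimage.label(label_mask, structure=structure)
    if max(n_pred, n_label) == 0:
        return 1.0  # both masks empty: trivially similar
    # A count ratio penalizes both fragmentation (over-segmentation)
    # and merging (under-segmentation) relative to the reference.
    return min(n_pred, n_label) / max(n_pred, n_label)

# Hypothetical example: a label mask with two separate patches and a
# prediction that breaks one patch into two fragments.
label = np.zeros((8, 8), dtype=bool)
label[1:3, 1:6] = True   # patch 1
label[5:7, 2:5] = True   # patch 2

pred = label.copy()
pred[1:3, 3] = False     # split patch 1 into two fragments

print(connectivity_similarity(pred, label))  # 3 vs. 2 patches -> 2/3
```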