RandLA-Net-based detection of urban building change using airborne LiDAR point clouds
-
Abstract: Remote sensing can capture changes in urban building coverage quickly and accurately, but 3D change detection based on imagery alone is difficult, and conventional point cloud-based methods suffer from low automation and poor accuracy. To address these problems, this study used airborne LiDAR point clouds and introduced the RandLA-Net point cloud semantic segmentation method to improve the accuracy and automation of change detection; point cloud projection was used to overcome the inability to difference two epochs of data caused by the unordered nature of point clouds. The standard RandLA-Net algorithm takes the position and color of points as features and is mainly used for semantic segmentation of street-level point clouds. This study instead used large-scale urban airborne point clouds, combining the intrinsic reflection intensity with spectral information assigned to the points from imagery, to explore how different feature information affects the accuracy of the results. The experiments also found that, in addition to intensity and spectral features, the coordinates of the points themselves are equally important: converting them to relative coordinates improved accuracy markedly. The experimental results show that building extraction and change detection with RandLA-Net clearly outperform conventional methods. This study also verified the feasibility of using deep learning methods to process LiDAR data for building extraction and change detection, enabling reliable 3D building change detection.
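The two preprocessing ideas the abstract highlights can be sketched in a few lines: projecting each epoch's unordered points onto a common 2D grid so the two dates become cell-wise comparable, and shifting a tile's coordinates to its centroid (relative coordinates). This is a minimal illustrative sketch, not the paper's implementation; the grid size, height threshold, and function names are assumptions.

```python
import numpy as np

def to_relative(xyz):
    """Shift a tile's coordinates to its centroid -- the relative-coordinate
    normalization the abstract reports as important for accuracy (illustrative)."""
    return xyz - xyz.mean(axis=0)

def rasterize_dsm(points, cell, bounds):
    """Project points (N, 3) onto a 2D max-height grid (a simple DSM).
    Gridding removes the point ordering problem: both epochs land on the
    same raster and can be differenced cell by cell."""
    xmin, ymin, xmax, ymax = bounds
    nx = int(np.floor((xmax - xmin) / cell)) + 1
    ny = int(np.floor((ymax - ymin) / cell)) + 1
    dsm = np.full((ny, nx), np.nan)
    ix = np.clip(((points[:, 0] - xmin) / cell).astype(int), 0, nx - 1)
    iy = np.clip(((points[:, 1] - ymin) / cell).astype(int), 0, ny - 1)
    for x, y, z in zip(ix, iy, points[:, 2]):
        if np.isnan(dsm[y, x]) or z > dsm[y, x]:
            dsm[y, x] = z  # keep the highest return per cell
    return dsm

def change_mask(pts_t1, pts_t2, cell=1.0, thresh=2.0):
    """Difference the two epochs on a shared grid; cells whose height
    changed by more than `thresh` metres are candidate building changes."""
    both = np.vstack([pts_t1, pts_t2])
    bounds = (both[:, 0].min(), both[:, 1].min(),
              both[:, 0].max(), both[:, 1].max())
    d1 = rasterize_dsm(pts_t1, cell, bounds)
    d2 = rasterize_dsm(pts_t2, cell, bounds)
    with np.errstate(invalid="ignore"):  # NaN cells (no returns) compare False
        return np.abs(d2 - d1) > thresh
```

For example, a flat 10 m x 10 m tile in epoch 1 and the same tile with a 3 m x 3 m block raised to 5 m in epoch 2 yields a change mask with exactly those nine cells flagged.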
-
Keywords:
- airborne LiDAR
- point cloud
- change detection
- 3D semantic segmentation