
High-precision deformation analysis of Yingxian wooden pagoda based on UAV image and terrestrial LiDAR point cloud

Abstract

Deformation monitoring is an essential task in the restoration of wooden pagodas. Traditionally, this work has been carried out by surveying personnel who manually inspect all parts of a pagoda, which not only consumes considerable manpower but also suffers from low efficiency and measurement error. This article evaluates the feasibility of combining portable 3D light detection and ranging (LiDAR) scanning with unmanned aerial vehicle (UAV) photogrammetry to perform these inspection tasks easily and accurately. Images of the wooden pagoda's exterior and point clouds of its interior are acquired with a UAV and a LiDAR scanner, respectively. We propose a feature-based global registration method for the site point clouds: error equations are listed for the observation series, with the feature constraints supplying the initial values for global adjustment, and a bundle adjustment model solves the spatial transformation parameters and the adjusted values of the unknown points. The Structure from Motion (SfM) algorithm of computer vision is then used to fuse the dense point cloud of the pagoda's exterior, generated from multiple non-metric images by global optimization, with the LiDAR point cloud of its interior, yielding a complete point cloud of the wooden pagoda and making deformation monitoring more detailed and comprehensive. Experimental verification shows that the overall registration accuracy for the Yingxian wooden pagoda reaches 0.006 m. Compared with the scanned point cloud data from 2018, the model is more accurate and complete. Analysis of the second-floor data shows that the inclination of the second bright layer and the second dark layer is still developing steadily. Overall, the western outer trough inclines markedly, and the column frame leans from southwest to northeast. Some internal columns showed a negative offset in 2020, and deformation analysis of individual columns was realized by comparison with a standard column model. The main contribution of this method lies in the effective integration of UAV images and point cloud data to provide an accurate data source for fine modeling. This research provides theoretical and methodological support for the digital protection of architectural heritage and GIS data modeling, and the analysis results can provide a scientific basis for the design of restoration schemes.

Introduction

The Yingxian wooden pagoda, also known as the Sakyamuni Pagoda of Fogong (Buddha Palace) Temple, is located in the northwest corner of Yingxian County, Shanxi Province. Figure 1 shows a map with UTM coordinates of the wooden pagoda. Built in the second year of the Qingning era of the Liao Dynasty (1056), it is the tallest and oldest existing wooden pagoda in the world, with high historical and cultural value [1,2,3,4]. Over its long history, the Yingxian wooden pagoda has suffered much natural and anthropogenic damage [5]. The bearing capacity of the wood has weakened and some components are damaged, causing the pagoda to tilt and twist to varying degrees [6], and the deformation grows more severe with time [7,8,9,10]. The Yingxian wooden pagoda is a typical representative of ancient wooden buildings in China, and its protection is essential work in the cultural heritage protection of ancient buildings [11].

Fig. 1

Map showing the location of Yingxian Wood Pagoda. Data from SIWEIearth GS (2022) 738; Own elaboration based on fieldwork

To protect the wooden pagoda more scientifically, a reasonable protection scheme should be formulated and the deformation of the Yingxian wooden pagoda comprehensively analyzed. By combining LiDAR scanning, low-altitude close-range photogrammetry, and other surveying and mapping technologies, their complementary advantages can be exploited to better assist the protection of ancient architecture [12]: for example, the digital measurement and modelling of the Forbidden City [13, 14] and disease detection in the Yungang Grottoes [15, 16].

LiDAR is increasingly used in architectural surveys due to its high efficiency, high accuracy, and lack of contact with the building itself. LiDAR scanning can obtain fine 3D information about ancient buildings and is an essential means and tool for managing architectural cultural heritage [17,18,19]. However, the point cloud data obtained by a 3D LiDAR scanner are sometimes affected by occlusion, the complex structure of the scanned objects, and the instrument's viewing angle. In practical engineering, the raw point cloud obtained after scanning inevitably contains gaps, which complicates subsequent point cloud processing [20]. The missing point cloud can be repaired with image data obtained from UAV close-range photogrammetry: fusing the image point cloud with the LiDAR point cloud yields a complete point cloud model [21]. This provides an accurate data source for fine modelling and better realizes the digital protection of architectural heritage [22,23,24].

How to better achieve heterogeneous data fusion is the focus of this research. The traditional method of fusing point clouds and images is to solve the internal and external orientation elements of the images photogrammetrically to obtain an image point cloud, and then register it with the LiDAR point cloud [25]. This method needs suitable initial values for the iterative calculation and is not ideal for close-range images taken at large inclination angles. The general approach instead generates an image point cloud through dense image matching [26] and then registers it with the LiDAR point cloud through feature matching, so that the missing LiDAR points can be repaired. Methods for registering image point clouds are generally area-based or feature-based [27, 28]. Bernasconi et al. [29] used an area-based method: the grey-level information within a given template window is processed directly according to a similarity measure, thus realizing image registration; the accuracy of the registration is closely tied to the choice of similarity measure. Wu et al. [30] chose normalized cross-correlation (NCC); however, NCC is sensitive to intensity differences between images, especially significant non-linear radiation differences, which makes it difficult to apply to the automatic registration of optical images. Huang et al. [31] used mutual information (MI), but MI-based registration is computationally intensive, sensitive to the size of the template window, prone to falling into local optima during convergence, and can produce mismatches. Ye et al. [32] first proposed a shape descriptor called dense local self-similarity (DLSS), a new similarity measure for image matching using shape features; their experiments show the algorithm is superior to existing similarity measures (NCC and MI), but its performance may degrade when the image contains little shape or contour information. Kim et al. [33] used Matching by Tone Mapping (MTM), a newer similarity measure from computer vision; however, because of significant non-linear radiation differences, the scanned point cloud and the image present different grey-level information, making automatic registration through grey-level similarity difficult. He et al. [34] computed six image point clouds of different densities from the image data and used a barycentric Bursa model to fine-register the two types of data and delete the overlapping regions. Given all this, feature-based registration, realized through feature correspondence, is the current mainstream. On top of it, we apply a control-point-based non-rigid transformation refinement step to register the point clouds more precisely.

Accordingly, this research first defines the general and specific objectives in detail. Second, it reviews relevant and recent scientific literature on LiDAR technology and on repair methods for missing point clouds. Next, the Structure from Motion (SfM) algorithm of computer vision is used to fuse dense point clouds generated by global optimization of several non-metric images with the terrestrial LiDAR point clouds, explained in three parts: (i) through image feature detection and matching, the camera positions and orientations are recovered by the SfM algorithm; a precise sparse point cloud is then obtained by global optimization with the bundle adjustment method, and a dense point cloud is obtained by overall optimization of the internal and external parameters and densification of the point cloud; (ii) a global point cloud registration algorithm with multiple feature constraints achieves fast and highly accurate alignment of the LiDAR point clouds; and (iii) registration between the image-derived point cloud and the terrestrial LiDAR point cloud is completed through feature matching to achieve scale consistency, thus fusing the LiDAR point cloud and the digital photos effectively. Finally, the structural analysis of the Yingxian wooden pagoda is described.

Material and methods

Fusion of LiDAR point cloud and image data based on scale consistency

The core problem in the digital protection of architectural heritage is building a faithful geometric model of the original object. The existing data acquisition methods are mainly terrestrial 3D laser scanning and close-range photogrammetry; their data have complementary advantages and disadvantages, and either alone makes detailed 3D reconstruction difficult. First, therefore, the effective integration of LiDAR data and non-metric digital images should be solved. In this project, the principles of computer vision are introduced, and the SfM algorithm is used to realize the transformation from images to a 3D point cloud. Through image feature detection and matching, the 3D camera positions are obtained by the SfM algorithm; a precise sparse point cloud is then obtained by global optimization with the bundle adjustment method; finally, a dense point cloud is obtained by overall optimization of the internal and external parameters and densification of the point cloud. Registration between the image-derived point cloud and the terrestrial LiDAR point cloud is then completed through feature matching to achieve scale consistency, thus effectively fusing the LiDAR data and digital images. The overall workflow is shown in Fig. 2.

Fig. 2

Overall technical workflow for the fusion of LiDAR and image point cloud data based on scale consistency

Image matching

The SfM algorithm is an offline algorithm for 3D reconstruction from collections of unordered images. Before the core structure-from-motion computation, some preparatory work is needed to select suitable images.

Firstly, the scale-invariant feature transform (SIFT) detector extracts features from the images, and matching between them is performed. To speed up the matching, a k-dimensional tree (KD-tree) is built over the feature descriptors, and an Approximate Nearest Neighbor (ANN) search algorithm finds the matching relationship of feature points for each image pair (I, J). The matches are added to the candidate matching point set for subsequent processing. Because mismatches may remain among the candidates, the Random Sample Consensus (RANSAC) algorithm is used to estimate the fundamental matrix robustly, and the estimated matrix is used to filter the matching points, yielding better matches. By counting the number of feature matches between image pairs, the pair with the largest number of matches is selected as the initial image pair. The essential matrix between the initial image pair is then estimated, and the relative pose is solved by matrix decomposition. Next, 3D points are constructed by triangle intersection. Finally, bundle adjustment is performed to optimize the relative pose of the initial image pair and the obtained 3D points.
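As a concrete illustration of this matching pipeline, the following minimal Python sketch uses OpenCV (an assumption of this sketch, not the authors' stated toolchain) for SIFT detection, FLANN KD-tree ANN matching with a ratio test, and RANSAC filtering on the fundamental matrix; paths and parameters are illustrative.

```python
import cv2
import numpy as np

def match_pair(path_i, path_j, ratio=0.8):
    """Return RANSAC-filtered SIFT correspondences for an image pair (I, J)."""
    img_i = cv2.imread(path_i, cv2.IMREAD_GRAYSCALE)
    img_j = cv2.imread(path_j, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp_i, des_i = sift.detectAndCompute(img_i, None)
    kp_j, des_j = sift.detectAndCompute(img_j, None)

    # KD-tree based approximate nearest neighbour search (FLANN).
    flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
    knn = flann.knnMatch(des_i, des_j, k=2)

    # Ratio test keeps distinctive candidate matches.
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_i = np.float32([kp_i[m.queryIdx].pt for m in good])
    pts_j = np.float32([kp_j[m.trainIdx].pt for m in good])

    # RANSAC on the fundamental matrix removes remaining mismatches.
    F, mask = cv2.findFundamentalMat(pts_i, pts_j, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel() == 1
    return pts_i[inliers], pts_j[inliers], F
```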

3D point cloud reconstruction

  • Add a new image from the remaining photos, find its 2D and 3D corresponding points by matching feature points with the already registered images, and solve its projection matrix P. The pose of the new image is obtained by decomposing P, and new 3D points are reconstructed by triangle intersection with the registered images (see the sketch after this list). Finally, the initial image pair and the newly added image are optimized by bundle adjustment.

  • Repeat the above step until all photos are added to the reconstruction, yielding a sparse point cloud. A dense point cloud is then obtained by a dense matching algorithm.

  • Given the gaps in the LiDAR point cloud, we select control points from the image-derived dense point cloud and the LiDAR point cloud to complete coarse registration, followed by fine registration through feature matching. To compensate for the uneven density of the point clouds, coarse and fine registration are iterated automatically to obtain the fused point cloud data, achieving the fusion of image and laser point clouds. The specific method is shown in Fig. 3.
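As a hedged sketch of the incremental step above (not the authors' implementation), the snippet below registers a new image by PnP with RANSAC and triangulates new points against an already-posed image using OpenCV; the intrinsic matrix K and the 2D-3D correspondences are assumed to come from the matching stage.

```python
import cv2
import numpy as np

def register_new_image(pts3d, pts2d, K):
    """Pose of a new image from its 2D-3D correspondences (PnP + RANSAC)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float64), pts2d.astype(np.float64), K, None)
    R, _ = cv2.Rodrigues(rvec)           # rotation vector -> matrix
    return K @ np.hstack([R, tvec])      # 3x4 projection matrix P

def triangulate(P1, P2, pts1, pts2):
    """Triangle intersection of matched points from two posed images."""
    X_h = cv2.triangulatePoints(P1, P2, pts1.T.astype(np.float64),
                                pts2.T.astype(np.float64))  # homogeneous 4xN
    return (X_h[:3] / X_h[3]).T          # Nx3 Euclidean points
```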

Fig. 3

Point cloud generation from image data based on the SfM process

Global point cloud registration algorithm with multiple feature Constraints

The iterative global registration can be divided into three processes: data pre-processing, solving the initial parameters, and global adjustment. The specific technical route is shown in Fig. 4. The first step in processing the point cloud data is denoising, here with a bilateral filtering algorithm. Bilateral filtering is widely used for point cloud noise because it is simple, non-iterative, and local, and preserves edges well. Bilateral filtering can be defined as follows:

$$\widehat{{p}_{i}}={p}_{i}+\lambda {n}_{i}.$$
(1)

where \(p_i\) is a point in the point cloud to be processed, \(n_i\) is the normal vector at that point, and \(\lambda\) is the bilateral filtering factor, calculated as follows:

Fig. 4

The flowchart of point cloud registration algorithm

$$\lambda =\frac{\sum_{{p}_{j}\in {N}_{k}\left({p}_{i}\right)}{W}_{c}\left(\Vert {p}_{j}-{p}_{i}\Vert \right){W}_{s}\left(\left|\langle {n}_{j},{n}_{i}\rangle \right|-1\right)\langle {n}_{i},{p}_{j}-{p}_{i}\rangle }{\sum_{{p}_{j}\in {N}_{k}\left({p}_{i}\right)}{W}_{c}\left(\Vert {p}_{j}-{p}_{i}\Vert \right){W}_{s}\left(\left|\langle {n}_{j},{n}_{i}\rangle \right|-1\right)}.$$
(2)

Here \(W_{c}\) and \(W_{s}\) are the spatial-domain and frequency-domain weight functions of the bilateral filter, \(N_{k}(p_{i})\) is the k-neighborhood of \(p_{i}\), and \(\langle n_{i},p_{j}-p_{i}\rangle\) is the inner product of \(n_{i}\) and \(p_{j}-p_{i}\).
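A minimal NumPy/SciPy sketch of Eqs. (1)-(2) follows; the Gaussian forms of \(W_c\) and \(W_s\), the neighborhood size k, and both bandwidths are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def bilateral_filter(points, normals, k=16, sigma_c=0.05, sigma_s=0.3):
    """Move each point along its normal by the factor lambda of Eq. (2)."""
    tree = cKDTree(points)
    out = points.copy()
    for i, (p_i, n_i) in enumerate(zip(points, normals)):
        _, idx = tree.query(p_i, k=k + 1)
        idx = idx[1:]                              # drop the query point itself
        d = points[idx] - p_i                      # p_j - p_i
        w_c = np.exp(-np.sum(d**2, axis=1) / (2 * sigma_c**2))   # W_c
        w_s = np.exp(-(np.abs(normals[idx] @ n_i) - 1)**2
                     / (2 * sigma_s**2))                         # W_s
        w = w_c * w_s
        lam = np.sum(w * (d @ n_i)) / (np.sum(w) + 1e-12)   # Eq. (2)
        out[i] = p_i + lam * n_i                            # Eq. (1)
    return out
```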

Because registering many stations tends to accumulate error, the accuracy of station-by-station registration becomes increasingly lower, and the registration quality of the point cloud data governs the accuracy of the subsequent overall adjustment. Multi-feature-based global registration is therefore generally adopted for 3D scanning data of large, complex scenes. First, the available features in the point clouds are extracted for registration, and a local station-by-station coarse registration provides the initial parameters. Starting from the base station, neighboring feature points with the same name are searched outward; the point cloud of each station is registered to the base station through the Rodrigues matrix, and the base station is gradually expanded outward. The rotation matrix of each station and the coordinates of the same-name points are taken as the initial values for the overall adjustment. Error equations are listed for the observation series from the feature constraints' initial values, and the overall adjustment is carried out; the bundle adjustment model solves the spatial transformation parameters and the adjusted values of the unknown points. The error of each constraint is checked, and when it is below the specified threshold, the registration result is output. If the error is too large, the weight of each constraint is recalculated through the weight function, and the observation weights are revised iteratively until the accuracy requirement is met; the iteration then stops and the registered point cloud is output.

Local coarse registration uses the feature constraints between the base station and the registration station to perform station-by-station registration with the Rodrigues matrix. The Rodrigues approach builds the coordinate transformation model with three antisymmetric elements instead of Euler angles. The parameters are solved separately: the scale parameter first, then the rotation parameters, and finally the translation parameters. The antisymmetric matrix S, composed of the three independent parameters, constructs the Rodrigues matrix as follows:

$$R={\left(I-S\right)}^{-1}\left(I+S\right).$$
(3)

where I is the unit matrix, and S is the antisymmetric matrix composed of parameters a, b, and c.

$$S=\left[\begin{array}{ccc}0& -c& -b\\ c& 0& -a\\ b& a& 0\end{array}\right].$$
(4)

Feature constraints can be points, lines, or surfaces. In the experiment, we use the centers of the target papers as features and list the point error equations for the solution. By the principle of coordinate transformation, three pairs of non-collinear corresponding points in space suffice to solve for the spatial transformation parameters. Corresponding feature points \(X_{0} = \left( {x_{0} ,y_{0} ,z_{0} } \right)\) and \(X = \left( {x,y,z} \right)\) of two stations satisfy the following relationship.

$${X}_{0}-\left(\lambda RX+\Delta X\right)=0.$$
(5)

where \(\lambda\) is the scale parameter (the scale is constant in the point cloud transformation, i.e., \(\lambda = 1\)) and \(\Delta X\) is the translation offset.
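Eqs. (3)-(5) translate directly into code; the sketch below builds the antisymmetric matrix S from (a, b, c), forms R by the Cayley transform of Eq. (3), and applies the transformation of Eq. (5) with λ = 1.

```python
import numpy as np

def rodrigues_rotation(a, b, c):
    """Rotation matrix R = (I - S)^-1 (I + S) of Eq. (3)."""
    S = np.array([[0.0,  -c,  -b],
                  [  c, 0.0,  -a],
                  [  b,   a, 0.0]])          # antisymmetric matrix of Eq. (4)
    I = np.eye(3)
    return np.linalg.solve(I - S, I + S)

def transform(points, R, dX, lam=1.0):
    """Apply X0 = lam * R X + dX (Eq. (5)); lam = 1 for point clouds."""
    return lam * points @ R.T + dX
```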

A significant observation error in the feature constraints of a registration station leads to poor accuracy of the overall adjustment. A selective weight iteration method attenuates or eliminates the effect of such gross errors: after the observation errors are checked, observations exceeding the threshold are reweighted using the posterior-variance-based selective weight iteration function of Eq. (6).

$$P_{i,j}^{v+1}=\begin{cases}p_{i}^{v+1}=\dfrac{\hat{\sigma }_{0}^{2}}{\hat{\sigma }_{i}^{2}}, & T_{i,j}<F_{\alpha ,1,{r}_{i}}\\[2mm] \dfrac{\hat{\sigma }_{0}^{2}}{\hat{\sigma }_{i,j}^{2}}=\dfrac{\hat{\sigma }_{0}^{2}{r}_{i,j}}{V_{i,j}^{2}}, & T_{i,j}\ge F_{\alpha ,1,{r}_{i}}\end{cases}$$
(6)

where the test quantity is \(T_{i,j}=\hat{\sigma }_{i,j}^{2}/\hat{\sigma }_{i}^{2}\), and the critical value \(F_{\alpha ,1,{r}_{i}}\) is generally taken as 4.13, equivalent to a significance level α = 0.1% and a test power β = 80%.
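The reweighting of Eq. (6) can be sketched as follows; treating the constraints as flat NumPy arrays is a simplification for illustration, not the authors' program.

```python
import numpy as np

F_CRIT = 4.13  # critical value; significance alpha = 0.1%, power beta = 80%

def reweight(v, r, sigma0_sq, sigma_i_sq):
    """One pass of Eq. (6). v: residuals V_ij; r: redundancy numbers r_ij;
    sigma_i_sq: posterior variance of constraint group i."""
    sigma_ij_sq = v**2 / np.maximum(r, 1e-12)   # posterior variance per observation
    T = sigma_ij_sq / sigma_i_sq                # test quantity T_ij
    return np.where(T < F_CRIT,
                    sigma0_sq / sigma_i_sq,     # weight kept from the group
                    sigma0_sq * r / v**2)       # outlier down-weighted
```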

Multi-scale point cloud fusion

To better capture abstract data features for the fusion of the image point cloud and the LiDAR point cloud, we use three matching metrics to judge the quality of feature correspondences. The first feature distance is the Euclidean distance between feature vectors, as shown in Eq. (7).

$$\varsigma_{1}\left({p}_{i},{q}_{j}\right)=\Vert {v}_{i}^{P}-{v}_{j}^{Q}\Vert .$$
(7)

The cosine similarity between feature vectors is used as the second feature distance, as shown in Eq. (8).

$$\varsigma_{2}\left({p}_{i},{q}_{j}\right)=\frac{\langle {v}_{i}^{P},{v}_{j}^{Q}\rangle }{{\Vert {v}_{i}^{P}\Vert }_{2}\cdot {\Vert {v}_{j}^{Q}\Vert }_{2}}.$$
(8)

Finally, we use the Gaussian curvature ratio of K nearest neighborhood as the third feature distance, as shown in Eq. (9).

$$\varsigma_{3}\left({p}_{i},{q}_{j}\right)=\frac{{g}_{i}^{P}}{{g}_{j}^{Q}}.$$
(9)

In the above equations, \({v}_{i}^{P}\) and \({v}_{j}^{Q}\) are the feature vectors of \({p}_{i}\) and \({q}_{j}\), respectively; \({p}_{i}\) and \({q}_{j}\) are feature points of point clouds P and Q; and \({g}_{i}^{P}\) and \({g}_{j}^{Q}\) are the k-neighborhood Gaussian curvatures of \({p}_{i}\) and \({q}_{j}\), respectively. The three matching conditions defined by these feature parameters are: the Euclidean distance \(\varsigma_{1}\) between feature vectors should be minimal, the cosine similarity \(\varsigma_{2}\) should be maximal, and the Gaussian curvature ratio \(\varsigma_{3}\) between neighborhoods should be approximately equal to 1. The feature point pairs (\({p}_{i}\), \({q}_{j}\)) screened by these conditions are preliminarily taken as correspondences between P and Q, generating the set \({K}_{1}\) of feature matching point pairs.
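The three metrics of Eqs. (7)-(9) and the screening conditions can be written compactly; the thresholds below are illustrative assumptions.

```python
import numpy as np

def feature_distances(v_p, v_q, g_p, g_q):
    """Eqs. (7)-(9) for one candidate pair (p_i, q_j)."""
    zeta1 = np.linalg.norm(v_p - v_q)                                  # Eq. (7)
    zeta2 = v_p @ v_q / (np.linalg.norm(v_p) * np.linalg.norm(v_q))    # Eq. (8)
    zeta3 = g_p / g_q                                                  # Eq. (9)
    return zeta1, zeta2, zeta3

def is_candidate(v_p, v_q, g_p, g_q, d_max=0.2, cos_min=0.95, curv_tol=0.1):
    """Min zeta1, max zeta2, zeta3 ~ 1, approximated with fixed thresholds."""
    z1, z2, z3 = feature_distances(v_p, v_q, g_p, g_q)
    return z1 < d_max and z2 > cos_min and abs(z3 - 1.0) < curv_tol
```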

To improve the accuracy and computational efficiency of registration and to effectively eliminate matched pairs with similar features, a fine matching step with a Euclidean distance constraint between point pairs is carried out. In the set \({K}_{1}\), point pairs are tested with the distance constraint of Eq. (10).

$$\frac{\left|{\Vert {\mathrm{p}}_{i}-{\mathrm{p}}_{j}\Vert }_{2}-{\Vert {\mathrm{q}}_{i}-{\mathrm{q}}_{j}\Vert }_{2}\right|}{\left|{\Vert {\mathrm{p}}_{i}-{\mathrm{p}}_{j}\Vert }_{2}+{\Vert {\mathrm{q}}_{i}-{\mathrm{q}}_{j}\Vert }_{2}\right|}<\varepsilon .$$
(10)

Feature matching

Since the selected image point cloud and LiDAR point cloud features are not scale invariant, we use a geometric combination test to screen out remaining mismatches and ensure a high correct matching rate of the point cloud features.

First, three pairs of matched point cloud features, marked (\({h}_{1}\), \({j}_{1}\)), (\({h}_{2}\), \({j}_{2}\)), (\({h}_{3}\), \({j}_{3}\)), are randomly selected from the candidate matches. In the two point clouds P and Q, triangles \({T}_{P}\) and \({T}_{Q}\) are formed by \(\left\{{h}_{1}, {h}_{2},{h}_{3}\right\}\) and \(\left\{{j}_{1},{j}_{2},{j}_{3}\right\}\) respectively, with sides \(\left\{{l}_{1}^{h},{l}_{2}^{h},{l}_{3}^{h}\right\}\) and \(\left\{{l}_{1}^{j},{l}_{2}^{j},{l}_{3}^{j}\right\}\) respectively. The proportionality coefficient of corresponding side lengths is calculated as shown in Eq. (11)

$${\beta }_{i}=\frac{\Vert {l}_{i}^{h}\Vert }{\Vert {l}_{i}^{j}\Vert }.$$
(11)

If the side length relation satisfies Eq. (12), the point pairs are added to the matching pair set K, where ξ < 1 is the selected threshold.

$$\xi <\frac{{\beta }_{i}^{2}}{{\beta }_{j}{\beta }_{k}}<\frac{1}{\xi },\quad \forall \left\{i,j,k\right\}=\left\{1,2,3\right\}.$$
(12)
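A sketch of the side-ratio consistency test of Eqs. (11)-(12); the triangle vertices are three randomly drawn candidate matches, and ξ = 0.9 is an illustrative choice.

```python
import numpy as np
from itertools import permutations

def triangle_consistent(h_pts, j_pts, xi=0.9):
    """h_pts, j_pts: 3x3 arrays of triangle vertices in P and Q."""
    pairs = [(0, 1), (1, 2), (2, 0)]
    # beta_i = |l_i^h| / |l_i^j| for each corresponding side (Eq. (11))
    beta = np.array([np.linalg.norm(h_pts[a] - h_pts[b]) /
                     np.linalg.norm(j_pts[a] - j_pts[b]) for a, b in pairs])
    # Eq. (12): beta_i^2 / (beta_j * beta_k) must lie in (xi, 1/xi)
    for i, j, k in permutations(range(3)):
        ratio = beta[i]**2 / (beta[j] * beta[k])
        if not (xi < ratio < 1.0 / xi):
            return False
    return True
```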

Global objective function optimization

In the solving process, the components of the point cloud transformation are obtained from a global objective function. To eliminate the influence of point cloud mismatches caused by noise, a robust optimization function is set, as shown in Eq. (13).

$$\phi \left(s,R,t\right)=\sum_{i=1}^{n}\rho \left(sR{p}_{i}+t-{q}_{i}\right),\qquad \rho \left(x\right)=\frac{\mu {x}^{2}}{\mu +{x}^{2}}.$$
(13)

Here \(\rho \left(x\right)\) is a Geman-McClure function with a scaling coefficient, which resists noise better than the mean square error. The parameter \(\mu\) controls how strongly \(\rho \left(x\right)\) responds to the residual \(x\), so that gross mismatches are suppressed as outliers.
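For concreteness, the objective of Eq. (13) evaluates as below; minimizing it over (s, R, t) is the global optimization described in the text, which this sketch does not itself perform.

```python
import numpy as np

def geman_mcclure(x, mu):
    """Scaled Geman-McClure function rho(x) of Eq. (13)."""
    return mu * x**2 / (mu + x**2)

def objective(s, R, t, P, Q, mu=1.0):
    """phi(s, R, t) for matched Nx3 point sets P and Q (Eq. (13))."""
    residuals = np.linalg.norm(s * P @ R.T + t - Q, axis=1)
    return np.sum(geman_mcclure(residuals, mu))
```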

Experimental

Monitoring scheme design

The scanning scheme for the wooden pagoda has two parts, as shown in Fig. 5: terrestrial LiDAR scanning with total station control, and UAV close-range photogrammetry. Terrestrial LiDAR scanning uses the Faro scanner to obtain the internal point cloud of the wooden pagoda, while external data are acquired by UAV close-range photogrammetry. The multi-view images from the UAV are converted into point cloud data. The internal and external point clouds are then registered under multi-feature constraints and transformed into a common coordinate system. The point cloud model with absolute coordinates is denoised and segmented, and finally the fine 3D model of the wooden pagoda is obtained. This processing improves the local and global geometric accuracy, which is convenient for fine-grained 3D analysis of the pagoda's structure.

Fig. 5

Technical route of monitoring data acquisition

We went to the Yingxian wooden pagoda for field data collection in October 2018 and October 2020. The temperature and wind at those times are shown in Table 1. The two collection dates were seasonally close, and the temperature and wind conditions were similar enough for their effects to be ignored.

Table 1 Weather comparison between 2018 and 2020 collection times

Data acquisition in the Pagoda

The FaroXD130 scanner is used for close-range scanning inside the pagoda; it is lightweight and portable, making it suitable for acquiring data inside the wooden structure. Its performance parameters are shown in Table 2. Each floor of the Yingxian wooden pagoda is supported by 8 inner columns and 24 outer columns, and both the inner and outer sides were scanned. Figure 6 shows the Faro scanner in use for data collection; the scanning resolution was set to 600 × 1200 dpi to ensure that each column was scanned clearly. To facilitate data alignment, every two stations were connected by target paper (Fig. 7), and adjacent floors of the wooden pagoda were connected through feature points and feature surfaces on the stairs.

Table 2 Faro scanner parameters
Fig. 6

Faro scanner work site

Fig. 7

The distribution of the target paper on each floor, and the target point clouds scanned by the Faro scanner

Data acquisition outside the Pagoda

A high-precision control network is required to precisely estimate the deformation of the Yingxian wooden pagoda, so the absolute coordinate system for monitoring is constructed by control surveys with a total station. Outside the pagoda, the control points are arranged as illustrated in Fig. 8. Fixed monitoring control points have been buried in the courtyard of the Yingxian wooden pagoda, and target paper has been laid around the pagoda according to the distribution of these control points. Target paper is easy to paste and to clean up and does not harm the wooden pagoda. Figure 9 shows the Riegl scanner and total station used to obtain the coordinates of the control points.

Fig. 8

Schematic diagram of wooden pagoda external stigma monitoring station

Fig. 9

We used a Riegl scanner and total station to obtain the coordinates of the external control points of the wooden pagoda

The Phantom 4 RTK has strong resistance to magnetic interference and precise positioning capability, provides real-time centimeter-level positioning data, and significantly improves the absolute accuracy of image metadata. After the flight, users can directly compute high-precision positions through the DJI cloud PPK service. The positioning system supports connection to the D-RTK 2 high-precision GNSS mobile station and can connect to NTRIP through a 4G wireless network card or Wi-Fi hotspot. The Phantom 4 RTK is also equipped with the TimeSync system, which compensates for the offset between the optical center of the camera lens and the RTK antenna phase center, reducing the time error between the position information and the camera and providing more accurate positions for the images. The Phantom 4 RTK achieves a ground sampling distance (GSD) of 2.74 cm at 100 m flight altitude. Each camera lens is strictly calibrated, and distortion data are stored in the metadata of each photo, making it convenient for users to apply targeted adjustments in post-processing software. The parameters of the Phantom 4 RTK are shown in Table 3.

Table 3 DJI Phantom 4 RTK parameters
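The quoted 2.74 cm GSD at 100 m is consistent with the usual relation GSD = H · (pixel pitch) / f. The 8.8 mm focal length and 13.2 mm sensor width over 5472 pixels used below are commonly published Phantom 4 RTK camera figures, stated here as assumptions rather than values taken from this paper.

```python
# Reproduce the quoted GSD from the standard photogrammetric relation.
focal_mm = 8.8              # assumed Phantom 4 RTK focal length
sensor_width_mm = 13.2      # assumed 1-inch sensor width
image_width_px = 5472       # assumed image width in pixels
altitude_m = 100.0          # flight altitude from the text

pixel_pitch_mm = sensor_width_mm / image_width_px      # ~0.00241 mm
gsd_cm = altitude_m * 100 * pixel_pitch_mm / focal_mm  # metres -> centimetres
print(f"GSD at {altitude_m:.0f} m: {gsd_cm:.2f} cm")   # -> 2.74 cm
```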

We used the UAV with its high-precision camera to make photogrammetric measurements of the wooden pagoda, ensuring 80% forward overlap and 70% side overlap. The flight route planning is shown in Fig. 10A. The UAV was manually controlled to take pictures of the wooden pagoda at different heights, orthogonal to the pagoda. Figure 10B shows the aerial photogrammetry operation with the Phantom 4 acquiring images of the wooden pagoda.

Fig. 10

A Flight route planning; B The UAV is in operation

Results and discussion

Dense image matching

Many multi-view images of the wooden pagoda were obtained by UAV multi-view tilt photogrammetry, and the digital images with overlapping features were detected and matched. The aim is an excellent dense matching result from highly overlapping images, since image pairs with a large percentage of overlap facilitate dense matching. The specific steps include image feature point detection and screening, feature point matching, and epipolar geometric gross error elimination. Image calibration, forward intersection, and bundle adjustment were then carried out to eliminate gross errors. Finally, the image point cloud is obtained.

A total of 702 images were taken, and the dense point cloud of the pagoda was generated by dense image matching, as shown in Fig. 11; it visually demonstrates the degree of damage to the pagoda's appearance and the degree of structural deformation.

Fig. 11

A sparse point cloud is generated from the images by feature matching, and a dense point cloud is then generated by a dense matching algorithm

Figure 12A shows the program interface of our self-developed iterative overall registration algorithm. The algorithm keeps the observation corrections within a specified threshold range by iteratively reweighting and down-weighting the observation constraints until registration is complete.

Fig. 12

A The program interface of the iterable overall registration algorithm; B The overall internal point cloud of the Yingxian wooden pagoda after registration

A total of 90 stations were set up for the scan, and the data exceeded 100 GB, making registration very difficult. We used our self-developed iterative high-precision registration algorithm to achieve fast, high-precision registration in three steps: first the point clouds of floors 1 to 3 were registered, then those of floors 4 and 5; finally, these two parts were registered to each other using shared features between floors 3 and 4 to reduce error accumulation. The overall registration accuracy is about 6 mm. The overall internal point cloud of the Yingxian wooden pagoda after registration is shown in Fig. 12B.

Fusion of image and LiDAR point cloud

We complete coarse registration by selecting control points from the image-derived dense point cloud and the LiDAR point cloud, and further complete fine registration by feature matching. Our self-developed registration program realizes the fusion of the image point cloud and the LiDAR point cloud, as shown in Fig. 13.

Fig. 13

A complete point cloud model of Yingxian wooden pagoda is obtained by internal and external registration

We set target control points around the pagoda and established the overall monitoring coordinate system through the total station control survey. The overall point cloud of the wooden pagoda is then transformed into the absolute coordinate system through the control points, yielding the complete point cloud model.

Overall inclination analysis of wooden pagoda

We use a UAV and terrestrial LiDAR to acquire the wooden pagoda's external and internal data, respectively. The outer point cloud is obtained by dense matching of the UAV images; the inner point clouds are registered using the iterative overall point cloud registration algorithm. Finally, the target control points register the internal and external point clouds to the absolute coordinate system; the point clouds are registered by solving the spatial transformation parameters through the Rodrigues matrix, producing the integral point cloud model with absolute coordinates. The average error of the registered point cloud is below 5.6 mm, and the standard deviation is about 0.0051 m, as shown in Fig. 14.

Fig. 14

Alignment of point clouds from stations 1 to 58 using a holistic registration algorithm with an accuracy error of 5.6 mm

We sectioned the point cloud along the direction 67.5° east of north and compared it with the point cloud sectioned at the same angle in 2018. We found that, compared with the 2018 data, the second-floor inner column numbered M2N8 shifted by about 0.06 m, as shown in Fig. 15.

Fig. 15

Comparison of vertical sectional elevation views of point clouds in 2018 and 2020

Analysis of overall torsion of wooden pagoda

Through structural cutting, we can quickly and accurately obtain the twisting posture of the Yingxian wooden pagoda. In the point cloud model, we cut along the outermost edge of each floor, fit the outer edge, connect diagonal points to obtain the diagonals, and superimpose the data of each floor, as shown in Fig. 16. Comparison with the 2018 data shows that the diagonals at the same positions on each floor do not coincide, so the pagoda has torsional deformation. Relative to the bottom floor, the diagonal position and relative displacement of each floor differ; the offset from the fifth floor to the bottom floor increases by 0.1529 m, indicating that the torsion of the Yingxian wooden pagoda is complicated and still continuing. The overall torsion trend is clockwise from south to north on the west side and counterclockwise from south to north on the east side.

Fig. 16

a Illustration of the torsion of the wooden pagoda in 2018; b illustration of the torsion of the wooden pagoda in 2020

In the southwest of the second floor, the internal columns M2N1, M2N2, and M2N8 show significant displacement. The deformation of the M2N2 column is the largest, with an offset difference of 0.0708 m. The internal column M2N5 is relatively stable, with the smallest offset, a difference of 0.0038 m, as shown in Table 4. The columns in the southwest have deviated seriously toward the northeast, while the columns in the northeast are relatively stable.

Table 4 Comparison of column offsets in the second bright layer

Single column tilt analysis

The columns of the wooden pagoda are basically cylindrical, so detecting the circle centers at the head and foot of each column is essential to determining its deformation state. We captured the complete point cloud of each column and used the open-source point cloud processing software CloudCompare to slice the column point clouds, intercepting a very thin layer of points at the head and foot positions respectively, as shown in Fig. 17.

Fig. 17

Column point cloud segmentation; a very thin layer of point cloud is excised from the position of column head and column foot respectively to observe the deformation of the column from different viewpoints

The point clouds of column heads and feet obtained by slicing can be used to extract the geometric features of simple entities such as circles, using PCL's Random Sample Consensus (RANSAC) algorithm combined with least squares, as shown in Fig. 18.

Fig. 18

Fitting a circle in 2D coordinates projected onto the fitting plane
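The paper uses PCL's RANSAC circle model; as a stand-in sketch, an algebraic (Kåsa) least-squares circle fit on a 2D slice recovers the same column centre.

```python
import numpy as np

def fit_circle(xy):
    """Least-squares circle through Nx2 slice points; returns (centre, radius).
    Solves x^2 + y^2 = 2*cx*x + 2*cy*y + c, with c = r^2 - cx^2 - cy^2."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([cx, cy]), np.sqrt(c + cx**2 + cy**2)
```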

We measured the inclination angle of each column in the front view and the left view, fitting the column edge in the point cloud as a straight line through two points. The centerline of the column is obtained by connecting the centers of the column head and column foot, and a vertical line is drawn through the center of the column foot; the angle between this vertical line and the centerline is taken as the inclination angle of the column. Figure 19 shows the inclination angles of some columns on the second floor of the wooden pagoda.

Fig. 19

a, b Comparison between front view and left view of M2N3, M2N5 columns
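The angle computation described above reduces to a few lines; this sketch assumes the head and foot centres come from the circle fits, with the vertical taken as the z-axis.

```python
import numpy as np

def inclination_deg(center_foot, center_head):
    """Angle between the column centerline and the vertical through the foot."""
    axis = np.asarray(center_head, float) - np.asarray(center_foot, float)
    cos_t = axis[2] / np.linalg.norm(axis)   # dot with vertical (0, 0, 1)
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
```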

Comparing the offset angles of the columns in the front view of the second floor, we find that each column is offset to a different degree. The M2N8 column's offset angle in the front view was 1.39° larger than in 2018, while the M2N5 column is the most stable, with an offset angle difference of 0.03°, as shown in Table 5. The elaborate model shows the column offsets more intuitively, as shown in Fig. 20. In the southwest, the columns lean with the head inclining inward and the foot moving outward, and the octagon of the inner and outer troughs stretches from southwest to northeast in plan.

Table 5 Comparison of vertical offset angles of columns in the second bright layer
Fig. 20

Comparison of frontal migration angles of fine model of inner column in the second floor

The offset angles of the second-floor inner-ring columns in the left view are shown in Table 6. The deviation angle of the M2N6 column changed the most, with a difference of 0.57°; the smallest deviation angle difference, −0.09°, occurred at the M2N5 column, as shown in Fig. 21.

Table 6 Comparison of left-view offset angles of inner ring columns on the second floor
Fig. 21

Left oblique comparison of two-layer columns

The internal columns M2N3, M2N4, M2N5, M2N6, M2N7, and M2N8 show negative deviations under different views, since the upper and lower ends of the columns are not rigidly connected or hinged to the beams but simply rest on them. The elaborate point cloud model displays this intuitively. Under horizontal forces, the point of application of the resultant force constantly changes; unlike modern structures, such columns should be treated as three-dimensional blocks. Under the coordination of the upper gravity load with the rigid and flexible layers, the bending moment caused by gravity is larger than that caused by horizontal wind load when the column resists wind, so the column can self-reset and maintain stability. Figure 22 shows the offset characteristics of the M2N3 column; its column head offset is obvious, which accords with the above analysis.

Fig. 22

Analysis of M2N3 column migration characteristics

Conclusion

The research objective of this paper is to complete change monitoring of the Yingxian wooden pagoda between 2018 and 2020 using both image and LiDAR data, and to verify whether high-precision UAV photogrammetry and LiDAR can meet such fine monitoring needs. We made two visits to the Yingxian wooden pagoda, in 2018 and 2020, to collect data. In 2020 we acquired a total of 702 images and more than 100 GB of point cloud data from 90 stations, and performed data fusion.

In this paper, we introduced the principles of computer vision and used the SfM algorithm to convert images into three-dimensional point clouds. First, we obtained the camera positions through image feature detection and matching with the SfM algorithm; then we used bundle adjustment for global optimization to obtain an accurate sparse point cloud; finally, we obtained a dense point cloud by dense matching. Through feature matching, the registration of the image-derived dense point cloud and the terrestrial LiDAR point cloud was completed to achieve scale consistency. This method compensated for the gaps in the 3D LiDAR scan data and the missing edge and corner information.

The experimental results proved that the average error of the registered point clouds was less than 5.6 mm, with a standard deviation of about 5.1 mm, and the error of the Yingxian wooden pagoda model obtained by fusing terrestrial LiDAR scanning and UAV photogrammetry was about 6 mm, which satisfies the accuracy requirement. Preliminary analysis indicated that the torsion of the wooden pagoda as a whole is still continuing, twisting clockwise from south to north on the west side and counterclockwise from south to north on the east side. The tilting of the columns of the second bright layer, the most severely tilted part of the Yingxian wooden pagoda, also continues. The inner columns M2N2 and M2N8 of the second bright layer tilted the most; the deformation of the M2N2 column was the largest, with an offset difference of 0.0708 m, and the column frame as a whole inclined from southwest to northeast. In the front view of the second floor, the M2N8 column's offset angle was 1.39° larger than in 2018, while the M2N5 column was the most stable, with an offset angle difference of 0.03°. In the left view of the second floor, the offset angle difference of column M2N6 reached 0.57°, and the smallest deviation angle difference, −0.09°, occurred at the M2N5 column. The inner columns M2N3, M2N4, M2N5, M2N6, M2N7, and M2N8 showed different degrees of negative offset in the front and left views. The self-resetting ability provided by the upper gravity load in coordination with the rigid and flexible layers is verified by the analysis of the offset characteristics. The overall tilt and torsion of the wooden pagoda continue, and protective intervention measures should be carried out as soon as possible.

The fusion of LiDAR and non-metric digital image data provided a data foundation for elaborate modeling. The deformation of the Yingxian wooden pagoda was analyzed from the measured data, but the load-bearing capacity and stresses of each column and each floor were not analyzed; further research is needed.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

References

  1. Mi X, Meng X, Yang Q, Li T, Wang J. Analysis of the residual deformation of Yingxian wood Pagoda. Adv Civ Eng. 2020;2020:1–12. https://doi.org/10.1155/2020/2341375.

  2. Sha B, Xie L, Yong X, Li A. Hysteretic behavior of an ancient Chinese multi-layer timber substructure: a full-scale experimental test and analytical model. J Build Eng. 2021;43: 103163.

  3. Meng X, Li T, Yang Q. Lateral structural performance of column frame layer and dou-gong layer in a timber structure. KSCE J Civ Eng. 2019;23:666–77.

  4. Jiang Y, Li A, Xie L, Hou M, Qi Y, Liu H. Development and application of an intelligent modeling method for ancient wooden architecture. ISPRS Int J Geo-Inf. 2020;9:167.

  5. Ming G, Bingnan Y, Tengfei Z, Chaoyang C, Chen Z, Yunming L. Application of lidar technology in deformation analysis of wooden towers in Yingxian County. J Build Sci Eng. 2020;37(02):109–17.

  6. Guo M, Zhao J, Pan D, Sun M, Zhou Y, Yan B. Normal cloud model theory-based comprehensive fuzzy assessment of wooden pagoda safety. J Cult Herit. 2022;55:1–10.

  7. Wu Y, Song X, Li K. Compressive and racking performance of eccentrically aligned dou-gong connections. Eng Struct. 2018;175:743–52.

  8. Xue J, Xu D, Qi L. Experimental seismic response of a column-and-tie wooden structure. Adv Struct Eng. 2019;22:1909–22.

  9. Xie QF, Zhang LP, Wang L, Zhou WJ, Zhou TG. Lateral performance of traditional Chinese timber frames: experiments and analytical model. Eng Struct. 2019;186:446–55.

  10. Xue J, Ma L, Dong X, Zhang X, Zhang X. Investigation on the behaviors of Tou-Kung sets in historic timber structures. Adv Struct Eng. 2020;23:485–96.

  11. Nieto-Julián JE, Antón D, Moyano JJ. Implementation and management of structural deformations into historic building information models. Int J Archit Herit. 2020;14:1384–97.

  12. Soti R, Abdulrahman L, Barbosa AR, Wood RL, Olsen MJ. Case study: post-earthquake model updating of a heritage pagoda masonry temple using AEM and FEM. Eng Struct. 2020;206: 109950.

  13. Zhang GQ, Gu A, Wei L. Regularity in distribution, and control, of pests in the hall of mental cultivation, the Forbidden City, Beijing, China. Herit Sci. 2021;9:1–16.

  14. Dai J, Yang Y, Bai W. Shaking table test for the 1: 5 architectural model of Qin-an Palace with wooden frame structure in the Forbidden City. Int J Archit Herit. 2019;13:128–39.

  15. Meng T, Lu Y, Zhao G, Yang C, Ren J, Shi Y. A synthetic approach to weathering degree classification of stone relics case study of the Yungang Grottoes. Herit Sci. 2018;6:1–7.

  16. Hua W, Hou M, Qiao Y, Zhao X, Xu S, Li S. Similarity index based approach for identifying similar grotto statues to support virtual restoration. Remote Sens. 2021;13:1201.

  17. Muhadi NA, Abdullah AF, Bejo SK, Mahadi MR, Mijic A. The use of LiDAR-derived DEM in flood applications: a review. Remote Sens. 2020;12:2308.

  18. Chang L, Niu X, Liu T, Tang J, Qian C. GNSS/INS/LiDAR-SLAM integrated navigation system based on graph optimization. Remote Sens. 2019;11:1009.

  19. Moyano J, Nieto-Julián JE, Lenin LM, Bruno S. Operability of point cloud data in an architectural heritage information model. Int J Archit Herit. 2022;16:1588–607.

  20. Wang J, Xu K. Shape detection from raw lidar data with subspace modeling. IEEE Trans Vis Comput Graph. 2016;23:2137–50.

  21. Antón D, Medjdoub B, Shrahily R, Moyano J. Accuracy evaluation of the semi-automatic 3D modeling for historical building information models. Int J Archit Herit. 2018;12:790–805.

  22. Balsi M, Esposito S, Fallavollita P, Melis MG, Milanese M. Preliminary archeological site survey by UAV-borne lidar: a case study. Remote Sens. 2021;13:332.

  23. Murtiyoso A, Grussenmeyer P. Documentation of heritage buildings using close-range UAV images: dense matching issues, comparison and case studies. Photogramm Rec. 2017;32:206–29.

  24. Moyano J, Nieto-Julián JE, Bienvenido-Huertas D, Marín-García D. Validation of close-range photogrammetry for architectural and archaeological heritage: analysis of point density and 3D mesh geometry. Remote Sens. 2020;12:3571.

  25. Mohammadi H, Samadzadegan F. An object based framework for building change analysis using 2D and 3D information of high resolution satellite images. Adv Space Res. 2020;66:1386–404.

  26. Guo M, Sun M, Pan D, Huang M, Yan B, Zhou Y, et al. High-precision detection method for large and complex steel structures based on global registration algorithm and automatic point cloud generation. Measurement. 2021;172: 108765.

  27. Cheng L, Chen S, Liu X, Xu H, Wu Y, Li M, et al. Registration of laser scanning point clouds: a review. Sensors. 2018;18:1641.

  28. Liu S-F, Liang J, Gong C-Y, Pai W-Y. Registration method of point clouds using improved digital image correlation coefficient. Opt Eng. 2018;57: 113104.

  29. Bernasconi L, Chirici G, Marchetti M. Biomass estimation of xerophytic forests using visible aerial imagery: contrasting single-tree and area-based approaches. Remote Sens. 2017;9:334.

  30. Wu P, Li W, Song W. Fast, accurate normalized cross-correlation image matching. J Intell Fuzzy Syst. 2019;37:4431–6.

  31. Huang X, Qin R, Xiao C, Lu X. Super resolution of laser range data based on image-guided fusion and dense matching. ISPRS J Photogramm Remote Sens. 2018;144:105–18.

  32. Ye Y, Shen L, Hao M, Wang J, Xu Z. Robust optical-to-SAR image matching based on shape properties. IEEE Geosci Remote Sens Lett. 2017;14:564–8.

  33. Kim J, Lee S. Information measure-based tone mapping of outdoor LDR image for maximum scale-invariant feature transform extraction. Electron Lett. 2020;56:544–5.

  34. He Y, Hu Z, Wu K, Wang R. A novel method for density analysis of repaired point cloud with holes based on image data. Remote Sens. 2021;13:3417.

Acknowledgements

The authors would like to acknowledge the financial support of the National Science Foundation of China (Grant No. 41971350, 42171416), Beijing Advanced Innovation Centre for Future Urban Design Project (Grant No. UDC2019031724), Teacher Support Program for Pyramid Talent Training Project of Beijing University of Civil Engineering and Architecture (Grant No. JDJQ20200307). The opinions expressed in this study are those of the authors and do not necessarily reflect the views of the sponsor.

Funding

This research was supported by the National Science Foundation of China, Beijing Advanced Innovation Centre for Future Urban Design Project, Teacher Support Program for Pyramid Talent Training Project of Beijing University of Civil Engineering and Architecture.

Author information

Contributions

MG provided the concepts and methodology and carried out project management. MS wrote the first draft and worked with the data and software. DP: supervision. GW: supervision and data visualization. YZ and BY: data processing software. ZF: data curation, supervision. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Mengxi Sun or Deng Pan.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Guo, M., Sun, M., Pan, D. et al. High-precision deformation analysis of Yingxian wooden pagoda based on UAV image and terrestrial LiDAR point cloud. Herit Sci 11, 1 (2023). https://doi.org/10.1186/s40494-022-00833-z
