
A high-precision automatic extraction method for shedding diseases of painted cultural relics based on three-dimensional fine color model

Abstract

In recent years, with the development of 3D digitization of cultural relics, many cultural heritage institutions have accumulated large amounts of fine 3D data, especially for geometrically complex objects such as painted cultural relics. How to automatically extract surface damage information from the fine 3D color models of painted relics, while avoiding the loss of accuracy caused by the dimension reduction of conventional methods, is an urgent problem. In view of this issue, this paper proposes an automatic, high-precision extraction method for surface shedding diseases based on fine 3D data. First, this paper designs an integrated 2D-3D data conversion model based on the 3D engine OpenSceneGraph, which performs mutual conversion between 3D color model textures and 2D images. Second, this paper proposes a simple linear iterative clustering (SLIC) segmentation algorithm with an adaptive K value, which solves the problem of setting the superpixel number K and improves the accuracy of image segmentation. Finally, through the integrated 2D-3D model, the diseases are statistically analyzed and labeled on the 3D model. Experiments show that for painted objects with complex surfaces, the disease extraction method based on the fine 3D model proposed in this paper achieves higher geometric accuracy than the currently popular orthophoto-based extraction, and the disease investigation is more comprehensive. Compared with the current manual 3D extraction in commercial software, this method greatly improves extraction efficiency while maintaining extraction accuracy. The research method activates the large body of existing fine 3D data held by heritage conservation institutions and lifts conventional 2D data mining and analysis into 3D, which better serves the scientific use of the data in terms of accuracy and efficiency; it therefore has scientific research value, guiding value and practical significance.

Introduction

Painted cultural relics are relics with color texture on the surface. As an ancient and vibrant art form, they are widely distributed worldwide. These works, with their unique styles, colors and creation techniques, highlight local culture; they include European church murals, Indian Tanjore paintings, and the Chinese Mogao Grottoes murals and painted sculptures (examples are shown in Fig. 1). They are rich in variety and color and have significant historical and scientific research value [1].

Fig. 1 Colored relics

However, with the passage of time and changes in the environment, many painted relics have suffered different degrees of deterioration, such as shedding, cracking, and lifting. These common issues irreversibly damage painted relics and accelerate their destruction. Therefore, investigating diseases is a crucial part of relic preservation work. Disease investigation has developed from simple written and image records to digital records based on orthophotos. Although orthophoto-based disease investigation is relatively mature, for complex objects such as painted sculptures the conventional six orthographic projections (front, back, left, right, top, bottom) still cannot fully display all diseases, which affects the completeness and accuracy of the investigation. With the development of 3D modeling technology, 3D models are increasingly widely used in efforts to protect cultural relics, and many museums have established 3D databases of cultural relics [2]. If 3D data could be used directly for automatic disease investigation, creating 2D orthophotos would become unnecessary, and the most accurate and comprehensive disease information could be extracted automatically. Therefore, automatic disease investigation from 3D models of cultural relics holds significant research value.

Relic diseases fall into several main types: shedding, armor lifting, cracks, smoking, and cracking. Among these, shedding is the most common, and it is especially prominent in painted and mural cultural relics. Its forms include deep loss, spot loss, and pigment layer loss, and it is typically characterized by differences in local surface morphology and texture. The traditional investigation method involves only simple image and textual records of the diseases. In 2019, when the walls of the Forbidden City frequently exhibited diseases such as hollowing, cracks and large-area shedding, staff members took photographs for sampling and numbering and recorded the type and approximate location of the damage [3]; this method cannot accurately record the location and geometric properties of a disease. Applying image processing technology for disease investigation (such as manual delineation, transparent grids and CAD) was later adopted as a more suitable approach [4], but such vectorization is time-consuming, laborious, inefficient and limited in accuracy. Relic digitization is an effective way of permanently preserving relic information and an important means of global display; methods for detecting surface defects on relics have accordingly shifted from onsite entities to digital products [5].

Common digital images include orthographic images, hyperspectral images, and X-ray films. Hyperspectral imaging and X-rays can detect diseases that cannot be detected by the human eye. Kulkarni (2019) suggested that X-ray technology has important guiding significance for the study of historical paintings and outlined the application of digital processing technology in restoring and protecting cultural relics [6]. Hyperspectral imaging (HSI), which can identify raw material components and surface diseases, has been used in the field of cultural heritage (CH) for painting analysis [7,8,9]. Both traditional image segmentation algorithms and deep learning yield good results for hyperspectral images. However, hyperspectral technology is complex and expensive, and the instrument is not easy to carry. Moreover, it cannot be used for large cultural relics and is not universal.

Digital orthophoto maps (DOMs) have the advantages of high precision and rich information. DOMs can be used directly for image interpretation and measurement and have been widely used in the digitization of cultural relics; extracting diseases from DOMs has therefore become a common approach. To overcome the low efficiency of manual extraction, previous studies applied deep learning or image segmentation to DOMs. In terms of deep learning, Hu (2021) used the YOLOv4 algorithm to train a dataset to automatically and rapidly identify diseases in orthophotos of murals [10]. In 2022, Yuan proposed an improved U-net network to automatically identify cracks and shedding on the walls of the Forbidden City, with good results in both recognition and extraction [11]. However, deep learning relies heavily on datasets; for complex painted relics, the lack of sufficient disease samples leads to problems such as misidentification and missed detections.

Concerning image segmentation, this process divides an image into regions with different features and extracts regions of interest (ROIs) [12]. Traditional image segmentation methods include threshold-, clustering- and region-based segmentation. These methods operate on pixels and focus on grayscale changes, without considering the spatial relationships between pixels; they easily cause oversegmentation or undersegmentation and cannot accurately delineate the edges of the target area. Superpixel segmentation instead uses superpixels rather than pixels to represent features, reducing the complexity of image processing [13]. Common superpixel segmentation methods include Superpixels Extracted via Energy-Driven Sampling (SEEDS) [14], Linear Spectral Clustering (LSC) [15] and Simple Linear Iterative Clustering (SLIC) [16]. Wang (2018) proposed a superpixel-based method that automatically extracts relic disease information from orthographic images: SLIC segmentation and affinity propagation (AP) clustering are applied to orthophotos generated from 3D color models to automatically extract 2D geometric information of relic diseases [17]. On this basis, Hu (2022) improved the SLIC algorithm to extract better disease edges [5]. Sampietro-Vattuone (2021) used the DStretch plug-in to identify pictographs and rock weathering on a 3D color model [18], which is in essence still image segmentation. Although image segmentation can extract regions of interest accurately and quickly, the extraction result is always 2D information; for surfaces with large curvature, extraction accuracy is greatly reduced.

As digital photogrammetry, 3D laser scanning and 3D modeling technologies mature, the extraction of relic diseases is bound to shift from 2D to 3D. Terrestrial laser scanning (TLS) and aerial digital photogrammetry (ADP) techniques have been used to protect and monitor the Moorish Castle in Portugal [19]. Hou (2016) used multi-temporal 3D laser scanning point cloud data to detect gold foil damage on stone relics [20]. Guerra (2020) extracted and classified the diseases of architectural sites through point cloud data and disease semantic features [21]. However, point clouds can capture only diseases with large geometric differences or deformations; diseases reflected only in color differences are missed. In view of these problems, Xia (2018) proposed a true-3D detection method for diseases on relic surfaces [22]: through the OpenSceneGraph (OSG) 3D rendering engine, the disease area is selected manually, the disease type is defined, and the geometric information of the disease is counted on the 3D model. However, this method merely turns 2D manual vectorization into 3D manual vectorization, which has low efficiency and accuracy and is affected by human factors.

In summary, the main methods for investigating diseases of painted relics include traditional image vectorization, image segmentation, deep learning, and 3D model surface disease vectorization, as shown in Fig. 2 and Table 1.

Fig. 2 Investigation methods for common cultural relic diseases. a Image vectorization, b Image segmentation, c Deep learning, d 3D vectorization

Table 1 Summary of common cultural relic extraction methods

The 3D color model itself is composed of triangular patches and texture images; converting between 2D and 3D data is therefore the key to automatic 3D disease extraction. Bolkas (2018) mapped a 3D model to a grayscale image and decomposed the model surface via wavelet functions to obtain multiscale edge information [23]. Zhang (2019) proposed extracting a trace from a 2D image of a rock wall, linking each trace pixel with its corresponding point in the 3D point cloud, and completing the link between pixels and point cloud data through a coordinate system transformation [24]. These methods convert 2D data into 3D, providing a new idea for automatically extracting information from 3D surface models.

At present, there are two problems in extracting the shedding disease of painted cultural relics: extraction on orthophotos loses geometric accuracy, and vectorization on 3D models is inefficient. First, this paper proposes a 2D-3D model projection transformation method that converts between the 3D model and 2D pixels in any direction, enabling automatic disease extraction from 3D data. Second, this paper proposes an SLIC segmentation algorithm with an adaptive K value to further improve the accuracy of disease edge extraction. By combining the 2D-3D projection transformation with the adaptive-K SLIC, we realize the automatic extraction of 3D diseases and accurate, detailed statistics of their type, quantity and distribution, forming a 3D disease labeling model with which conservation institutions can detect diseases in time and take preventive and protective measures.

Methods

In this paper, a method for automatically extracting surface diseases from fine 3D color models of cultural relics is proposed. (1) Through 2D-3D projection transformation, a 2D orthoimage of the 3D disease area, together with its projection parameters, is obtained. (2) Through improved SLIC segmentation and K-means clustering, the disease area in the orthoimage is automatically and accurately extracted. (3) Through back-projection transformation, the 2D pixels of the disease area are projected back into the 3D model to obtain the 3D geometric information of the disease on the model surface. Figure 3 shows the technical roadmap. The following sections expand on these three steps in detail.

Fig. 3 Technology roadmap

Orthographic transformation

The 2D-3D data conversion method is a process of forward and backward projection transformation, implemented in OpenSceneGraph (OSG), a high-performance open-source 3D graphics engine used in virtual simulation, virtual reality, and scientific and engineering visualization. OSG is written in C++ on top of OpenGL and runs on Windows, UNIX/Linux, Mac OS X, IRIX, Solaris, HP-UX, AIX and FreeBSD [25].

From the point of view of coordinate transformation, generating digital orthophotos is a process of converting the world coordinates of a 3D model into pixel coordinates; this process is also known as MVPW matrix transformation. It includes model transformation (M), viewport transformation (V), projection transformation (P), and window transformation (W).

1. The role of the model transformation is to convert the model from object coordinates to world coordinates. However, because a cultural relic 3D color model differs from a general digital ground model, it needs only the correct scale and its own relative coordinates; therefore, no model transformation is needed after the 3D laser point cloud scale correction and angle adjustment performed in the early stage.

2. The role of the viewport transformation is to transform the model from world coordinates to camera coordinates. In the camera coordinate system, the positions of the model vertices are obtained by taking the camera as the reference, with its position as the origin [26]. In Fig. 4, Mw denotes the world coordinates of the artifact model, and Mv denotes its camera coordinates.

Fig. 4 Viewport transformation

The viewport transformation formula is given in Eq. (1):

$$\left[\begin{array}{c}x\\ y\\ z\\ 1\end{array}\right]=\left[\begin{array}{cc}R& t\\ {0}^{T}& 1\end{array}\right]\left[\begin{array}{c}{X}_{p}\\ {Y}_{p}\\ {Z}_{p}\\ 1\end{array}\right]={M}_{1}\left[\begin{array}{c}{X}_{p}\\ {Y}_{p}\\ {Z}_{p}\\ 1\end{array}\right]$$
(1)
3. Projection transformation is used to transform 3D coordinate information into 2D coordinates. There are two types of projection: perspective projection and orthographic projection.

As shown in Fig. 5, this paper uses orthographic projection, constructing a clipping cuboid to crop the scene (taking the bounding box of the model and setting the radius to r) and drawing it to the screen at the original proportions of the cultural relic model without deformation, as shown in Eq. (2):

Fig. 5 Projection transformations

$$z\left[\begin{array}{c}X\\ Y\\ 1\end{array}\right]=\left[\begin{array}{cccc}f& 0& 0& 0\\ 0& f& 0& 0\\ 0& 0& 1& 0\end{array}\right]\left[\begin{array}{c}x\\ y\\ z\\ 1\end{array}\right]=P\left[\begin{array}{c}x\\ y\\ z\\ 1\end{array}\right]$$
(2)
4. The role of the window transformation is to map the result of viewport cropping onto the screen: the size of the screen display area is obtained through the viewport, and the data in the frame buffer are transformed into pixels that can be displayed on the screen. Finally, a high-resolution orthophoto of the model is automatically obtained [26], as shown in Eq. (3):

$$\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=\left[\begin{array}{ccc}\frac{1}{dX}& 0& {u}_{0}\\ 0& \frac{1}{dY}& {v}_{0}\\ 0& 0& 1\end{array}\right]\left[\begin{array}{c}X\\ Y\\ 1\end{array}\right]$$
(3)

Summarizing the four steps yields Eq. (4):

$$z\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=\left[\begin{array}{ccc}\frac{1}{dX}& 0& {u}_{0}\\ 0& \frac{1}{dY}& {v}_{0}\\ 0& 0& 1\end{array}\right]\left[\begin{array}{cccc}f& 0& 0& 0\\ 0& f& 0& 0\\ 0& 0& 1& 0\end{array}\right]\left[\begin{array}{cc}R& t\\ {0}^{T}& 1\end{array}\right]\left[\begin{array}{c}{X}_{w}\\ {Y}_{w}\\ {Z}_{w}\\ 1\end{array}\right]$$
(4)

A high-resolution orthophoto is obtained via the MVPW matrix transformation. The transformation matrix is saved so that 2D pixel coordinates can later be returned to 3D spatial coordinates. The square region with side length r is transformed into an image of width W and height H, so the image resolution is as shown in Eq. (5):

$$DPI=\frac{W\times H}{{r}^{2}}$$
(5)

By setting the W and H sizes, an orthophoto of the diseased area can be obtained whose resolution is greater than or equal to that of the original model orthophoto. This ensures that there is no loss in accuracy.
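As a minimal sketch of this guarantee (our reading of Eq. (5), assuming a square window W = H; the function name is ours), the window size can be computed from r and a target pixel density:

```cpp
#include <cmath>

// Sketch of Eq. (5): pick a square window (W = H) large enough that the
// orthophoto's pixel density is at least that of the original texture.
// densityTarget is the desired value of (W * H) / r^2 from Eq. (5).
int windowSizeForDensity(double r, double densityTarget) {
    // With W = H, Eq. (5) gives W = r * sqrt(density); round up to be safe.
    return static_cast<int>(std::ceil(r * std::sqrt(densityTarget)));
}
```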

Image segmentation clustering

Image complexity

Image complexity is the degree of difficulty of discovering or extracting a target of interest in an image [27]. The gray-level co-occurrence matrix (GLCM), a method for describing texture by studying the spatial correlation properties of gray levels [28], can appropriately measure this complexity. Several parameters can be used to characterize the GLCM; those with typical metric characteristics are entropy (\({E}_{Ent}\)), energy (E), contrast (\({C}_{Con}\)) and correlation (\({C}_{Cor}\)). The number of superpixels K is inseparably related to image complexity.

The image complexity \({T}_{k}\) is defined in Eq. (6):

$${T}_{k}={E}_{Ent}+{C}_{Con}-E-{C}_{Cor}$$
(6)
$${\text{E}} = \sum\limits_{{{\text{i}} = 0}}^{{{\text{m}} - 1}} {\sum\limits_{{{\text{j}} = 0}}^{{{\text{n}} - 1}} {{\text{Q}}^{2} } } \left( {{\text{i}},{\text{j}},{\text{d}},\theta } \right) $$
(7)
$${C}_{Con}=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{n-1}{(i-j)}^{2}\,Q(i,j,d,\theta )$$
(8)
$${C}_{Cor}=\sum\limits_{i=0}^{m-1}\sum\limits_{j=0}^{n-1}\frac{i\times j\times p(i,j,d,\theta )-{u}_{1}\times {u}_{2}}{{d}_{1}^{2}{d}_{2}^{2}}$$
(9)

where \(Q(i,j,d,\theta )\) is the normalized gray-level co-occurrence matrix, m denotes the number of pixels in the x-axis direction of the image plane, n denotes the number of pixels in the y-axis direction, d denotes the pixel-pair distance, and θ denotes the direction in which the co-occurrence matrix is generated, which can be taken to be 0°, 45°, 90°, or 135°; the remaining terms are defined in Eqs. (10) and (11):

$$\left\{\begin{array}{l}{u}_{1}=\sum\limits_{i=0}^{m-1}i\sum\limits_{j=0}^{n-1}p(i,j,d,\theta )\\ {u}_{2}=\sum\limits_{j=0}^{n-1}j\sum\limits_{i=0}^{m-1}p(i,j,d,\theta )\end{array}\right.$$
(10)
$$\left\{\begin{array}{l}{d}_{1}^{2}=\sum\limits_{i=0}^{m-1}{(i-{u}_{1})}^{2}\sum\limits_{j=0}^{n-1}p(i,j,d,\theta )\\ {d}_{2}^{2}=\sum\limits_{j=0}^{n-1}{(j-{u}_{2})}^{2}\sum\limits_{i=0}^{m-1}p(i,j,d,\theta )\end{array}\right.$$
(11)

where u1 and u2 are the mean values, d1² and d2² denote the variances, and p(i, j, d, θ) denotes the element in the ith row and jth column of the gray-level co-occurrence matrix. Through many experiments, we establish the empirical formula for the adaptive K value, as shown in Eq. (12):

$$\text{K}=\lceil (\text{x}+\text{y})/{\text{T}}_{\text{k}}\rceil $$
(12)

where K denotes the number of presegments for superpixel segmentation via SLIC, x denotes the width of the image, y denotes the height of the image, \({T}_{k}\) denotes the complexity of the image itself, and \(\lceil \ \rceil \) denotes rounding up [27].
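The following C++/OpenCV sketch illustrates Eqs. (6)-(12) under our own assumptions (gray levels quantized to 32 bins, offset d = 1, θ = 0°, and the correlation denominator read as the product of the standard deviations); it is not the authors' code:

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>

// Image complexity T_k (Eq. (6)) from a normalized GLCM, then adaptive K (Eq. (12)).
// 'gray' must be a single-channel CV_8U image.
double imageComplexity(const cv::Mat& gray, int d = 1, int levels = 32) {
    cv::Mat q;
    gray.convertTo(q, CV_32S, (levels - 1) / 255.0);        // quantize to [0, levels)
    cv::Mat P = cv::Mat::zeros(levels, levels, CV_64F);     // co-occurrence counts
    for (int y = 0; y < q.rows; ++y)
        for (int x = 0; x + d < q.cols; ++x)                // theta = 0: horizontal pairs
            P.at<double>(q.at<int>(y, x), q.at<int>(y, x + d)) += 1.0;
    P /= cv::sum(P)[0];                                     // normalize to probabilities

    double ent = 0, eng = 0, con = 0, u1 = 0, u2 = 0;
    for (int i = 0; i < levels; ++i)
        for (int j = 0; j < levels; ++j) {
            double p = P.at<double>(i, j);
            eng += p * p;                                   // energy, Eq. (7)
            con += (i - j) * (i - j) * p;                   // contrast, Eq. (8)
            if (p > 0) ent -= p * std::log(p);              // entropy E_Ent
            u1 += i * p;  u2 += j * p;                      // means, Eq. (10)
        }
    double d1 = 0, d2 = 0, cor = 0;
    for (int i = 0; i < levels; ++i)
        for (int j = 0; j < levels; ++j) {
            double p = P.at<double>(i, j);
            d1 += (i - u1) * (i - u1) * p;                  // variances, Eq. (11)
            d2 += (j - u2) * (j - u2) * p;
            cor += i * j * p;
        }
    cor = (cor - u1 * u2) / std::sqrt(d1 * d2 + 1e-12);     // correlation, Eq. (9)
    return ent + con - eng - cor;                           // T_k, Eq. (6)
}

// Eq. (12): adaptive number of superpixels K = ceil((x + y) / T_k).
int adaptiveK(const cv::Mat& gray) {
    double Tk = imageComplexity(gray);
    return static_cast<int>(std::ceil((gray.cols + gray.rows) / std::max(Tk, 1e-6)));
}
```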

Improved SLIC algorithm

The SLIC segmentation algorithm, proposed in 2010, is simple and easy to implement: it maps a color image into 5-dimensional feature vectors in CIELAB color space and XY coordinates, constructs a distance metric on these vectors, and performs local clustering of the image pixels [16]. SLIC generates compact, approximately uniform superpixels and rates highly overall in computing speed and object contour preservation, with good segmentation results. However, the quality of the SLIC segmentation depends on the initial settings of the superpixel number K and the compactness m. In the current algorithm, the value of K must still be set by hand: a K that is too small easily leads to undersegmentation, whereas one that is too large easily leads to oversegmentation [27]. To address these problems, this paper proposes an SLIC segmentation algorithm that adapts the K value to the image complexity and applies it to the segmentation and edge extraction of the diseased region; this solves the problem of setting the initial K value and improves segmentation accuracy.

The specific steps in the algorithm are as follows:

(1) Initialize the clustering centers. The gray-level co-occurrence matrix is calculated to measure the image complexity, the adaptive number of superpixels K is set accordingly, and the seed points are distributed uniformly within the image. Each superpixel covers N/K pixels, where N is the number of image pixels, so the adjacent-seed step size is approximately \(S=\sqrt{N/K}\).

(2) Reselect the clustering center within the \(n\times n\) neighborhood of the current center (generally n = 3). The grayscale gradients of all pixels within the neighborhood are calculated, the gradient values are traversed, and the pixel with the smallest value becomes the adjusted clustering center.

(3) Using the adjusted clustering center as the search center and twice the center spacing as the neighborhood search range, determine the candidate clustering centers to which each pixel of the image may belong.

(4) Calculate the distance metric, which includes a color distance and a spatial distance. For each searched pixel, the distance to each seed point is calculated separately, as given in Eqs. (13)-(15):

    $${\text{d}}_{\text{c}}=\sqrt{{({\text{l}}_{\text{j}}-{\text{l}}_{\text{i}})}^{2}+{({\text{a}}_{\text{j}}-{\text{a}}_{\text{i}})}^{2}+{({\text{b}}_{\text{j}}-{\text{b}}_{\text{i}})}^{2}}$$
    (13)
    $${\text{d}}_{\text{s}}=\sqrt{{({\text{x}}_{\text{j}}-{\text{x}}_{\text{i}})}^{2}+{({\text{y}}_{\text{j}}-{\text{y}}_{\text{i}})}^{2}}$$
    (14)
    $$ {\text{D}}\prime = \sqrt {\left( {\frac{{{\text{d}}_{{\text{c}}} }}{{{\text{N}}_{{\text{c}}} }}} \right)^{2} + \left( {\frac{{{\text{d}}_{{\text{s}}} }}{{{\text{N}}_{{\text{s}}} }}} \right)^{2} } $$
    (15)

In Eqs. (13)-(15), dc represents the color distance, ds represents the spatial distance, and Ns represents the maximum spatial distance within a class (defined as Ns = S for each cluster). The maximum color distance Nc varies across both images and clusters; it is therefore replaced with a fixed compactness constant m, giving the final distance metric D′ in Eq. (16):

$${D}{\prime} = \sqrt{{\left(\frac{{d}_{c}}{m}\right)}^{2}+{\left(\frac{{d}_{s}}{S}\right)}^{2}}$$
(16)

Since each pixel is searched by multiple seed points, it receives a distance to each of the surrounding seed points; the seed point with the minimum distance is taken as the clustering center of that pixel.

(5) Optimize the number of iterations. The process iterates until the error converges; generally, 10 iterations give the best results.

(6) Enhance the interconnectivity between segmented superpixels. If the superpixels are too small, or regions belonging to the same class are split apart, too many discontinuous superpixels occur; such superpixels must be merged with their neighbors.

The purpose of superpixel segmentation is to reduce the complexity of image processing by using superpixels instead of pixels to represent image features [16]; the improved SLIC segmentation above therefore serves as preprocessing for the next step of image segmentation.
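A hedged sketch of the adaptive-K segmentation, using OpenCV's contrib SLIC implementation as a stand-in for the authors' own code, is given below. OpenCV parameterizes SLIC by the seed step S = sqrt(N/K) rather than by K directly; the compactness value m = 10 and the connectivity threshold are our assumptions:

```cpp
#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp>   // opencv_contrib module
#include <cmath>

// Adaptive-K SLIC: K comes from adaptiveK() above; returns a CV_32S label map.
cv::Mat segmentAdaptiveSLIC(const cv::Mat& bgr, int K, float m = 10.0f) {
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);              // SLIC clusters in CIELAB
    int N = bgr.rows * bgr.cols;
    int S = std::max(1, (int)std::lround(std::sqrt((double)N / K)));
    auto slic = cv::ximgproc::createSuperpixelSLIC(lab, cv::ximgproc::SLIC, S, m);
    slic->iterate(10);                                      // step (5): 10 iterations
    slic->enforceLabelConnectivity(25);                     // step (6): merge fragments
    cv::Mat labels;
    slic->getLabels(labels);                                // one superpixel id per pixel
    return labels;
}
```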

K-means clustering

K-means clustering is an unsupervised clustering analysis algorithm whose data partitioning is essentially based on Euclidean distance: dimensions with large means and variances decisively affect the clustering result [29]. It is therefore very important to normalize and unify the data before clustering, especially when processing the features of each dimension. In addition, outliers can strongly affect the calculation of the mean and shift the cluster centers, so such noise points are best filtered out before clustering. Superpixels generated via SLIC segmentation are compact and regular, like cells; their neighborhood features are easy to express, which reduces image noise, so K-means clustering of the segmentation result yields better clusters. The main reasons for choosing K-means are as follows: (1) the principle is relatively simple, implementation is easy, and convergence is fast; (2) the clustering effect is good; and (3) the outliers that would affect the clustering result have already been removed via SLIC segmentation.

The main steps of the K-means algorithm are as follows: (1) select the number of clusters k; (2) calculate the distance of each sample point to each cluster center and assign it to the nearest; (3) update the cluster centers according to the newly divided clusters; and (4) repeat the previous two steps until the cluster centers no longer move [29]. The flow of the K-means algorithm is shown in Fig. 6.

Fig. 6 K-means clustering

Adaptive-K SLIC segmentation and K-means clustering were applied to the images to obtain the pixel coordinates of the diseased areas in the orthophoto.
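A sketch of this clustering stage follows, assuming two clusters (disease vs. background) and the mean Lab color as the per-superpixel feature; both assumptions are ours, not the paper's stated settings:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Cluster superpixels by K-means on their mean Lab color, then map the cluster
// ids back to a per-pixel map. 'labels' is the CV_32S output of the SLIC step,
// 'nSp' the number of superpixels.
cv::Mat clusterSuperpixels(const cv::Mat& bgr, const cv::Mat& labels, int nSp, int k = 2) {
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);
    cv::Mat feat = cv::Mat::zeros(nSp, 3, CV_32F);           // mean Lab per superpixel
    std::vector<int> cnt(nSp, 0);
    for (int y = 0; y < lab.rows; ++y)
        for (int x = 0; x < lab.cols; ++x) {
            int id = labels.at<int>(y, x);
            cv::Vec3b c = lab.at<cv::Vec3b>(y, x);
            for (int ch = 0; ch < 3; ++ch) feat.at<float>(id, ch) += c[ch];
            ++cnt[id];
        }
    for (int i = 0; i < nSp; ++i)
        if (cnt[i] > 0) feat.row(i) *= 1.0 / cnt[i];          // average in place

    cv::Mat spCluster, centers;                               // one cluster id per superpixel
    cv::kmeans(feat, k, spCluster,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 100, 0.1),
               5, cv::KMEANS_PP_CENTERS, centers);

    cv::Mat out(labels.size(), CV_32S);                       // back to a per-pixel map
    for (int y = 0; y < out.rows; ++y)
        for (int x = 0; x < out.cols; ++x)
            out.at<int>(y, x) = spCluster.at<int>(labels.at<int>(y, x), 0);
    return out;
}
```

Clustering the superpixel means rather than raw pixels follows the paper's noise-suppression argument: the averaging inside each superpixel removes the outliers that would otherwise pull the K-means centers.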

Inverse projection transformation

How to convert the 2D pixel coordinates of the disease extracted after image segmentation and clustering into 3D spatial coordinates is an important part of the research in this paper. The orthophoto obtained via projection transformation saves the customized projection parameters, i.e., the MVPW projection matrix, which establishes a mutual conversion relationship between the 3D model and the 2D pixels. We use the inverse projection method to automatically obtain the 3D coordinates of the artifacts' diseases: the inverse of the MVPW matrix is applied to the 2D pixel coordinates to obtain the geometric information of the disease surface. First, the screen pixel coordinates (u, v) are converted into the projection plane coordinates (x, y); the conversion process is shown in Fig. 7.

Fig. 7 Inverse window transformation: a pixel coordinate system, b projected coordinate system

In Fig. 7, \({O}_{uv}\) is the origin of the pixel coordinate system; \({O}_{xy}\) is the origin of the projection plane coordinate system; w and h are the width and height of the orthographic image, respectively; and r is the minimum enclosing box radius of the model, determined when the cropping rectangle for orthographic projection is set up. Inverting the window transformation of the MVPW projection yields Eq. (17):

$$\left\{\begin{array}{c}x=\frac{(\text{u}-\frac{\text{w}}{2})\times 2\text{r}}{\text{w}}\\ y=-\frac{(\text{v}-\frac{\text{h}}{2})\times 2\text{r}}{\text{h}}\end{array}\right.$$
(17)

where \((\text{u},\text{v})\) is the pixel coordinate value and \((\text{x},\text{y})\) is the projection plane coordinate value.

To back-project the projected plane coordinates onto the 3D model, the projected coordinates are first lifted to 3D, i.e., (x, y) becomes (x, y, 0); the intersection point (X, Y, Z) with the 3D model is then obtained by casting along the z-axis, the projection normal. The intersection process is shown in Fig. 8a: the red points are the projection plane points, and the purple points are their intersections with the 3D model in the Z-axis direction.
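A minimal sketch of Eq. (17) plus the Z-direction intersection follows, using osgUtil's line-segment intersector as a stand-in for the authors' intersection code; the segment length 2r and the assumption that the node is already expressed in the camera (viewport) frame are ours:

```cpp
#include <osg/Node>
#include <osgUtil/LineSegmentIntersector>
#include <osgUtil/IntersectionVisitor>

// Map one pixel (u, v) of a W x H orthophoto back to a 3D point on the model.
// 'node' is assumed to be in the camera frame; r is the orthographic clip radius.
bool pixelToModel(osg::Node* node, double u, double v, double w, double h,
                  double r, osg::Vec3d& hit) {
    // Eq. (17): pixel (u, v) -> projection-plane coordinates (x, y).
    double x = (u - w / 2.0) * 2.0 * r / w;
    double y = -(v - h / 2.0) * 2.0 * r / h;

    // Cast a segment through (x, y, 0) along the Z axis (the projection normal).
    osg::ref_ptr<osgUtil::LineSegmentIntersector> picker =
        new osgUtil::LineSegmentIntersector(osg::Vec3d(x, y, r), osg::Vec3d(x, y, -r));
    osgUtil::IntersectionVisitor iv(picker.get());
    node->accept(iv);
    if (!picker->containsIntersections()) return false;
    hit = picker->getFirstIntersection().getWorldIntersectPoint();
    return true;
}
```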

Fig. 8 2D returned to 3D: a Inverse projection transformation, b Inverse viewport transformation

The returned 3D disease coordinates are then subjected to the inverse of the viewport transformation to obtain the real 3D coordinates of the model disease, as shown in Fig. 8b. Finally, the 3D point cloud of the disease is triangulated via greedy projection to obtain geometric information such as the area and perimeter of the 3D disease region.
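The last step can be sketched with PCL's greedy projection triangulation (the paper's toolchain includes PCL); the gp3 parameters below are assumptions to be tuned per model, normals are assumed to be precomputed, and the perimeter would be accumulated analogously over the boundary edges:

```cpp
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/gp3.h>
#include <pcl/PolygonMesh.h>
#include <pcl/conversions.h>

// Triangulate the back-projected disease points and sum the triangle areas.
double diseaseArea(pcl::PointCloud<pcl::PointNormal>::Ptr cloudWithNormals) {
    pcl::search::KdTree<pcl::PointNormal>::Ptr tree(
        new pcl::search::KdTree<pcl::PointNormal>);
    tree->setInputCloud(cloudWithNormals);

    pcl::GreedyProjectionTriangulation<pcl::PointNormal> gp3;
    gp3.setSearchRadius(2.0);            // max edge length (model units) - assumption
    gp3.setMu(2.5);                      // nearest-neighbor distance multiplier
    gp3.setMaximumNearestNeighbors(100);
    gp3.setInputCloud(cloudWithNormals);
    gp3.setSearchMethod(tree);
    pcl::PolygonMesh mesh;
    gp3.reconstruct(mesh);

    pcl::PointCloud<pcl::PointXYZ> pts;
    pcl::fromPCLPointCloud2(mesh.cloud, pts);
    double area = 0.0;
    for (const auto& tri : mesh.polygons) {                  // sum triangle areas
        Eigen::Vector3f a = pts[tri.vertices[0]].getVector3fMap();
        Eigen::Vector3f b = pts[tri.vertices[1]].getVector3fMap();
        Eigen::Vector3f c = pts[tri.vertices[2]].getVector3fMap();
        area += 0.5 * ((b - a).cross(c - a)).norm();
    }
    return area;
}
```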

Evaluation of image segmentation accuracy

Superpixel algorithm evaluation is an important part of superpixel research; the common metrics for measuring algorithms include edge precision and edge recall [13]. These metrics are all built on the basic confusion matrix, also called the error matrix, which is drawn with the numbers of categories predicted by the model on the horizontal axis and the numbers of real labels on the vertical axis [30].

In Table 2, TP indicates that both the true value and the detected value are diseased regions [30]. FN indicates that the true value is a diseased region and that the detected value is a nondiseased region; FP indicates that the true value is a nondiseased region but the detected value is a diseased region; and TN indicates that both the true value and the detected value are nondiseased regions.

Table 2 Confusion matrix

In this paper, the disease edges manually extracted in Photoshop are used as the true values, and edge precision and edge recall are selected as the evaluation criteria to verify the correctness of the proposed method. The formulas are shown in Eqs. (18) and (19):

$$\text{Precision}=\frac{\text{TP}}{\text{TP}+\text{FP}}$$
(18)
$$\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}}$$
(19)
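On binary masks, Eqs. (18) and (19) reduce to a few pixel counts; a minimal OpenCV sketch follows (the mask convention of 255 = disease is our assumption):

```cpp
#include <opencv2/opencv.hpp>

// Edge precision and recall from single-channel CV_8U masks (255 = disease):
// 'truth' is the Photoshop-traced ground truth, 'pred' the extracted mask.
void edgeScores(const cv::Mat& truth, const cv::Mat& pred,
                double& precision, double& recall) {
    double TP = cv::countNonZero(truth & pred);   // disease in both
    double FP = cv::countNonZero(~truth & pred);  // predicted but not true
    double FN = cv::countNonZero(truth & ~pred);  // true but missed
    precision = TP / (TP + FP);                   // Eq. (18)
    recall    = TP / (TP + FN);                   // Eq. (19)
}
```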

Experimental results and analysis

Experimental data and environment

Experimental data

The experimental object of this paper is a replica of a Chinese Song Dynasty painted Guanyin wood carving with a height of approximately 0.5 m and a width of approximately 0.3 m, as shown in Fig. 9. It has many shedding and cracking diseases over the whole body, among which shedding is the most typical; there is obvious pigment layer shedding on the arm, back and base. In this experiment, we take the shedding disease of the painted Guanyin sculpture as an example to complete the automatic 3D disease extraction experiment.

Fig. 9 Chinese Song Dynasty painted Guanyin wood carving

Experimental environment

In this paper, the research method is implemented in the Visual Studio 2019 environment by writing code against OSG, the Point Cloud Library (PCL) and the Open Source Computer Vision Library (OpenCV) to realize forward and inverse projection between 2D and 3D data, the improved SLIC algorithm, and visualization of the experimental results.

Automatic extraction of 3D disease

3D to 2D

In the first step, a Canon EOS 5D Mark III camera was used to collect images of the painted sculpture at a resolution of 240 dpi. To obtain a fine, high-quality 3D model, a sufficient number of overlapping images must be captured from all directions. After image acquisition is completed, 3D reconstruction software is used [31] to generate the 3D model; the reconstruction software used in this experiment is ContextCapture. The size of the model is then adjusted, and the 3D digital model is finally obtained in OBJ format. OBJ files mainly support polygonal models; the complete color model also contains the MTL material file and the image texture files.

The second step is to manually select the disease areas on the 3D color model. In this experiment, three obvious and typical shedding diseases, located on the arm, back and base, were selected and named A, B and C; their locations are shown in Fig. 10.

Fig. 10 The approximate location and cutting part of the defect. a Sample 1, b Sample 2, c Sample 3

The third step is to read, display, rotate, pan and zoom the model by calling OSG open-source functions through VS2019. To realize the MVPW matrix transformation, this experiment uses the camera function in OSG to implement render-to-texture (RTT), which renders the disease surface to a texture from its normal direction. The key parameters to set in this step are the projection matrix, the observation matrix and the viewport matrix. The projection matrix contains the top, bottom, left, right, near and far sizes of the orthographic projection cropping box; all six parameters are set to r in the experiment (r is the minimum enclosing sphere radius of the model). The observation matrix contains the viewpoint position, projection center and up direction: the viewpoint is set at (0, 0, 1), i.e., on the Z-axis of the projection; the projection center is the centroid of the model vertices; and the up direction is set to the Y-axis. The viewport matrix contains the window origin and window size: the origin is set to (0, 0) at the lower-left corner of the screen, and the window size is set to (1080, 1080). These three matrices are the essential parameters for later returning pixel coordinates to 3D coordinates. Figure 11 shows the projection from the 3D model to the 2D plane.
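A hedged OSG sketch of this RTT setup follows. It mirrors the parameters described above (orthographic clip box of radius r, viewpoint on the +Z axis, Y-up, a 1080 × 1080 viewport); the near/far split of 0 to 2r is our adjustment of the paper's "all six parameters set to r", and attaching an osg::Image for readback is one common OSG idiom:

```cpp
#include <osg/Camera>
#include <osg/Image>

// Build a pre-render camera that draws 'model' into 'out' as an orthophoto.
osg::ref_ptr<osg::Camera> makeOrthoRTT(osg::Node* model, double r,
                                       osg::Image* out, int W = 1080, int H = 1080) {
    osg::Vec3d c(model->getBound().center());                 // projection center
    osg::ref_ptr<osg::Camera> cam = new osg::Camera;
    cam->setRenderTargetImplementation(osg::Camera::FRAME_BUFFER_OBJECT);
    cam->setRenderOrder(osg::Camera::PRE_RENDER);
    cam->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
    cam->setProjectionMatrixAsOrtho(-r, r, -r, r, 0.0, 2.0 * r); // clip box, radius r
    cam->setViewMatrixAsLookAt(c + osg::Vec3d(0, 0, r),          // viewpoint on +Z
                               c, osg::Vec3d(0, 1, 0));          // up = +Y
    cam->setViewport(0, 0, W, H);                                // window transform
    out->allocateImage(W, H, 1, GL_RGBA, GL_UNSIGNED_BYTE);
    cam->attach(osg::Camera::COLOR_BUFFER, out);                 // read pixels back
    cam->addChild(model);
    return cam;
}
```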

Fig. 11 3D model to orthographic images

The r values of the three diseased regions were 9.4680 mm, 8.9910 mm and 9.5120 mm; at the original image resolution (150 dpi), 1.0 mm corresponds to at least 5.9055 pixels.

After the above three steps, the resolution of the final orthophoto is not lower than that of the original image (as shown in Fig. 12).

Fig. 12 Orthographic image of the disease area. a Sample 1, b Sample 2, c Sample 3

Disease extraction

After the orthophoto is obtained, the gray-level co-occurrence matrix of the image is first calculated to obtain the adaptive superpixel number K for each diseased area; the calculated values for the orthophotos of the three diseased areas are Ka = 672, Kb = 745 and Kc = 504. SLIC segmentation is then performed with the calculated superpixel number as the initial value. Finally, the results are subjected to K-means clustering to distinguish the diseased area from the nondiseased area, and an accurate disease edge is obtained; the results are shown in Fig. 13.
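Putting the earlier sketches together, a hypothetical call chain for one disease orthophoto could look like this (the file name and helper names are ours, not the authors'):

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>

int main() {
    cv::Mat ortho = cv::imread("sampleA_ortho.png");        // hypothetical file name
    cv::Mat gray;
    cv::cvtColor(ortho, gray, cv::COLOR_BGR2GRAY);
    int K = adaptiveK(gray);                                // e.g. Ka = 672 for sample A
    cv::Mat labels = segmentAdaptiveSLIC(ortho, K);
    int nSp = 1 + *std::max_element(labels.begin<int>(), labels.end<int>());
    cv::Mat diseaseMap = clusterSuperpixels(ortho, labels, nSp);
    return 0;
}
```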

Fig. 13 Disease extraction. a Sample 1, b Sample 2, c Sample 3

2D to 3D

After the above steps, the diseased surface regions are separated from the nondiseased regions, and the pixel coordinates of the diseased regions are obtained. The pixel coordinates (u, v) are first transformed to (x, y, 0) to obtain the XOY plane coordinates of the diseased region; intersection along the Z direction then yields the 3D disease coordinates. The conversion process is shown in Fig. 14, and the conversion result in Fig. 15.

Fig. 14 2D pixel to 3D point cloud

Fig. 15 Disease 3D point cloud. a Sample 1, b Sample 2, c Sample 3

Finally, the automatically extracted 3D disease coordinates are subjected to the inverse viewport transformation (the process of Fig. 8b); the disease point cloud is then meshed into triangular patches, from which the area and perimeter of the disease region are obtained.

Comparison experiment

To verify the correctness of the research method, this paper conducts comparative experiments from multiple perspectives. First, from the perspective of the image segmentation algorithm, the extraction accuracies of different superpixel segmentation algorithms and different superpixel numbers k are compared. Second, several common cultural relic disease extraction methods are compared to verify the superiority of the proposed method. Finally, the difference between 2D and 3D extraction at different degrees of surface bending is compared using custom 3D data.

Different superpixel segmentation algorithms

To verify the superiority of SLIC segmentation, we compare it with common superpixel segmentation algorithms, including the SEEDS, LSC and SLIC0 algorithms, using the disease edges manually extracted in Photoshop as the true values. All of the superpixel segmentation methods use the adaptive K value calculated from the image complexity as the number of superpixels to ensure the comparability of the experiments, as shown in Fig. 16. The resulting edge recall and precision are shown in Fig. 17.

Fig. 16 Different superpixel segmentation algorithms

Fig. 17 Accuracies of different superpixel segmentation algorithms. a recall, b precision

Analysis of the above figures reveals that although the SLIC0 algorithm divides the superpixels compactly and regularly, the edges do not fit well enough, resulting in the worst segmentation accuracy. The SEEDS algorithm produces uneven superpixel blocks, and its segmentation accuracy is much lower than that of the method used in this paper. Only when its number of superpixels is increased to a certain level does the LSC algorithm reach accuracy comparable to that of SLIC, and its overall accuracy remains lower. The SLIC segmentation method works best for the complex boundaries and small areas of painted shedding disease.

SLIC for different values of k

To verify the superiority of the adaptive-k SLIC method, we fix the compactness m and take k = 180, k = 360, k = 720 and k = 1300 for SLIC segmentation, as shown in Fig. 18.

Fig. 18 SLIC for different values of k

The above figure shows that undersegmentation occurs when the k value is too small, and the edges of some disease areas are not fitted closely enough; when the k value is too large, oversegmentation is likely, and the disease edges appear jagged. Segmentation with the k value calculated from the gray-level co-occurrence matrix avoids both problems, and the disease edges are better fitted and smoother.

The edge recall is calculated for different values of k, as shown in Fig. 19. The experimental results show that the segmentation accuracy is highest for the adaptive k value proposed in this paper.

Fig. 19 Comparison of SLIC segmentation accuracy for different k values

Comparison of different disease extraction methods

To reflect the advantages of the proposed method in terms of extraction accuracy and efficiency, we chose the current common disease extraction methods for comparative experiments: CAD vectorization, image segmentation, 3D vectorization and 3D automatic extraction. Deep learning is not included, because its edge accuracy is evaluated against image segmentation results as the true value, and its more important evaluation criterion is recognition accuracy rather than edge recall. Image segmentation accuracy can be evaluated via the edge recall rate, and the degree of overlap of 3D objects can be evaluated via a point cloud registration score; however, there is no standard for evaluating the relationship between 2D images and 3D models. Following the evaluation system of Reference [5], the area, perimeter and time of disease extraction were therefore recorded, and the differences among the extraction results of the different methods were compared. Regarding the selection of true values, 2D image segmentation commonly uses manual Photoshop extraction as the truth, and 3D point cloud segmentation commonly uses manually segmented point clouds; it is therefore reasonable to use the 3D vectorization method [22] as the true value for disease extraction on the 3D color model. Finally, extraction efficiency is measured by extraction time. The results are shown in Fig. 20.

Fig. 20 Comparison of different methods

In the above picture, the 2D and 3D vectorizations are based on the disease legends of ancient architectural painting, and the edge points of the shedding diseases were selected manually on the experimental objects. The image segmentation regions were selected directly with the Photoshop magic wand tool and then colored. 3D vectorization manually selects points to draw and mark disease legends on a 3D vectorization platform.

Finally, the extraction result of our method is a 3D disease point cloud. Because the point cloud is colored by height, it shows colors from red to blue, indicating that the extraction result lies in 3D space. From the above picture, it is not difficult to see that both the 2D and 3D vectorization boundaries are less smooth and consistent than the extraction result of the proposed method.

In Table 3, bold characters represent the better experimental results of each type: for area and perimeter, the closer to the 3D vectorization values, the higher the accuracy; for time, the shorter, the higher the efficiency. Table 3 and Fig. 21 show that, in terms of the extracted area and perimeter, 3D vectorization is close to the proposed method, whereas 2D vectorization is close to image segmentation. The reason is obvious: the 2D extraction result is a plane, whereas the 3D result is a curved surface, and the surface area and perimeter are greater than those of their planar projections. This also proves that, compared with traditional 2D extraction, the proposed 3D automatic extraction gains a dimension, and the extraction accuracy improves accordingly. In terms of extraction time, vectorization, whether 2D or 3D, is time-consuming and laborious; in contrast, the image segmentation algorithm and the proposed 3D automatic extraction take no more than 5 s per disease and thus have much higher efficiency. Overall, 3D automatic disease extraction combines the efficiency of 2D image segmentation with the accuracy of 3D vector extraction and fills the gap in automatic extraction methods for 3D surface model data.

Table 3 Statistics of the disease extraction results
Fig. 21 Histograms of the extraction results of the different methods

Disease extraction for different curvatures

In the experiment discussed in Section “Comparison of different disease extraction methods”, we found that the difference between the areas and perimeters extracted by the 2D and 3D methods was not large. This is because the number of diseases on the Guanyin sculpture is relatively small, and the selected disease areas also tend to be flat. Therefore, this paper customizes a textured plane: by setting the bending degree, disease surfaces with different curvatures are obtained, and both the image segmentation method and the 3D automatic extraction method are applied. In the experiment, a square surface with a plane size of 2 × 2 cm was bent by 0°, 60°, 120° and 180°; at a bending angle of 0° the surface is a plane, and at 180° it is semicylindrical. The surface area of the model is held constant; that is, the theoretical 3D area and perimeter of the four custom models are equal. The customized experimental object is shown in Fig. 22.
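To see why the gap must grow with curvature, consider the 180° case as a worked example (our addition): a square sheet of side L bent into a semicylinder keeps its true surface area L², but its orthographic projection is a rectangle of width 2R, where R is the semicylinder radius, so

$$R=\frac{L}{\pi },\qquad \frac{{A}_{2D}}{{A}_{3D}}=\frac{2R\times L}{{L}^{2}}=\frac{2}{\pi }\approx 0.64,$$

i.e., a 2D orthophoto underestimates the true area of the fully bent surface by roughly 36%.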

Fig. 22 Bending of the disease surface at different angles

The disease extraction results are shown in Table 4.

Table 4 The 2D and 3D extraction results for surfaces with different curvatures

As the above chart data show (Fig. 23), the larger the curvature of the 3D surface, the greater the difference in surface geometric information after projection onto the 2D image. When the 3D surface is completely flat, the 3D model is effectively a 2.5D plane; whether the disease is extracted from 2D or 3D data, the area and perimeter are then equal. As the curvature gradually increases, the difference rate between 2D and 3D extraction also increases. In real cultural heritage, a 2.5D plane is almost nonexistent outside calligraphy and painting works, which indicates that extracting diseases from 3D data is very important.

Fig. 23 The perimeter difference in the disease extraction area at different bending angles

3D labeling and statistics of diseases

We encapsulate the proposed method into an executable program in C++ and embed it, via Qt, into the 3D detection system for cultural relic surface diseases of Reference [22] to realize automatic 3D extraction and statistical drawing of cultural relic diseases. First, the 3D model of the relic is imported, and the corresponding disease legend is selected. Then the approximate location of the disease area is selected with the mouse, and the extraction is completed by pressing the Enter key, as shown in Fig. 24.

Fig. 24 3D detection system for cultural relic surface diseases

At this point, the software platform can quickly and accurately extract the 3D diseases of cultural relics and produce the corresponding legend drawing and geometric statistics for the disease area. This makes it easier for conservation workers to understand and manage cultural relic diseases and greatly assists conservation work.

Discussion

This paper uses 2D-3D data conversion and adaptive-K SLIC segmentation to automatically extract accurate shedding disease edges from the Guanyin painted sculpture model and to obtain important geometric information, including area and perimeter. Compared with relic disease extraction methods based on 2D images, such as those in References [4, 5], the method in this paper operates on 3D data, and the extraction result is a 3D disease, which solves the loss of geometric accuracy and the incomplete investigation caused by dimension reduction. Compared with the 3D vectorization of application software, our method is implemented in code and integrated into a self-developed experimental platform, achieving nearly fully automatic extraction; this not only avoids human error but also greatly improves efficiency. Moreover, this paper uses the adaptive-K SLIC algorithm, which improves extraction accuracy over SEEDS, LSC and SLIC0 and solves the problem of setting the initial value K.

It is undeniable that the method in this paper still has room for improvement. First, the approximate position of the disease area must still be selected manually. Many scholars have verified through experiments that deep learning can not only mark and classify diseases in images but also accurately extract disease edges; however, deep learning adds complexity and requires large amounts of training data. The research objects of this paper are painted sculptures, for which good data samples are lacking; we therefore chose a traditional image segmentation algorithm on top of the 2D-3D conversion model to extract diseases. Extracting diseases through deep learning combined with the 2D-3D conversion model is a new idea and the focus of our next research. Second, there are many alternatives to the image segmentation algorithm used in this paper. The K-means algorithm has the advantages of noise robustness and fast convergence; for experimental objects whose disease edges differ strongly from the local background, K-means meets the clustering segmentation requirements, and the preceding superpixel segmentation removes image noise, compensating for K-means' shortcomings. For some simple diseases, even adaptive binary segmentation algorithms can be used; the appropriate segmentation algorithm can be selected according to the characteristics of the disease area itself. In addition, our method is not applicable to all diseases: smoking and mildew, for example, have no obvious disease edges and a discrete distribution, so the proposed method cannot extract and count them. Finally, for shedding diseases with large areas and large curvature changes (such as those wrapping around painted arms), the screen projection limits the method to what is visible in one view, so such diseases cannot be identified completely at one time; a solution to this kind of problem is being studied.

Although we selected only the typical shedding diseases of painted sculptures for the experiments, the proposed method is also applicable to diseases with obvious color differences and closed edges, such as cracks and mud spots. Beyond cultural relic diseases, various kinds of surface information on 3D models of cultural heritage (such as portraits, texts and patterns) can be lifted from 2D extraction to 3D extraction. Other types of 3D surface regions of interest, such as wildfire burn areas, mountain vegetation statistics, and medical skin surface extraction, may also draw on the idea of extracting 3D surface information through 2D-3D data conversion, as used in this paper.

Conclusion

Digital images are widely used in cultural heritage, but images are always 2D data; for complex relics such as curved murals, church domes, ancient architectural paintings, ancient sites and painted sculptures, geometric accuracy is lost through the reduction in dimension. Moreover, an increasing number of cultural relics have complete 3D color models, yet there has been no corresponding automatic extraction method for 3D surface information. In view of these problems, this paper proposes a method for automatically extracting 3D diseases on cultural relics. First, a 2D-3D data conversion model is designed; it realizes mutual conversion between the 3D model and 2D images and transfers the automatic extraction problem to the 2D image, thereby avoiding the accuracy loss of purely 2D extraction. Second, adaptive-K SLIC segmentation is proposed to solve the superpixel K-value setting problem and improve image segmentation accuracy. Finally, through the integrated 2D-3D model, the diseases of the 3D model are statistically analyzed and marked on the software platform. Comparisons with several common superpixel segmentation methods and with SLIC under different superpixel numbers verify the superiority of adaptive-K SLIC, and comparisons with several common disease extraction methods show that the proposed method has advantages in both accuracy and efficiency. Marking and counting 3D diseases on the software platform can provide an important basis for the restoration of cultural relics, and the platform can also be used for relic health assessment and digital virtual restoration. Our research method points a direction for the development of cultural heritage surface information extraction from 2D to 3D and thus has scientific value and practical significance.

Availability of data and materials

All the data generated or analyzed during this study are included in this published article.

References

1. Wang Y, Wu X. Current progress on murals: distribution, conservation and utilization. Herit Sci. 2022;11(1):61.
2. Bent GR, Pfaff D, Brooks M, Radpour R, Delaney J. A practical workflow for the 3D reconstruction of complex historic sites and their decorative interiors: Florence as It Was and the church of Orsanmichele. Herit Sci. 2022;10(1):118.
3. Gao T. Survey on the deterioration of the walls of historic architecture in the palace museum. J Gugong Stud. 2019;01:511–22.
4. Fang MZ, Wang YM, Hou ML. Disease investigation of mural paintings in collections based on ArcGIS Engine. J Beijing Univ Civil Eng Arch. 2010;26(01):10–3+19.
5. Hu C, Huang X, Xia G, Wang Y, Liu X, Ma X. High precision automatic extraction of cultural relic diseases based on improved SLIC and AP clustering. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2022;XLIII-B2-2022:801–7.
6. Nikhil MK. Digital image processing for art restoration and conservation. J Emerg Technol Innov Res. 2019;6(4):96–100.
7. Liang H. Advances in multispectral and hyperspectral imaging for archaeology and art conservation. Appl Phys A. 2012;106(2):309–23.
8. Alfeld M, de Viguerie L. Recent developments in spectroscopic imaging techniques for historical paintings—a review. Spectrochim Acta, Part B. 2017;136:81–105.
9. Fischer C, Kakoulli I. Multispectral and hyperspectral imaging technologies in conservation: current research and potential applications. Stud Conserv. 2006;51:16–23.
10. Hu CM, Dong YX, Xia GF, Liu X. An automatic detection method of the mural shedding disease using YOLOv4. Proc SPIE 12129, International Conference on Environmental Remote Sensing and Big Data (ERSBD 2021). 2021;35.
11. Yuan Q, He X, Han X, Guo H. Automatic recognition of craquelure and paint loss on polychrome paintings of the Palace Museum using improved U-Net. Herit Sci. 2023;11(1):65.
12. Yu Y, Wang C, Fu Q, Kou R, Huang F, Yang B, Yang T, Gao M. Techniques and challenges of image segmentation: a review. Electronics. 2023;12(5):1199.
13. Luo XG, Lü JR, Peng ZM. Recent research progress of superpixel segmentation and evaluation. Laser Optoelectron Prog. 2019;56:53–63.
14. Van den Bergh M, Boix X, Roig G, Van GL. SEEDS: superpixels extracted via energy-driven sampling. Int J Comput Vis. 2015;111(3):298.
15. Chen J, Li Z, Huang B. Linear spectral clustering superpixel. IEEE Trans Image Process. 2017;26(7):3317–30.
16. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell. 2012;34(11):2274–82.
17. Wang Y. High-precision automatic extraction and system construction of cultural relics disease. Master's thesis, Beijing University of Civil Engineering and Architecture; 2019.
18. Sampietro-Vattuone MM, Peña-Monné JL. Application of 2D/3D models and alteration mapping for detecting deterioration processes in rock art heritage (Cerro Colorado, Argentina): a methodological proposal. J Cult Herit. 2021;51:157–65.
19. Glória Gomes M, Tomé A. A digital and non-destructive integrated methodology for heritage modelling and deterioration mapping. The case study of the Moorish Castle in Sintra. Dev Built Environ. 2023;14:100145.
20. Hou M, Li S, Jiang L, Wu Y, Hu Y, Yang S, Zhang X. A new method of gold foil damage detection in stone carving relics based on multi-temporal 3D LiDAR point clouds. ISPRS Int J Geo-Inf. 2016;5(5):60.
21. Maria GG, Rosella AG. Standard quantification and measurement of damages through features characterization of surface imperfections on 3D models: an application on Architectural Heritages. Procedia CIRP. 2020;88:515–20.
22. Xia GF, Hu CM, Wang YM. Research on true three-dimensional detection method for surface disease of cultural relics. China Cultural Relics Scientific Research. 2018:89–96.
23. Bolkas D, Vazaios I, Peidou A, Vlachopoulos N. Detection of rock discontinuity traces using terrestrial LiDAR data and space-frequency transforms. Geotech Geol Eng. 2018;36(3):1745–65.
24. Zhang P, Zhao Q, Tannant DD, Ji T, Zhu H. 3D mapping of discontinuity traces using fusion of point cloud and image data. Bull Eng Geol Environ. 2019;78(4):2789–801.
25. Wang R. OpenSceneGraph 3D rendering engine design and practice. Beijing: Tsinghua University Press; 2009.
26. Hengge Technology. OSG Camera—Basic [EB/OL]. (2020-11-12) [2023-06-29]. http://www.henggetec.com/?mod=news_detail&id=37.
27. Hou Z, Zhao M, Yu W, Ma S. Color image segmentation based on SLIC and watershed algorithm. Opto-Electron Eng. 2019;46(6):180589.
28. Haralick RM, Shanmugam K, Dinstein I. Textural features for image classification. IEEE Trans Syst Man Cybern. 1973;SMC-3(6):610–21.
29. Sinaga KP, Yang M-S. Unsupervised K-means clustering algorithm. IEEE Access. 2020;8:80716–27.
30. Zhang A. Remote sensing principles and application problem solving. Beijing: Science Press; 2016.
31. Rushikesh B, Masoud Z-N, Ebrahim E, Javad S. A state-of-the-art review of automated extraction of rock mass discontinuity characteristics using three-dimensional surface models. J Rock Mech Geotech Eng. 2021;13(4):920–36.


Funding

This research was supported by the National Natural Science Foundation of China (grants 42171416 and 41401536).

Author information


Contributions

CH and GF conceived the presented idea and proposed experimental suggestions. XH conducted and refined the analysis process and wrote the manuscript. XL and XM are responsible for proposing amendments to the manuscript and highlighting the research significance of the paper. All the authors approved the final manuscript.

Corresponding author

Correspondence to Xiangpei Huang.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Hu, C., Huang, X., Xia, G. et al. A high-precision automatic extraction method for shedding diseases of painted cultural relics based on three-dimensional fine color model. Herit Sci 12, 300 (2024). https://doi.org/10.1186/s40494-024-01411-1
