Automatic calibration of crack and flaking diseases in ancient temple murals

Abstract

Many precious ancient murals have deteriorated seriously due to long-term environmental influences and man-made destruction. How to effectively protect ancient murals and restore their original appearance has become an urgent problem for experts in the field. Modern computer technology makes it possible to virtually restore the deteriorated areas in ancient murals. However, most existing mural restoration approaches require manual calibration of the deteriorated areas, which is difficult and time-consuming. Earth layer flaking and cracks are the most common problems of ancient temple murals. This paper proposes an automatic calibration method for the earth layer flaking and cracking deterioration of murals, taking temple murals from the Ming Dynasty in Zhilin Temple as the study object. First, we extract the texture and line features of the deteriorated murals by using multi-dimensional gradient detection in the HSV space. Then, a guided filter is employed to highlight the disease (deteriorated) areas while suppressing other unwanted areas, which helps to extract the flaked areas and cracked lines from the digital murals. The filtered images are segmented by an automatic threshold to obtain the initial masks of the mural disease areas. Next, we use two-dimensional tensor voting to connect the discontinuous edge curves of the extracted disease areas. Afterwards, the masks of the flaking and cracking areas are generated by morphological processing. Finally, we obtain the calibration results by adding the masks to the original digital murals. Experimental results show that our method can rapidly and accurately calibrate the cracks and earth layer flaking diseases in ancient murals. Compared with existing calibration approaches, our method achieves better performance in both subjective visual quality and objective evaluation metrics. Moreover, the method requires no human-computer interaction. This work provides a solid foundation for the subsequent virtual and physical restoration of ancient murals.

Introduction

Ancient murals are precious treasures of human cultural heritage. They record large amounts of historical, cultural, religious and artistic information, and vividly depict the social and religious features of various ethnic groups in particular historical periods. They are of great value for studying the folk customs, religion and art of ethnic groups in human history. However, these extremely precious ancient murals have suffered from natural degradation and man-made damage, such as cracks, scratches, earth layer flaking, mold corrosion and other diseases. These diseases not only degrade the visual appreciation of the mural content, but also diminish the cultural and artistic value of ancient murals. To preserve ancient murals and restore their original appearance as soon as possible, their protection and restoration can no longer be postponed.

Recently, researchers all over the world have made considerable progress in studying and preserving murals. He et al. [1, 2] studied the bacterial, fungal, and microbial communities in grotto murals and tomb murals, providing valuable data for preventing and restoring microbial damage to murals. Bomin et al. [3] conducted a scientific investigation into the polymeric materials used in mural restoration, providing relevant theoretical and technical preparation for the aging of Dunhuang murals, advanced removal of invalid materials, and secondary protection. Liu et al. [4] studied the eave wall murals of the Daxiong Hall of Fengguo Temple (Yixian County, Liaoning Province). They investigated the existing structural characteristics, the nature of cracks, the deformation conditions and the environmental factors of the murals, as well as the formation process of secondary cracks. Using multi-disciplinary analysis, Sakr et al. [5] studied the murals in the Setka tomb (QH 110) in Aswan, Upper Egypt, and evaluated the factors that contributed to the murals’ degradation. These researchers studied ancient murals from different perspectives and laid the foundation for future research.

As modern computer technology advances, researchers utilize intelligent information processing techniques to virtually restore ancient murals. Xu et al. [6] proposed a feature-aware digital mural restoration model that is effective in restoring high-frequency texture details. Cao et al. [7] proposed a consistency-enhanced GAN to achieve mural restoration, which can repair mural images with complex information and strong texture structures. Mol et al. [8] presented a combined textural and structural reconstruction technique for ancient murals, which outperforms other reconstruction techniques in terms of image quality and computing efficiency. Jiao et al. [9] proposed an improved block-matching mural restoration algorithm that can obtain better texture restoration of murals. Cao et al. [10] developed a restoration method for sootiness murals based on dark channel prior and Retinex by bilateral filter. Their method can reduce the influence of sootiness on indoor murals. Li et al. [11] presented a generator-discriminator network model based on artificial intelligence techniques. The model successfully repaired murals with dot-like damage and complicated texture structure. In recent years, the mural research community has continued to make progress in the study of virtual restoration of ancient murals. For the virtual restoration of murals, it is first necessary to calibrate the disease regions of the murals; after that, the marked disease regions can be digitally restored. Currently, some researchers primarily use publicly available mask datasets to imitate the damaged areas of the murals, while others mark the damaged areas manually. However, it is time-consuming and labor-intensive to manually calibrate large-area murals with various diseases.

Automatic calibration methods for mural disease regions utilize intelligent algorithms to reduce restorers’ workload and do not cause permanent damage to the murals. They can lay a good foundation for digital mural restoration. In the research on the digital calibration of ancient murals, three categories of calibration methods have been proposed: region growing-based methods, morphology-based methods, and clustering-based methods. As a calibration work based on region growing, Cao et al. [12] conducted an in-depth analysis of the color, saturation, chromaticity, and luminance of the flaking regions and proposed an improved region-growing algorithm. Although automatic calibration of the flaking regions was achieved, many experiments were required to determine the threshold range for this method. Jaidilert et al. [13] proposed a semi-automatic scratch detection method for Thai murals by using the region-growing algorithm and morphological operations. However, this method requires the user to provide a small number of seed points, which complicates the labeling of damaged areas in the murals. As a calibration work based on morphological segmentation, Yang et al. [14] applied multi-scale morphological edge gradient detection to extract the edge information of mural cracks. Wu et al. [15] designed multi-scale structural elements to calibrate mural diseases by use of morphological top-hat and bottom-hat operators. These approaches are mainly used to identify the crack features in ancient murals. As a calibration work based on clustering segmentation, Zhang et al. [16] performed RGF filtering on freely cropped murals to smooth the image details while retaining their edges, which provides good initialization conditions for damaged area segmentation, and then conducted image clustering to automatically calculate the damaged area mask. This approach is more suitable for murals with apparent differences between the damaged and intact areas. Zhang et al. [17] used local optimal hierarchical clustering as the similarity measurement to extract the disease information mask. However, these methods need human-computer interaction, and suffer from low calibration accuracy and high computational complexity. Thus, the research on automatic calibration of the damaged areas of murals is still in the exploratory stage.

In this paper, we propose a novel and efficient method to calibrate cracks and earth layer flaking areas of ancient murals. We collect 62 high-resolution images of the Zhilin Temple murals as the study object. The main procedures of the proposed method are as follows. We first use multi-dimensional gradient detection to extract the disease features of the murals, and then use a guided filter to enhance the disease features. After automatic threshold segmentation, we obtain the initial masks of the disease areas. Finally, we apply tensor voting and morphological hole filling to generate the complete masks of the disease areas, and thereby achieve automatic calibration of the mural diseases. The proposed automatic calibration method can save significant time as compared to manual calibration. Moreover, it also achieves high calibration accuracy of mural diseases.

Proposed method

Feature analysis of mural diseases

Many ancient murals have been discovered and preserved in China. According to archaeological classification, ancient murals can be divided into five major categories: tomb murals, temple murals, grotto murals, hall murals, and cliff murals. Among them, temple murals are the most widely distributed and numerous in Yunnan, China, and have the highest artistic achievements there. Zhilin Temple is located in Jianshui, a county in the south of Yunnan Province, China. The temple was first built in the Yuanzhen period (1295–1297) of the Yuan Dynasty and underwent major renovations in the Qing Dynasty, followed by repairs in 1998 and 2018 [18]. As an ancient building in China, the Zhilin Temple hall was approved by the State Council to be listed in the Sixth Batch of national key cultural relics protection units in 2006 [19]. The existing main hall of Zhilin Temple has a tall and majestic appearance. There are two Buddhist murals of the same size (2.45 meters long and 2.02 meters wide) in the main hall: one is Sakyamuni’s Lecturing and the other is Peacock Ming King’s Dharma Meeting, both of which are of special protection value.

However, due to environmental changes and human factors, the original appearance of these murals has deteriorated to varying degrees, and the most serious damage is the earth layer flaking. Figure 1 shows an example of the diseases in the Zhilin Temple murals. Region one (marked by the red box) has some cracks, whereas Region two (marked by the green box) contains some earth layer flaking areas. The cracks look like long, thin strips whose color and shape are similar to the painting lines. The earth layer flaking covers comparatively large damaged areas. Moreover, the disease regions have irregular texture characteristics and complex background interference. This inevitably brings great challenges to the automatic disease identification and calibration of the digital murals.

Fig. 1 An example of the diseases in the Zhilin Temple murals

Fig. 2 Overall workflow of the proposed method

Fig. 3 An example of the original and weighted images in the HSV space. a Original RGB image; b Original HSV image; c Weighted HSV image

Overview of the proposed method

Taking the Zhilin Temple murals as our study object, we propose a novel method to detect and calibrate the diseases in the murals. This work will contribute to the murals’ protection and provide a good reference for future mural repairs. Figure 2 illustrates the overall workflow of the proposed method for automatic disease identification and calibration. The method contains five major procedures: (1) We extract the texture and line characteristics as multi-dimensional gradient maps from the ancient murals by using a multi-dimensional gradient detection technique; (2) A guided filter is applied to the multi-dimensional gradient maps to enhance the intensity of the disease regions and simultaneously suppress the non-disease regions; (3) To obtain the initial masks, we perform automatic threshold segmentation and a brightness enhancement operation; (4) Two-dimensional tensor voting is applied to the initial masks to improve the continuity of crack structures, and a morphological hole filling operation is performed to generate the complete masks; and (5) We add the complete masks to the original images and obtain the final calibration results of the disease regions.

Disease features extraction

After careful observation, we find that the main disease types affecting the content of the Zhilin Temple murals are cracks and falling-offs (the earth layer flaking). The disease regions of the murals have irregular texture characteristics and complex background interference from the drawing lines and color patches, which pose difficulties for discriminating the disease regions from the surrounding intact areas. However, we also notice that the earth layer flaking regions have higher saturation than the intact mural areas, and the borders of the disease regions have more obvious gradient changes. Therefore, we consider utilizing a multi-dimensional gradient detection technique [20] to extract the disease features of the Zhilin Temple murals. To begin with, we convert the original mural images from the RGB color space to the HSV color space, where \({\varvec{H}}\), \({\varvec{S}}\) and \({\varvec{V}}\) denote the hue, saturation, and value component, respectively. In the HSV space, an image \({\varvec{f}}(x,y)\) can be expressed as \({\varvec{f}}(x,y) = {\left[ {{\varvec{H}}(x,y),{\varvec{S}}(x,y),{\varvec{V}}(x,y)} \right] ^\mathrm{T}}\), where \(\varvec{H}(x,y)\), \(\varvec{S}(x,y)\) and \(\varvec{V}(x,y)\) denote the pixel matrices of the \({\varvec{H}}\), \({\varvec{S}}\) and \({\varvec{V}}\) components, respectively. Since the disease features of murals are more prominent in the \({\varvec{S}}\) component, weighting the three components will conduce to the extraction of disease regions. According to the differential characteristics of the mural disease regions in each component, we obtain a weighted image \({\varvec{W}}(x,y)\) :

$$\begin{aligned} \begin{aligned} {\varvec{W}}(x,y) \!=\! {\left[ {{\omega _H} \!\cdot \! {\varvec{H}}(x,y),{\omega _S} \!\cdot \! {\varvec{S}}(x,y),{\omega _V} \!\cdot \! {\varvec{V}}(x,y)} \right] ^\mathrm{T}} \end{aligned} \end{aligned}$$
(1)

where \({\omega _H}\), \({\omega _S}\) and \({\omega _V}\) are the weight coefficients of \({\varvec{H}}\), \({\varvec{S}}\) and \({\varvec{V}}\) components, respectively. Figure 3 shows the RGB image, the original HSV image and the weighted HSV image of a mural. Figure 3b gives the result of converting the RGB mural image into the HSV color space. We can clearly observe the drawing lines of the mural. Figure 3c shows the weighted HSV image. It can be seen that the drawing lines of the mural have been heavily suppressed.
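
A minimal Python sketch of Eq. (1) is given below, assuming OpenCV for the HSV conversion; the helper name weighted_hsv is hypothetical, and the default weights follow the values reported later in the experimental section (\({\omega _H}=0.1\), \({\omega _S}=0.8\), \({\omega _V}=0.1\)).

```python
# Sketch of Eq. (1): convert an RGB mural image to HSV and weight the three
# components so that the saturation channel dominates.
import cv2
import numpy as np

def weighted_hsv(bgr, w_h=0.1, w_s=0.8, w_v=0.1):
    """Return W(x, y) as a float32 array with channels (H, S, V) scaled to [0, 1]."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 0] /= 179.0      # OpenCV stores hue in [0, 179] for 8-bit images
    hsv[..., 1:] /= 255.0     # saturation and value lie in [0, 255]
    weights = np.array([w_h, w_s, w_v], dtype=np.float32)
    return hsv * weights      # element-wise weighting of the H, S and V planes

# Example usage (hypothetical file name):
# W = weighted_hsv(cv2.imread("mural.png"))
```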

Then, we compute the x and y derivatives of the three components of the weighted image \({\varvec{W}}(x,y)\) by using the Sobel operators, and obtain the vectors \({{\varvec{u}}_1}\) and \({{\varvec{u}}_2}\):

$${{\varvec{u}}_1} = \frac{{\partial H}}{{\partial x}}{\varvec{h}} + \frac{{\partial S}}{{\partial x}}{\varvec{s}} + \frac{{\partial V}}{{\partial x}}{\varvec{v}}$$
(2)
$${{\varvec{u}}_2} = \frac{{\partial H}}{{\partial y}}{\varvec{h}} + \frac{{\partial S}}{{\partial y}}{\varvec{s}} + \frac{{\partial V}}{{\partial y}}{\varvec{v}}$$
(3)

where \({\varvec{h}}\), \({\varvec{s}}\) and \({\varvec{v}}\) are the unit vectors along the H, S and V axes of the HSV color space, respectively. Next, we compute three dot-product parameters by use of the vectors \({{\varvec{u}}_1}\) and \({{\varvec{u}}_2}\):

$${g_{xx}} = {{\varvec{u}}_1} \cdot {{\varvec{u}}_1} = {\varvec{u}}_1^{\mathrm{T}}{{\varvec{u}}_1} = {{\left| {\frac{{\partial H}}{{\partial x}}} \right| }^2} + {{\left| {\frac{{\partial S}}{{\partial x}}} \right| }^2} + {{\left| {\frac{{\partial V}}{{\partial x}}} \right| }^2}$$
(4)
$${g_{yy}} = {{\varvec{u}}_2} \cdot {{\varvec{u}}_2} = {\varvec{u}}_2^{\mathrm{T}}{{\varvec{u}}_2} = {{\left| {\frac{{\partial H}}{{\partial y}}} \right| }^2} + {{\left| {\frac{{\partial S}}{{\partial y}}} \right| }^2} + {{\left| {\frac{{\partial V}}{{\partial y}}} \right| }^2}$$
(5)
$${g_{xy}} = {{\varvec{u}}_1} \cdot {{\varvec{u}}_2} = {\varvec{u}}_1^{\mathrm{T}}{{\varvec{u}}_2} = \frac{\partial H}{\partial x}\frac{\partial H}{\partial y} + \frac{\partial S}{\partial x}\frac{\partial S}{\partial y} + \frac{\partial V}{\partial x}\frac{\partial V}{\partial y}$$
(6)

The direction of the maximum rate of change of the weighted image \({\varvec{W}}(x,y)\) is given by the angle function \({\varvec{\theta }}(x,y)\) :

$${\varvec{\theta }}(x,y) = \frac{1}{2}{\tan ^{ - 1}} \left[ {\frac{{2g_{{xy}} }}{{(g_{{xx}} - g_{{yy}} )}}} \right]$$
(7)

where the element of \({\varvec{\theta }}(x,y)\) is the angle at each pixel, which is used to calculate the gradient.

Fig. 4 Comparison of the extraction results between the original HSV image and the weighted HSV image. a Multi-dimensional gradient detection map of the original HSV image; b Multi-dimensional gradient detection map of the corresponding weighted HSV image

Afterwards, according to Ref. [20], the gradient image \({{\varvec{G}}_\theta }(x,y)\) can be computed as:

$${{\varvec{G}}_\theta }(x,y) = {\left[ \frac{1}{2}\left( {g_{xx}} + {g_{yy}} \right) + \frac{1}{2}\left( {g_{xx}} - {g_{yy}} \right) \cos 2{\varvec{\theta }}(x,y) + {g_{xy}}\sin 2{\varvec{\theta }}(x,y) \right] ^{\frac{1}{2}}}$$
(8)

In the process of extracting mural disease features, we select the maximum value at each pixel of the gradient image \({{\varvec{G}}_\theta }(x,y)\) to obtain the result of multi-dimensional gradient detection \({{\varvec{f}}_{{\mathrm{mg}}}}\):

$${{\varvec{f}}_{{\mathrm{mg}}}} = \max ({{\varvec{G}}_\theta }(x,y))$$
(9)

where \({{\varvec{f}}_{{\mathrm{mg}}}}\) is the final output image of the original mural image after multi-dimensional gradient detection. It is a grayscale map that contains the disease features of the murals.

Figure 4 shows the results of multi-dimensional gradient detection on the original HSV image and the weighted HSV image. We zoom in on the same local area in Fig. 4a and b. It can be seen that not only can the diseased areas of the mural be extracted after the weighting process, but the drawing lines of the mural are also obviously suppressed.
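
The multi-dimensional gradient detection of Eqs. (2)–(9) can be sketched in Python as follows, assuming OpenCV Sobel derivatives of the weighted HSV planes; taking the per-pixel maximum of \({{\varvec{G}}_\theta }\) evaluated at \({\varvec{\theta }}\) and \({\varvec{\theta }}+\pi /2\) is one common reading of Eq. (9), and the function name is illustrative.

```python
# Sketch of the multi-dimensional gradient detection (Di Zenzo [20]) applied to
# the weighted HSV image W(x, y).
import cv2
import numpy as np

def multidimensional_gradient(W):
    """W: float32 array of shape (rows, cols, 3) holding the weighted H, S, V planes."""
    gxx = np.zeros(W.shape[:2], np.float32)
    gyy = np.zeros(W.shape[:2], np.float32)
    gxy = np.zeros(W.shape[:2], np.float32)
    for c in range(3):                                    # H, S and V components
        dx = cv2.Sobel(W[..., c], cv2.CV_32F, 1, 0, ksize=3)
        dy = cv2.Sobel(W[..., c], cv2.CV_32F, 0, 1, ksize=3)
        gxx += dx * dx                                    # Eq. (4)
        gyy += dy * dy                                    # Eq. (5)
        gxy += dx * dy                                    # Eq. (6)
    theta = 0.5 * np.arctan2(2.0 * gxy, gxx - gyy)        # Eq. (7)

    def g_at(angle):                                      # Eq. (8)
        return np.sqrt(np.maximum(
            0.5 * (gxx + gyy) + 0.5 * (gxx - gyy) * np.cos(2 * angle)
            + gxy * np.sin(2 * angle), 0.0))

    f_mg = np.maximum(g_at(theta), g_at(theta + np.pi / 2))   # Eq. (9)
    return cv2.normalize(f_mg, None, 0, 1, cv2.NORM_MINMAX)   # grayscale disease map
```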

Disease regions enhancement

In the multi-dimensional gradient map of mural diseases, there still remain some drawing lines with lower intensity than the disease regions. They will probably reduce the accuracy of mural disease calibration. In order to better identify the mural disease regions, we employ a guided filter technique to further suppress the unwanted interference of these drawing lines. Guided filter [21], also known as edge-preserving smoothing filtering, can preserve sharp edges and details in an image and meanwhile suppress the noise heavily. It has been widely used in image enhancement [22], fusion [23], dehazing [24], denoising [25], and other related fields.

In a \({\omega _k}\) window centered at the pixel k, the filtered output image \({{\varvec{q}}_i}\) is a linear transformation of the guidance image \({{\varvec{I}}_i}\). The filter utilizes the guidance image for processing the edges in the input image.

$$\begin{aligned} \begin{aligned} {{\varvec{q}}_i} = {a_k}{{\varvec{I}}_i} + {b_k},\forall i \in {\omega _k} \end{aligned} \end{aligned}$$
(10)

where \({a_k}\) and \({b_k}\) are constant linear coefficients in \({\omega _k}\). Taking the gradient of Eq. (10) gives \(\nabla {{\varvec{q}}_i} = {a_k}\nabla {{\varvec{I}}_i}\), so the output image has a gradient similar to that of the guidance image. When we take the multi-dimensional gradient detection result \({{\varvec{f}}_{{\mathrm{mg}}}}\) as the guidance map, the sharp edge information can be effectively preserved, and thus the disease feature regions can be enhanced. To solve for the linear coefficients \({a_k}\) and \({b_k}\), we minimize the cost function \(E({a_k},{b_k})\), which measures the difference between the output image and the input image:

$$\begin{aligned} \begin{aligned} E({a_k},{b_k}) = \sum \limits _{i \in {\omega _k}} {({{({a_k}{{\varvec{I}}_i} + {b_k} - {{\varvec{p}}_i})}^2} + \varepsilon a_k^2)} \end{aligned} \end{aligned}$$
(11)

where \({{\varvec{p}}_i}\) is the input image at the pixel i , and \(\varepsilon\) is a regularization parameter that controls the smoothness of the filtered image. The values of \({a_k}\) and \({b_k}\) in Eq.(11) can be expressed as

$$\begin{aligned} \begin{aligned} {a_k} = \frac{{\frac{1}{{\left| \omega \right| }}\sum \limits _{i \in {\omega _k}} {{{\varvec{I}}_i}{{\varvec{p}}_i} - {\mu _k}{A_k}} }}{{\sigma _k^2 + \varepsilon }},{b_k} = {A_k} - {a_k}{\mu _k} \end{aligned} \end{aligned}$$
(12)

where \({\mu _k}\) and \(\sigma _k^2\) are respectively the mean and the variance of the guidance image in \({\omega _k}\), \({A_k} = \frac{1}{{\left| \omega \right| }}\sum \limits _{i \in {\omega _k}} {{{\varvec{p}}_i}}\) is the mean of the input image \({{\varvec{p}}_i}\) in \({\omega _k}\), and \(\left| \omega \right|\) denotes the number of pixels in \({\omega _k}\). Note that a pixel i is covered by all the overlapping windows \({\omega _k}\) that contain it, and each window yields a different pair of coefficients. It is therefore necessary to average all possible values to obtain the filtered output image \({{\varvec{q}}_i}\), which is computed by

$$\begin{aligned} \begin{aligned} {{\varvec{q}}_i} = \overline{{a_i}} {{\varvec{I}}_i} + \overline{{b_i}} \end{aligned} \end{aligned}$$
(13)

where \(\overline{{a_i}}\) and \(\overline{{b_i}}\) are the mean values of \({a_k}\) and \({b_k}\) over all windows containing pixel i, respectively. In the case where \(\varepsilon > 0\), if the guidance image is almost constant within \({\omega _k}\), we have \({a_k} \approx 0\), \({b_k} \approx {A_k}\) and \({{\varvec{q}}_i} \approx {A_k}\). This means that an average filtering operation is performed on the input image \({{\varvec{p}}_i}\). If the guidance image changes a lot in \({\omega _k}\), that is \(\sigma _k^2 \gg \varepsilon\), we have \({a_k} \approx 1\), \({b_k} \approx 0\) and \({{\varvec{q}}_i} \approx {{\varvec{I}}_i}\). Thus, the value of the input image \({{\varvec{p}}_i}\) is almost unchanged, and the detailed information of the image is retained.

In this work, we take the multi-dimensional gradient detection result image \({{\varvec{f}}_{{\mathrm{mg}}}}\) as the guidance image and the input image of the guided filter. In this case, the guided filter can suppress the painting lines and retain the disease characteristics of the murals.
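
A minimal sketch of the guided filter of Eqs. (10)–(13) is given below, with box filtering standing in for the window averages over \({\omega _k}\); the function name and the use of SciPy's uniform filter are implementation choices rather than part of the original method.

```python
# Guided filter sketch following Eqs. (10)-(13). In this work the gradient map
# f_mg serves as both guidance and input.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=0.01):
    """I: guidance image, p: input image (float arrays in [0, 1]); r: window radius."""
    size = 2 * r + 1

    def mean(x):
        return uniform_filter(x, size=size)   # 1/|w| * sum over the window omega_k

    mu = mean(I)                               # mean of the guidance in omega_k
    A = mean(p)                                # mean of the input in omega_k
    var = mean(I * I) - mu * mu                # variance of the guidance in omega_k
    cov = mean(I * p) - mu * A                 # covariance of I and p in omega_k
    a = cov / (var + eps)                      # Eq. (12)
    b = A - a * mu
    return mean(a) * I + mean(b)               # Eq. (13): average a_k, b_k over windows

# Guidance and input coincide here:
# q = guided_filter(f_mg, f_mg, r=8, eps=0.01)
```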

Acquisition of the initial masks

After guided filtering, the painting lines in the image are effectively suppressed. We use an automatic threshold segmentation method to generate the initial masks of the mural disease areas. The Otsu thresholding algorithm [26] segments the grayscale image \({{\varvec{q}}_i}\) into two parts, foreground and background. We obtain the optimal threshold t when the between-class variance of the foreground and background is maximized:

$$\begin{aligned} \begin{aligned} \mathop {\arg \max }\limits _{1 \le t < L} \left[ {{P_{\mathrm{f}}}(t){{({m_{\mathrm{f}}}(t) - m)}^2} + {P_{\mathrm{b}}}(t){{({m_{\mathrm{b}}}(t) - m)}^2}} \right] \end{aligned} \end{aligned}$$
(14)

where \({P_{\mathrm{f}}}(t)\) and \({P_{\mathrm{b}}}(t)\) are the proportion of foreground and background for threshold segmentation. \({m_{\mathrm{f}}}(t)\) and \({m_{\mathrm{b}}}(t)\) are the mean values of pixels in the foreground and background, respectively. m is the mean value of pixels in the whole grayscale image \({{\varvec{q}}_i}\), and L denotes the number of gray levels.

We separate the disease region of the mural from the intact area by using automatic threshold segmentation. The initial mask \({{\varvec{I}}_{{\mathrm{mask}}}}\) of the disease region of the mural is calculated as

$${{\varvec{I}}_{{\mathrm{mask}}}} = \left\{ \begin{array}{ll} {{\varvec{q}}_i}, &amp; {{\varvec{q}}_i} \ge t \\ 0, &amp; {{\varvec{q}}_i} &lt; t \end{array} \right.$$
(15)

where \({{\varvec{q}}_i}\) denotes the output image of the guided filter, and t is the threshold value.
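
The automatic threshold segmentation of Eqs. (14)–(15) can be sketched as follows; OpenCV's Otsu implementation maximizes the same between-class variance, and the helper name initial_mask is illustrative.

```python
# Otsu threshold selection (Eq. 14) and initial mask construction (Eq. 15):
# pixels below the threshold are zeroed, the others keep their filtered values.
import cv2
import numpy as np

def initial_mask(q):
    """q: guided-filter output as a float array in [0, 1]."""
    q8 = np.clip(q * 255.0, 0, 255).astype(np.uint8)
    t, _ = cv2.threshold(q8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Eq. (14)
    mask = np.where(q8 >= t, q8, 0).astype(np.uint8)                        # Eq. (15)
    return mask, t
```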

Acquisition of complete masks

Tensor voting (TV) is a perceptual grouping algorithm that can extract visually salient features (such as curves and junctions) from sparse, noisy, binary data in 2-D or 3-D space [27, 28]. The TV algorithm has been widely used in image segmentation and crack detection in various research fields [29,30,31,32,33].

After automatic threshold segmentation, the disease regions in the murals are extracted. However, we find that there are many discontinuous edge curves in the initial masks, particularly in the regions of crack fragments. Tensor voting can infer the complete edge curves from sparse edge curves. In this work, we employ tensor voting to infer and connect these discontinuous curvilinear structures. The tensor voting theory includes tensor coding, tensor voting, and tensor decomposition.

Fig. 5 Schematic diagram of 2-D stick tensor voting

In a 2-D space, each pixel of the initial mask \({{\varvec{I}}_{{\mathrm{mask}}}}\) is encoded as a second-order symmetric positive semi-definite tensor \({\varvec{T}}\). \({\varvec{T}}\) is mapped to a matrix \({\left( {{{\varvec{A}}_{ij}}} \right) _{2 \times 2}}\), of which the eigenvalues are \({\lambda _1}\) and \({\lambda _2}\) (\({\lambda _1} \ge {\lambda _2} \ge 0\)), and the corresponding eigenvectors are \({{\varvec{e}}_1}\) and \({{\varvec{e}}_2}\). Theoretically, the tensor \({\varvec{T}}\) can be decomposed as a linear combination of stick tensor and ball tensor.

$$\begin{aligned} \begin{aligned} {\varvec{T}}&= {\lambda _1}{\varvec{e}}_1 {\varvec{{e}}_1 ^\mathrm{T}} + {\lambda _2}\varvec{{e}}_2 {\varvec{{e}}_2 ^\mathrm{T}} \\&= ({\lambda _1} - {\lambda _2})\varvec{{e}}_1 {\varvec{{e}}_1 ^\mathrm{T}} + {\lambda _2}(\varvec{{e}}_1 {\varvec{{e}}_1 ^\mathrm{T}} + \varvec{{e}}_2 {\varvec{{e}}_2 ^\mathrm{T}}) \end{aligned} \end{aligned}$$
(16)

where \(\left( {{\lambda _{\mathrm{1}}}{\mathrm{- }}{\lambda _{\mathrm{2}}}} \right) {{\varvec{e}}_1}{\varvec{e}}_1^{\mathrm{T}}\) and \({\lambda _{\mathrm{2}}}\left( {{{\varvec{e}}_1}{\varvec{e}}_1^{\mathrm{T}} + {{\varvec{e}}_2}{\varvec{e}}_2^{\mathrm{T}}} \right)\) are the stick tensor and the ball tensor, respectively. \(\left( {{\lambda _{\mathrm{1}}}{\mathrm{- }}{\lambda _{\mathrm{2}}}} \right)\) and \({\lambda _{\mathrm{2}}}\) are the saliency of the corresponding stick tensor and the ball tensor, respectively.
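
A small numerical sketch of the decomposition in Eq. (16) is given below, using an eigen-decomposition of a 2 × 2 tensor; the function name is illustrative.

```python
# Decompose a 2-D second-order tensor into its stick and ball parts (Eq. 16)
# and report the two saliency values.
import numpy as np

def decompose_tensor(T):
    """T: 2x2 symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(T)              # eigenvalues in ascending order
    l2, l1 = vals                               # lambda_1 >= lambda_2 >= 0
    e2, e1 = vecs[:, 0], vecs[:, 1]
    stick = (l1 - l2) * np.outer(e1, e1)        # stick component
    ball = l2 * (np.outer(e1, e1) + np.outer(e2, e2))   # ball component
    return stick, ball, l1 - l2, l2             # saliencies: (lambda_1 - lambda_2), lambda_2

# Example: a tensor elongated along one axis has a high stick saliency.
# stick, ball, s_stick, s_ball = decompose_tensor(np.array([[1.0, 0.0], [0.0, 0.1]]))
```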

Fig. 6 Flow chart of acquisition of the complete masks

First, each point in the initial mask \({{\varvec{I}}_{{\mathrm{mask}}}}\) is encoded as a ball tensor. Then, ball tensor voting is performed to obtain the initial orientation, which is used to set up the stick voting field. Afterward, stick voting is performed by casting votes from each stick token to all the pixels. For a 2-D stick tensor, the voting obeys the following rule. Figure 5 illustrates a schematic diagram of 2-D stick tensor voting, in which the two tensors \({\varvec{O}}\) and \({\varvec{P}}\) are called the voter and the receiver, respectively. The voting from \({\varvec{O}}\) to \({\varvec{P}}\) is defined as

$$\left\{ \begin{array}{l} V(P) = {\mathrm{DF}}(s,k,\sigma )\,{{\varvec{e}}_p}{\varvec{e}}_p^{\mathrm{T}} \\ {\mathrm{DF}}(s,k,\sigma ) = {e^{ - \left( {s^2} + c{k^2} \right) /{\sigma ^2}}} \end{array} \right.$$
(17)

where \({{\varvec{e}}_p}\) is the normal vector of tensor \({\varvec{P}}\). \({\mathrm{DF}}(s,k,\sigma )\) is the decay function of tensor voting. k is the curvature. s is the arc length. \(\sigma\) is a free parameter that controls the scale of the voting field. c is a constant that is defined as: \(c = - 16\log (0.1) \times (\sigma - 1)/{{\mathrm{\pi }}^2}\). After voting on each point of the initial mask \({{\varvec{I}}_{{\mathrm{mask}}}}\), \({\varvec{T}}\) generates a new tensor field \({{\varvec{T}}_{\mathrm{s}}}\):

$$\begin{aligned} \begin{aligned} {{\varvec{T}}_{\mathrm{s}}} = \left( {{\lambda _{{\mathrm{s1}}}}{\mathrm{- }}{\lambda _{{\mathrm{s2}}}}} \right) {{\varvec{e}}_{s1}}{\varvec{e}}_{s1}^{\mathrm{T}} + {\lambda _{{\mathrm{s2}}}}\left( {{{\varvec{e}}_{{\mathrm{s1}}}}{\varvec{e}}_{{\mathrm{s1}}}^{\mathrm{T}} + {{\varvec{e}}_{{\mathrm{s2}}}}{\varvec{e}}_{{\mathrm{s2}}}^{\mathrm{T}}} \right) \end{aligned} \end{aligned}$$
(18)
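
A minimal sketch of a single stick vote from Eq. (17) is shown below. The arc length, curvature and received normal follow the usual 2-D tensor voting geometry (voter and receiver are assumed to lie on a circular arc), and the 45-degree aperture test is a common implementation choice rather than something specified above; the function name is illustrative.

```python
# One stick vote: a voter with a unit normal casts a decayed tensor onto a receiver.
import numpy as np

def stick_vote(voter_xy, normal, receiver_xy, sigma=8.0):
    """Return the 2x2 tensor cast by the voter onto the receiver, Eq. (17)."""
    c = -16.0 * np.log(0.1) * (sigma - 1.0) / np.pi ** 2   # constant c defined above
    v = np.asarray(receiver_xy, float) - np.asarray(voter_xy, float)
    l = np.linalg.norm(v)
    if l == 0:
        return np.zeros((2, 2))
    normal = np.asarray(normal, float)
    tangent = np.array([-normal[1], normal[0]])            # direction along the curve
    theta = np.arctan2(abs(np.dot(v, normal)), abs(np.dot(v, tangent)))
    if theta > np.pi / 4:                                   # outside the voting aperture
        return np.zeros((2, 2))
    s = l if theta == 0 else theta * l / np.sin(theta)      # arc length s
    k = 0.0 if theta == 0 else 2.0 * np.sin(theta) / l      # curvature k
    df = np.exp(-(s ** 2 + c * k ** 2) / sigma ** 2)        # decay function DF(s, k, sigma)
    angle = 2.0 * theta * np.sign(np.dot(v, tangent) * np.dot(v, normal))
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    e_p = rot @ normal                                      # normal received at P
    return df * np.outer(e_p, e_p)                          # DF * e_p e_p^T
```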

Since this work uses tensor voting to infer and connect the discontinuous boundary curves, we take the stick tensor saliency map \({{\varvec{S}}_{{\mathrm{map}}}} = {\lambda _{{\mathrm{s1}}}}{\mathrm{- }}{\lambda _{{\mathrm{s2}}}}\) as the tensor voting result. Then, the union operation is executed on the initial mask \({{\varvec{I}}_{{\mathrm{mask}}}}\) and the stick tensor saliency map \({{\varvec{S}}_{{\mathrm{map}}}}\) to generate the mask \({{\varvec{C}}_{{\mathrm{mask}}}}\) with continuous edge curves:

$$\begin{aligned} \begin{aligned} {{\varvec{C}}_{{\mathrm{mask}}}} = {{\varvec{I}}_{{\mathrm{mask}}}} \cup {{\varvec{S}}_{{\mathrm{map}}}} \end{aligned} \end{aligned}$$
(19)

where \(\cup\) denotes the union operation. Next, we perform a morphological hole filling operation on the mask \({{\varvec{C}}_{{\mathrm{mask}}}}\) to obtain the complete mask. Finally, the complete mask is added to the original mural to achieve automatic calibration of the disease regions. The flow chart for acquiring the complete masks by tensor voting is illustrated in Fig. 6.
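
A minimal sketch of Eq. (19) and the final calibration step is given below; the threshold used to binarize the stick saliency map and the white overlay color are illustrative assumptions, and SciPy's hole filling stands in for the morphological operation.

```python
# Merge the initial mask with the stick saliency map (Eq. 19), fill holes, and
# paint the complete mask onto the original mural.
import numpy as np
from scipy.ndimage import binary_fill_holes

def complete_mask_and_overlay(bgr, initial_mask, stick_saliency, saliency_thresh=0.3):
    """initial_mask: uint8 mask; stick_saliency: float map normalized to [0, 1]."""
    votes = stick_saliency >= saliency_thresh              # binarize lambda_s1 - lambda_s2
    union = (initial_mask > 0) | votes                      # Eq. (19): C_mask = I_mask U S_map
    filled = binary_fill_holes(union)                        # morphological hole filling
    complete = (filled * 255).astype(np.uint8)
    overlay = bgr.copy()
    overlay[filled] = (255, 255, 255)                        # mark calibrated pixels in white
    return complete, overlay
```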

Fig. 7 Intermediate results of each step of our algorithm. a Original mural images; b Multi-dimensional gradient maps of mural diseases; c Guided filtering maps; d Initial masks; e Complete masks; f Calibration results

Figure 7 shows four examples of the experimental results of each step of the proposed algorithm. Figure 7a gives the input images, which contain irregular textures and complex drawing lines. The results of the multi-dimensional gradient detection are given in Fig. 7b. It can be seen that the multi-dimensional gradient detection technique highlights the disease regions of the murals and suppresses the interference of color patches similar to the disease regions; however, interference from the drawing lines still remains. Figure 7c presents the guided filter results of Fig. 7b. The disease regions look more conspicuous after the guided filter operation. Figure 7d shows the initial masks of the disease regions obtained by automatic threshold segmentation and a brightness enhancement operation. The initial masks roughly calibrate the disease regions of the murals. Figure 7e gives the complete masks generated by performing tensor voting and morphological hole filling on the initial masks. Tensor voting connects the broken crack pixels, which improves the accuracy of disease region calibration. Figure 7f shows the calibration results of the disease regions, generated by adding the complete masks to the original mural images. The overall calibration results are satisfactory.

Experimental design

In this paper, we apply the proposed method to the ancient murals of Zhilin Temple in Jianshui County, Yunnan Province, China. All experiments are run on an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz with 8 GB RAM under Windows, using the Matlab 2018 simulation platform. We collect a dataset consisting of 62 Zhilin Temple ancient mural images with sizes ranging from 350 \(\times\) 350 px to 1944 \(\times\) 2169 px. These mural images have various hues, different degrees of damage, and diverse diseases.

In the experiments, the weights of the three components of the weighted HSV mural images are \({\omega _H} = 0.1\), \({\omega _S} = 0.8\) and \({\omega _V} = 0.1\). The guided filter requires two parameters: the regularization parameter \(\varepsilon\) and the filtering window radius r. We evaluate the sensitivity of the algorithm’s performance to \(\varepsilon\) and r. Figure 8 shows the results of the guided filter with various sets of parameters. The grayscale images are the guided filter outputs, whereas the color images are the corresponding disease calibration results. It can be observed that as the values of \(\varepsilon\) and r increase, the grayscale images become smoother, so some of the crack diseases are overly suppressed. As can be seen from the calibration results, our algorithm achieves better calibration of mural diseases when the regularization parameter and window radius are set to 0.01 and 8, respectively.
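
A parameter sweep of the kind behind Fig. 8 can be reproduced along the following lines; cv2.ximgproc.guidedFilter (shipped with the opencv-contrib package) is used here as a stand-in for the filter sketched earlier, and the listed radii and \(\varepsilon\) values are illustrative.

```python
# Run the guided filter over a grid of (r, epsilon) settings for visual comparison.
import cv2

def sweep_guided_filter(f_mg, radii=(4, 8, 16), eps_values=(0.01, 0.04, 0.16)):
    """f_mg: float32 gradient map in [0, 1]; returns {(r, eps): filtered map}."""
    results = {}
    for r in radii:
        for eps in eps_values:
            # guide and source are the same map, as in the method above
            results[(r, eps)] = cv2.ximgproc.guidedFilter(f_mg, f_mg, r, eps)
    return results
```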

Fig. 8 Calibration results corresponding to different values of parameters r and \(\varepsilon\)

The scale parameter \(\sigma\) of tensor voting determines the range of the voting field. The larger the voting field, the stronger the connectivity between two pixel points. First, we keep \(\varepsilon =0.01\), \(r=8\) and vary \(\sigma\) in the range of 2 to 20. Then, we choose nine scale parameters of varying sizes to determine the parameter values in Eq. (17). The corresponding mural disease calibration results and the runtime are shown in Fig. 9. It can be observed that as the scale parameter \(\sigma\) increases, the calibration area becomes larger, and the algorithm requires more computational time. When the scale parameter \(\sigma <8\), it can be seen from the zoomed-in patches that the calibration results of some crack disease areas are discontinuous. When the scale parameter \(\sigma =8\), our algorithm can accurately calibrate the mural crack diseases and retain their shapes and edges. When the scale parameter \(\sigma >8\), the crack diseases are over-calibrated, and we cannot observe clear shapes and edges of the cracks from the calibration results. In this work, we set the scale parameter \(\sigma =8\).

Fig. 9 Comparison of the calibration results and the computational time for different scale parameters

The main diseases of the Zhilin Temple murals are cracks and earth layer flaking damage. The color and structure of the cracking damage are similar to the painting lines of the mural contents, and the earth layer flaking damage has color characteristics similar to the intact background areas of the murals. These diseases pose great challenges to any automatic algorithm for calibrating the disease regions. This work attempts to automatically identify and calibrate the crack and flaking regions of the Zhilin Temple murals, and compares the results qualitatively and quantitatively (in terms of visual perception, user evaluation, calibration accuracy and speed) with the approaches in Refs. [12, 13, 16]. It is worth noting that Ref. [12] needs to manually set the ranges of several threshold parameters based on the grayscale histogram of the disease region, Ref. [13] needs to set the seed points of region growing through human-computer interaction, and Ref. [16] needs to manually select the initial region of the mural disease and use it as a condition to calculate the entire disease region of the murals.

Results and discussion

Fig. 10 Examples of automatic crack calibration. a Original mural images with cracks; b Ground-truth images; c Complete masks; d Automatic calibration results

Fig. 11 Examples of earth layer flaking calibration. a Original mural images; b Ground-truth images; c Complete masks; d Automatic calibration results

Fig. 12 Calibration results of small area diseases. a Original images; b Ground-truth images; c Complete masks; d Automatic calibration results

In this section, we select several images of Zhilin Temple murals with cracks and the earth layer flaking damage, and employ the proposed algorithm to calculate the masks for the mural diseases. The calculated disease masks and corresponding calibration results for the disease areas of the murals are shown in Figs. 10, 11 and 12. We also provide the ground-truth images of mural disease areas that were manually calibrated by several experts from the Yunnan Museum Ancient Culture Research Centre. The ground-truth image can be used as a general reference for the computation of the evaluation metrics.

Figure 10 shows the calibration results of the proposed algorithm for mural crack diseases. Figure 10a shows three original mural images with crack diseases. It can be seen that the mural crack diseases are similar to the painting lines in color and structure. Figure 10b shows the human annotated ground-truth mural diseases. The white pixels in Fig. 10c indicate the masks of the disease areas that are calculated by the proposed algorithm. Figure 10d shows the automatic identification and calibration results of the crack disease areas (marked in white color). These results show that the overall calibration effect of the crack areas is satisfactory.

Figure 11 gives several examples of the earth layer flaking calibration results that are generated by our proposed algorithm. Figure 11a shows four original ancient temple mural images with earth layer flaking diseases. It can be seen that the deterioration areas of earth layer flaking are more evident than those of the cracks. Figure 11c shows the corresponding masks of the disease areas that are computed by the proposed algorithm. Figure 11d shows the automatic calibration results of the earth layer flaking disease areas. It can be noticed that the overall calibration results of the earth layer flaking damage areas conform to human visual perception, even if the flaking diseases have similar color patches to the background of the murals.

Figure 12 shows four examples of our proposed algorithm for calibrating small disease areas in the murals. The final calibration results verify that our algorithm can accurately calibrate the crack diseases, and meanwhile discriminate them from the painting lines of the mural contents. Moreover, the algorithm can successfully find some tiny earth layer flaking areas in the murals.

Visual effect comparison

Fig. 13 Comparison of the calibration results of different algorithms. a The whole mural; b Part of the mural

Fig. 14 Comparison of the calibration results of different algorithms for crack diseases. a Original images; b Reference images for disease calibration; c Calibration results of Cao et al. [12]; d Calibration results of Jaidilert et al. [13]; e Calibration results of Zhang et al. [16]; f Calibration results of the proposed method

Fig. 15 Comparison of the calibration results for another group of murals. a Original images; b Reference images for disease calibration; c Calibration results of Cao et al. [12]; d Calibration results of Jaidilert et al. [13]; e Calibration results of Zhang et al. [16]; f Calibration results of the proposed method

In this subsection, we conduct experiments to verify the visual effect of the proposed method. We compare our method with the three other approaches in Refs. [12, 13, 16], and the experimental results are shown in Figs. 13, 14 and 15.

Figure 13 shows the calibration results of four comparative algorithms for a Zhilin Temple mural, in which local areas of the results have been magnified. Note that the calibration approach of Cao et al. [12] uses the chroma, saturation, and color characteristics of the flaking regions to perform threshold segmentation. For images with low color contrast between the flaking regions and the intact background regions, this approach can hardly determine the threshold distribution range of the flaking regions. As can be seen, the approach of Ref. [12] cannot adequately calibrate the damaged regions of the mural. The approach of Ref. [13] needs to select some seed points to calibrate the damaged regions, and it incorrectly identifies the mural’s painting lines as cracks. Reference [16] utilizes the GrabCut model to segment the local damaged regions of the mural, and uses K-means clustering to perform disease calibration on the entire mural. We can see that the approach of Ref. [16] misidentifies some background regions as disease regions when they have similar colors. This experiment shows that our algorithm can accurately calibrate the earth layer flaking and the cracks in the murals.

Table 1 Results of 10 testers’ ratings of the calibration results of four algorithms

Figure 14 gives two mural images with serious crack diseases and the calibration results of the four algorithms. It can be seen from Fig. 14a that there is no great color disparity between the cracks and their surrounding areas. In Fig. 14c and d, we notice that the approaches proposed in Refs. [12] and [13] misidentify some background areas as crack diseases. It can be seen from Fig. 14e that the approach of Ref. [16] performs poorly on mural disease calibration and fails to find some crack diseases of the murals. Figure 14f shows the calibration results of the proposed algorithm. Since our algorithm utilizes both the color saturation and the gradient characteristics of the disease regions, it can detect the crack diseases successfully. Furthermore, by using the tensor voting technique, our algorithm makes the calibrated cracks look more continuous.

Figure 15 compares the calibration results on another group of murals. It can be seen that Ref. [12] and Ref. [16] misidentify many painting areas (e.g., the cassocks) as disease regions. Ref. [13] obtains sparse-looking results that cannot adequately calibrate the disease regions of the murals. By comparison, our algorithm achieves the best results, which closely resemble the reference images for disease calibration.

User evaluation

In this subsection, we conduct a user evaluation of the calibration results of all comparative algorithms. We randomly choose 20 original mural images and their corresponding calibration results for this test. Ten volunteers are invited to observe and rate the calibration results of all comparative algorithms. The user ratings are classified into three levels: unsatisfactory (\({\mathrm{\times }}\)), basically satisfactory (\(-\)), and satisfactory (\(\surd\)).

The rating results from the ten volunteers are given in Table 1. It can be seen that our algorithm obtains nine satisfactory ratings and one basically satisfactory rating, which is much better than the performance of the three other comparative approaches. This test shows that the proposed algorithm has a powerful ability for disease calibration and can meet user requirements.

Quantitative evaluation

In the previous subsection, the experimental results prove that our algorithm achieves visually satisfactory results. In this subsection, we also compare our algorithm with the three other approaches of mural disease calibration by use of three objective evaluation metrics: Precision, Recall, and F-measure [34, 35]. Precision measures the exactness of mural disease calibration, whereas Recall describes the completeness of mural disease calibration. F-measure is the harmonic mean of Precision and Recall. The range of F-measure is [0, 1]; the algorithm achieves the best mural disease calibration results when the F-measure is equal to 1 and the worst when it is 0. The definitions of Precision, Recall, and F-measure are as follows:

$$Precision = \frac{{TP}}{{TP + FP}}$$
(20)
$$Recall = \frac{{TP}}{{TP + FN}}$$
(21)
$$F{\text{-}}measure = \frac{{2 \times Precision \times Recall}}{{Precision + Recall}}$$
(22)

where TP (true positive) means that disease areas in the ground truth are correctly calibrated as disease pixels. FP (false positive) means that non-disease areas in the ground truth are mistakenly identified as disease areas. FN (false negative) means that disease areas in the ground truth are incorrectly recognized as non-disease areas. The evaluation procedure employs pixel-by-pixel comparisons. High scores on the three metrics mean that an algorithm achieves good calibration results.
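
The three metrics can be computed pixel by pixel from a predicted mask and the expert-annotated ground truth as sketched below; the function name is illustrative.

```python
# Pixel-wise Precision, Recall and F-measure, following Eqs. (20)-(22).
import numpy as np

def calibration_metrics(pred_mask, gt_mask):
    """Both masks are boolean (or 0/255) arrays of the same shape."""
    pred = np.asarray(pred_mask) > 0
    gt = np.asarray(gt_mask) > 0
    tp = np.logical_and(pred, gt).sum()          # disease pixels correctly calibrated
    fp = np.logical_and(pred, ~gt).sum()         # non-disease pixels marked as disease
    fn = np.logical_and(~pred, gt).sum()         # disease pixels that were missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return precision, recall, f_measure
```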

Table 2 Comparison of the computational time of different algorithms

We collect 62 images of the Zhilin Temple murals as the test dataset, and evaluate the calibration performance of the four comparative algorithms by use of the three quantitative metrics. The comparison results are illustrated in Fig. 16. It can be seen that our proposed method achieves the highest scores on all three evaluation metrics. The approach of Cao et al. [12] has the worst performance in this test. Note that our proposed method obtains a much higher F-measure score than the three other comparative algorithms.

Fig. 16 Comparison of the performance of four algorithms

Calibration speed comparison

In this subsection, we compare the computational time of the four algorithms. For this test, we randomly select six mural images with various sizes. Then, the four comparative approaches are used to calculate the masks of the disease areas of the murals, and their computational time for each image is recorded in Table 2. All algorithms are run in the same experimental environment. As can be seen, the approach proposed by Cao et al. [12] takes the longest time when dealing with large areas of flaking deterioration. Since the approach of Ref. [12] needs to perform region growing on the R, G, and B channels, it is not suitable for real-time calibration of ancient murals. Although the approach of Jaidilert et al. [13] can rapidly calculate the masks of the mural diseases, its calibration results are somewhat poor. Compared with the three other approaches, our algorithm takes less computational time while achieving better calibration results.

Conclusion

Automatic identification and calibration of the diseases in ancient murals is a challenging task. The main challenge is to accurately calculate the masks of the mural disease areas. A calibration algorithm for mural diseases can provide the masks of disease regions for digital mural restoration, and it also reduces the manual work of mural protection. This paper focuses on automatic calibration of the cracks and earth layer flaking deterioration of the Zhilin Temple murals. First, we carefully observe the main types of diseases affecting the original appearance of the murals and analyze the characteristics of the mural disease regions in depth. Then we propose an algorithm based on multi-dimensional gradient detection, guided filtering and tensor voting to calibrate the cracks and the flaking deterioration of ancient murals. Experiments on 62 Zhilin Temple murals show that the proposed algorithm can accurately calibrate the cracks and earth layer flaking diseases of these murals. It can provide accurate disease masks for digital mural restoration. Compared with three existing approaches of mural disease calibration, our algorithm does not need human-computer interaction, and achieves better results in terms of visual effect, objective evaluation, and calculation speed.

It should be mentioned that the painting styles and damage conditions vary significantly for the murals in different places. Therefore, the algorithm of mural disease calibration is studied and designed for specific deteriorated murals. Although this paper takes the Ming Dynasty Murals in Zhilin Temple as the research object, the research work can provide a technical idea and reference for the protection of ancient murals in other places of China and all over the world. In the future, on the basis of the work in this paper, we will further study the automatic calibration technology for the disease areas of ancient murals in other places. According to the characteristics of mural diseases in different places, we will consider utilizing the prior knowledge to guide the calibration algorithm and explore a more generalized mural disease calibration model.

The proposed algorithm has proved efficient in calibrating cracks and falling-off diseases in ancient murals. However, it has no learning capability to adapt to various types of mural deterioration. In the future, we will consider developing a more intelligent calibration system based on machine learning or deep learning algorithms.

Availability of data and materials

The datasets used and/or analyzed in the current study are available from the corresponding author on reasonable request.

Abbreviations

TV: Tensor voting
2-D: Two-dimensional
HSV: Hue, saturation, value
RGB: Red, green, blue

References

  1. He D, Wu F, Ma W, et al. Insights into the bacterial and fungal communities and microbiome that causes a microbe outbreak on ancient wall paintings in the Maijishan Grottoes. Int Biodeterior Biodegrad. 2021;163: 105250. https://doi.org/10.1016/j.ibiod.2021.105250.

  2. Ma W, Wu F, Tian T, et al. Fungal diversity and its contribution to the biodeterioration of mural paintings in two 1700-year-old tombs of China. Int Biodeterior Biodegrad. 2020;152: 104972. https://doi.org/10.1016/j.ibiod.2020.104972.

  3. Bomin S, Huabing Z, Binjian Z, et al. A scientific investigation of five polymeric materials used in the conservation of murals in Dunhuang Mogao Grottoes. J Cult Herit. 2018;31:105–11. https://doi.org/10.1016/j.culher.2018.01.002.

  4. Liu C, He Y, Li Q, et al. Study on the causes of secondary cracks of the eave wall mural of Daxiong Hall at Fengguo Temple in Yixian, Liaoning, China. Herit Sci. 2021;9(1):1–14. https://doi.org/10.1186/s40494-021-00601-5.

  5. Sakr A, Tawab NA, Mahmoud A, et al. New insights on plasters, pigments and binder in mural paintings of the Setka tomb (QH 110), Elephantine, Aswan, Upper Egypt. Spectrochim Acta Part A Mol Biomol Spectrosc. 2021;263: 120153. https://doi.org/10.1016/j.saa.2021.120153.

  6. Xu H, Kang J-m, Zhang J-w. Digital mural inpainting method based on feature perception. Comput Sci. 2022;49:217–23 (in Chinese).

  7. Cao J, Zhang Z, Zhao A, et al. Ancient mural restoration based on a modified generative adversarial network. Herit Sci. 2020;8(1):1–14. https://doi.org/10.1186/s40494-020-0355-x.

  8. Mol VR, Maheswari PU. The digital reconstruction of degraded ancient temple murals using dynamic mask generation and an extended exemplar-based region-filling algorithm. Herit Sci. 2021;9(1):1–18. https://doi.org/10.1186/s40494-021-00604-2.

  9. Jiao LJ, Wang WJ, Li BJ, Zhao QS. Wutai mountain mural inpainting based on improved block matching algorithm. J Comput Aid Design Comput Graph. 2019;31:119–25 (in Chinese).

  10. Cao N, Lyu S, Hou M, et al. Restoration method of sootiness mural images based on dark channel prior and Retinex by bilateral filter. Herit Sci. 2021;9(1):1–19. https://doi.org/10.1186/s40494-021-00504-5.

  11. Li J, Wang H, Deng Z, et al. Restoration of non-structural damaged murals in Shenzhen Bao’an based on a generator-discriminator network. Herit Sci. 2021;9(1):1–14. https://doi.org/10.1186/s40494-020-00478-w.

  12. Cao J, Li Y, Cui H, et al. Improved region growing algorithm for the calibration of flaking deterioration in ancient temple murals. Herit Sci. 2018;6(1):1–12. https://doi.org/10.1186/s40494-018-0235-9.

  13. Jaidilert S, Farooque G. Crack detection and image inpainting method for Thai mural painting images. In: 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC). IEEE; 2018. p. 143–8. https://doi.org/10.1109/ICIVC.2018.8492735.

  14. Yang T, Wang S, Pen H, et al. Automatic identification and inpainting of cracks in mural images based on improved SOM. J Tianjin Univ Sci Technol. 2020;53(9):932. https://doi.org/10.11784/tdxbz201907054.

  15. Wu M, Wang HQ, Li WY. Research on multi-scale detection and image inpainting of Tang dynasty tomb murals. Comput Eng Sci. 2016;52:169–74 (in Chinese).

  16. Zhang H, Xu D, Luo H, et al. Multi-scale mural restoration method based on edge reconstruction. J Graph. 2021;42(4):590.

  17. Zhang Z, Shui W, Zhou M, Xu B, Zhou H. Research on disease extraction and inpainting algorithm of digital grotto murals. Appl Res Comput. 2021;38(8):2495–24982504 (in Chinese).

  18. Xiong Z. The main hall of Zhilin Temple in Jianshui, Yunnan. Cult Relics. 1986;07:47–9 (in Chinese).

  19. Li S, Wang M, Huang B, Wang F, Qiu J. Study on wood species identification and configuration of wood components in the hall of Jianshui Zhilin temple. Sci Conserv Archaeol. 2020;32(03):91–8 (in Chinese).

  20. Di Zenzo S. A note on the gradient of a multi-image. Comput Vision Graph Image Process. 1986;33(1):116–25. https://doi.org/10.1016/0734-189X(86)90223-9.

  21. He K, Sun J, Tang X. Guided image filtering. European conference on computer vision. Berlin: Springer; 2010. p. 1–14.

  22. Pashaei E. Medical image enhancement using guided filtering and chaotic inertia weight black hole algorithm. In 2021 5th international symposium on multidisciplinary studies and innovative technologies (ISMSIT). IEEE;2021:37–42. https://doi.org/10.1109/ISMSIT52890.2021.9604701.

  23. Chen G, Wang S, Shang K. Infrared and visible image fusion based on rolling guided filter and ResNet101. In 2021 international conference on electronic information engineering and computer science (EIECS). IEEE. 2021;2021:248–51. https://doi.org/10.1109/EIECS53707.2021.9588013.

  24. Soni B, Mathur P. An improved image dehazing technique using CLAHE and guided filter. In 2020 7th international conference on signal processing and integrated networks (SPIN). IEEE. 2020;2020:902–7. https://doi.org/10.1109/SPIN48934.2020.9071296.

  25. Singh H, Kommuri SVR, Kumar A, et al. A new technique for guided filter based image denoising using modified cuckoo search optimization. Expert Syst Appl. 2021;176: 114884. https://doi.org/10.1016/j.eswa.2021.114884.

  26. Otsu N. A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern. 1979;9(1):62–6.

  27. Guy G, Medioni G. Inference of surfaces, 3D curves, and junctions from sparse, noisy, 3D data. IEEE Trans Pattern Anal Mach Intell. 1997;19(11):1265–77. https://doi.org/10.1109/34.632985.

  28. Medioni G, Tang C K, Lee M S. Tensor voting: theory and applications. Proceedings of RFIA. 2000. https://doi.org/10.1109/TPAMI.2011.250.

  29. Li B, Wang KCP, Zhang A, et al. Automatic segmentation and enhancement of pavement cracks based on 3D pavement images. J Adv Transp. 2019. https://doi.org/10.1155/2019/1813763.

  30. Soni PK, Rajpal N, Mehta R. Road network extraction using multi-layered filtering and tensor voting from aerial images. Egypt J Remote Sens Space Sci. 2021;24(2):211–9. https://doi.org/10.1016/j.ejrs.2021.01.004.

  31. Liu K, Yan H, Meng K, et al. Iterating tensor voting: a perceptual grouping approach for crack detection on el images. IEEE Trans Autom Sci Eng. 2020;18(2):831–9. https://doi.org/10.1109/TASE.2020.2988314.

  32. Li Z, Lin A, Yang X. Left ventricle segmentation by combining convolution neural network with active contour model and tensor voting in short-axis MRI. In 2017 IEEE international conference on bioinformatics and biomedicine (BIBM). IEEE. 2017;2017:736–9. https://doi.org/10.1109/BIBM.2017.8217746.

  33. Liu Z, Xiao X, Zhong S, et al. A feature-preserving framework for point cloud denoising. Comput Aided Des. 2020;127: 102857. https://doi.org/10.1016/j.cad.2020.102857.

  34. Davis J, Goadrich M. The relationship between Precision-Recall and ROC curves. In proceedings of the 23rd international conference on Machine learning. 2006;233–40. https://doi.org/10.1145/1143844.1143874.

  35. Peng C, Yang M, Zheng Q, et al. A triple-thresholds pavement crack detection method leveraging random structured forest. Constr Build Mater. 2020;263: 120080. https://doi.org/10.1016/j.conbuildmat.2020.120080.

Acknowledgements

None.

Funding

This research was supported by the National Natural Science Foundation of China (Grant No. 62166048, Grant No. 61263048), by the Applied Basic Research Project of Yunnan Province (Grant No. 2018FB102), and by the Postgraduate Research and Innovation Foundation of Yunnan University (2021Y258).

Author information

Contributions

All authors listed have made a substantial, direct, and intellectual contribution to the work and approved it for publication. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Ying Yu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Deng, X., Yu, Y. Automatic calibration of crack and flaking diseases in ancient temple murals. Herit Sci 10, 163 (2022). https://doi.org/10.1186/s40494-022-00799-y
