Extraction and restoration of scratched murals based on hyperspectral imaging—a case study of murals in the East Wall of the sixth grotto of Yungang Grottoes, Datong, China

Abstract

Restoring the various kinds of deterioration found on murals is an urgent task, given the growing awareness of the need to protect cultural relics. Virtual restoration starts with an accurate extraction of the deterioration. Because murals carry intricate information, it is challenging to precisely extract scratches from them. In this paper, hyperspectral images are used to accentuate the scratches on a mural. First, an information enhancement technique combining a Principal Component Analysis (PCA) transformation with a high-pass filter was proposed. Second, the deterioration information was extracted from the enhanced result using a multi-scale bottom-hat transformation, Otsu threshold segmentation, and a non-deterioration mask. Third, morphological transformation and connected component analysis were used to denoise the extracted results. Finally, the scratched image was repaired using an improved exemplar-based region filling method. The deterioration extraction results under different enhancement methods were discussed, and the proposed extraction method was contrasted with other deterioration extraction methods; the proposed method greatly increased extraction accuracy. We also assessed the accuracy of several virtual restoration techniques and found that the proposed restoration method maintained the structural integrity of the mural's information well.

Introduction

In this paper, we concentrate on the extraction and virtual restoration of scratches on murals. Murals are among the most significant cultural artifacts and have high aesthetic value. Nevertheless, over time their surfaces have begun to show signs of deterioration. Scratches are one of the most prevalent types of mural deterioration. They are primarily caused by external forces acting destructively on the surface of the mural. Because scratches typically appear in the image as long, narrow lines, they are easily confused with the linear elements painted in the mural. Scratches can be classified as scratches of the paint layer or scratches in other layers.

Currently, the virtual restoration process for deterioration can be roughly divided into three steps: information enhancement, information extraction, and virtual restoration. Information enhancement primarily consists of adjusting the local contrast between deterioration features and background data so that the deterioration can be extracted more precisely. Cornelis et al. [1], working on digital images, used an improved local contrast enhancement method to enhance the contrast between cracks and background information. Huang et al. [2] divided the image into low-frequency and high-frequency components through total variation (TV) model decomposition to enhance its structure and texture information. Based on hyperspectral images, Sun et al. [3] used PCA and a two-dimensional gamma transform to enhance the scratch information in a mural.

Information extraction refers to the automatic extraction of deterioration from the image. The extraction of scratches is also commonly referred to as the extraction of ridge and valley lines [4]. The traditional approach involves manually outlining the degraded areas on the murals, which is of limited use in real projects where deterioration covers large areas. Automatic detection and delineation of linear deterioration typically employs segmentation methods to determine the pixels requiring restoration, paving the way for subsequent virtual restoration of the identified areas [5]. At present, the detection of deterioration on murals largely relies on the spatial information within images. Detection methods can be categorized into four types [6]: integrated algorithms, morphological approaches, percolation-based methods, and practical methods. Filter-based methods are currently the most widely employed for deterioration detection on murals [7]. However, because linear deterioration on murals takes diverse shapes, traditional morphological filtering methods struggle to achieve precise extraction. Furthermore, when the background information is complicated, it can be difficult to distinguish deterioration from parts of the mural with similar color patterns. Cornelis et al. [1] utilized a multi-scale top-hat transform to extract crack deterioration on oil paintings, but did not account for the influence of same-colored background lines on the extraction results. Salinee et al. [8] utilized a seed-based region growing algorithm to extract crack information from Thai murals; the extracted crack details were overly simplistic, background interference was not considered, and the quality of the extraction relied heavily on the initial selection of seed points. Deng et al. [9] converted RGB images of murals into HSV images, noticing that defects such as cracks exhibited higher saturation in the S channel, and used a multidimensional gradient detection algorithm to extract crack and detachment defects; however, this method is only suitable for high-saturation defects that are easily distinguishable from the background. Cao et al. [10] categorized loss on murals into paint loss and deep loss and proposed a comprehensive threshold segmentation and improved seed-point growth method for extracting the different types of loss, but their method still requires manual determination of segmentation thresholds. Rakhi et al. [11] digitized images with cracks and generated an image mask by thresholding pixel intensity values. In addition to traditional algorithms, many scholars also use deep learning to extract deterioration information from murals. Sizyakin et al. [12] achieved automatic recognition of cracks in paintings based on multimodal data. Quan et al. [13] employed an enhanced U-Net to extract cracks from painted artifacts. However, because cracks take diverse shapes, a single network may struggle to guarantee effectiveness across all of them.

With regard to the different types of damage on murals, existing restoration methods can be classified into two categories: restoring completely missing information and removing redundant information. The virtual restoration of linear deterioration such as scratches belongs to the former category. Current image virtual restoration methods comprise three main approaches: traditional algorithms, deep learning, and image decomposition. According to their restoration principles, traditional algorithms can be further divided into diffusion-based and exemplar-based image restoration methods. Pei et al. [14] utilized an exemplar-based Markov random field model and proposed completing structural information before filling in textural details to restore missing information in paintings; however, the restoration relies heavily on structural guidance and requires manual completion of the structural information. Pulak et al. [15] introduced a constraint-based exemplar restoration algorithm to repair missing information on artifacts, which significantly improved the efficiency of restoring extensive losses. Zhou et al. [16] addressed stained paintings and calligraphy by using hyperspectral images containing characteristic features of the deterioration to guide image restoration, resolving the issue of discontinuous background structure during restoration. Huang et al. [17] decomposed Dunhuang murals into structural and textural components and performed virtual restoration with the TV algorithm for the structural part and the Criminisi algorithm for the textural part; however, because this algorithm decomposes the image into L-component grayscale images, its virtual restoration capability is limited to grayscale mural images. Jia et al. [18] decomposed hyperspectral images of oil paintings into structural and textural components using VO decomposition and employed different methods for the virtual restoration of each part. Rakhi et al. [11] divided image restoration into structural reconstruction and texture completion, achieving successful applications in the virtual restoration of Indian murals. In recent years, a trend in deep-learning-based image restoration has been to complete structural information first and then fill in textural details. Yurui et al. [19] addressed extensive missing information in paintings and calligraphy by completing structure before filling texture, conducting virtual restoration of large-scale losses. Deng et al. [20] emphasized the significance of structural information in the virtual restoration of murals by introducing a structure-guided dual-branch image restoration model, but this model still needs a large amount of real mural data to train the network. Zhou et al. [21] used the structure of Dunhuang murals to guide the whole restoration process and resolved the problem of incorrect colors.

At present, the most widely used method in image inpainting is the exemplar-based region filling algorithm. The essence of exemplar-based matching methods lies in applying texture synthesis from sample regions of the image during restoration; the methods differ mainly in how the priority of sample blocks is calculated. Criminisi et al. [22] proposed determining the priority of restoration blocks from a data term and a confidence term, and this priority calculation strategy has been well studied. Xu et al. [23] utilized the p-Laplacian operator to compute the data term, and Meur et al. [24] proposed a data term based on structure tensors. The aim of these methods is to preserve the continuity of image structures in the restoration result as far as possible, meeting the requirements of visual psychology. Therefore, the key to exemplar-based image restoration lies in how the data term is computed and how priorities are derived from it, so that the structural features of the restored image are preserved.

In summary, previous extraction methods have not fully solved the problem of applicability when extracting large-scale deterioration on murals, and when restoring linear deterioration, current virtual restoration techniques frequently produce discontinuous results. We present a set of efficient scratch information enhancement and extraction techniques based on hyperspectral data to address these problems. During restoration, the structural component of the image is used to make the result more continuous through an improved exemplar-based region filling method. The main contributions of this paper are as follows:

First, we propose a novel system for identifying scratched parts in ancient murals using hyperspectral data. Automatic detection of scratched parts in ancient murals is a very challenging task. Manual creation of masks gives better results but is a tedious and time-consuming process, while traditional detection methods can generate masks automatically but are easily affected by background information in the mural. The proposed deterioration mask generation method effectively identifies the scratched patches by combining enhancement and extraction methods.

Second, a new strategy was developed to restore the scratched parts using a structure-guided exemplar-based region filling algorithm. The proposed method can effectively restore the scratched patches and solves the problem of discontinuous mural edges that arises with traditional methods.

Third, the proposed method was compared with state-of-the-art extraction and restoration methods on a real mural and a simulated mural. The final results illustrate the superiority of the proposed method in terms of accuracy and efficiency.

The paper is organized as follows. Section “Methods” explains the data, preprocessing, information enhancement, information extraction and virtual restoration. The final extracted and restored results are provided in Section “Results”. Section “Discussion” compares the proposed method with the state-of-the-art methods. Finally, the conclusion is given in Section “Conclusion”.

Methods

The Yungang Grottoes are located in Datong, Shanxi Province, China. The sixth grotto was excavated between October 13, 467 and April 26, 499, during the reign of Emperor Xiaowen of the Northern Wei Dynasty. Among all the Yungang Grottoes, the sixth grotto is the most magnificent and is unmatched by any other grotto of the same period. On the east wall of the sixth grotto there is a mural depicting nine Buddha arhats, measuring 3.8 m in height and 5.6 m in length. However, as Fig. 1 illustrates, the mural bears noticeable large scratches caused by human factors. The hyperspectral image of the study region is shown in Fig. 2.

Fig. 1. An example of the deterioration in the eastern mural of the sixth grotto

Fig. 2. The hyperspectral image of the study region

The general workflow is illustrated in Fig. 3. First, the original mural hyperspectral data were pre-processed: after the PCA transformation, the first component, which concentrates the most information, was high-pass filtered. To guarantee that all texture information was preserved, a mask of the non-deterioration area was created for the deterioration extraction step. Second, the deterioration information was extracted using the multi-scale bottom-hat transformation, and the Otsu algorithm was applied to separate the deterioration from the background. To increase the overall extraction accuracy, the resulting binary deterioration map was then denoised, filtering out noise regions below a predetermined area threshold, and morphological transformation was applied to guarantee the continuity of the deterioration area. Finally, the damaged image was virtually restored using the improved exemplar-based region filling method.

Fig. 3. The overall workflow of the method

Data pre-processing

The THEMIS-VNIR/400H hyperspectral image analysis system (Themis Vision Systems, USA), with an image size of 1392 × 1000 pixels and a sampling interval of 0.6 nm, was used to gather data from the experimental area. Its spectral resolution was 2.8 nm, and images were collected in 1040 bands ranging from 377.45 nm (visible light) to 1033.10 nm (near-infrared).

During data acquisition with the hyperspectral imaging system, the results are affected by varying ambient illumination and dark-current noise. The influence of such noise can be reduced by reflectance correction. The correction formula is:

$$R = \frac{R_{raw} - R_{dark}}{R_{white} - R_{dark}} \times 99\%$$
(1)

where R is the reflectance, \(R_{raw}\) is the collected hyperspectral data, \(R_{dark}\) is the dark current data, and \(R_{white}\) is the standard reflector data. The reflectance of a standard reflector is 99%.
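The correction in Eq. (1) is a simple per-pixel operation. The sketch below is a minimal NumPy implementation, assuming the raw cube, dark-current frame, and white-reference frame have already been loaded as arrays of compatible shape (the function and variable names are illustrative, not part of the original workflow):

```python
import numpy as np

def reflectance_correction(raw_cube, dark_frame, white_frame, eps=1e-6):
    """Apply Eq. (1): convert raw digital numbers to reflectance.

    raw_cube    : (rows, cols, bands) raw hyperspectral data, R_raw
    dark_frame  : dark-current measurement R_dark, broadcastable to raw_cube
    white_frame : measurement of the 99% standard reflector, R_white
    """
    raw = raw_cube.astype(np.float64)
    dark = dark_frame.astype(np.float64)
    white = white_frame.astype(np.float64)
    # Dark-corrected signal divided by the dark-corrected white reference,
    # scaled by the 99% reflectance of the standard panel (Eq. 1).
    reflectance = (raw - dark) / (white - dark + eps) * 0.99
    return np.clip(reflectance, 0.0, 1.0)
```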

Information enhancement

It is very challenging to extract scratch information in the low-contrast areas of the mural because the scratches are very similar to the background line texture. To improve the contrast between the texture and the deterioration information, an information enhancement pre-processing step is added before information extraction.

PCA and high-pass filter

We performed the forward PCA transformation on the hyperspectral image using ENVI 5.3 (Exelis Visual Information Solutions, USA) to concentrate the information into a limited set of components by separating signal and noise, and obtained the information contribution of the top ten components. PCA reduces the volume of data by transforming the original data into a lower-dimensional subspace in which the image is rearranged as a decreasing function of its spectral information. It computes the covariance matrix of the original data, finds the corresponding eigenvectors, projects the image onto these eigenvectors, and chooses how many principal components to keep [25]. The first principal component contains the largest percentage of the variance in the data, the second contains the second largest, and so on; the last principal component bands appear as noise because they contain very little variance (mostly caused by noise in the original spectra). The standard deviations of the top ten components are shown in Fig. 4. After examining the top ten components, the first component, which carries the most deterioration information, was chosen for high-pass filtering enhancement.

Fig. 4. The standard deviation of the ten top components

The high-pass filter removes the low-frequency component of the image and amplifies the high-frequency component. High-pass filters are frequently used to sharpen an image by enhancing its texture and edge information [26].
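To make the enhancement step reproducible outside ENVI, a minimal sketch is given below: PCA is applied to the reflectance cube and the first component is sharpened by adding back its high-frequency residual. The use of scikit-learn's PCA, the Gaussian kernel size, and the unsharp-style high-pass step are assumptions for illustration, not the exact ENVI settings used in this study.

```python
import numpy as np
import cv2
from sklearn.decomposition import PCA

def first_pc_highpass(reflectance_cube, n_components=10):
    """Return the first principal component image and its high-pass enhanced version."""
    rows, cols, bands = reflectance_cube.shape
    X = reflectance_cube.reshape(-1, bands)
    # Forward PCA: the first component concentrates the largest variance.
    pcs = PCA(n_components=n_components).fit_transform(X)
    pc1 = pcs[:, 0].reshape(rows, cols)
    # Normalize to 8 bits for the later morphological processing.
    pc1 = cv2.normalize(pc1, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # High-pass enhancement: remove a low-pass (Gaussian) version and add the
    # remaining high-frequency detail back to sharpen edges and texture.
    low = cv2.GaussianBlur(pc1, (5, 5), 0)
    high = cv2.subtract(pc1, low)
    enhanced = cv2.add(pc1, high)
    return pc1, enhanced
```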

Information extraction

Morphological filters are currently widely used to extract linear deterioration. Based on the characteristics of the deterioration information preserved in the enhanced image, the bottom-hat transformation is chosen to extract the deterioration. The bottom-hat result is obtained by subtracting the original image from its morphological closing, so the darker gray areas of the original image are highlighted. The size and inherent properties of the deterioration to be detected should be taken into consideration when choosing the structuring element for the morphological closing.

Non-damaged area mask

It is challenging to completely separate the painted line information from the deterioration information through image pre-processing alone. Thus, a mask of the non-deterioration area is created on the pre-processed image to lessen the impact of background information. ENVI software is used to create the mask of the background portions that are free of deterioration.

Multi-scale bottom hat transformation

To reduce the influence on the extracted information of noise caused by the texture lines in the mural, the multi-scale bottom-hat transformation [27] was applied to extract the deterioration information.

In addition to lessening the impact of noise on the extraction results, the multi-scale detection framework retains more precise information. The multi-scale bottom-hat transformation uses square structuring elements whose size ranges from 3 × 3 to n × n, with n depending on the width of the texture or scratch information to be detected. Extremely small scratches in the image can be extracted by selecting small structuring elements, while other structuring elements do not respond to this fine information. As shown in Fig. 5, the bottom-hat transformation is applied to the enhanced image and the information is extracted at each scale using Otsu threshold segmentation. The results for structuring elements of size 3, 4, and 5 are added to obtain a base map that contains most of the linear information. Here, n is set to 10 to account for the scratch width in this study. Because the base map contains small noise, the noise is removed by multiplying the results of the other structuring elements with the base map to obtain the final scratch map.

Fig. 5. Multi-scale morphological bottom-hat workflow
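As a concrete illustration of the workflow in Fig. 5, the sketch below implements the multi-scale bottom-hat step with OpenCV, assuming an 8-bit enhanced image. The fusion of the base map (scales 3 to 5) with the larger-scale responses follows the description above; the function and parameter names are illustrative.

```python
import numpy as np
import cv2

def multiscale_bottom_hat(enhanced, min_size=3, max_size=10):
    """Bottom-hat (closing minus original) at square structuring elements of
    increasing size, each thresholded with Otsu, then fused into a scratch map."""
    binaries = {}
    for k in range(min_size, max_size + 1):
        se = cv2.getStructuringElement(cv2.MORPH_RECT, (k, k))
        bh = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, se)
        _, bw = cv2.threshold(bh, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        binaries[k] = bw
    # Base map: responses at the smallest scales (3, 4, 5) added together,
    # which keeps most of the linear information.
    base = cv2.bitwise_or(cv2.bitwise_or(binaries[3], binaries[4]), binaries[5])
    # Responses at the larger scales confirm the base map: multiplying (ANDing)
    # them with the base map suppresses small isolated noise.
    confirm = np.zeros_like(base)
    for k in range(6, max_size + 1):
        confirm = cv2.bitwise_or(confirm, binaries[k])
    return cv2.bitwise_and(base, confirm)
```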

Otsu segmentation

The Otsu method is an adaptive threshold segmentation algorithm for image gray levels, which divides the image into foreground and background according to the distribution of gray values. The foreground is what we want to segment, and the boundary between background and foreground is the threshold we seek. The between-class variance of the background and foreground is calculated for every candidate threshold; the threshold at which the between-class variance reaches its maximum is the one returned by the Otsu method.

Suppose the number of pixels in an image is M × N. \(N_0\) is the number of foreground pixels, whose gray values are below the segmentation threshold T and whose average gray value is \(u_0\). \(N_1\) is the number of background pixels, whose gray values are above the threshold T and whose average gray value is \(u_1\). The proportions of foreground and background pixels in the whole image are:

$$\omega_0 = \frac{N_0 }{{M \times N}}$$
(2)
$$\omega_1 = \frac{N_1 }{{M \times N}}$$
(3)

The total average gray level of the image is denoted as u and the variance between classes is denoted as g.

$$u = \omega_0 \times u_0 + \omega_1 \times u_1$$
(4)
$$g = \omega_0 (u_0 - u)^2 + \omega_1 (u_1 - u)^2$$
(5)
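The threshold that maximizes the between-class variance g in Eq. (5) can be found by exhaustive search over the gray levels. The sketch below is a didactic NumPy implementation for an 8-bit image; in practice cv2.threshold with the THRESH_OTSU flag gives the same result.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold T maximizing the between-class variance of Eqs. (2)-(5)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()          # normalized gray-level histogram
    levels = np.arange(256, dtype=np.float64)
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()        # Eqs. (2) and (3)
        if w0 == 0.0 or w1 == 0.0:
            continue
        u0 = (levels[:t] * prob[:t]).sum() / w0        # mean gray value below T
        u1 = (levels[t:] * prob[t:]).sum() / w1        # mean gray value above T
        u = w0 * u0 + w1 * u1                          # Eq. (4)
        g = w0 * (u0 - u) ** 2 + w1 * (u1 - u) ** 2    # Eq. (5)
        if g > best_g:
            best_g, best_t = g, t
    return best_t
```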

Denoising

The fine noise spots in the image were removed using the connected-domain labeling method, applied after the scratch map was obtained. Each white pixel in the binary image is traversed using 4-neighbor connectivity, and noise regions that do not meet the connected-domain area threshold are filtered out.

The white pixels of the scratch map are discontinuous after labeling. To make the deterioration information continuous for the subsequent restoration work, dilation from the morphological transformations is adopted to address this issue.
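A sketch of the denoising step with OpenCV is given below, combining 4-connected component labeling with an area filter and a final dilation. The area threshold of 50 and the structuring-element size of 6 are the values reported later in the Results section; otherwise the names are illustrative.

```python
import numpy as np
import cv2

def denoise_and_dilate(scratch_map, min_area=50, dilate_size=6):
    """Remove small connected components (noise) and dilate the remaining scratches."""
    # Label white regions with 4-connectivity and keep only components whose
    # area reaches the threshold; smaller regions are treated as noise.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(scratch_map, connectivity=4)
    cleaned = np.zeros_like(scratch_map)
    for i in range(1, n):                      # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    # Dilation reconnects the remaining scratch fragments so the mask is continuous.
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (dilate_size, dilate_size))
    return cv2.dilate(cleaned, se)
```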

Virtual restoration

Traditional exemplar-based algorithms restore the degraded part mainly by finding the most suitable exemplar, but this sometimes results in incorrect texture filling. When calculating the data term, existing exemplar-based methods try to extract the structural features from the original image as fully as possible [28]. However, natural images are generally composed of structure and texture, and the texture components of many images contain strong local structural features, which makes it very difficult to extract the main sketch structure of the image. In this study, relative total variation was introduced to extract the main sketch structure of the image, and the structural features were used to guide the exemplar-based image restoration method to solve the problem of discontinuity.

Relative total variation

An image can be decomposed into a structural component and a textural component. Structure usually refers to the major sketch features of the image after its details are filtered out [29], while texture usually refers to surface patterns that are similar in appearance and local statistics [28]. Relative total variation (RTV) contains a general pixel-wise windowed total variation measure, written as:

$${\rm{\mathfrak{D}}}_x \left( p \right) = \sum \limits_{q \in R(p)} g_{p,q} \cdot \left| {(\partial_x S)_q } \right|$$
(6)
$${\rm{\mathfrak{D}}}_y \left( p \right) = \sum \limits_{q \in R(p)} g_{p,q} \cdot \left| {(\partial_y S)_q } \right|$$
(7)

where q belongs to R(p), the rectangular region centered at pixel p. \({\rm{\mathfrak{D}}}_x \left( p \right)\) and \({\rm{\mathfrak{D}}}_y \left( p \right)\) are the windowed total variations in the x and y directions for pixel p, which count the absolute spatial differences within the window R(p). \(g_{p,q}\) is a weighting function defined according to spatial affinity:

$$g_{p,q} \propto {\text{exp}}( - \frac{(x_p - x_q )^2 + (y_p - y_q )^2 }{{2\sigma^2 }})$$
(8)

where \(\sigma\) controls the spatial scale of the window.

To help distinguish prominent structures from texture elements, the method also contains, besides \({\rm{\mathfrak{D}}}\), a windowed inherent variation, expressed as

$${\rm{\mathcal{L}}}_x \left( p \right) = \left| { \sum \limits_{q \in R(p)} g_{p,q} \cdot (\partial_x S)_q } \right|$$
(9)
$${\rm{\mathcal{L}}}_y \left( p \right) = \left| { \sum \limits_{q \in R(p)} g_{p,q} \cdot (\partial_y S)_q } \right|$$
(10)

where \({\rm{\mathcal{L}}}\) captures the overall spatial variation.

To further enhance the contrast between texture and structure, especially for visually salient regions, \({\rm{\mathcal{L}}}\) and \({\rm{\mathfrak{D}}}\) are combined to form a more effective regularizer for structure-texture decomposition. The objective function is finally expressed as

$$\mathop{\arg\min}\limits_S \sum\limits_p (S_p - I_p)^2 + \lambda \cdot \left(\frac{{\rm{\mathfrak{D}}}_x \left( p \right)}{{\rm{\mathcal{L}}}_x \left( p \right) + \varepsilon} + \frac{{\rm{\mathfrak{D}}}_y \left( p \right)}{{\rm{\mathcal{L}}}_y \left( p \right) + \varepsilon}\right)$$
(11)

where the fidelity term \((S_p - I_p)^2\) prevents the result from deviating wildly from the input.
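The two windowed measures can be approximated with Gaussian-weighted sums, which is enough to see why the ratio in Eq. (11) separates texture from structure: oscillating texture gradients cancel inside \({\rm{\mathcal{L}}}\) but not inside \({\rm{\mathfrak{D}}}\). The sketch below only computes these per-pixel terms; the full RTV decomposition additionally requires the iterative linear solver that minimizes Eq. (11), which is omitted here.

```python
import numpy as np
import cv2

def rtv_windowed_measures(S, sigma=3.0, eps=1e-3):
    """Windowed total variation D (Eqs. 6-7) and inherent variation L (Eqs. 9-10)
    of a grayscale image S, with a Gaussian window g_{p,q} of scale sigma."""
    S = S.astype(np.float64)
    # Forward differences as the partial derivatives of S.
    dx = np.diff(S, axis=1, append=S[:, -1:])
    dy = np.diff(S, axis=0, append=S[-1:, :])
    blur = lambda img: cv2.GaussianBlur(img, (0, 0), sigma)
    # D: Gaussian-weighted sum of |gradient| inside the window R(p).
    Dx, Dy = blur(np.abs(dx)), blur(np.abs(dy))
    # L: |Gaussian-weighted sum of the signed gradient| inside R(p).
    Lx, Ly = np.abs(blur(dx)), np.abs(blur(dy))
    # Per-pixel RTV penalty, the regularizer of Eq. (11): large in textured
    # regions, small along coherent structures.
    penalty = Dx / (Lx + eps) + Dy / (Ly + eps)
    return Dx, Dy, Lx, Ly, penalty
```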

Improved exemplar-based region filling algorithm

Every pixel in a degraded image has a data value and a confidence value. The priority calculation is biased toward patches that lie on the continuation of strong edges and are surrounded by high-confidence pixels. The priority of a pixel is determined by the confidence term and the data term, which are calculated as follows:

$$C\left( p \right) = \frac{{\sum_{q \in \Psi_{\hat{p}} \cap (I - \Omega )} C(q)}}{{\left| {\Psi_{\hat{p}} } \right|}}$$
(12)
$$D\left( p \right) = \frac{{\left| {\nabla I_p^\bot \cdot n_p } \right|}}{\alpha }$$
(13)
$$P\left( p \right) = C(p)D(p)$$
(14)

where \(\left| {\Psi_{\hat{p}} } \right|\) is the area of \(\Psi_{\hat{p}}\), \(\alpha\) is a normalization factor (e.g. \(\alpha\) = 255), \(n_p\) is a unit vector orthogonal to the fill front \(\delta \Omega\) (the boundary of the unknown region) at the point p, and \(\bot\) denotes the orthogonal operator. The priority P(p) is evaluated for each boundary patch, with a distinct patch for every pixel on the boundary of the target region. During initialization, C(p) = 0 \(\forall p \in {\Omega }\) and C(p) = 1 \(\forall p \in (I - {\Omega })\).

The calculation of the data term for the original image is illustrated in Fig. 6a. The image is composed of three parts: known area 1, known area 2, and the degraded part \({\Omega }\). The main step in computing the data term is to calculate \(\nabla I_p^\bot\), the isophote direction, i.e., the direction in which the gradient changes most slowly. Two situations arise after extracting the structural component of the original image.

Fig. 6. Calculation of the data term in the improved exemplar-based region filling method (a degraded image; b situation 1; c situation 2)

In the first situation, area L is not a major component of the structure; after smoothing, known area 1 and known area 2 merge into a single known area, as shown in Fig. 6b. Pixel 1 (P1) may then no longer have the highest priority, and the restoration may proceed from pixel 2 (P2). In the second situation, area L is a major component of the structure and the other areas remain unchanged, as shown in Fig. 6c. Pixel 1 (P1) still has the highest priority and is repaired along area L. If area L is not a major component of the structure, restoring along this line leads to a discontinuous result. Thus, the data term has to be computed comprehensively using structure guidance:

$$D\left( p \right) = \alpha D(p)_t + \beta D(p)_s$$
(15)

where \(D(p)_t\) is the data term of the original image I, \(D(p)_s\) is the data term of the structural image S, and \(\alpha\) and \(\beta\) are weighting coefficients.
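A sketch of the structure-guided priority computation is given below. Gradients and the fill-front normal are estimated with Sobel filters, the confidence map C is assumed to be maintained elsewhere according to Eq. (12), and the weights alpha and beta are illustrative; this is a simplified per-pixel version of Eqs. (13)-(15), not the full patch-based implementation.

```python
import numpy as np
import cv2

def data_term(gray, mask):
    """|∇I⊥ · n_p| / 255 (Eq. 13). `mask` is 1 inside the target region Ω, 0 elsewhere."""
    g = gray.astype(np.float64)
    gx = cv2.Sobel(g, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(g, cv2.CV_64F, 0, 1, ksize=3)
    iso_x, iso_y = -gy, gx                          # isophote: gradient rotated by 90°
    m = mask.astype(np.float64)
    nx = cv2.Sobel(m, cv2.CV_64F, 1, 0, ksize=3)    # normal of the fill front
    ny = cv2.Sobel(m, cv2.CV_64F, 0, 1, ksize=3)
    norm = np.sqrt(nx ** 2 + ny ** 2) + 1e-8
    return np.abs(iso_x * nx / norm + iso_y * ny / norm) / 255.0

def priority(image_gray, structure_gray, mask, confidence, alpha=0.5, beta=0.5):
    """P(p) = C(p) * D(p), with D(p) = alpha*D_t(p) + beta*D_s(p) (Eqs. 14-15).
    `structure_gray` is the RTV structural component of the image."""
    D = alpha * data_term(image_gray, mask) + beta * data_term(structure_gray, mask)
    return confidence * D
```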

Algorithm 1. Restoration of scratched mural images

The overall workflow of virtual restoration is shown in Fig. 7. The image restoration process guided by structural components is described in Algorithm 1.

Fig. 7. The process of the image restoration algorithm guided by structural components

Results

Information enhancement

After reflectance correction of the data, the hyperspectral image of the Yungang Grottoes was transformed by PCA using the ENVI software. Figure 8 shows the top ten component images resulting from the PCA. Among these components, the first band contains the most information, and the scratch information differs most from the background information in this band.

Fig. 8. Top ten components obtained by the PCA transformation (a 1st component; b 2nd component; c 3rd component; d 4th component; e 5th component; f 6th component; g 7th component; h 8th component; i 9th component; j 10th component)

The first principal component containing the most information was selected for high-pass filtering enhancement. The results of the first principal component are shown in Fig. 9a. The deterioration information in the image is enhanced by the high-pass filter, and the result of the enhancement is shown in Fig. 9b.

Fig. 9. Information enhancement results (a scratched image of the first principal component after PCA transformation; b enhanced image after high-pass filtering)

Information extraction

Scratched images in practical engineering are subject to the influence of background line information. Therefore, a mask was created for the non-damaged areas, whose regions of interest were manually drawn in the ENVI software.

The multi-scale bottom-hat transformation is used to extract the line information in the image because features similar to scratches would otherwise affect the extraction results. The transformation uses square structuring elements ranging from 3 × 3 to 10 × 10; the maximum structuring element size of 10 was determined by repeated experiments. Very small linear deterioration can be extracted by selecting the small structuring elements, to which the larger elements of the multi-scale set do not respond, while the more prominent linear deterioration is extracted when the larger structuring elements are chosen.

The Otsu algorithm is used to threshold the enhanced images after the multi-scale bottom-hat transformation. By processing the result obtained with each structuring element in this way, the background and deterioration information can be separated without manually selecting a threshold.

The extraction results of the larger structuring elements are combined with the base map described above; using this combined image as a reference, the noise on the base map that is unrelated to the deterioration information is eliminated. The final combined map is given in Fig. 10a. The connected-domain labeling method is then used to further filter out small noise in the extraction results. When choosing the filtering parameter, we use the largest noise region in the background as a guide so that any noise smaller than this maximum is excluded; a connected-domain area threshold of 50 was selected after multiple experiments. Figure 10b shows the filtered map.

Fig. 10. Information extraction results (a final map of the scratched image; b denoised scratch map; c dilated scratch map)

Information discontinuity can be observed in the images denoised by the connected-domain labeling. To create a continuous mask, the dilation operation of the morphological transformation is utilized to enlarge the white pixels of the binary image. We use the average width of the scratches as a guide when choosing the dilation parameters; the structuring element size was set to 6 after numerous experiments. The dilated scratch map is shown in Fig. 10c.

Virtual restoration

Figure 11 shows the final restoration results. Figure 11a displays the original scratched image, and the structural image after smoothing can be seen in Fig. 11b. The information forming the major structural components is retained in the structural image, which guides the whole filling process. The final result is displayed in Fig. 11c.

Fig. 11. Restored images (a original scratched image; b structural image; c restored image)

It is evident that the improved exemplar-based region filling method preserves the fundamental structure of the mural while restoring its information. The details of restoration can be seen from Fig. 12.

Fig. 12. Details of restoration (a–c original images; d–f restored images)

To confirm the efficacy of the restoration technique used in this work, we also chose images of other sections of the east wall of the sixth grotto for restoration. The original images can be seen in Fig. 13a, d, the degraded images in Fig. 13b, e, and the restored images in Fig. 13c, f.

Fig. 13. Restoration results using the improved exemplar-based region filling algorithm (a, d original images; b, e degraded images; c, f restored images)

Discussion

Combination of different enhancement steps

To further verify the effectiveness of the presented enhancement method, we performed the extraction experiment on the original image, the first principal component image after PCA, and the enhanced image. As seen in Fig. 14, different enhancement techniques produce different extraction results.

Fig. 14. Different detection results (a non-enhanced result; b first-band result; c proposed result)

This research proposes an improved method that can fully extract the scratches from the image, as can be seen from Fig. 15. Without any enhancement, the scratch information in the image cannot be accurately extracted, and if the scratches are extracted directly from the first band after the PCA transformation, the extraction results are still insufficient. With the proposed enhancement method the scratch information can be fully extracted; some noise remains, but it is effectively eliminated in the subsequent steps.

Fig. 15. Comparison of extraction accuracy for the different extraction methods (a original image; b non-enhanced result; c first-band result; d proposed result)

Comparison of different extraction methods

This paper presents a technique for extracting scratches from murals containing abundant linear information. To test its validity, the Gabor filter method [30], the integrated method [31], and the seed-based region growing method [8] were selected for comparison. The Gabor filter is widely used in automatic crack detection; the integrated method combines various enhancement and extraction methods; and the seed-based region growing method uses a small number of seed points to compute the location of scratches. Figure 16 displays the different extraction results.

Fig. 16. Different detection results (a Gabor filter; b integrated method; c seed-based region growing; d proposed method)

From Fig. 16, we can see that the deterioration extraction method based on Gabor filtering can extract some edge information comprehensively, but it may ignore deterioration in regions with prominent background information. The integrated method can extract most of the deterioration information, but the extracted lines are not continuous. The seed-based region growing method selects initial seed points through threshold segmentation, which helps to mitigate the impact of background lines, but the deterioration information grown with this method is also not continuous. As illustrated in Fig. 16d, the proposed method can effectively avoid the influence of background information and provides a more continuous deterioration result.

Ground-truth scratch areas were manually sketched in three regions and compared with the scratch areas obtained under the different detection methods.

The evaluation indexes were DA (detection accuracy) and FDR (false discovery rate) [32]. DA and FDR are calculated as follows:

$$DA = \frac{S_T }{{S_G }}$$
(16)
$$FDR = \frac{S_F }{{S_G }}$$
(17)

where \(S_G\) is the number of deterioration pixels manually selected in the different detection areas, \(S_T\) is the number of deterioration pixels correctly extracted by a given method, and \(S_F\) is the number of misclassified pixels.
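Given binary masks of the detection result and the manually sketched ground truth, Eqs. (16)-(17) reduce to simple pixel counts, as in the sketch below (variable names are illustrative):

```python
import numpy as np

def da_fdr(detected, ground_truth):
    """DA = correctly detected scratch pixels / ground-truth scratch pixels (Eq. 16);
    FDR = misclassified pixels / ground-truth scratch pixels (Eq. 17)."""
    det = detected.astype(bool)
    gt = ground_truth.astype(bool)
    s_g = gt.sum()                               # manually sketched scratch pixels
    s_t = np.logical_and(det, gt).sum()          # correctly extracted pixels
    s_f = np.logical_and(det, ~gt).sum()         # misclassified pixels
    return s_t / s_g, s_f / s_g
```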

Three regions, A, B and C, each of 150 × 150 pixels, were selected on the image of the Yungang Grottoes to test detection accuracy (Table 1). The number of manually sketched deterioration pixels in regions A, B and C is 3561, 3511 and 4379, respectively. The extraction accuracies are shown in Table 1. The Gabor filter can extract the majority of the deterioration information in the middle region, but its FDR value is excessively high. The seed-based region growing method can eliminate the influence of background information, but in the edge area of the transition between deterioration and background the seed points cannot grow well, leading to a low DA value. The FDR of the threshold method is low, but the extracted deterioration information is not continuous. The proposed approach achieves an FDR below 0.2 and a DA above 0.5 in the accuracy evaluation areas, and can better avoid the influence of background line information.

Table 1 Comparison of the extraction accuracy for the different extraction methods

Comparison of restoration methods

To verify the suitability of the restoration technique, virtual restoration of the lab-produced simulated murals was performed using the diffusion-based fast marching method (FMM) [33], the exemplar-based Criminisi method [22], and the structure-guided image restoration method [17]. Figure 17 displays the results of the different methods.

Fig. 17. Comparison of restoration results of different algorithms on simulated murals

The edge information restored by the FMM was blurred during the virtual restoration because of the influence of line information in the mural background. The Criminisi algorithm also suffers from blurred faces. The structure-guided image restoration method struggles to extract suitable structural guidance from the complex information of the fresco. With the technique described in this paper, both the faces and the edge positions of the mural paintings can be restored. Figure 18 displays the details, and as can be seen from Fig. 18 and Table 2, the improved exemplar-based region filling method not only fills texture details better during restoration but also retains structural similarity with the original image.

Fig. 18. Comparison of detail restoration results of different algorithms

Table 2 Accuracy evaluation of different restoration methods

Root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) are used to evaluate the restoration accuracy.

$$RMSE = \sqrt{\frac{1}{N}\sum\limits_{i = 1}^{N} (Y_i - f(x_i ))^2}$$
(18)
$$PSNR = 20\log_{10} \frac{255}{RMSE}$$
(19)

where \(Y_i\) is the pixel value of the restored image, \(f(x_i )\) is the pixel value of the original simulated image, and N is the number of pixels.

$$SSIM\left( {x,y} \right) = \frac{(2\mu_x \mu_y + c_1 )(2\sigma_{xy} + c_2 )}{(\mu_x^2 + \mu_y^2 + c_1 )(\sigma_x^2 + \sigma_y^2 + c_2 )}$$
(20)

where \(\mu_x\) and \(\mu_y\) are the mean values of images x and y, \(\sigma_x\) and \(\sigma_y\) are their standard deviations, \(\sigma_{xy}\) is the covariance of x and y, and \(c_1\) and \(c_2\) are constants.
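For single-channel 8-bit images, the three metrics can be computed with NumPy and scikit-image as sketched below; this is a generic evaluation snippet, not the exact script used in the study.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def restoration_metrics(restored, original):
    """RMSE (Eq. 18), PSNR (Eq. 19) and SSIM (Eq. 20) between the restored image
    and the original (undamaged) simulated image."""
    r = restored.astype(np.float64)
    o = original.astype(np.float64)
    rmse = np.sqrt(np.mean((r - o) ** 2))
    psnr = peak_signal_noise_ratio(o, r, data_range=255)
    ssim = structural_similarity(o, r, data_range=255)
    return rmse, psnr, ssim
```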

We also compared our method with other reference-guided methods [34]; the results can be seen in Fig. 19. The structure-guided method places too much emphasis on the edges of the simulated mural and causes discontinuity. The reference-guided method can restore the image well, especially the facial region; it enhances the linear information and makes the picture cleaner. However, the conditions it requires are not usually available in mural protection, and it also produces discontinuities at the edges of the picture. Figure 20 shows the restoration results for other scratched murals, and more restored results on digital images can be seen in Fig. 21.

Fig. 19. Comparison with other reference-guided algorithms on simulated murals

Fig. 20. Comparison of restoration results of different algorithms on murals

Fig. 21. Comparison of restoration results of different algorithms on digital images

Conclusion

This paper proposes an automated system for the extraction and restoration of scratched parts of the murals in the Yungang Grottoes, which overcomes challenges found in existing methods. Our method comprises three steps: information enhancement, information extraction, and virtual restoration. The algorithm improves the ability to distinguish scratches from background information by combining PCA with a high-pass filter. The scratch information is then extracted using a combination of the non-damaged area mask, multi-scale bottom-hat transformation, Otsu thresholding, connected component analysis, and dilation, and the restoration result is improved with the improved exemplar-based region filling method. Experiments conducted on the mural of the east wall of the sixth grotto of the Yungang Grottoes and on a simulated mural show that the proposed method achieves better results in both extraction and restoration. Compared with three existing extraction approaches, the technique described in this study can extract extensive scratch degradation entirely, without being affected by scratch depth or shape. When repairing large-scale deterioration on complex images, the improved exemplar-based region filling method based on structure guidance fixes the discontinuous lines and blurred surfaces produced by existing restoration methods. Although this paper only takes the scratched parts of a mural as the research object, the work can provide a technical reference for the extraction and restoration of other linear degradation on murals, such as cracks and graffiti. In the future, building on this work, we will further study automatic extraction and restoration methods for degraded areas of ancient murals at other sites. The primary drawback of the present system is that restoration accuracy is affected by small deterioration areas that are difficult to suppress in the hyperspectral image; future research should examine enhanced extraction techniques utilizing hyperspectral data.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

PCA:

Principal component analysis

RTV:

Relative total variation

DA:

Detection accuracy

FDR:

False discovery rate

FMM:

Fast marching method

RMSE:

Root mean square error

PSNR:

Peak signal to noise ratio

SSIM:

Structural similarity

References

  1. Cornelis B, Ružić T, Gezels T, et al. Crack detection and inpainting for virtual restoration of paintings: the case of the Ghent Altarpiece. Signal Process. 2013;93(3):605–19. https://doi.org/10.1016/j.sigpro.2012.07.022.

  2. Huang W, Wang SW. Dunhuang murals inpainting based on image decomposition. 2010 3rd International Conference on Computer Science and Information Technology, 2010;2010:397–400. https://doi.org/10.1109/ICCSIT.2010.5564944.

  3. Sun PY, Hou ML, Lyu SQ, et al. Enhancement and restoration of scratched murals based on hyperspectral imaging—a case study of murals in the Baoguang Hall of Qutan Temple, Qinghai, China. Sensors. 2022;22(24):9780. https://doi.org/10.3390/s22249780.

  4. Lopez M, Lumbreras F, Serrat J, et al. Evaluation of methods for ridge and valley detection. IEEE Trans Pattern Anal Mach Intell. 1999;1999:327–35. https://doi.org/10.1109/34.761263.

  5. Tijana R, Aleksandra P. Context-aware patch-based image inpainting using Markov random field modeling. IEEE Trans Image Process. 2015;2015:444–56. https://doi.org/10.1109/TIP.2014.2372479.

  6. Mohan A, Sumathi P, et al. Crack detection using image processing: a critical review and analysis. Alex Eng J. 2017;57(2):787–98. https://doi.org/10.1016/j.aej.2017.01.020.

  7. Mahajan A, Raisoni G. Cracks inspection and interpolation in digitized artistic picture using image processing approach. J Recent Trends Eng (IJRTE). 2009;97–99.

  8. Jaidilert S, Farooque G, et al. Crack detection and images inpainting method for Thai mural painting images. 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC). 2018;2018:143–8. https://doi.org/10.1109/ICIVC.2018.8492735.

  9. Deng XC, Yu Y. Automatic calibration of crack and flaking diseases in ancient temple murals. Heritage Sci. 2022;10:1–17. https://doi.org/10.1186/s40494-022-00799-y.

  10. Cao JF, Li YF, Cui HY, et al. Improved region growing algorithm for the calibration of flaking deterioration in ancient temple murals. Heritage Sci. 2018;6:1–12. https://doi.org/10.1186/s40494-018-0235-9.

  11. Mol RV, Maheswari P. The digital reconstruction of degraded ancient temple murals using dynamic mask generation and an extended exemplar-based region-filling algorithm. Heritage Sci. 2021;9:1–18. https://doi.org/10.1186/s40494-021-00604-2.

  12. Pulak P, Mrinmoy G, Soumitra S, et al. A patch-based constrained inpainting for damaged mural images. Digital Hampi: Preserving Indian Cultural Heritage. 2018;205–223. https://doi.org/10.1007/978-981-10-5738-0_13.

  13. Yuan Q, He X, Han XN, et al. Automatic recognition of craquelure and paint loss on polychrome paintings of the Palace Museum using improved U-Net. Heritage Sci. 2023;11:1–11. https://doi.org/10.1186/s40494-023-00895-7.

  14. Pei SC, Zeng YC, Chang CH, et al. Virtual restoration of ancient Chinese paintings using color contrast enhancement and lacuna texture synthesis. IEEE Trans Image Process. 2004;2004:416–29. https://doi.org/10.1109/TIP.2003.821347.

  15. Pulak P, Mrinmoy G, Soumitra S, et al. A patch-based constrained inpainting for damaged mural images. Digital Hampi: Preserving Indian Cultural Heritage. 2018;205–223. https://doi.org/10.1007/978-981-10-5738-0_13.

  16. Zhou PP, Hou ML, Lyu SQ, et al. Virtual restoration of stained chinese paintings using patch-based color constrained poisson editing with selected hyperspectral feature bands. Remote Sens. 2019;11:1384. https://doi.org/10.3390/rs11111384.

  17. Huang W, Wang SW, Yang XP, et al. Dunhuang murals in-painting based on image decomposition. J Shandong Univ (Eng Sci). 2010;40(2):24–7 (in Chinese).

  18. Jia ZY, Xue G, Chen, et al. Study on digital image inpainting method based on multispectral image decomposition synthesis. Int J Pattern Recogn Artif Intell. 2019;33(01):1954004. https://doi.org/10.1142/S0218001419540041.

  19. Ren YR, Yu XM, Zhang RN, et al. StructureFlow: image inpainting via structure-aware appearance flow. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). 2019;2019:181–190. https://doi.org/10.48550/arXiv.1908.03852.

  20. Deng XC, Yu Y. Ancient mural inpainting via structure information guided two-branch model. Heritage Sci. 2023;11:1–17. https://doi.org/10.1186/s40494-023-00972-x.

  21. Zhou Z, Liu X, Shang J, et al. Inpainting digital dunhuang murals with structure-guided deep network. ACM J Comput Cult Heritage, 2022;4:15. https://doi.org/10.1145/3532867.

  22. Criminisi A, Patrick P, Kentaro T, et al. Object removal by exemplar-based inpainting. IEEE Comput Soc Conf Comput Vis Pattern Recogn. 2003;2:2–2. https://doi.org/10.1109/CVPR.2003.1211538.

  23. Xu Z, Sun J. Image inpainting by patch propagation using patch sparsity. IEEE Trans Image Process. 2010;2010:1153–65. https://doi.org/10.1109/TIP.2010.2042098.

  24. Meur L, Gautier J, Guillemot C, et al. Examplar-based inpainting based on local geometry. 2011 18th IEEE international conference on image processing. 2011;2011:3401–3404. https://doi.org/10.1109/ICIP.2011.6116441.

  25. Lazcano R, Madroña D, Salvador R, et al. Porting a PCA-based hyperspectral image dimensionality reduction algorithm for brain cancer detection on a manycore architecture. J Syst Architect. 2017;4(14):101–11. https://doi.org/10.1016/j.sysarc.2017.05.001.

  26. Mohammad A, Nasrin, Shima, et al. Graphene-based high pass filter in terahertz band. Optik. 2019;198:163–246. https://doi.org/10.1016/j.ijleo.2019.163246.

  27. Giakoumis I, Pitas I. Digital restoration of painting cracks. IEEE Int Symp Circuits Syst (ISCAS). 1998;4:269–72. https://doi.org/10.1109/ISCAS.1998.698812.

  28. Qiang ZP, He LB, Chen X, et al. Image inpainting using image structural component and patch matching. J Comput-Aided Des Comput Graph. 2019;31(5):821–30 (in Chinese).

  29. Heinz DC, Chang C-I. Fully constrained least squares linear spectral mixture analysis method for material quantification in hyperspectral imagery. IEEE Trans Geosci Remote Sens. 2001;2001:529–45. https://doi.org/10.1109/36.911111.

  30. Muhammad S, Senthan M, Khurram K, et al. Pavement crack detection using the Gabor filter. 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013). 2013;2013:2039–2044. https://doi.org/10.1109/ITSC.2013.6728529.

  31. Yusuke F, Yoshihiko H. A robust automatic crack detection method from noisy concrete surfaces. Mach Vis Appl. 2011;22(2):245–54. https://doi.org/10.1007/s00138-009-0244-5.

  32. Zhang F, Xi QY, Li QX, et al. Feasibility of removing manual marks on ultrasonic image and repairing images based on double gradient combined with improved Criminisi algorithm. Chin J Med Imag Technol. 2023;39(3):429–34 (in Chinese).

  33. Telea A. An image inpainting technique based on the fast marching method. J Graph Tools. 2004;9(1):23–34.

  34. Liao L, Liu TR, Chen DL, et al. TransRef: multi-scale reference embedding transformer for reference-guided image inpainting. ArXiv. 2023;2306:11528. https://doi.org/10.48550/arXiv.2306.11528.

Acknowledgements

Not applicable.

Funding

This research was supported by the National Key R&D Program of China (No. 2022YFF0904400) and the National Natural Science Foundation of China (No. 42171356).

Author information

Authors and Affiliations

Authors

Contributions

KZQ designed the idea and led the writing of the article, SQL and MLH conducted the experiment and collected the data. LHL conducted the analysis and supervised the whole process. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Shuqiang Lyu.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Qiao, K., Hou, M., Lyu, S. et al. Extraction and restoration of scratched murals based on hyperspectral imaging—a case study of murals in the East Wall of the sixth grotto of Yungang Grottoes, Datong, China. Herit Sci 12, 123 (2024). https://doi.org/10.1186/s40494-024-01215-3

Keywords