
Automatic labeling framework for paint loss disease of ancient murals based on hyperspectral image classification and segmentation

Abstract

Ancient murals have suffered continuous damage over time, especially paint loss disease. Disease labeling, as the basis for ancient mural restoration, therefore plays an important role in the protection of cultural relics. The predominant method of disease labeling is currently manual labeling, which is highly dependent on expert experience, time-consuming and inefficient, and yields markings of inconsistent accuracy. In this paper, we propose a labeling framework for paint loss disease of ancient murals based on hyperspectral image classification and segmentation. The proposed framework first extracts features from the hyperspectral image, and then performs image segmentation based on the spatial features to obtain more accurate region boundaries. The hyperspectral image's regions are then classified based on their spatial-spectral characteristics, and candidate areas of paint loss disease are obtained. Finally, leveraging the true color image segmentation results, the proposed disease labeling strategy combines the results of classification and segmentation to produce the final paint loss disease labeling areas. The experimental results show that the proposed method not only combines the spatial and spectral information of hyperspectral images effectively to obtain accurate labeling of paint loss disease, but also marks paint loss that is not easily observed using ordinary digital cameras. Compared with state-of-the-art methods, the proposed framework is promising for accurate and effective paint loss disease labeling of ancient murals.

Introduction

Mural culture is an integral component of cultural heritage, carrying profound historical, cultural, scientific, artistic, emotional, and research value. The rich artistic information it carries helps archaeologists trace historical and cultural development. However, ancient murals suffer varying degrees of damage due to natural and man-made factors (even where man-made destruction has been curbed). Experts have identified more than 20 types of disease that damage murals, such as paint loss, disruption, sootiness, flaking, detachment, cracks, scratches and blisters [1]. Among them, paint loss refers to the phenomenon in which the paint layer of the mural separates from the base color layer or the ground layer; it is a typical disease affecting most ancient murals.

In order to protect and repair ancient murals effectively, disease labeling is the first step of an investigation: it objectively records the location and extent of the damage, which is the basis for subsequent protection and restoration. Traditional disease labeling was mainly manual or interactive, highly dependent on the expert's experience and time-consuming. Besides manually drawing a disease map, the most common method is to mark the disease by hand in Autodesk Computer Aided Design software. Tian et al. [2] used the k-means clustering algorithm to improve edge detection for disease edge labeling. Zhang et al. [3] used 3D laser scanning technology to determine the location, length and area of mural diseases. Although manual or interactive labeling can achieve good results, the labor cost is high and the marking efficiency is low; reducing the labor cost of disease labeling has therefore become an important research direction. Some methods mark disease based on morphological features. Cornelis et al. [4] used a filter, the top hat transform and K-singular value decomposition to identify and label cracks, used a clustering method to extract and discard erroneous instances, and finally merged the three labeled images to obtain the crack marking result. Cao et al. [5] analyzed the color characteristics of the flaking areas of ancient temple murals and marked suspected flaking-damaged points by threshold segmentation. The characteristics of paint loss disease, however, are complicated: compared with the morphological characteristics of damage such as cracks and mud smirches, they are not easy to describe, so traditional feature-based disease labeling methods cannot be applied to identify paint loss. In recent years, some scholars have proposed deep learning methods for disease labeling. Meeus et al. [6] developed a multi-scale deep learning system with dilated convolution to address paint loss detection. All of the above labeling methods use pictures taken with ordinary cameras, which contain limited information relative to the complicated characteristics of paint loss disease. A way to overcome this limitation is to label paint loss using hyperspectral images.

Hyperspectral imaging cameras have fine wavelength resolution and cover a wide range of wavelengths, which gives them a comparative advantage in material identification [7]. Because of its non-contact, non-destructive, non-polluting, and high-efficiency characteristics, this technology has been employed in the study of cultural relics in applications such as pigment analysis [8], hidden information mining [9, 10], virtual ancient mural restoration [11], information enhancement [12], etc. Li et al. [13] proposed an unsupervised clustering method to accurately predict the degree of flaking of the Mogao Grottoes' murals. Liu et al. [14] used a Support Vector Machine (SVM) to classify mural artifacts through hyperspectral image classification in order to mark disease, and then used mathematical modeling to assess disease risk. These traditional methods are based on manual features, rely heavily on expert experience, and make the analysis of complex mural scenes costly, time-consuming and difficult.

In order to solve the problems of the high resource requirements of manual disease labeling and the limited image information obtained with plain digital cameras, this paper uses hyperspectral imaging technology to label paint loss disease. Paint loss regions differ from other regions, and their characteristics captured via hyperspectral imaging also differ, bearing similarities to the different types of land cover encountered in remote sensing images. Therefore, a hyperspectral classification method is used to label the worn areas of the murals. However, the characteristics of paint loss disease are very complicated, and there are instances where similar objects exhibit different spectra or different objects exhibit similar spectra. This can cause inaccurate hyperspectral classification results, especially at boundaries. Therefore, this paper proposes a method combining hyperspectral classification and segmentation: the former determines the location of the paint loss disease, and the segmented image regions are then selected according to the classification result. The proposed method uses the spectral and spatial characteristics of hyperspectral images effectively and allows the automatic marking of mural paint loss regions with complex backgrounds and complex damage.

Briefly, the major contributions of this paper are as follows:

  1. An automatic labeling framework is proposed for the analysis of paint loss disease of ancient murals based on hyperspectral image classification and segmentation. The framework allows end-to-end extraction of the outlines of paint loss regions, which can be used for mural analysis and restoration processes.

  2. The proposed method uses information from different characteristic hyperspectral bands, which allows the labeling of areas that are not obvious under natural light.

  3. A fusion strategy is employed to combine the image classification and segmentation results effectively.

Research aim

This study aims to develop an algorithm that can quickly and accurately label the paint loss disease of murals. The algorithm combines hyperspectral classification and segmentation: the former determines the position of the paint loss disease, and the segmented image areas are then selected according to the classification result. The algorithm effectively utilizes the spectral and spatial properties of hyperspectral images, and can automatically label the paint loss areas of murals with complex backgrounds and complex damage. The algorithm is tested on two hyperspectral mural datasets. The results show that the proposed algorithm generates highly accurate and precise labeling results for murals. Accurate disease labeling is of great significance to the protection and restoration of murals, and provides a scientific basis for their subsequent restoration and monitoring.

Related work

In this section, we introduce the basic methods used in this paper for the automatic labeling of paint loss disease of ancient murals.

Continuum removal

Continuum removal is a widely used preprocessing technique in hyperspectral data analysis [15] that effectively mitigates the impact of background absorption. The method emphasizes absorption and reflection features of the spectral curve by normalizing them against a common spectral background, which significantly aids the identification of spectral features. For the experimental data processing in this work, the formula is:

$$ S_{cr}= S_{or} / R_{c} $$
(1)

where \(S_{cr}\) is the continuum-removed spectral reflectance, \(S_{or}\) is the original spectral reflectance, and \(R_{c}\) is the reflectance of the continuum line.
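As an illustration, continuum removal is commonly implemented by dividing each spectrum by its upper convex hull. The following is a minimal NumPy sketch under that assumption (the paper does not specify its implementation); `upper_hull` and `continuum_removal` are hypothetical helper names.

```python
import numpy as np

def upper_hull(x, y):
    """Indices of the upper convex hull of points (x[i], y[i]), x ascending."""
    hull = []
    for i in range(len(x)):
        # Pop the last vertex while the turn hull[-2] -> hull[-1] -> i is not
        # clockwise, i.e. while hull[-1] lies on or below the chord to point i.
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            if (x[b] - x[a]) * (y[i] - y[a]) - (y[b] - y[a]) * (x[i] - x[a]) >= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return np.array(hull)

def continuum_removal(wavelengths, spectrum):
    """Eq. (1): S_cr = S_or / R_c, with R_c the convex-hull continuum line."""
    idx = upper_hull(wavelengths, spectrum)
    # Continuum line: linear interpolation between the upper-hull vertices.
    r_c = np.interp(wavelengths, wavelengths[idx], spectrum[idx])
    return spectrum / r_c
```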

Fig. 1: Region adjacency graph (RAG) and nearest neighbor graph (NNG)

Region adjacency graph (RAG) and the nearest neighbor graph (NNG)

The RAG is a data structure that records the adjacency relationships between regions. For an initial partition with K regions, the adjacency relationships are recorded in an undirected graph \(G=(V,E)\), where \(V= \{\Sigma _1,\Sigma _2,\ldots ,\Sigma _K \}\) is the set of all vertices and E is the set of all edges between adjacent regions. An initial partition with five regions can be represented by the RAG shown in Fig. 1. Each edge of the RAG carries a weight representing the cost of merging the two regions it connects, and merging proceeds by minimizing this cost at each iteration.

Although the merging results of the RAG are usually satisfactory, it can incur high computational complexity in practice. Therefore, the NNG is adopted in this paper as an optimization of the RAG. The NNG avoids the computation of ineffective merges by ensuring that each node has exactly one directed edge, pointing to its best merging region. As a consequence, the NNG always contains a cycle, as shown in Fig. 1, and each cycle identifies adjacent regions that satisfy the criterion of local optimality. The NNG thus provides efficient and accurate merging results, significantly reducing the time overhead compared to the RAG alone.
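For intuition, scikit-image ships a RAG implementation whose edge weights can serve as merge costs. The sketch below is a rough stand-in for the region merging described above, not the paper's exact procedure; `graph.cut_threshold` merges every adjacent pair whose weight falls below a threshold (in scikit-image versions before 0.19 these functions live in `skimage.future.graph`).

```python
from skimage import data, graph, segmentation

img = data.astronaut()                                    # stand-in RGB image
labels = segmentation.slic(img, n_segments=400, compactness=20)

# Region adjacency graph: each edge weight is the mean-color difference
# between two adjacent regions, i.e. an estimate of the merge cost.
rag = graph.rag_mean_color(img, labels)

# Merge all adjacent regions whose edge weight is below the threshold.
merged = graph.cut_threshold(labels, rag, thresh=30)
```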

Dense conditional random field (Dense CRF)

Dense CRF is a common method in the fields of image segmentation and image annotation [16, 17]. It improves on the traditional conditional random field by modeling pairwise interactions between all pixel pairs, adding context information for each pixel. Its energy function is:

$$ E(y) = \sum _{i}\psi _{u}(y_{i})+\sum _{i,j}\psi _{p}(y_i,y_j) $$
(2)

where y is the label of the pixel, and \(\psi _{u}(y_{i})\) and \(\psi _{p}(y_i,y_j)\) are the unary potential and the pairwise potential, respectively. The unary potential is \(\psi _{u}(y_{i})=-\log P(y_i)\), where \(P(y_i)\) is the label assignment probability of pixel i predicted by the network. The pairwise potential is defined as \(\psi _{p}(y_i,y_j)=\mu (y_i,y_j)\sum _{m=1}^{K}\omega _{m}k_{m}(f_i,f_j)\), where the label compatibility \(\mu (y_i,y_j)\) is:

$$\begin{aligned} \mu (y_i,y_j)=\left\{ \begin{array}{cc} 1, y_i \ne y_j &{} \\ 0, y_i = y_j &{} \end{array} \right. \end{aligned}$$
(3)

where \(f_i\) and \(f_j\) are the feature vectors of pixels i and j in an arbitrary feature space, and \(\omega _m\) are the corresponding weights. The Gaussian kernels \(k_m\) depend on the pixel positions and on dimensionality-reduced deep features from the network.
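A common way to apply Eq. (2) in practice is the third-party pydensecrf package, which implements the fully connected CRF with Gaussian (smoothness) and bilateral (appearance) kernels. A minimal sketch follows; the kernel parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

def densecrf_refine(rgb, probs, n_iters=5):
    """Refine per-pixel class probabilities with a fully connected CRF.

    rgb   : (H, W, 3) uint8 image for the appearance (bilateral) kernel.
    probs : (n_classes, H, W) softmax output of the classifier.
    """
    n_classes, h, w = probs.shape
    d = dcrf.DenseCRF2D(w, h, n_classes)
    d.setUnaryEnergy(unary_from_softmax(probs))      # psi_u = -log P(y_i)
    # Pairwise terms: the Gaussian kernels k_m of Eq. (2), over positions
    # (smoothness) and over positions plus colors (appearance).
    d.addPairwiseGaussian(sxy=3, compat=3)
    d.addPairwiseBilateral(sxy=60, srgb=10,
                           rgbim=np.ascontiguousarray(rgb), compat=5)
    q = np.array(d.inference(n_iters))               # mean-field inference
    return q.reshape(n_classes, h, w).argmax(axis=0) # refined label map
```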

MatchShapes function

The MatchShapes function serves to assess the similarity between two shapes, with a smaller return value indicating a higher degree of similarity. Its calculation method is:

$$\begin{aligned} I(A,B)=\sum _{k=1}^{7} \left| \frac{1}{sign(h_{k}^A)\cdot \log \left| h_k^A \right| } - \frac{1}{sign(h_{k}^B)\cdot \log \left| h_k^B \right| }\right| \end{aligned}$$
(4)

where \(I(A,B)\) is the shape similarity, and \(h_k^A\) and \(h_k^B\) are the Hu moments of shapes A and B (the Hu moments are a set of seven invariant moments proposed by Hu [18]). The function can be called through the OpenCV library.
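In OpenCV, the metric of Eq. (4) corresponds to the CONTOURS_MATCH_I1 mode of cv2.matchShapes. The sketch below (OpenCV 4 return conventions; `shape_distance` is a hypothetical helper) compares the largest external contours of two binary masks:

```python
import cv2
import numpy as np

def shape_distance(mask_a, mask_b):
    """Hu-moment shape distance between the largest contours of two masks."""
    ca, _ = cv2.findContours(mask_a.astype(np.uint8),
                             cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cb, _ = cv2.findContours(mask_b.astype(np.uint8),
                             cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    a = max(ca, key=cv2.contourArea)
    b = max(cb, key=cv2.contourArea)
    # CONTOURS_MATCH_I1 implements Eq. (4); smaller means more similar.
    return cv2.matchShapes(a, b, cv2.CONTOURS_MATCH_I1, 0.0)
```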

Method

The overall framework is shown in Fig. 2. First, the hyperspectral image is preprocessed with black-and-white correction and image cropping. Next, in order to combine spectral and spatial information, hyperspectral image segmentation and classification are performed independently. A self-similarity marking strategy is then adopted: based on the classification result, the effective areas in the segmentation result are selected as regions of interest, and the positions and edges of the paint loss are marked. To improve the marking accuracy, the true color (TC) version of the hyperspectral image is introduced and a mutual similarity marking strategy is applied: based on the TC image segmentation result, the effective areas among the regions of interest are selected to supplement the position and edge markings of the paint loss.

Fig. 2: Architecture of the proposed method

Image segmentation

In the image segmentation pipeline, multiple feature extraction methods are used along with a super-pixel segmentation method for accurate and fast image segmentation.

Hyperspectral images contain more information than visible light images, but there is data redundancy. Consequently, feature extraction methods are usually utilized to achieve data dimensionality reduction while highlighting the characteristics of paint loss disease. However, different feature extraction methods focus on different information. Therefore, this work employs classical feature extraction methods to reduce the dimensionality of the data and obtain the feature bands. The methods used are as follows:

  1. Principal Component Analysis (PCA) [19,20,21,22];

  2. Minimum Noise Fraction (MNF) [23];

  3. Independent Component Analysis with a Discrete Cosine Transform preprocessing step (DCT-ICA) [24].

For the PCA and DCT-ICA methods, the first three principal components are selected for segmentation, as they retain most of the information of the original data with little redundancy.

For MNF, the bands whose eigenvalues satisfy \(\lambda > 2\) are selected. This criterion ensures that the selected bands retain most of the information contained in the original hyperspectral images.

After feature extraction, the simple linear iterative clustering (SLIC) super-pixel segmentation method [25] and a region merging method are used to segment and merge the extracted feature bands. The feature results obtained by PCA / MNF / DCT-ICA are mapped to the \(L^* A^* B^*\) domain to suit the segmentation method, realizing segmentation of the spectral features. Subsequently, the NNG technique of section Region adjacency graph (RAG) and the nearest neighbor graph (NNG) is used to merge and update the segmented regions; the number \(\gamma \) of merged regions ranges from 30 to 80 in this paper. Finally, the candidate paint loss disease area map is obtained. A sketch of this pipeline is given below.
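As a sketch of this pipeline for one feature extractor, the code below reduces a hyperspectral cube to three PCA components, rescales them to \(L^*a^*b^*\)-style ranges (the paper does not give the exact mapping, so the scaling here is an assumption), and runs SLIC on the result:

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.segmentation import slic

def segment_feature_bands(cube, n_segments=800):
    """PCA to 3 bands, L*a*b*-like rescaling, then SLIC superpixels.

    cube : (H, W, B) hyperspectral reflectance cube.
    """
    h, w, b = cube.shape
    pcs = PCA(n_components=3).fit_transform(cube.reshape(-1, b)).reshape(h, w, 3)
    feat = np.empty_like(pcs)
    # First component -> L*-like range [0, 100]; others -> a*/b*-like [-128, 127].
    feat[..., 0] = 100 * (pcs[..., 0] - pcs[..., 0].min()) / np.ptp(pcs[..., 0])
    for c in (1, 2):
        p = pcs[..., c]
        feat[..., c] = 255 * (p - p.min()) / np.ptp(p) - 128
    # Features are already Lab-like, so skip SLIC's internal RGB->Lab conversion.
    return slic(feat, n_segments=n_segments, compactness=10,
                convert2lab=False, channel_axis=-1)
```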

Image classification

In the image classification pipeline, a spatial-spectral full convolution network (SSFCN) [26] is used as the classifier. The algorithm flow is shown in Fig. 3.

The preprocessing operation of section Continuum removal is applied to the original hyperspectral data \(S_{or}\) to obtain the continuum-removed data \(S_{cr}\); both \(S_{or}\) and \(S_{cr}\) serve as input data for the network.

The network structure is shown in Fig. 3. The upper and lower branches of the network act as spectral and spatial feature extractors, respectively. In the upper branch, the outputs of the first three convolutional layers are fused in a merge layer; in the lower branch, the outputs of the first convolutional layer and of the second and third pooling layers are fused likewise. The two branches are then combined in a merging layer according to corresponding weight factors. From the joint features, the classification map is obtained through a convolutional layer and the softmax function (a mask matrix is used in the training phase). Finally, Dense CRF incorporates global information so that the network achieves accurate classification.
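For readers unfamiliar with SSFCN, the following PyTorch sketch shows the general two-branch idea: 1x1 convolutions over the spectrum in one branch, spatial convolutions in the other, and a weighted merge before the classification head. The layer counts and widths are illustrative placeholders, not the exact architecture of [26].

```python
import torch
import torch.nn as nn

class TwoBranchFCN(nn.Module):
    """Schematic spectral/spatial two-branch FCN in the spirit of SSFCN [26]."""

    def __init__(self, bands, n_classes):
        super().__init__()
        # Spectral branch: 1x1 convolutions act on each pixel's spectrum alone.
        self.spec = nn.Sequential(
            nn.Conv2d(bands, 64, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 1), nn.ReLU(),
        )
        # Spatial branch: 3x3 convolutions aggregate neighborhood context.
        self.spat = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learned fusion weight
        self.head = nn.Conv2d(64, n_classes, 1)       # per-pixel class scores

    def forward(self, x_spectral, x_spatial):
        # Weighted merge of the two branches, then the classification head;
        # softmax (and Dense CRF refinement) would follow at inference time.
        f = self.alpha * self.spec(x_spectral) + (1 - self.alpha) * self.spat(x_spatial)
        return self.head(f)

# Per the paper, S_cr feeds the spectral branch and S_or the spatial branch:
# logits = TwoBranchFCN(bands=204, n_classes=5)(s_cr_tensor, s_or_tensor)
```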

Fig. 3: Hyperspectral image classification by the SSFCN algorithm

From the classification results, we extract the disease classes as follows:

$$\begin{aligned} D\left( u,v\right) = {\left\{ \begin{array}{ll} Y\left( u,v\right) , &{} \text {if} \,Y\left( u,v\right) \le d-1 \\ 0, &{} otherwise \end{array}\right. } \end{aligned}$$
(5)

where \(Y\left( u,v\right) \) is the label at position \(\left( u,v\right) \) in the original classification result, d is the number of paint loss disease classes, and \(D\left( u,v\right) \) is the label at position \(\left( u,v\right) \) after extracting the disease classes.
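Eq. (5) is a simple masking step; in NumPy, assuming the paint loss classes occupy labels 0 through d-1 as in the equation:

```python
import numpy as np

def extract_disease_classes(Y, d):
    """Eq. (5): keep the d paint-loss class labels, zero out everything else."""
    return np.where(Y <= d - 1, Y, 0)
```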

Disease labeling strategy

Although image classification is important for the labeling process, pixels from similar objects may have different spectral curves in the hyperspectral image, especially at edges, which may cause misclassification. Moreover, pixel-wise classification often suffers from regional inconsistency due to surface cover or noise. Paint loss disease manifests as regions, large or small, with good regional consistency, so image segmentation is well suited to this problem. Therefore, based on the results of sections Image segmentation and Image classification, we introduce a disease marking strategy built on mutual similarity and self-similarity. The classification and segmentation results serve as prior knowledge for accurately identifying the disease areas and their edges. In addition to the feature-band segmentation results of section Image segmentation, we also introduce the TC segmentation results to better match manual disease marking under visible light. The specific process is shown in Fig. 4.

Fig. 4: Disease labeling strategy based on classification and segmentation

The set of segmented regions of the TC image, \(X_T^i\) (where \(i\in [1,r]\)), is obtained using the method of section Image segmentation. The proportion of classified disease pixels within each segmented region of the different feature extraction bands is then used to select regions of interest and form an ROI set. Specifically, the ROI set \(R_F\) is built from the segmented region sets \(X_{Fj}^i\) of the different feature extraction bands, selecting the regions that satisfy the following condition:

$$ \frac{S\left( D_j^i \right) }{S\left( X_{Fj}^i \right) } > \theta _f $$
(6)

where \(S\left( D_j^i\right) \) is the area of the disease pixels \(D_j^i\) within the \(i_{th}\) segment of the \(j_{th}\) feature band in the classification result, \(S \left( X_{Fj}^{i} \right) \) is the area of the \(i_{th}\) segment of the \(j_{th}\) feature band, and \(\theta _{f}\) is the percentage threshold, which ranges from 65% to 75% in this paper.
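A small sketch of this selection step, assuming the segmentation is given as an integer label map and the classification result as a boolean disease mask (`select_rois` is a hypothetical helper):

```python
import numpy as np

def select_rois(segments, disease_mask, theta_f=0.7):
    """Eq. (6): keep segments whose disease-pixel fraction exceeds theta_f.

    segments     : (H, W) integer map of segment ids (one feature band).
    disease_mask : (H, W) boolean map, True where the classifier marked disease.
    Returns the list of segment ids selected into the ROI set R_F.
    """
    rois = []
    for seg_id in np.unique(segments):
        region = segments == seg_id
        if disease_mask[region].mean() > theta_f:   # S(D) / S(X) > theta_f
            rois.append(seg_id)
    return rois
```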

Then, the more obvious disease areas in the TC image are extracted using the TC segmentation results. Specifically, a set of obvious disease regions \(R_{v}\) on the TC image is selected from \(X_T^i\) according to the following conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} I\left( X_T^i,R_F^k\right)< \lambda _v \\ d\left( X_T^i,R_F^k\right) < \alpha _v \end{array}\right. } \end{aligned}$$
(7)

where \(I \left( X_T^i,R_F^k\right) \) is the shape similarity measure between the \(i_{th}\) TC image segmentation area and the \(k_{th}\) ROI area; \(\lambda _{v}\) is the normalized shape similarity threshold, set to \(\lambda _v = 0.3 \) in this paper because of the complexity of the edges encountered in mural paint loss; \(d\left( X_T^i,R_F^k\right) \) is the Euclidean distance between the shape centers of the \(i_{th}\) TC image segmentation area and the \(k_{th}\) ROI region; and \(\alpha _{v}\) is the centroid distance threshold, set to \(\alpha _{v} = 5\) in this paper. Shape similarity is calculated with the OpenCV MatchShapes function described in section MatchShapes function. This step is referred to as worn area selection based on mutual similarity.

Furthermore, the less significant disease regions on the TC image are extracted from the ROI set. Specifically, the pre-selected area is selected from \(R_{F}\) according to the following conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} I \left( R_F^i,R_F^j\right)< \lambda _u \\ d \left( R_F^i,R_F^j\right) < \alpha _u \\ R_F^i \cap R_v = \varnothing \end{array}\right. } \end{aligned}$$
(8)

where \(I\left( R_F^i,R_F^j\right) \) is the shape similarity between the \(i_{th}\) and \(j_{th}\) areas of the ROI set \(\left( i \ne j\right) \); \(\lambda _{u}\) is the normalized shape similarity threshold, set to \(\lambda _{u} = 0.1\) in this paper; \(d\left( R_F^i, R_F^j \right) \) is the distance between the shape centers of the \(i_{th}\) and \(j_{th}\) areas of the ROI set; \(\alpha _{u}\) is the centroid distance threshold, set to \(\alpha _{u}=5\) in this paper; and \(R_F^i \cap R_v = \varnothing \) means that a selected area must not overlap the previously selected disease areas. This step, referred to as the selection of worn areas based on self-similarity, yields the region set \(R_u\).

Finally, \(R_u\) and \(R_v\) are fused and edge extraction is performed to obtain the final labeling of the paint loss disease, as follows:

$$ R_{e} = edge\left( R_{u} + R_{v}\right) $$
(9)

where \(edge\left( \cdot \right) \) is the edge extraction algorithm, which in this paper is Canny edge detection [27]. The result is then fused with the TC image of the data to obtain the final automatic labeling of the paint loss disease.
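A minimal sketch of this final fusion and edge-overlay step with OpenCV; the Canny thresholds and the red overlay color are illustrative assumptions:

```python
import cv2
import numpy as np

def label_edges(r_u, r_v, tc_image):
    """Eq. (9): fuse the two selected region sets and extract their edges.

    r_u, r_v : boolean masks of the self-similarity and mutual-similarity picks.
    tc_image : (H, W, 3) uint8 true-color image onto which edges are drawn.
    """
    fused = ((r_u | r_v) * 255).astype(np.uint8)
    edges = cv2.Canny(fused, 100, 200)          # Canny edge detection [27]
    out = tc_image.copy()
    out[edges > 0] = (0, 0, 255)                # overlay the edges in red (BGR)
    return out
```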

Experiments and results

Data and collection

In this paper, data from two mural cultural relics are used to verify the effectiveness of the proposed method.

  (1) Dataset 1 was obtained from the Qing Dynasty (1644–1911) murals of Sanhuang Temple in Xi’an, Shaanxi, collected using a Specim IQ hyperspectral camera with a spectral range of 400–1000 nm over 204 bands. The image dimensions are \(360\times 300\) pixels.

  (2) Dataset 2 was obtained from the Avalokitesvara on the west side of the south wall of the Great Hall of Fengguo Temple (Yuan Dynasty, 1271–1368) in Jinzhou, Liaoning Province, collected using a SOC710 hyperspectral imager with a spectral range of 400–1000 nm over 128 bands. The image dimensions are \(111\times 86\) pixels.

In order to reduce the effects of ambient light and background noise introduced during acquisition, we pre-processed the data using black and white correction [28] and image cropping. The black and white correction compensates for the influence of the light source as follows:

$$ R=\frac{R_{0}-D}{W-D} \times 100\% $$
(10)

where D and W are the black and white reference images, respectively, \(R_{0}\) is the raw image, and R is the calibrated image.
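Eq. (10) is applied per pixel and per band; a short NumPy sketch (the epsilon guard against division by zero is our addition):

```python
import numpy as np

def black_white_correction(raw, dark, white):
    """Eq. (10): per-pixel, per-band reflectance calibration.

    raw, dark, white : (H, W, B) cubes (raw scene, closed-shutter dark
    frame, white reference panel).
    """
    return (raw - dark) / np.maximum(white - dark, 1e-6)
```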

The TC images and the ground truth of the two hyperspectral datasets are shown in Fig. 5. In the two datasets, the sample points required for hyperspectral classification are manually selected by two experts. Different degrees of paint loss and different paint colors are assigned to different categories.

Fig. 5: Datasets. a Hyperspectral TC images (left: dataset 1; right: dataset 2); b ground truth with disease marking (left: dataset 1; right: dataset 2)

Evaluation metrics

In this paper, we used four metrics to evaluate the segmentation and classification performance quantitatively [29, 30]:

  1) Pixel Accuracy (PA). PA is the simplest image segmentation metric: the ratio of correctly classified pixels to the total number of pixels, i.e. the percentage of pixels in the image that are classified correctly. PA is defined as:

    $$ PA = \frac{TP+TN}{TP+TN+FP+FN} $$
    (11)

    where TP is the number of pixels correctly classified as paint loss disease, TN is the number of pixels correctly classified as non-disease, FP is the number of pixels erroneously classified as paint loss disease, and FN is the number of paint loss pixels erroneously classified as non-disease. The closer the PA value is to 100%, the better the effect.

  2) Mean Pixel Accuracy (MPA). MPA is the average of the per-category pixel accuracies. It improves on PA, as it reflects segmentation accuracy better. MPA is defined as:

    $$MPA= \frac{1}{N} \sum _{i=1}^{N} PA_i $$
    (12)

    where \(N\) is the number of pixel categories and \(PA_i\) is the PA value of category \(i\). The closer the MPA value is to 100%, the better the effect.

  3) Pratt’s Figure of Merit (PFOM). PFOM is an index for evaluating edge detection performance. It combines three factors: prediction errors of true edges, prediction errors of false edges, and edge position errors. It is defined as:

    $$ PFOM = \frac{1}{\max \left( N_e,N_d\right) }\sum \limits _{k=1} ^{N_d}\frac{1}{1+\beta d \left( k \right) ^2} $$
    (13)

    where \(N_e\) is the number of reference edge points, \(N_d\) is the number of edge points extracted by the algorithm, \(\beta \) is a scaling constant usually set to 1/9, and \(d\left( k\right) \) is the Euclidean distance between the \(k\)-th detected edge point and its nearest reference edge point:

    $$ d \left( k\right) = \left\| N_e - N_d \right\| _{Euclid} $$
    (14)
  4) Intersection-over-Union (IOU). IOU is a standard measure for semantic segmentation. It is the ratio of the intersection to the union of two sets, which for image segmentation are the ground truth and the predicted segmentation. It is expressed as:

    $$ IOU = \frac{TP}{TP+FP+FN} $$
    (15)

    The closer the IOU value is to 100%, the better the effect. A minimal sketch computing these metrics is given after this list.
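The sketch below computes PA, MPA and per-class IOU from two integer label maps; `pa_mpa_iou` is a hypothetical helper (PFOM is omitted, as it requires edge-point matching):

```python
import numpy as np

def pa_mpa_iou(pred, gt, n_classes):
    """PA (Eq. 11), MPA (Eq. 12) and per-class IOU (Eq. 15) from label maps."""
    pa = (pred == gt).mean()                    # overall pixel accuracy
    pas, ious = [], []
    for c in range(n_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        pas.append(tp / max(tp + fn, 1))        # per-class accuracy PA_i
        ious.append(tp / max(tp + fp + fn, 1))  # Eq. (15)
    return pa, float(np.mean(pas)), ious
```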

Experimental results

Image segmentation method based on feature extraction

In this paper, the data obtained using the three feature extraction methods are used as the input for hyperspectral image segmentation. In order to verify the rationality and necessity of this design, the same segmentation method (the super-pixel segmentation of section Image segmentation) is used while different feature extraction methods are applied, forming a set of ablation experiments. Specifically, in Table 1, Combinations 1, 2 and 3 refer to the use of each corresponding method alone for feature extraction, while Combinations 4, 5 and 6 refer to the combined use of the methods. Table 1 shows the PA values of the final labeling results.

The experimental results show that, among the single feature extraction methods, Combination 2 yields the most accurate labeling. Among the combined methods, Combination 4 yields the most accurate labeling, and all combined results are generally more accurate than those of single methods. The labeling accuracy with all three feature extraction methods combined is in general more than 0.25 higher than that of single methods, and more than 0.1 higher than that of the pairwise-combined methods.

Overall, the experimental results show that combining the three feature extraction methods achieves the best performance, which demonstrates that the three methods are complementary and that their combined use for segmentation is reasonable and necessary.

Table 1 Comparison experiment of feature extraction methods on dataset 1
Fig. 6: Visual classification results of different methods. a Hyperspectral TC image; b ground truth; c classification result with both branches fed \(S_{or}\); d classification result with both branches fed \(S_{cr}\); e classification result of the proposed method

Image classification methods

For the hyperspectral image classification method, the original and the continuum-removed hyperspectral data were used as inputs to the spatial and spectral branches of the SSFCN, respectively. In order to verify the rationality and necessity of this design, we used the same classification method in a set of experiments with three input configurations on dataset 1; quantitative and qualitative analyses are shown in Table 2 and Fig. 6. The first configuration uses the original hyperspectral data \(S_{or}\) as the input to both the spectral and the spatial branches. The second uses the continuum-removed data \(S_{cr}\) as the input to both branches. The last uses the original data \(S_{or}\) and the continuum-removed data \(S_{cr}\) as inputs to the spatial and spectral branches, respectively. The overall accuracies of the \(S_{or}\) and \(S_{cr}\) configurations are 97.32% and 97.60%, respectively, while the overall accuracy of the combined configuration is 97.63%.

The experimental results show that, in terms of objective indicators, including the continuum-removed hyperspectral data as input yields better results than the original data alone: the classification accuracy with \(S_{cr}\) feeding both branches was higher than with \(S_{or}\) feeding both branches. However, the classification accuracy with different inputs to the two branches, as proposed in this paper, was higher than in either same-input case.

Table 2 Comparative experiment of image classification method on dataset 1
Fig. 7: Results for the datasets. a Hyperspectral TC image (left: dataset 1; right: dataset 2); b reference image with traditionally drawn disease markers; c–f results of the four compared methods; g result of the proposed method

Table 3 Results of dataset 1 and dataset 2

Comparison with other methods

We used four methods to label mural paint loss disease and compared their results with those of the method proposed in this paper: the MNF inverse transform + SVM method [14], the OPTICS clustering method [13], and the two hyperspectral classification methods RPNET [31] and SSFCN [26]. The experimental results follow, where red frames represent the edges of the disease regions.

Figure 7 shows the disease marking results for datasets 1 and 2 (left: dataset 1; right: dataset 2), where Fig. 7a is the hyperspectral TC image, Fig. 7b is the reference image with traditionally drawn disease markers, Fig. 7c–f are the marked results of the four compared methods, and Fig. 7g is the labeling result of the method proposed in this paper.

By visually comparing the labeling results of different methods with the reference images, it can be seen that all methods can roughly mark the disease location, but Fig. 7c and d produce incorrect labeling and the presence of noise is substantial. The mislabeling is reduced in Fig. 7e and f, but because the spatial information is not fully combined, the disease edges are not very accurate. As can be seen from Fig. 7g, the proposed method can mark the location of disease and extract its edges more accurately, greatly reducing the phenomenon of mislabeling. This leads to a smoother result and the marking of disease regions that are not perceptible under visible light.

Table 3 lists the objective evaluation indices for datasets 1 and 2. The quantitative results are consistent with the above analysis, and the proposed method is generally superior to all comparison methods. Comparing the outcomes of RPNET and SSFCN with those of MNF+SVM and OPTICS shows that the deep learning methods generally surpass the traditional approaches in both segmentation accuracy and edge overlap. Compared with the deep learning methods, the combined segmentation and classification method proposed in this paper improves the accuracy on dataset 1 by about 0.2 percentage points and the degree of edge coincidence by about 0.1. Compared with the traditional methods, the improvement is even greater: the accuracy on dataset 1 improves by about 0.6 percentage points and the edge coincidence degree by about 0.2. The proposed method also outperforms the other methods on dataset 2: relative to the deep learning methods, the accuracy improves by about 0.01 and the edge coincidence degree by about 0.1; relative to the traditional methods, the accuracy improves by about 0.03 and the edge coincidence degree by about 0.2.

Conclusion

In this paper, a deep learning-based hyperspectral classification network is used to classify the hyperspectral data of murals. After feature extraction, the feature bands are segmented, and a disease labeling strategy based on classification and segmentation is applied to label paint loss disease effectively. The proposed framework can identify paint loss disease automatically, reducing the resources required for restoration efforts. The method combines the spectral and spatial information of cultural relic data effectively, and by operating on different feature extraction bands it can label not only disease perceptible under visible light, but also disease whose outlines are unclear under visible light.

However, due to limitations in data acquisition techniques, this work can currently only be applied to static cultural relics, such as murals and rock art. Additionally, with the continuous development of classification and clustering methods, the unsupervised segmentation methods in this work could be replaced with higher precision alternatives. In the future, our research team will continue to explore higher performance unsupervised clustering methods and investigate the application of hyperspectral technology to other cultural heritage works, contributing to the advancement of the field.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

Abbreviations

TN: True Negative
TP: True Positive
FN: False Negative
FP: False Positive
PA: Pixel Accuracy
TC: True Color
ROI: Region of Interest
MPA: Mean Pixel Accuracy
IOU: Intersection-over-Union
NNG: Nearest Neighbor Graph
RAG: Region Adjacency Graph
SVM: Support Vector Machine
MNF: Minimum Noise Fraction
PFOM: Pratt’s Figure of Merit
PCA: Principal Component Analysis
OPTICS: An unsupervised clustering method [13]
DCT-ICA: A DCT-based ICA method for hyperspectral data analysis [24]
RPNET: A hyperspectral classification method (random patches network) [31]
SSFCN: A deep learning classification model (spatial-spectral fully convolutional network) [26]

References

  1. Da-peng L, Heng-qian Z, Li-fu Z, Xue-sheng Z. Preliminary study in spectral mixing model of mineral pigments on Chinese ancient paintings-take azurite and malachite for example. Spectrosc Spectr Anal. 2018;38(8):2612–6.

  2. Tian S, Guo H, Cheng Q, et al. K-means Sobel algorithm in edge extracting of mural diseases. In: 2010 2nd International Conference on Information Engineering and Computer Science. IEEE; 2010. p. 1–4. https://doi.org/10.1109/ICIECS.2010.5677896.

  3. Zhang A, Hu S, Gao F. Investigation on diseases of Tibet murals using 3D laser scanning technology. In: 2009 13th International Conference Information Visualisation. IEEE; 2009. p. 568–71. https://doi.org/10.1109/IV.2009.109.

  4. Cornelis B, Ružić T, Gezels E, Dooms A, Pižurica A, Platiša L, Cornelis J, Martens M, De Mey M, Daubechies I. Crack detection and inpainting for virtual restoration of paintings: the case of the Ghent Altarpiece. Signal Process. 2013;93(3):605–19.

  5. Cao J, Li Y, Cui H, Zhang Q. Improved region growing algorithm for the calibration of flaking deterioration in ancient temple murals. Herit Sci. 2018;6:1–12.

  6. Meeus L, Huang S, Devolder B, Dubois H, Martens M, Pižurica A. Deep learning for paint loss detection with a multiscale, translation invariant network. In: 2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA). IEEE; 2019. p. 158–62.

  7. Sun M, Zhang D, Wang Z, Ren J, Chai B, Sun J. What’s wrong with the murals at the Mogao Grottoes: a near-infrared hyperspectral imaging method. Sci Rep. 2015;5(1):14371.

  8. Wang X, Chai B, Sun S. Thinking on the method of investigation and record of the current situation of Mogao Grottoes murals. Dunhuang Res. 2007;05:103–106, 123–124.

  9. Tu B, Zhou C, Liao X, Zhang G, Peng Y. Spectral-spatial hyperspectral classification via structural-kernel collaborative representation. IEEE Geosci Remote Sens Lett. 2020;18(5):861–5.

  10. Yihao F, Yue C, Jun W, Cheng L, Xiaoyu Z, Lu L, Baheti Z, Jinye P. Secrets on the rock: analysis and discussion of the Dunde Bulaq rock art site. Herit Sci. 2024;12(1):38.

  11. Li X, Lu D, Pan Y. Virtual Dunhuang mural restoration system in collaborative network environment. In: Computer Graphics Forum. Wiley Online Library; 2000. p. 331–40.

  12. Nocca F. The role of cultural heritage in sustainable development: multidimensional indicators as decision-making tool. Sustainability. 2017;9(10):1882.

  13. Li P, Sun M, Wang Z, Chai B. OPTICS-based unsupervised method for flaking degree evaluation on the murals in Mogao Grottoes. Sci Rep. 2018;8(1):15954.

  14. Liu X, Hou M, Dong Y, Wang W, Lü S. Evaluation of paint loss disease in Qutan Temple frescoes based on hyperspectral imagery. Geomat World. 2019;26(05):22–8.

  15. Yang H, Du J. Classification of desert steppe species based on unmanned aerial vehicle hyperspectral remote sensing and continuum removal vegetation indices. Optik. 2021;247:167877.

  16. Gao M, Xu Z, Lu L, Wu A, Nogues I, Summers RM, Mollura DJ. Segmentation label propagation using deep convolutional neural networks and dense conditional random field. In: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI). IEEE; 2016. p. 1265–8.

  17. Nguyen A, Kanoulas D, Caldwell DG, Tsagarakis NG. Object-based affordances detection with convolutional neural networks and dense conditional random fields. In: 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE; 2017. p. 5908–15.

  18. Hu M-K. Visual pattern recognition by moment invariants. IRE Trans Inf Theory. 1962;8(2):179–87.

  19. Kang X, Xiang X, Li S, Benediktsson JA. PCA-based edge-preserving features for hyperspectral image classification. IEEE Trans Geosci Remote Sens. 2017;55(12):7140–51.

  20. Licciardi G, Marpu PR, Chanussot J, Benediktsson JA. Linear versus nonlinear PCA for the classification of hyperspectral data based on the extended morphological profiles. IEEE Geosci Remote Sens Lett. 2011;9(3):447–51.

  21. Ren J, Zabalza J, Marshall S, Zheng J. Effective feature extraction and data reduction in remote sensing using hyperspectral imaging [applications corner]. IEEE Signal Process Mag. 2014;31(4):149–54.

  22. Demir B, Ertürk S. Empirical mode decomposition of hyperspectral images for support vector machine classification. IEEE Trans Geosci Remote Sens. 2010;48(11):4071–84.

  23. Green AA, Berman M, Switzer P, Craig MD. A transformation for ordering multispectral data in terms of image quality with implications for noise removal. IEEE Trans Geosci Remote Sens. 1988;26(1):65–74.

  24. Boukhechba K, Wu H, Bazine R. DCT-based preprocessing approach for ICA in hyperspectral data analysis. Sensors. 2018;18(4):1138.

  25. Achanta R, Shaji A, Smith K, Lucchi A, Fua P, Süsstrunk S. SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans Pattern Anal Mach Intell. 2012;34(11):2274–82.

  26. Xu Y, Du B, Zhang L. Beyond the patchwise classification: spectral-spatial fully convolutional networks for hyperspectral image classification. IEEE Trans Big Data. 2019;6(3):492–506.

  27. Canny J. A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell. 1986;8(6):679–98.

  28. Ghani ASA, Isa NAM. Enhancement of low quality underwater image through integrated global and local contrast correction. Appl Soft Comput. 2015;37:332–44.

  29. Minaee S, Boykov Y, Porikli F, Plaza A, Kehtarnavaz N, Terzopoulos D. Image segmentation using deep learning: a survey. IEEE Trans Pattern Anal Mach Intell. 2021;44(7):3523–42.

  30. Pratt WK. Image quantization. In: Digital Image Processing. 2007. Chap. 5, p. 127–44. https://doi.org/10.1002/9780470097434.ch5.

  31. Xu Y, Du B, Zhang F, Zhang L. Hyperspectral image classification via a random patches network. ISPRS J Photogramm Remote Sens. 2018;142:344–57.


Acknowledgements

The authors sincerely thank the Shaanxi History Museum for its strong support of this work and for providing valuable data and materials.

Funding

Supported by the Xi’an Science and Technology Innovation and Qinchuangyuan Innovation Major Program (Program No. 23ZDCYJSGG0009-2023), the Key Research and Development Program of Shaanxi (Program No. 2021ZDLGY15-06), and the National Natural Science Foundation of China (Program No. 62101446).

Author information

Contributions

Conceptualization: Y.K., H.Y. Methodology: H.Y. Formal analysis: F.Y., W.J. Investigation: Y.K., N.W. Resources: Z.Q., P.J. Data curation: F.Y. Writing—original draft preparation: Y.K., H.Y. Writing—review and editing: H.Y., N.W. Supervision: W.J., P.J. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Jun Wang or Jinye Peng.

Ethics declarations

Competing interests

The authors declare that they have no conflict of interest.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Cite this article

Yu, K., Hou, Y., Fu, Y. et al. Automatic labeling framework for paint loss disease of ancient murals based on hyperspectral image classification and segmentation. Herit Sci 12, 192 (2024). https://doi.org/10.1186/s40494-024-01316-z
