
Restoration of non-structural damaged murals in Shenzhen Bao’an based on a generator–discriminator network

Abstract

Shenzhen is a modern metropolis, but it harbors a variety of valuable cultural heritage, such as ancient murals. How to effectively preserve and repair these murals is a question worthy of discussion. Here, we propose a generator–discriminator network model based on artificial intelligence algorithms to perform digital image restoration of ancient damaged murals. In adversarial learning, this study optimizes the discriminator network model. First, the real mural images and damaged images are spliced together as input to the discriminator network. The network uses a 5-layer encoder unit to down-sample the 1024 × 1024 × 3 image to 32 × 32 × 256. Then, we connect a ZeroPadding2D layer to expand the image to 34 × 34 × 256, pass it through a Conv2D layer to down-sample it to 31 × 31 × 256, perform batch normalization, and repeat the above steps to obtain a 30 × 30 × 1 matrix. Finally, this part of the loss is emphasized in the loss function as needed to improve the texture detail of the image generated by the generator. The experimental results show that, compared with traditional algorithms, the PSNR value of the proposed algorithm can be increased by up to 5.86 dB, and the SSIM value by 0.13. Judging from subjective vision, the proposed algorithm can effectively repair damaged murals with dot-like damage and complex texture structures. The algorithm we propose may be helpful for the digital restoration of ancient murals and may also provide a reference for mural restoration workers.

Introduction

According to archaeological materials, Bao’an District has a history of more than 7000 years of human activity and more than 1600 years as an established county. According to incomplete statistics, more than 1000 architectural murals are well preserved in Bao’an. These murals can be divided into four categories according to their themes: landscapes; character stories; flowers, birds, and animals; and calligraphy. They play an indispensable role in the study of Lingnan culture and have important cultural and artistic value. Many ancient murals have been damaged to varying degrees by the natural environment and human activity. Although some scholars have put forward suggestions on the preservation and restoration of the murals [1, 2], their restoration remains urgent.

At present, mural inpainting relies mainly on the painting skill and rich experience of researchers, which takes a long time and produces uneven results. The complex composition, trace quantities of original material, and degradation of the murals greatly increase the difficulty of inpainting. Ma et al. analyzed mural samples by Fourier transform infrared spectroscopy (FTIR) and gas chromatography (GC) to understand the consistency of the materials [3]. Khramchenkova et al. used a portable X-ray fluorescence spectrometer (pXRF) to analyze the composition of mural pigments, and used optical and scanning electron microscopy (OM-SEM) to analyze the structure of the murals [4]. Liang et al. proposed making specific color charts for various artworks to improve the color and spectral accuracy of digital imaging of cultural artworks; taking the ancient Chinese Dunhuang murals as the research object, they drew a pigment prototype color map of the Dunhuang murals [5].

In the field of digital image inpainting, Bertalmio et al. proposed a computer image inpainting algorithm based on partial differential equations (PDEs) [6]. The algorithm repairs missing areas by diffusing complete pixel information from outside the defect areas inward along the isophotes. PDE-based image inpainting includes the Mumford-Shah [7, 8] and Total Variation [9] methods. In addition, there are traditional image inpainting methods based on texture synthesis, such as the restoration of Dunhuang murals through sparse modeling of texture similarity and structural continuity [10, 11], block-based image restoration built on the principle of self-similarity [12], an exemplar-based image inpainting approach [13], the combination of inconsistent images using a patch-based composition approach [14], and a priority-based texture synthesis inpainting algorithm that calculates priorities to determine the filling order [15].

Artificial intelligence technology is currently experiencing tremendous development and innovation. In the field of digital image repair, powerful autonomous learning algorithms such as deep learning and neural networks have been proposed. The application of these algorithms can help restoration workers complete the restoration of murals and other cultural relics efficiently and accurately. For example, Zeng et al. designed a convolutional neural network based on nearest-neighbor pixel matching for the restoration of ancient paintings; it successfully predicted the information in larger missing areas and obtained good restoration results [16]. Cao et al. proposed an ancient mural classification method based on an improved AlexNet network and an adaptive sample block and local search (ASB-LS) algorithm based on the Criminisi algorithm [17, 18]. Xie et al. showed that stacked sparse denoising auto-encoders could, to a certain extent, solve the problems of image noise and superimposed text defacement [19]. Zhang et al. proposed using the rectified linear unit (ReLU) activation function to obtain better repair effects and faster training [20]. The use of generative adversarial networks (GANs) for image restoration is gaining more and more attention. For example, Cao et al. recently proposed an improved GAN method to restore ancient murals, mainly murals that are relatively well preserved but have a small part missing [21]. With the application of deep learning to image repair, comprehensive restoration of mural information can be realized (a shortcoming of traditional image repair), and at the same time more efficient and reasonable repair results can be obtained.

Based on this, we take the conservation and restoration of Bao'an District murals as an example and employ a generator–discriminator network algorithm (a type of neural network algorithm). Using 137 relatively intact murals as training models and 22 poorly preserved murals as restoration objects, the AI technique was used to restore images of ancient Bao’an District murals.

Methods

Generator–discriminator network algorithm

A generator–discriminator network model is used in this paper. The generator network is based on an improved U-Net model [22]. The generator is essentially an auto-encoder, subdivided into an encoder and a decoder: the encoder is composed of multiple down-sampling layers and the decoder of multiple up-sampling layers. The repaired image of the mural is produced by the generator.

Subsequently, the repaired image is sent to the discriminator network, which is used to determine whether the input image was generated by the generator. When the discriminator has significant difficulty distinguishing between the real image and the image generated by the generator network, the image can be considered well repaired by the generator network. For murals with non-structural damage, the distribution of lost points is similar to salt-and-pepper noise (Fig. 1). In this paper, a salt-and-pepper noise algorithm that scatters black spots is adopted to simulate the losses of the murals (Fig. 2).

Fig. 1
figure1

Damaged murals. Authentic damaged murals in Bao’an District, Shenzhen. Purple (RGB: 255, 0, 254) marks the damaged places. The damage to these authentic murals is similar to salt-and-pepper noise

Fig. 2
figure2

Algorithm to simulate damaged murals
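The salt-and-pepper-style damage simulation of Fig. 2 can be approximated as follows. This is a minimal NumPy sketch under our own assumptions; the function name, the 5% noise fraction, and the equal salt/pepper split are illustrative, not values taken from the paper:

```python
import numpy as np

def add_salt_pepper(image, amount=0.05, seed=None):
    """Simulate dot-like mural damage by scattering black (pepper)
    and white (salt) pixels over a copy of the image."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    h, w = image.shape[:2]
    n = int(amount * h * w)  # pixels to corrupt per polarity
    # pepper: black dots, resembling the paint-loss points on the murals
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    noisy[ys, xs] = 0
    # salt: white dots
    ys, xs = rng.integers(0, h, n), rng.integers(0, w, n)
    noisy[ys, xs] = 255
    return noisy

# Example: corrupt a uniform gray 256x256 RGB image
clean = np.full((256, 256, 3), 128, dtype=np.uint8)
damaged = add_salt_pepper(clean, amount=0.05, seed=0)
```

Pairs of such (clean, damaged) images can then serve as the training examples for the generator network.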

The overall repair process is as follows: the damaged mural image is passed through the generator network to obtain the repaired image. The repaired image and the damaged image are then stitched together and input to the discriminator network, which determines whether the input image was generated by the model or is a real captured image (Fig. 3).

Fig. 3
figure3

Overall network structure

Generator network structure

The generator network is based on a modified version of the U-Net model, which consists of an encoder and a decoder. The encoder and decoder are connected directly through residual (skip) connections (Fig. 4).

Fig. 4
figure4

Generator network structure

Encoder

The encoder consists of eight coding units, each of which is a Conv → Batchnorm → Leaky ReLU structure. Each convolutional layer uses a fixed stride of 2, which down-samples the image. The input of each layer is retained for a residual connection so that more image detail is preserved. The input image array has shape (batch_size, 1024, 1024, 3) (Table 1).

Table 1 Encoder structure Conv2D
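A single encoder unit as described above (Conv → Batchnorm → Leaky ReLU with stride 2) might look like the following tf.keras sketch. The filter count of 64 and the kernel size of 4 are our assumptions for illustration, not values confirmed by Table 1:

```python
import tensorflow as tf

def downsample(filters, kernel_size=4):
    # One encoder unit: Conv (stride 2) -> BatchNorm -> LeakyReLU.
    # The stride-2 convolution halves the spatial resolution.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters, kernel_size, strides=2,
                               padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.LeakyReLU(),
    ])

# A (batch_size, 1024, 1024, 3) input is halved to (batch_size, 512, 512, filters)
x = tf.zeros([1, 1024, 1024, 3])
y = downsample(64)(x)  # shape (1, 512, 512, 64)
```

Applying eight such units in sequence carries the 1024 × 1024 input down through the spatial resolutions listed in Table 1.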

Decoder

The decoder consists of eight decoding units and can be viewed as the reverse of the encoder. Each decoding unit is a TransposedConv → Batchnorm → ReLU structure and is used for image reconstruction. The corresponding encoder outputs are joined as input through residual connections, and a dropout layer is added in the first three layers of the decoder to enhance robustness. Detailed figures are shown in Table 2.

Table 2 Decoder structure Conv2DTranspose
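A matching decoder unit (TransposedConv → Batchnorm → ReLU, with dropout in the first three layers) can be sketched as follows; the filter counts, kernel size, and 0.5 dropout rate are illustrative assumptions rather than values from Table 2:

```python
import tensorflow as tf

def upsample(filters, kernel_size=4, apply_dropout=False):
    # One decoder unit: TransposedConv (stride 2) -> BatchNorm -> [Dropout] -> ReLU.
    # The stride-2 transposed convolution doubles the spatial resolution.
    block = tf.keras.Sequential([
        tf.keras.layers.Conv2DTranspose(filters, kernel_size, strides=2,
                                        padding='same', use_bias=False),
        tf.keras.layers.BatchNormalization(),
    ])
    if apply_dropout:
        block.add(tf.keras.layers.Dropout(0.5))
    block.add(tf.keras.layers.ReLU())
    return block

# The residual (skip) connection concatenates the matching encoder output
x = tf.zeros([1, 32, 32, 256])
skip = tf.zeros([1, 64, 64, 128])
up = upsample(128, apply_dropout=True)(x)           # (1, 64, 64, 128)
merged = tf.keras.layers.Concatenate()([up, skip])  # (1, 64, 64, 256)
```

The concatenation step is how the encoder detail retained at each resolution re-enters the reconstruction.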

Discriminator network structure

The discriminator network is also an image convolution network. Its structure is similar to a classic image classification network [23]. The difference is that the input of a classic image classification network is one picture and the output is a classification of that picture, whereas in this paper the discriminator network's input is composed of two pictures and its output is a 30 × 30 matrix. Each element represents the classification result (0 or 1) of its region: 0 indicates that the discriminator network considers the region to be part of a restored mural picture generated by the model, and 1 indicates that it considers the region to be part of a real mural picture. By subdividing the spliced image into these 30 × 30 regions and emphasizing their losses as needed in the loss function, we can improve the level of detail in the images generated by the generator and achieve more satisfactory results (Fig. 5).

Fig. 5
figure5

Overview of the discriminator network process

In terms of specific structure, the network first uses a 5-layer encoder unit to down-sample the 1024 × 1024 × 3 image to 32 × 32 × 256. We then apply a ZeroPadding2D layer to expand the feature map to 34 × 34 × 256, pass it through a Conv2D layer to down-sample it to 31 × 31 × 256, and perform batch normalization; repeating these steps finally yields a 30 × 30 × 1 matrix.
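The shape arithmetic above can be verified with a small tf.keras sketch. The filter progression in the five down-sampling layers is our own assumption (the text only fixes the final 32 × 32 × 256 shape); the two valid-padding 4 × 4 convolutions reproduce the 34 → 31 and 33 → 30 reductions:

```python
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=[1024, 1024, 6])  # two stitched RGB images
x = inputs
# Five encoder units: 1024 -> 512 -> 256 -> 128 -> 64 -> 32
for filters in [32, 64, 128, 256, 256]:  # assumed filter progression
    x = tf.keras.layers.Conv2D(filters, 4, strides=2, padding='same',
                               use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    x = tf.keras.layers.LeakyReLU()(x)
# ZeroPadding2D: 32 -> 34, then a valid 4x4 convolution: 34 -> 31
x = tf.keras.layers.ZeroPadding2D()(x)
x = tf.keras.layers.Conv2D(256, 4, strides=1, use_bias=False)(x)
x = tf.keras.layers.BatchNormalization()(x)
x = tf.keras.layers.LeakyReLU()(x)
# Repeat: pad 31 -> 33, then a valid 4x4 convolution down to the 30x30x1 patch map
x = tf.keras.layers.ZeroPadding2D()(x)
outputs = tf.keras.layers.Conv2D(1, 4, strides=1)(x)  # (None, 30, 30, 1)

discriminator = tf.keras.Model(inputs=inputs, outputs=outputs)
```

Each element of the 30 × 30 output judges one region of the stitched input, which is what allows the per-region losses to be weighted in the loss function.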

Loss function

Generator network loss function

The following two indices can be used to measure the effect of the generator network: (1) the spoofing effect of the generator network for the discriminator network, and (2) the difference between the repaired image and the real image.

For (1), Log Loss is used in this paper to calculate the loss between the discriminator network’s output and the 30 × 30 all-1 matrix.

$${L}_{Gen1}=\mathrm{log}loss(ones,\mathrm{discriminator}\_\mathrm{gen}\_\mathrm{output})$$

Here, \(ones\) is a 30 × 30 matrix (all of its elements are 1), and \(\mathrm{discriminator}\_\mathrm{gen}\_\mathrm{output}\) is the output of the discriminator network when the repaired image generated by the generator network is input to it.

For (2), this paper calculates the absolute value of the difference between the real image matrix and the generated image matrix and then averages over all of its elements (the reduce_mean operation). The final output is used as the loss function.

$${L}_{\mathrm{Gen}2}=reduce\_mean(\left|real\_image-gen\_output\right|)$$

Here, \(real\_image\) is the matrix of the real captured image, and \(gen\_output\) is the matrix of the repaired image generated by the generator network.

The total generator network loss function can be expressed as follows:

\({L}_{Gen}={L}_{Gen1}+\lambda {L}_{Gen2}\).

In order to keep the ratio of \({L}_{Gen1}\) to \({L}_{Gen2}\) in a reasonable range, we add \(\lambda\) as an adjustment; \(\lambda\) thus controls the effect of the discriminator network on the generator network. The value of \(\lambda\) is 90.
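Putting \({L}_{Gen1}\), \({L}_{Gen2}\), and \(\lambda = 90\) together, the generator loss can be sketched in TensorFlow as follows, assuming (as is common for this loss) that the discriminator outputs raw logits:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
LAMBDA = 90  # keeps L_Gen1 and L_Gen2 in a reasonable ratio

def generator_loss(disc_gen_output, gen_output, real_image):
    # L_Gen1: log loss against an all-ones target (fool the discriminator)
    l_gen1 = bce(tf.ones_like(disc_gen_output), disc_gen_output)
    # L_Gen2: mean absolute difference between real and generated images
    l_gen2 = tf.reduce_mean(tf.abs(real_image - gen_output))
    return l_gen1 + LAMBDA * l_gen2

# Sanity check: identical images and zero logits leave only log 2 from L_Gen1
example = generator_loss(tf.zeros([1, 30, 30, 1]),
                         tf.zeros([1, 8, 8, 3]),
                         tf.zeros([1, 8, 8, 3]))
```

With identical real and generated images the L1 term vanishes, so the loss reduces to \(-\log \sigma(0) = \log 2 \approx 0.693\).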

Discriminator network loss function

The following two indicators can be used to measure the effect of the discriminator network: (1) the identification effect of the discriminator network on the real shooting mural image, (2) the identification effect of the discriminator network on the repaired image generated by the generator network.

For (1), this paper uses Log Loss to calculate the loss between the output of the discriminator network and the 30 × 30 all-1 matrix when the input is a real mural image.

$${L}_{Dis1}=\mathrm{log}loss(ones,discriminator\_real\_output)$$

Here, \(ones\) is a 30 × 30 matrix (all of its elements are 1), and \(discriminator\_real\_output\) is the output of the discriminator network when the stitched pair of the real mural image and the damaged image is input to it.

For (2), Log Loss is used in this paper to calculate the loss between the output of the discriminator network and the 30 × 30 all-0 matrix when the input is an image generated by the model.

$${L}_{Dis2}=\mathrm{log}loss(zeros,discriminator\_gen\_output)$$

Here, \(zeros\) is a 30 × 30 matrix (all of its elements are 0), and \(discriminator\_gen\_output\) is the output of the discriminator network when the stitched pair of the repaired image generated by the generator network and the damaged image is input to it.

The total discriminator network loss function can be expressed as follows:

\({L}_{Dis}={L}_{Dis1}+{L}_{Dis2}\).
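The two terms above combine into the discriminator loss; a minimal TensorFlow sketch, again assuming the discriminator outputs raw logits:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def discriminator_loss(disc_real_output, disc_gen_output):
    # L_Dis1: real pairs should be classified as 1 in every region
    l_dis1 = bce(tf.ones_like(disc_real_output), disc_real_output)
    # L_Dis2: generated pairs should be classified as 0 in every region
    l_dis2 = bce(tf.zeros_like(disc_gen_output), disc_gen_output)
    return l_dis1 + l_dis2

# Sanity check: zero logits for both inputs give log 2 + log 2
example = discriminator_loss(tf.zeros([1, 30, 30, 1]),
                             tf.zeros([1, 30, 30, 1]))
```

An undecided discriminator (zero logits everywhere) therefore sits at \(2\log 2 \approx 1.386\), the value from which adversarial training pulls the two terms in opposite directions.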

Results and discussion

Experiment environment

The hardware environment is mainly composed of an Intel(R) Xeon(R) Gold 5118 CPU @ 2.30 GHz, 64 GB of memory, and two Nvidia Tesla V100 16 GB graphics cards. The software environment includes the TensorFlow 2.0 deep learning framework running on Ubuntu 18. The software is written in Python, and 137 image-enhanced mural pictures were used to train the model for 150 epochs to obtain the results.

Experiment data source

The 137 murals used for training and the 22 murals used as restoration objects are typical representatives of the murals in Bao’an District, so a study of mural restoration using them as basic image data is representative to a certain extent (Fig. 6).

Fig. 6
figure6

Location of Bao’an District in Shenzhen and quantity distribution of murals used in the experiment

From the perspective of regional distribution, the 159 murals used in the experiment came from 15 ancient villages in 7 subdistricts of Bao’an District. As can be seen from Table 1, the specific distribution covers most of Bao’an (Fig. 7) and is widely spread, representing the overall appearance of the murals in the district.

Fig. 7
figure7

The number of murals taken from each subdistrict

In terms of architectural type, there are 32 murals in watchtowers, accounting for 20.13%, 9 murals in old-style private schools (5.66%), 21 murals in residences (13.21%), 6 murals in study rooms (3.77%), 35 murals in temples (22.01%), and 56 murals in ancestral halls (35.22%).

Judging by the dates inscribed on the murals, most of the buildings were built between the Ming and Qing dynasties and the Republic of China era. Note that the painting dates of 7 murals remain to be determined. According to the inscribed dates, 38 murals were produced before 1840 and 114 between 1840 and 1949. Regarding content, there are 107 murals of flowers, birds, and animals, accounting for 67.30% of all murals, 27 murals of character stories (16.98%), and 25 landscape murals (15.72%).

Influence of discriminator network resolution on inpainting effect

We use 1024 × 1024 images as input, and the discriminator network outputs a matrix with resolution 30 × 30. To explore the impact of the discriminator network resolution on the quality of the repaired image, we experimented with several different resolutions. The following results are the outputs of the model after 150 epochs of training. From left to right in Fig. 8, the final output resolutions of the discriminator network are 2 × 2, 6 × 6, 30 × 30, and 126 × 126.

Fig. 8
figure8

Effects of various discriminator network resolutions

We select two groups of pictures from the 2 × 2 group and the 126 × 126 group for comparison (Fig. 9). Due to the higher resolution of the 126 × 126 group, more details are restored on the person's face, and the edges of the branches are sharper. At the same time, however, comparing the leaves shows that the 126 × 126 group generates significantly more black noise than the 2 × 2 group. This is in line with expectations.

Fig. 9
figure9

Comparison group of repair effects

Comparison test of different methods for mural restoration with artificial salt and pepper noise

In this part, the proposed method is compared with three competing methods on six simulated damaged mural images, as shown in Fig. 10. The three algorithms, Criminisi [15], Darabi [14], and Wang [13], were described in the introduction. The proposed algorithm and the three methods were run on the same set of mural images in order to compare their repair effects. First, we artificially added salt-and-pepper noise to damage well-preserved murals. Then, we randomly selected an area of 512 × 512 pixels on each image. The inpainting results of the different methods are shown in Fig. 10.

Fig. 10
figure10

Comparison group of repair effects

In Fig. 10, all algorithms in general achieve good results in the color restoration, texture similarity, and structural continuity of the artificially added noise. Since the noise does not damage large areas, there are few restrictions on the search for matching blocks and on texture diffusion. However, when noise is added to the entire image, the textures to be processed become more complicated. The textures generated by the three competing algorithms still appear somewhat blurry and fail to reflect the original textures and structures of the image. Criminisi's method produces unwanted textures and incomplete structures. Wang's algorithm makes slight progress in filling textures, but many structures and textures are still incoherent and incomplete. Darabi's method is better than the previous two algorithms in the restoration of color, structure, and texture. In contrast, the proposed algorithm splices the damaged and repaired pictures into its input, which enhances the consistency of the overall restoration of the mural image. Therefore, in theory, it can achieve good inpainting effects on mural images with complex information.

After completing the above mural inpainting with artificial salt-and-pepper noise, we calculated and compared the averages of the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) values (Fig. 11). PSNR reflects the image quality of the restored image compared with the original; the higher the PSNR value, the better the quality of the inpainted image (Fig. 11a). The average PSNR of our proposed algorithm is 34.36 ± 0.99, significantly higher than both Criminisi and Wang (**P < 0.01) and also higher than Darabi's method. SSIM is an index measuring the similarity of two images; the closer SSIM is to 1, the more similar they are (Fig. 11b). The SSIM value of the proposed algorithm is as high as 0.91 ± 0.2, also significantly higher than the methods of Criminisi and Wang (**P < 0.01) and higher than the SSIM obtained by Darabi's method.

Fig. 11
figure11

Comparisons of PSNR and SSIM on inpainting six mural images. a presents the PSNR on inpainting simulated damaged mural images, b shows the SSIM on inpainting simulated damaged mural images, N = 6, **P < 0.01, v.s. Criminisi and Wang
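PSNR can be computed directly from its definition; the sketch below is a plain NumPy implementation (SSIM is more involved, and in practice a library routine such as skimage.metrics.structural_similarity is typically used instead of a hand-rolled one):

```python
import numpy as np

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE).
    Higher values mean the restored image is closer to the reference."""
    ref = reference.astype(np.float64)
    res = restored.astype(np.float64)
    mse = np.mean((ref - res) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, a uniform error of 10 gray levels gives MSE = 100 and thus PSNR = 10 · log10(65025 / 100) ≈ 28.13 dB.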

Restoration effect of the discriminator network used for real mural images

In order to test the restoration effect of the proposed algorithm on real (non-simulated) damaged murals, we selected 22 mural images and observed the restoration effect on the real murals after model training. The damage in the 22 mural images is dispersed and dot-like, similar to the simulated damage used in the repair experiments. Performance comparisons are made on three of the real damaged murals. The inpainting results of the four algorithms are shown in Fig. 12.

Fig. 12
figure12

Comparison of the restoration effects on authentic damaged murals of Shenzhen produced by different methods

Although all four methods work well for repairing simulated damaged murals, there are still obvious differences between them on real damage. The missing regions of real damaged murals are very complicated. Due to their limitations, the traditional methods give poor results when repairing murals with complex damaged structures: information is blurry, textures are fragmented, and many areas to be repaired are not inpainted at all, failing to reflect the textures and structures of the original mural image. Our algorithm performs better than the other methods. The proposed algorithm not only achieves a more satisfactory effect than traditional methods in inpainting areas with missing textures and structures, but also delivers better color restoration and strong visual consistency (Fig. 12).

In order to ensure a more convincing evaluation, a subjective evaluation method similar to that of Cao [21] and [18] was adopted. We invited two ancient-mural restoration experts to score the structural continuity and texture consistency of the murals before and after restoration by the four algorithms. The scoring system has 10 levels, with 10 points the highest and 1 point the lowest. The results are shown in Fig. 13. Compared with the traditional algorithms, the two experts consider that the proposed algorithm has a significantly better inpainting effect in terms of structural continuity and texture consistency (**P < 0.01). This indicates that, based on subjective evaluation, our proposed algorithm is superior to the other three methods.

Fig. 13
figure13

Subjective scoring of the restoration effects of different repair methods on real damaged murals in Shenzhen. N = 6, **P < 0.01, v.s. Criminisi, Darabi and Wang

Discussion

In recent years, many scholars have applied technical methods to the protection and restoration of murals. For example, studies of the influence of Streptomyces on the color of ancient Egyptian tomb murals have presented some solutions [24]. The digital simulated restoration of ancient Chinese murals has mostly involved the Dunhuang murals [25], and there are few studies on the restoration of precious murals elsewhere, such as the ancient murals in Shenzhen. For example, a digital image restoration technique with a macro perspective was constructed for the architecture of the Dunhuang mural protection and restoration system. There are other improved image decomposition techniques, such as the Criminisi algorithm and a Markov algorithm for the digital inpainting of Dunhuang murals [26, 27]. Some algorithms are digital restoration methods based on the classification of mural damage patterns or plaque characteristics. For example, morphological component analysis (MCA) has been used to decompose a mural into structure and texture parts in order to repair cracks and mud spots, respectively [28, 29]. Other studies have shown that a mural restoration method based on sample-block priority can accurately calibrate mud-spot disease in digital Tibetan murals and perform simulated restoration [30]. The intelligent restoration of ancient murals described above has achieved certain results. Nevertheless, mural restoration using artificial intelligence is still in its infancy.

Generally, traditional image inpainting methods are divided according to the repaired area into (1) methods that repair small damaged areas by transferring information from known areas [18, 31,32,33], and (2) methods that repair large damaged areas by synthesis and matching [34, 35]. In practical applications, the computation is very large and time-consuming. Moreover, in mural image restoration, because few complete murals are preserved, traditional methods use ordinary pictures as training input [34]. Although this compensates for the small sample size, the trained model is often not effective on real damage because the artistic style is not considered in the transfer learning process [36]. In this paper, 137 murals were used for training and for learning how to inpaint murals, not for restoring the real ones directly. Through the generator and discriminator networks proposed in this paper, using the 137 murals as training data can result in better inpainting of the real murals.

Conclusion

We propose a generative adversarial network that takes stitched images as input and outputs a 30 × 30 matrix. After the model has been trained for 150 epochs, it is used to repair severely worn mural images with non-structural damage. The results show that, compared with traditional algorithms, our algorithm can significantly improve the subjective ornamental value as well as the PSNR and SSIM values of damaged frescoes, indicating that the proposed algorithm achieves better restoration in terms of the color, texture similarity, and structural continuity of the damaged frescoes.

However, some limitations still exist. First, this study only simulated salt-and-pepper noise defects and did not explore large-scale defects. Second, the color of the restored murals needs further consideration and optimization. Third, as the resolution of the discriminator network increases, the image generated by the model gains more detail but also some noise; in practice, an appropriate resolution must be selected to balance detail against noise. The method could also be tried on murals with large missing areas. Finally, transfer learning algorithms could be introduced, or the sample size of high-quality ancient-mural training data from other regions increased, to achieve high-quality restoration of large-scale defects.

Availability of data and materials

All data for analysis in this study are included within the article.

Abbreviations

Conv:

Convolution layer

Batchnorm:

Batch normalization layer

PSNR:

Peak Signal-to-Noise Ratio

SSIM:

Structural Similarity

ReLU:

Rectified Linear Unit

References

  1. Liu L. Research on Shenzhen Phoenix Village protective update strategy based on symbiosis theory (in Chinese). Harbin: Harbin Institute of Technology; 2014. p. 2–10.
  2. Song J. Study on regionality of contemporary museum design in the Pearl River Delta region (in Chinese). Guangzhou: South China University of Technology; 2012. p. 68–90.
  3. Ma Z, Yan J, Zhao X, Wang L, Yang L. Multi-analytical study of the suspected binding medium residues of wall paintings excavated in Tang tomb, China. J Cult Herit. 2017;24:171–4.
  4. Khramchenkova R, Biktagirova I, Gareev B, Kaplan P. Horse-headed Saint Christopher fresco in the Sviyazhsk Assumption Cathedral (16th–17th century, Russia): history and archaeometry. Mediterr Archaeol Archaeom. 2018;18(3):195–207.
  5. Liang J, Wan X. Prototype of a pigments color chart for the digital conservation of ancient murals. J Electron Imaging. 2017;26(2):023013.
  6. Bertalmio M, Vese L, Sapiro G, Osher S. Simultaneous structure and texture image inpainting. IEEE Trans Image Process. 2003;12(8):882–9.
  7. Tsai A, Yezzi A, Willsky AS. Curve evolution implementation of the Mumford-Shah functional for image segmentation, denoising, interpolation, and magnification. IEEE Trans Image Process. 2001;10(8):1169–86.
  8. Esedoglu S, Shen JH. Digital inpainting based on the Mumford-Shah-Euler image model. Eur J Appl Math. 2002;13:353–70.
  9. Chan TF, Shen JH. Mathematical models for local nontexture inpaintings. SIAM J Appl Math. 2002;62(3):1019–43.
  10. Wang H, Li Q, Zou Q. Inpainting of Dunhuang murals by sparsely modeling the texture similarity and structure continuity. ACM J Comput Cult Herit. 2019. https://doi.org/10.1145/3280790.
  11. Wang H, Li Q, Jia S. A global and local feature weighted method for ancient murals inpainting. Int J Mach Learn Cybern. 2020;11(6):1197–216.
  12. Drori I, Cohen-Or D, Yeshurun H. Fragment-based image completion. ACM Trans Graph. 2003;22(3):303–12.
  13. Wang J, Lu K, Pan D, He N, Bao B-K. Robust object removal with an exemplar-based image inpainting approach. Neurocomputing. 2014;123:150–5.
  14. Darabi S, Shechtman E, Barnes C, Goldman DB, Sen P. Image melding: combining inconsistent images using patch-based synthesis. ACM Trans Graph. 2012. https://doi.org/10.1145/2185520.2185578.
  15. Criminisi A, Perez P, Toyama K. Region filling and object removal by exemplar-based image inpainting. IEEE Trans Image Process. 2004;13(9):1200–12.
  16. Zeng Y, Gong Y, Zeng X. Controllable digital restoration of ancient paintings using convolutional neural network and nearest neighbor. Pattern Recogn Lett. 2020;133:158–64.
  17. Cao J, Cui H, Zhang Q, Zhang Z. Ancient mural classification method based on improved AlexNet network. Stud Conserv. 2020;65(7):411–23.
  18. Cao J, Li Y, Zhang Q, Cui H. Restoration of an ancient temple mural by a local search algorithm of an adaptive sample block. Herit Sci. 2019. https://doi.org/10.1186/s40494-019-0281-y.
  19. Xie J, Xu L, Chen E. Image denoising and inpainting with deep neural networks. In: Advances in Neural Information Processing Systems. 2012;1.
  20. Zhang L, Wu Y, Zhao H. Image denoising with rectified linear units. 2014.
  21. Cao J, Zhang Z, Zhao A, Cui H, Zhang Q. Ancient mural restoration based on a modified generative adversarial network. Herit Sci. 2020;8(1):7.
  22. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention. 2015.
  23. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2015;39(4):640–51.
  24. Abdel-Haliem ESF. Discoloration of ancient Egyptian mural paintings by Streptomyces strains and methods of its removal. Int J Conserv Sci. 2012;3(4):249–58.
  25. Pan YH, Lu D-M. Digital protection and restoration of Dunhuang mural. Acta Simulata Systematica Sinica. 2003.
  26. Yang X-P, Wang S-W. Dunhuang mural inpainting in intricate disrepaired region based on improvement of priority algorithm (in Chinese). J Comput Aided Des Comput Graph. 2011;23(2):284–9.
  27. Yang X-P, Wang S-W. Dunhuang mural inpainting based on Markov random field sampling. J Comput Appl. 2010;30:1835.
  28. Shen J, Wang H, Wu M, Yang W. Tang dynasty tomb murals inpainting algorithm of MCA decomposition. J Front Comput Sci Technol. 2017;11:1826–36.
  29. Smith LN, Elad M. Improving dictionary learning: multiple dictionary updates and coefficient reuse. IEEE Signal Process Lett. 2013;20(1):79–82.
  30. Jiang J, ZG, Wang ZX. Digital curtain diameter mould disease auto calibration and restoration method simulation. Comput Simul. 2018;35:215–9.
  31. Shen J, Chan TF. Mathematical models for local nontexture inpaintings. SIAM J Appl Math. 2002;62:1019–43.
  32. Chan TF, Shen J. Nontexture inpainting by curvature-driven diffusions. J Visual Commun Image Represent. 2001;12:436.
  33. Cao J, Zhang Z, Zhao A, Cui H, Zhang Q. Ancient mural restoration based on a modified generative adversarial network. Herit Sci. 2020;8(1):7.
  34. Wen L, Xu D, Zhang X, Qian W. The inpainting of irregular damaged areas in ancient murals using generative model (in Chinese). J Graph. 2019;5:925–31.
  35. Ren X, Chen P. Murals inpainting based on generalized regression neural network (in Chinese). Comput Eng Sci. 2017;39(10):1884–9.

    Google Scholar 

  36. 36.

    Liu J. Intelligent Image Processing and Inpainting for Ancient Fresco Preservation (in China). Hangzhou: Zhejiang University; 2010. p. 20–30.

    Google Scholar 


Acknowledgements

The authors wish to express their sincere gratitude to the researchers Weiwen Huang and Dr. Jinming Liu at China Mobile Guangdong Co. for their kind support and assistance with this research.

We also thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

Funding

This study was funded by The National Social Science Fund of China (19BZS117) and Key-Area Research and Development Program of Guangdong Province (2018B010112001).

Author information


Contributions

All the authors contributed to the current work. JL devised the study plan and wrote the manuscript. HW, ZQD, JL and MTP were responsible for the whole experiment, data collection and analysis. HHC supervised the entire process and provided constructive advice. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Honghai Chen.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Li, J., Wang, H., Deng, Z. et al. Restoration of non-structural damaged murals in Shenzhen Bao’an based on a generator–discriminator network. Herit Sci 9, 6 (2021). https://doi.org/10.1186/s40494-020-00478-w


Keywords

  • Mural restoration
  • Mural in Bao’an district
  • Generator network
  • Discriminator network