  • Research article
  • Open access

Documentation of paintings restoration through photogrammetry and change detection algorithms

Abstract

The philosophical and theoretical foundations of the Theory of Restoration, envisioned by Cesare Brandi in 1975, are built around clear and straightforward guidelines on what is ethically acceptable, and unacceptable, in conservation. Specifically, the Italian scholar advocates for the complete reversibility of restoration work and respect for the history of an artwork. According to these concepts, all interventions should be fully reversible, so that the object can be returned to its initial condition without any damage. Bearing in mind these assumptions, detailed documentation of every step of the conservation process, and the possibility of retrieving it a posteriori, must be considered essential. This especially applies to the restoration of paintings characterised by fine and small details. In recent years, the tendency has been to favour minimally invasive interventions ranging from consolidation actions and cleaning samples to colour retouching. Materials change more or less conspicuously over time according to their consistency and the intensity of the changing factors, and icons are no exception to this rule. The process affects the icon’s whole structure: the support, the painting itself and the varnish coating. This paper investigates the performance of change detection algorithms developed in the remote sensing domain and, in the framework of this research, applied at a microscale (paintings). Each phase of the restoration process is documented by exploiting multi-epoch image acquisition. A monitoring methodology coupling photogrammetry and 3D shape analysis is tested and described. It is anticipated that the proposed innovative use of change detection techniques can be applied to different kinds of painted surfaces. An icon, today preserved at the Byzantine Museum Makarios III Foundation in Nicosia and restored in the labs of the Department of Antiquities of Cyprus, has been used as a case study.

Introduction

For almost two decades, digital technologies have been increasingly used to address art conservation problems and to provide objective diagnostic and documentation tools.

Concerning the restoration of paintings, the scientific community has indeed proposed a variety of methodologies for the study, identification, and mapping of the decay. Techniques such as multispectral analysis [1], virtual restoration [2], Geographical Information System (GIS) [3], and 3D modelling [4] have proven to be successful when integrated in a multidisciplinary research environment.

The documentation of painted artworks is today mainly realised through digital cameras using direct or oblique light, coupled, when possible, with colour checkerboards and metric rulers. The typical outcome is a single-shot image, often affected by more or less visible perspective distortions and by lighting changes due to the reflectivity of the surfaces.

When needed, the collected images are visually analysed a posteriori to identify the changes the artwork went through during the restoration. The proposed study presents a novel way to exploit change detection procedures and photogrammetric techniques to monitor and retrieve, automatically and straightforwardly, the variations that occurred on the pictorial surface at different stages.

Change Detection can be defined as the process of identifying transformations of a surface over time. In this study, it has been applied for the first time in heritage science to the restoration procedure of a painted item, a Byzantine icon.

Eleven image data sets have been acquired and processed to highlight the changes that occurred due to the conservation interventions.

In detail, the paper is structured as follows: (i) description of the data collection procedure through photogrammetric techniques (see “Photogrammetric surveys” section); (ii) data post-processing for the production of radiometrically correct orthophotos (see “Data post processing” section); (iii) application of change detection algorithms over different epoch datasets in 2D and 3D space (see “2D change detection analysis” section); (iv) analysis and interpretation of the results (see “2D change detection data interpretation” section).

Additionally, an assessment of the geometric conservation condition of the wooden support has been realised.

Methods/experimental

2D change detection

Many techniques have been developed in the remote sensing domain with the purpose of identifying the differences that occurred on the Earth’s surface over time. This task, usually referred to as change detection, can be approached in a supervised or unsupervised way, the latter being preferred when no training samples, or only limited ground knowledge, are available.

The Multivariate Alteration Detection (MAD) algorithm was selected in this study after a comparison with the Principal Component Analysis (PCA) algorithm [5], well known and widely used in remote sensing. The latter is a statistical procedure, used in several domains, that transforms a set of correlated variables into a new set of uncorrelated variables by considering the principal directions in which the data are spread in space. Whereas PCA allows reducing the size and redundancy of the original data, MAD considers maximum autocorrelation, eliminating issues related to the possibility that a dominating element in the image affects the PCA components. In addition, MAD is invariant to linear transformations of the data, making it insensitive to whether it is applied to raw Digital Numbers (DNs) or to transformed images [6].

MAD is a widely used linear transformation method for image analysis. Introduced by Nielsen et al. [7], MAD seeks to improve on simple image differencing techniques by exploiting Canonical Correlation Analysis (CCA). The main principle is to make the images as similar (i.e., correlated) as possible before computing their difference. This is carried out by using CCA to find two sets of linear combinations of the original variables, where the first two linear combinations (called canonical variates) are the ones featuring the most significant correlation (called the first canonical correlation). This process is then iterated to compute the higher-order canonical correlations/variates, under the condition that they be orthogonal (i.e., uncorrelated) to the previous ones. If N is the number of bands of the first and second input images, the differences between the corresponding pairs of variates (called MAD variates or components) constitute N change maps that are usually combined in a single multi-band image.
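The CCA/MAD computation described above can be sketched compactly. The following Python fragment is an illustrative reconstruction, not the implementation used in this study; it assumes two co-registered multiband images flattened to `(bands, n_pixels)` arrays and solves the CCA step as a generalised eigenproblem.

```python
import numpy as np
from scipy.linalg import eigh

def mad_variates(x, y):
    """MAD change variates for two co-registered multiband images.

    x, y : float arrays of shape (bands, n_pixels).
    Returns the MAD variates (bands, n_pixels), sorted by increasing
    canonical correlation (band 0 carries the strongest change signal),
    and the canonical correlations themselves.
    """
    n_pix = x.shape[1]
    xc = x - x.mean(axis=1, keepdims=True)
    yc = y - y.mean(axis=1, keepdims=True)
    sxx = xc @ xc.T / (n_pix - 1)
    syy = yc @ yc.T / (n_pix - 1)
    sxy = xc @ yc.T / (n_pix - 1)
    # CCA as a generalised eigenproblem: (Sxy Syy^-1 Syx) a = rho^2 Sxx a
    rho2, a = eigh(sxy @ np.linalg.solve(syy, sxy.T), sxx)  # ascending rho^2
    rho = np.sqrt(np.clip(rho2, 0.0, 1.0))
    # Matching directions in the second image: b proportional to Syy^-1 Syx a
    b = np.linalg.solve(syy, sxy.T @ a)
    u = a.T @ xc                          # canonical variates of image 1
    v = b.T @ yc                          # canonical variates of image 2
    u /= u.std(axis=1, keepdims=True)
    v /= v.std(axis=1, keepdims=True)
    v *= np.sign(np.sum(u * v, axis=1, keepdims=True))  # align signs
    # eigh returns ascending correlations, so index 0 is the least
    # correlated pair, i.e. the first MAD variate (maximum change)
    return u - v, rho
```

On simulated bitemporal data, pixels affected by a change stand out in the first MAD variate while stable pixels stay close to zero.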

Since the MAD components lack a direct semantic interpretation, the adoption of a combined procedure can be preferred to support the understanding of the changes found by MAD. For this reason, Nielsen proposed to apply the Maximum Autocorrelation Factor (MAF) transformation to the MAD components [8]. The MAF transform seeks to isolate the noise component of the data by computing a new set of variates out of the original ones, where low-order components feature maximal spatial autocorrelation (signal), whereas the highest-order variates feature minimal spatial autocorrelation (noise). Accordingly, the first MAF-MAD component will identify areas with maximum changes, while the noise is expected to be isolated in the higher-order MAF-MAD components. The use of the MAD technique, either alone or in combination with the MAF transform, is well known in the remote sensing community [9,10,11].
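The MAF step can be sketched in the same spirit. The snippet below is an illustration under stated assumptions (array shapes, unit-shift differences as the autocorrelation proxy), not the authors' code: it contrasts the covariance of the image bands with that of their spatial differences, so that components with small difference variance, hence maximal spatial autocorrelation, come first.

```python
import numpy as np
from scipy.linalg import eigh

def maf_transform(img):
    """Maximum Autocorrelation Factor transform (illustrative sketch).

    img : array (bands, rows, cols), e.g. stacked MAD variates.
    Returns MAF components (bands, rows, cols) ordered by decreasing
    spatial autocorrelation: component 0 is the smoothest (signal),
    the last component isolates the noise.
    """
    bands = img.shape[0]
    z = img.reshape(bands, -1)
    zc = z - z.mean(axis=1, keepdims=True)
    sigma = zc @ zc.T / (zc.shape[1] - 1)
    # Horizontal and vertical unit-shift differences of every band
    dh = (img[:, :, 1:] - img[:, :, :-1]).reshape(bands, -1)
    dv = (img[:, 1:, :] - img[:, :-1, :]).reshape(bands, -1)
    d = np.hstack([dh, dv])
    d = d - d.mean(axis=1, keepdims=True)
    sigma_d = d @ d.T / (d.shape[1] - 1)
    # Small generalised eigenvalues of (Sigma_d, Sigma) correspond to
    # maximal spatial autocorrelation; eigh returns them ascending
    _, vecs = eigh(sigma_d, sigma)
    return np.tensordot(vecs.T, img, axes=([1], [0]))
```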

2D change detection literature review

Detecting changes in images of the same scene acquired at different times has seen significant development in a wide range of disciplines, such as video surveillance, remote sensing, medical diagnosis, driver assistance systems, civil engineering, disaster management, and cultural heritage [12]. However, to date, there is no record of its use for the documentation of the restoration process or the monitoring of painted surfaces.

In the last three decades, the research community has proposed a variety of change detection methods using different approaches such as image differencing, image rationing, principal component analysis, change vector analysis and post-classification. A complete review of change detection algorithms can be found in [13,14,15].

Change detection algorithms are usually categorized into two main typologies:

  • Algebra-based change detection, including image differencing, image rationing, image regression, vegetation index differencing, change vector analysis and background subtraction techniques;

  • Classification based change detection such as post-classification comparison, spectral–temporal analysis, unsupervised change detection, and hybrid change detection.

Although the study reported in this paper represents the first attempt at using change detection algorithms for the documentation of the restoration process of a painted artefact, the heritage and archaeological domains have made extensive use of multi-temporal imagery as a tool for the large-scale protection of monuments, sites and cultural landscapes [16]. Table 1 presents a literature review concerning the use of 2D change detection algorithms in heritage and archaeological science.

Table 1 2D change detection, summary of the literature review

3D change detection

In the 3D domain, change detection algorithms require three-dimensional models as input data instead of 2D images. These are usually obtained through state-of-the-art image- and range-based modelling techniques, in the form of point clouds or meshes.

Changes are then usually identified by measuring the distance between 3D models. Four main approaches for distance computation are generally adopted, namely:

  • DEM of Difference (DoD) [25];

  • Direct cloud-to-cloud comparison with closest point technique (C2C) [26];

  • Cloud-to-mesh distance or cloud-to-model distance (C2M) [27,28,29];

  • Multiscale Model to Model Cloud Comparison (M3C2) [30].
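To make the simplest of these approaches concrete, a direct cloud-to-cloud (C2C) comparison reduces to a nearest-neighbour search. The sketch below is illustrative only (the study itself relied on established tools for its 3D comparisons):

```python
import numpy as np
from scipy.spatial import cKDTree

def c2c_distances(reference, compared):
    """Direct cloud-to-cloud (C2C) distances, closest-point technique.

    reference, compared : arrays of shape (n_points, 3).
    Returns, for every point of `compared`, the unsigned distance to
    its closest point in `reference`.
    """
    tree = cKDTree(reference)           # spatial index on the reference cloud
    dist, _ = tree.query(compared, k=1)
    return dist
```

Unlike M3C2, this closest-point distance is unsigned and sensitive to point density, which is one reason signed, locally averaged methods are preferred for fine monitoring.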

3D change detection literature review

Table 2 summarises some significant examples describing how the aforementioned 3D change detection algorithms, coupled with range- and image-based modelling techniques, can help monitor geometric features of heritage assets over time.

Table 2 3D change detection, summary of the literature review

Geometric analysis of paintings

In the field of painting conservation, there are a number of non-contact techniques that allow detailed analyses of the artwork’s surface and support in the visible and non-visible spectrum.

In recent years, the study and characterisation of artworks’ surfaces employing different non-invasive digital techniques has been quickly evolving [38,39,40,41]. Typical outputs are pigment identification, colour measurements, extraction of geometric features (brush stroke details), and shape measurements. Range-based 3D modelling techniques [42,43,44,45], such as laser scanners and structured-light sensors, and image-based 3D modelling techniques, such as photogrammetry [46,47,48], can provide precise and reliable 3D geometrical and radiometric information useful for detailed analyses and inspections. Both approaches allow the retrieval of very high geometric detail, with reported spatial resolutions ranging from 60 to 400 μm.

The case study

The artwork used as a case study is a Byzantine icon dating to the 18th century, today exhibited at the Byzantine Museum Archbishop Makarios III Foundation of Nicosia. The wooden panel, originally placed in the church of St. Spyridon in Tremetousia (district of Larnaca, Cyprus), measures 34 cm × 42 cm and represents the bust of the Apostle Peter set against a gold ground. The half-figure of the bearded apostle is depicted looking toward the observer. He is dressed in a blue tunic and a red-tinted pallium, holding in his right hand the keys and in his left hand a scroll bearing an extract from the Second General Epistle of Peter (1:1, 1:2) of the New Testament (Greek: συμεων πετρος δουλος και αποστολος ιησου χριστου τοις ισοτιμον ημιν λαχουσιν πιστιν εν δικαιοσυνη του θεου ημων και σωτηρος ιησου χριστου;

Translation: Simon Peter, servant and apostle of Jesus Christ, to them that have obtained equal faith with us in the justice of our God and Savior Jesus Christ).

On the top left and top right corners of the icon, a red abbreviation in Greek says ‘Apostolos Petros’ (Fig. 1).

Fig. 1
figure 1

Apostle Petros icon, orthophoto before the restoration

During September 2018 the icon was restored in the labs of the Department of Antiquities of Cyprus. At a preliminary visual inspection, the icon presented (i) a layer of dark varnish covering the whole painted surface; (ii) areas where the colour layer had detached entirely from the support and was missing; and (iii) areas where the colours appeared lifted.

The restoration pipeline adopted by the technicians followed Brandi’s Theory of Restoration [49], respecting concepts such as the unacceptability of creative conservation and the complete reversibility of any intervention.

An overview of the icon restoration process is summarised in Table 3.

Table 3 Restoration phases

Digital methodology

The proposed digital methodology consists of four steps: (i) photogrammetric survey, (ii) dense point cloud and orthophoto production, (iii) 2D and 3D multi-temporal change detection, (iv) data interpretation (Fig. 2).

Fig. 2
figure 2

Proposed methodology

This approach was tested on a multi-temporal dataset consisting of eleven orthophotos. Each time a conservation task was completed (see Table 3), the process was halted, the icon was moved to a controlled photographic environment, and the image dataset was collected.

Photogrammetric surveys

The photogrammetric setup consisted of a Canon 5D Mark IV camera with a 30-megapixel full-frame sensor (6 μm pixel size), equipped with a Canon EF 24–105 mm f/2.8 USM lens, two photographic lamps, polarised sheets, and a polarising filter. The camera’s autofocus was disabled, and the lens focusing ring and focal length were fixed at 40 mm with a piece of insulating tape to avoid accidental changes of the interior orientation parameters during the photogrammetric survey.

In 3D dense stereo matching, surface reflections may lead to incorrect measurements and blunders in the resulting dense point cloud. A polarising filter on the lens alone, oriented in the same polarisation plane as the light, mainly produces a surface enhancement effect, increasing the saturation and contrast of the image; with this configuration, however, the reflections are not eliminated. To overcome the problem of disturbing reflections, polarising filters can be mounted both on the camera lens and on the light source(s). Reflections in the images can then be suppressed by crossing the polarising directions of the filters, leading to homogeneously illuminated images and better matching results. To achieve this goal, two polarised sheets were placed over the circular lamps, equipped with fluorescent light bulbs, and rotated until perpendicular (crossed) planes of polarisation were reached.

A few studies have been published concerning the use of polarising films for image-based 3D reconstruction. Table 4 summarises a literature review concerning this topic.

Table 4 Use of polarised light in 3D dense stereo matching, literature review

A photogrammetric camera network was planned a priori [57], following a convergent schema, with the primary aim of guaranteeing the automated matching of homologous points and an adequate number of intersecting rays [58] (Fig. 3). Given the requirement of sub-millimetre accuracy for the 3D geometry, a photographic scale of 1:20 was selected. The icon was placed vertically on an easel, and a camera-object distance of 0.75 m was set, resulting in a mean Ground Sample Distance (GSD) of 0.11 mm. An average base-to-depth ratio (B/D) of 0.2 was computed (Fig. 4).
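The survey design figures quoted above are mutually consistent, as the following worked check shows (pixel size, focal length and distance are the values reported in this section):

```python
# Survey design check: values taken from the text of this section.
pixel_size_mm = 0.006   # 6 um sensor pixel size
focal_mm = 40.0         # focal length fixed on the zoom lens
distance_mm = 750.0     # camera-object distance (0.75 m)

scale = distance_mm / focal_mm      # photographic scale denominator
gsd_mm = pixel_size_mm * scale      # ground sample distance on the icon

print(f"scale 1:{scale:.2f}")       # 1:18.75, i.e. roughly the planned 1:20
print(f"GSD {gsd_mm:.4f} mm")       # 0.1125 mm, matching the ~0.11 mm reported
```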

Fig. 3
figure 3

Image-data capturing setup

Fig. 4
figure 4

Photogrammetric camera positions

Data post processing

All images, acquired in RAW format, were pre-processed (colour calibration using a colour checkerboard, histogram stretching, and white balancing) in photo-editing software.

The typical photogrammetric workflow was then applied, consisting of three main steps, namely: (i) image correspondences detection, (ii) bundle adjustment and (iii) dense image matching [59].

Starting from the estimated camera poses and orientations, a dense 3D reconstruction via a pixel-based image matching algorithm was applied. This was performed using the first-level image pyramid, corresponding to the original full image resolution. The derived dense point clouds consisted of ~ 3 million points for each processed dataset. Subsequently, 11 orthomosaics were produced, with an average pixel size of 0.2 mm, radiometrically balanced and digitally blended so that the seam lines between images are not visible.

So that all outputs fit the same physical space, each orthophoto was cropped using photo-editing software while preserving the original resolution and GSD. A further refinement of the image alignment was realised by co-registering each output as described in “2D change detection analysis” section.

2D change detection analysis

The orthophotos have been processed using the open-source software Orfeo ToolBox (OTB) [60], a remote sensing image processing library developed by CNES, the French Space Agency.

A three-step strategy was adopted:

  • First, to refine the image overlap, an initial image co-registration was performed. This application computes a 2D disparity map between two images of the same scene. It is intended for cases where a small misregistration between images has to be estimated and fixed. The algorithm uses an iterative approach to determine the best match between local patches, and the final output image contains the X and Y offsets, as well as the metric value, with sub-pixel accuracy. The input images should have the same size (height and width) and scale, and occupy the same physical space.

  • Second, change detection between the orthophotos is performed by adopting the MAD algorithm. A MAD map is thus produced, consisting of three bands that represent the variates (change maps) sorted by increasing correlation.

  • Finally, the MAF transform is applied to the MAD variates, and the lowest order MAF-MAD component (i.e., the first component) is initially analysed to detect the changes occurred.
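For reference, the MAD step of the strategy above maps onto a single OTB application. The command below is a hedged sketch: the file names are placeholders, and only the documented `-in1`/`-in2`/`-out` parameters of `MultivariateAlterationDetector` are used; consult the OTB documentation for the co-registration application and its options.

```shell
# MAD change detection between two co-registered orthophotos with the
# Orfeo ToolBox command-line interface (file names are placeholders).
otbcli_MultivariateAlterationDetector \
    -in1 time0_ortho.tif \
    -in2 time1_ortho.tif \
    -out mad_map.tif
```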

Results and discussion

2D change detection data interpretation

The analysis of the 2D change detection results has been realised by mutually comparing the RGB orthophotos and MAD/MAF change detection maps.

As mentioned in “2D change detection” section, the lowest order MAF-MAD component (i.e., the first component) is initially used to detect the changes that occurred on the icon surface.

Time 0: initial conservation status of the painting

Before the beginning of the restoration process, the conservation status of the icon was assessed through visual analysis. The artefact showed a layer of dark varnish uniformly distributed over the whole surface. It was possible to clearly identify areas where (i) the colours were missing, such as the left profile of the Apostle’s face, the hair, the keys, and the vest; (ii) the colour fragments were lifted from the wooden support; (iii) the preparatory layer of gypsum was visible. An image dataset was collected at Time 0, before any physical restoration started (Fig. 1).

Time 0–1: cleaning samples

The first step of the restoration process consisted of testing cleaning agents, diluted in water at different percentages, for the removal of the layer of varnish. Rectangular samples were made in different areas of the icon to assess the most suitable compound. The MAF band 1 map highlights the zones of the icon being cleaned, respectively (i) the neck of the Apostle; (ii) the left elbow; (iii) the bottom part of the scroll; (iv) the background area above the scroll (Fig. 5). When all the bands (R:1–G:2–B:3) are visualised, it is also possible to distinguish the response given by the different colours of the icon (tunic, pallium, flesh, background) (Fig. 6).

Fig. 5
figure 5

Time 0 orthophoto (left), Time 1 orthophoto (centre), MAF Map Band 1 (right)

Fig. 6
figure 6

MAF Map Band R: 1–G:2–B:3

Time 1–2: half icon varnish removal (right side)

After the appropriate cleaning compound was identified, the varnish was removed on the right-hand side of the icon. From the analysis of the change detection map, the new cleaning intervention appears evident, whereas, as expected, the areas previously cleaned (test samples Time 0–1) are not highlighted by the procedure at Time 1–2 (Fig. 7).

Fig. 7
figure 7

Time 1 orthophoto (left), Time 2 orthophoto (centre), MAF Map Band 1 (right)

Time 2–3: half icon varnish removal (left side)

Following the results achieved in the previous step, the left-hand side of the icon was also cleaned. All the varnish visible at the beginning of the restoration process was removed. The MAF results are consistent with what was already observed during the analysis of the change detection in Time 1–2 (Fig. 8).

Fig. 8
figure 8

Time 2 orthophoto (left), Time 3 orthophoto (centre), MAF Map Band 1 (right)

Time 3–4: initial retouching of the head profile, keys, and scroll

After the complete varnish removal, the integration of the icon’s lacunae was initiated. First, a neutral and uniform colour forming the preparatory layer was applied to the areas of the head/face profile, keys, and scroll. The MAF band 1 map pinpoints the areas affected by these interventions, confirmed by the subsequent examination of the RGB photos (Fig. 9).

Fig. 9
figure 9

Time 3 orthophoto (left), Time 4 orthophoto (centre), MAF Map Band 1 (right)

Time 4–5/5–6/6–7: retouching of the shoulders profile and bottom of the scroll; retouching over the pallium (right); retouching over the pallium (left)

The restoration procedure described above (Time 4) continued at Times 5, 6, and 7. The interventions concerned different areas of the icon, such as the profile of the shoulders (Fig. 10) and the small gaps on the pallium, which have been recorded to underline the efficacy of the proposed methodology (Fig. 11).

Fig. 10
figure 10

Time 4 orthophoto (left), Time 5 orthophoto (centre), MAF Map Band 1 (right)

Fig. 11
figure 11

Time 5 orthophoto (left), Time 6 orthophoto (centre), MAF Map Band 1 (right)

After the analysis of the MAD/MAF maps, it was possible to isolate different radiometric responses to the application of colours (black value and white value) according to the modification that occurred (painting integration over (i) the gold background, (ii) the gypsum layer, or (iii) the pre-existing colours). However, these data need a more in-depth analysis through additional case studies.

It has also been observed that different bands of the MAF map highlight different kinds of changes that occurred on the icon surface. At Time 6–7, band 1 of the MAF map showed a small lacuna integration on the shoulder of Saint Peter (Fig. 12), whereas the MAF band 0 map showed the intervention concerning the face of the Apostle, where an additional layer of neutral colour was applied (Time 7) (Fig. 13).

Fig. 12
figure 12

Time 6 orthophoto (left), Time 7 orthophoto (centre), MAF Map Band 1 (right)

Fig. 13
figure 13

Time 6–7 MAF Map Band 0

Time 7–Time 8/Time 8–Time 9: retouching on the head (beard and hair); retouching on the keys and additional details on pallium and head

As described above, the integration of missing parts of the icon, when the colour was applied on top of a pre-existing one, returned a unique value (white) throughout the survey.

Time 8 and Time 9 change detection analysis undoubtedly shows areas where the retouching, using the rigatino technique, was realised above layers of neutral colours applied in earlier phases of the restoration (Time 4) (Fig. 14).

Fig. 14
figure 14

Time 7 orthophoto (left), Time 8 orthophoto (centre), MAF Map Band 1 (right)

In the Time 8 and Time 9 series it is possible to identify interventions on the keys and the tunic (Fig. 15), and small refinements on the Apostle’s face and hair (lower part), the cheekbone (Fig. 16) and the upper side of the head.

Fig. 15
figure 15

Time 8 orthophoto (left), Time 9 orthophoto (centre), MAF Map Band 1 (right)

Fig. 16
figure 16

Time 8 orthophoto detail (left), Time 9 orthophoto detail (centre), MAF Map Band 1 detail (right)

Time 9–Time 10: retouching on the scroll, keys, and tunic; retouching on the gold background

Time 10 was the last image dataset, acquired at the end of the restoration process. The interventions, as shown by the produced MAD/MAF map, were focused on the red Greek inscription in the upper right corner, the scroll, the small details of the keys, the head profile and the lacunae on the gold background. The resulting map also highlighted the importance of a correct and rigorous photographic setup: an incorrect orientation of the polarised filters can lead to illumination anomalies, as shown on the edge of the right-hand side of the icon (Fig. 17).

Fig. 17
figure 17

Time 9 orthophoto (left), Time 10 orthophoto (centre), MAF Map Band 1 (right)

3D change detection analysis

The MAD/MAF change detection approach identifies modifications only in two-dimensional space (Fig. 18 describes the MAF map Time 1–Time 10, showing the global changes after the restoration process). A 3D change detection algorithm was then used to (i) assess if any geometric variation of the surface, before (Time 0) and after (Time 10) the restoration, could be highlighted; and (ii) evaluate the responsiveness of the 3D change detection algorithm dealing with sub-millimetric variations.

Fig. 18
figure 18

MAD/MAF map showing the changes occurred at the end of the icon conservation process (cleaning, consolidation, and retouching)

Time 0 and Time 10 point clouds, each consisting of an average of 3 million points and featuring a point resolution in X and Y of 0.11 mm, were registered exploiting an Iterative Closest Point (ICP) algorithm. The alignment process returned an RMSE of 0.09 mm. Subsequently, a cloud-to-cloud signed distance map was computed adopting the M3C2 plug-in. This algorithm, implemented in the open-source software CloudCompare [61], performs a direct comparison of point clouds in 3D, thus avoiding the preliminary phase of meshing or gridding. The output was a colour-coded point cloud which highlighted values in the magnitude of ± 0.5 mm (Fig. 19).
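The registration step can be illustrated with a minimal point-to-point ICP (Kabsch solution for the rigid transform at each iteration). This is a didactic sketch under simplifying assumptions (no outlier rejection, full overlap), not the registration pipeline used in the study:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(source, target, iters=50):
    """Minimal point-to-point ICP: returns aligned source cloud and RMSE.

    Each iteration matches every source point to its nearest target
    point, then solves for the best rigid transform via SVD (Kabsch).
    """
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iters):
        _, idx = tree.query(src)
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        h = (src - mu_s).T @ (matched - mu_t)
        u, _, vt = np.linalg.svd(h)
        d = np.sign(np.linalg.det(vt.T @ u.T))       # avoid reflections
        rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        trans = mu_t - rot @ mu_s
        src = src @ rot.T + trans
    dist, _ = tree.query(src)
    return src, float(np.sqrt(np.mean(dist ** 2)))
```

In practice the residual RMSE of such a registration (0.09 mm here) bounds the smallest geometric change that the subsequent M3C2 comparison can resolve.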

Fig. 19
figure 19

Distance M3C2 (mm) between Time 0 and Time 10 dense point clouds

The M3C2 outcome can be explained by assuming the removal of the layer of varnish which was covering the icon before the conservation process was initiated. The varnish could indeed be represented by the positive values ranging from 0.0 to 0.2 mm. However, this hypothesis must be confirmed with additional case studies.

Shape analysis

The produced 3D dense point cloud (Time 10) was finally inspected and analysed to identify any features of the wooden support not directly noticeable during the initial visual inspection. Assuming the original planarity of the wooden support, the icon did not show any apparent deformation. Some shading algorithms were then applied to enhance hidden characteristics, and a best-fitting plane analysis was run (Fig. 20), achieving an RMSE of 0.6 mm between the plane and the icon surface. The result shows a slight deviation which reaches its negative peak at the lower-left and upper-right corners (5 mm), and its positive peak in the central area (2 mm), for a maximum absolute range of about 7 mm (the entire icon spans ca 340 × 420 mm). Further 3D shape analysis will be performed in the future to monitor the geometric behaviour of the icon over time, and to assess whether appropriate conservation conditions, in terms of environmental variables such as temperature and relative humidity which might affect the wooden support, are in place.

Fig. 20
figure 20

3D coordinates of the photogrammetric dense point cloud coded according to the distances from the least square plane computed on the painting surface (mm)
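The best-fitting plane analysis can be reproduced with a standard least-squares fit. The fragment below is a generic sketch (the plane normal is the direction of smallest variance, obtained from the SVD of the centred cloud), not the specific tool used in this study:

```python
import numpy as np

def plane_fit_deviation(points):
    """Least-squares plane fit and signed point-to-plane distances.

    points : array (n_points, 3), e.g. the panel-surface point cloud.
    Returns the signed distance of every point from the best-fitting
    plane and the RMSE (same units as the input coordinates).
    """
    centroid = points.mean(axis=0)
    centred = points - centroid
    # The plane normal is the right singular vector associated with
    # the smallest singular value of the centred cloud
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    signed = centred @ normal
    return signed, float(np.sqrt(np.mean(signed ** 2)))
```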

Conclusion

The article reported an extremely promising study, the first of its kind to the authors’ knowledge, on the use of change detection algorithms in combination with photogrammetry for efficient, non-invasive and systematic multi-temporal documentation and monitoring of the restoration process of a painted surface.

The benefits of recording all the steps of the conservation procedure, from macro changes to fine details, have been highlighted under the reversibility principle proposed in Brandi’s Theory of Restoration, which includes the ultimate possibility of bringing the artwork back to its status quo ante. Thanks to the fast and contactless photogrammetric protocol for image data collection, the proposed methodology can be easily integrated into almost every conservation project (indoor and outdoor) without hampering the delicate steps of the restoration itself. After the initial parameters are set, and according to the typology and scale of the artefact, images can be acquired and subsequently processed.

Because of the multiplicity of the data generated through photogrammetric techniques, it was also possible to perform a 3D study and assess the geometric features of the Byzantine icon before and after the restoration process. The latter included a shape analysis of the wooden support for the identification of deformation patterns not visible to the naked eye.

From a methodological point of view, the study underlined the importance of a correct light setup and the need for constant and uniform illumination, with the primary aim of avoiding radiometric artefacts which could impede the photogrammetric reconstruction and the identification of all the restoration phases during the change detection procedure. Moreover, the alignment of each image dataset has to be as accurate as possible to avoid errors.

Although the MAD/MAF map contains a high degree of detail, its combination with different spectral bands and RGB data, supported by user experience, still provides the most reliable and accurate source of semantic data interpretation.

Nevertheless, the test was successful in delivering results compatible with the restoration logs.

The described workflow, applied here for the first time to painted surfaces, can be widely used in the heritage science domain. The fields which could mainly benefit are foreseen to be those dealing with monitoring issues, such as frescoes (indoor and outdoor), buildings [62], and all those heritage assets which are exposed to decay due to incorrect conservation conditions (environmental parameters such as temperature and humidity; pollution; natural and man-made threats).

Abbreviations

GIS:

Geographical Information System

MAD:

Multivariate Alteration Detection

CCA:

Canonical Correlation Analysis

MAF:

Maximum Autocorrelation Factor

DoD:

DEM of Difference

C2C:

cloud-to-cloud comparison

C2M:

cloud-to-mesh distance or cloud-to-model distance

ICP:

Iterative Closest Point

M3C2:

Multiscale Model to Model Cloud Comparison

GSD:

Ground Sample Distance

B/D:

base-to-depth ratio

OTB:

Orfeo ToolBox

References

  1. Fisher C, Kakoulli I. Multispectral and hyperspectral imaging technologies in conservation: current research and potential applications. Stud Conserv. 2006;51:3–16.

  2. Barni M, Bartolini F, Cappellini V. Image processing for virtual restoration of artworks. IEEE Multimedia. 2000;7(2):34–7.

  3. Fuentes A. Contribution of GIS and spatial analysis tools in the characterisation of surface damage to paintings. In: Rogerio-Candelera MA, Lazzari M, Cano E, editors. Science and technology for the conservation of cultural heritage, science and technology for the conservation of cultural heritage. 2013. p. 371–8.

  4. Remondino F, Rizzi A, Barazzetti L, Scaioni M, Fassi F, Brumana R, Pelagotti A. Review of geometric and radiometric analyses of paintings. Photogram Rec. 2011;26(136):439–61.

  5. Jolliffe I. Principal component analysis. In: Lovric M, editor. International encyclopedia of statistical science. Berlin: Springer; 2011.

  6. Canty MJ. Image analysis, classification and change detection in remote sensing: with algorithms for ENVI/IDL and Python. 3rd ed. Boca Raton: CRC Press; 2014.

  7. Nielsen AA, Conradsen K, Simpson JJ. Multivariate alteration detection (MAD) and MAF postprocessing in multispectral, bitemporal image data: new approaches to change detection studies. Remote Sens Environ. 1998;64(1):1–19.

  8. Nielsen AA, Hecheltjen A, Thonfeld F, Canty MJ. Automatic change detection in RapidEye data using the combined MAD and kernel MAF methods. In: Proc. geoscience and remote sensing symposium (IGARSS 2010), 2010. p. 3078–81.

  9. Coppin P, Lambin E, Jonckheere I, Muys B. Digital change detection methods in natural ecosystem monitoring: a review. In: Bruzzone L, Smits P, editors. Analysis of multi-temporal remote sensing images. New York: IEEE; 2002. p. 3–36.

  10. Nori W, Sulieman HM, Niemeyer I. Detection of land cover changes in El Rawashda Forest, Sudan: a systematic comparison. In: Proc. of geoscience and remote sensing symposium (IGARSS 2009), vol. 1. 2009. p. I-88–I-91.

  11. Zanchetta A, Bitelli G. A combined change detection procedure to study desertification using opensource tools. Open Geospat Data Softw Stand. 2017;2(1):2–10.

  12. Saunders D. The detection and measurement of colour change in paintings by digital image processing. In: Digital image processing applications, vol. 1075. SPIE; 1989. p. 405–15.

  13. Singh A. Digital change detection techniques using remotely-sensed data. Int J Remote Sens. 1989;10(6):989–1003.

  14. Coppin P, Bauer M. Digital change detection in forest ecosystems with remote sensing imagery. Remote Sens Rev. 1996;13:207–34.

  15. Radke RJ, Andra S, Al-Kofahi O, Roysam B. Image change detection algorithms: a systematic survey. IEEE Trans Image Process. 2005;14(3):294–307.

  16. Cowley DC. Remote sensing for archaeological heritage management. EAC Occasional Paper, No. 5, Budapest. 2011.

  17. Barlindhaug S, Holm-Olsen IM, Tømmervik H. Monitoring archaeological sites in a changing landscape–using multitemporal satellite remote sensing as an ‘early warning’ method for detecting regrowth processes. Archaeol Prospect. 2007;14(4):231–44.

  18. Di Giacomo G, Scardozzi G. Multitemporal high-resolution satellite images for the study and monitoring of an ancient Mesopotamian city and its surrounding landscape: the case of Ur. Int J Geophys. 2012. https://doi.org/10.1155/2012/716296.

  19. Lasaponara R, Leucci G, Masini N, Persico R. Investigating archaeological looting using satellite images and GEORADAR: the experience in Lambayeque in North Peru. J Archaeol Sci. 2013;42:216–30.

  20. Cigna F, Tapete D, Lasaponara R, Masini N. Amplitude change detection with ENVISAT ASAR to image the cultural landscape of the Nasca Region, Peru. Satell Radar Archaeol Cult Landsc. 2013;20(2):117–31.

  21. Agapiou A, Lysandrou V, Alexakis DD, Themistocleous K, Cuca B, Argyriou A, Sarris A, Hadjimitsis DG. Cultural heritage management and monitoring using remote sensing data and GIS: the case study of Paphos area, Cyprus. Comput Environ Urban Syst. 2015;54:230–9.

  22. Tapete D, Cigna F, Donoghue D, Graham P. Mapping changes and damages in areas of conflict: from archive C-band SAR data to new HR X-band imagery, towards the sentinels. In: Proceedings of conference: FRINGE 2015: advances in the science and applications of SAR interferometry and Sentinel-1 InSAR workshop, 2015. p. 1–4.

  23. Risbøl O, Briese C, Doneus M, Nesbakken A. Monitoring cultural heritage by comparing DEMs derived from historical aerial photographs and airborne laser scanning. J Cult Heritage. 2015;16(2):202–9.

  24. Cerra D, Plank S, Lysandrou V, Tian J. Cultural heritage sites in danger—towards automatic damage detection from space. Remote Sens. 2016;8:781. https://doi.org/10.3390/rs8090781.

  25. Williams R. DEMs of difference. Geomorphol Tech. 2012;2(3.2).

  26. Girardeau-Montaut D, Roux M, Marc R, Thibault G. Change detection on points cloud data acquired with a ground laser scanner. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2005;36(Part 3):30–5.

  27. Cignoni P, Rocchini C. Metro: measuring error on simplified surfaces. Comput Graphics Forum. 1998;17(2):167–74.

  28. Monserrat O, Crosetto M. Deformation measurement using terrestrial laser scanning data and least squares 3D surface matching. ISPRS J Photogramm Remote Sens. 2008;63(1):142–54.

  29. Olsen MJ, Johnstone E, Driscoll N, Ashford SA, Kuester F. Terrestrial laser scanning of extended cliff sections in dynamic environments: parameter analysis. J Surv Eng. 2009;135(4):161–9.

  30. Lague D, Brodu N, Leroux J. Accurate 3D comparison of complex topography with terrestrial laser scanner: application to the Rangitikei canyon (N-Z). ISPRS J Photogramm Remote Sens. 2013;82:10–26.

  31. Bruno F, Gallo A, De Filippo F, Muzzupappa M, Davidde Petriaggi B, Caputo P. 3D documentation and monitoring of the experimental cleaning operations in the underwater archaeological site of Baia (Italy). In: Proceedings of 2013 digital heritage international congress (DigitalHeritage). 2013. p. 105–12. https://doi.org/10.1109/DigitalHeritage.2013.6743719.

  32. Peteler F, Gattet E, Bromblet P, Guillon O, Vallet JM, De Luca L. Analyzing the evolution of deterioration patterns: a first step of an image-based approach for comparing multitemporal data sets. In: Proceedings of 2015 digital heritage. 2015. p. 113–6. https://doi.org/10.1109/digitalheritage.2015.7419465.

  33. Hess M, Korenberg C, Ward C, Robson S, Entwistle C. Use of 3D laser scanning for monitoring the dimensional stability of a Byzantine ivory panel. Stud Conserv. 2015;60(sup1):S126–33. https://doi.org/10.1179/0039363015Z.000000000217.

  34. Chiabrando F, Sammartano G, Spanò A, Semeraro G. Multi-temporal images and 3D dense models for archaeological site monitoring in Hierapolis of Phrygia (TR). Archeologia e Calcolatori. 2017;28(2):469–84.

  35. Rodríguez-Gonzálvez P, Muñoz-Nieto AL, DelPozo S, Sanchez-Aparicio LJ, Gonzalez-Aguilera D, Micoli L, Gonizzi-Barsanti S, Guidi G, Mills J, Fieber K, Haynes I, Hejmanowska B. 4D reconstruction and visualization of cultural heritage: analyzing our legacy through time. Int Arch Photogramm Remote Sens Spatial Inf Sci. 2017;42:609–16. https://doi.org/10.5194/isprs-archives-xlii-2-w3-609-2017.

  36. Bitelli G, Girelli VA, Sammarini G. 4-dimensional recording and visualization of urban archeological excavations. Appl Geomatics. 2018;10:415–26. https://doi.org/10.1007/s12518-018-0239-x.

  37. Bolognesi M, Furini A, Russo V, Pellegrinelli A, Russo P. Testing the low-cost RPAS potential in 3D Cultural heritage reconstruction. In: The international archives of the photogrammetry, remote sensing and spatial information sciences, volume XL-5/W4, 2015 3D virtual reconstruction and visualization of complex architectures, 25–27 February 2015, Avila, Spain, 2018. p. 229–35.

  38. Fontana R, Gambino MC, Greco M, Marras L, Materazzi M, Pampaloni E, Pelagotti A, Pezzati L, Poggi P. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation. In: Proc. SPIE videometrics VIII. 2005. p. 51–8.

  39. Blais J, Taylor F, Cournoyer L, Picard M, Borgeat Godin L, Beraldin J-A, Rioux M, Lahanier C. Ultra-high-resolution 3D laser colour imaging of paintings: the Mona Lisa by Leonardo da Vinci. In: Proc. 7th inter. conference on lasers in the conservation of artworks. 2007. p. 435–40.

  40. Lahanier C, Aitken G, Pillay R, Beraldin J-A, Blais F, Borgeat L, Cournoyer L, Picard M, Rioux M, Taylor J, Breuckmann B, Colantoni P, de Deyne C. Two-dimensional multi-spectral digitisation and three-dimensional modelling of easel paintings. Report, NRC Publication Archive. 2008.

  41. Granero-Montagud L, Portalés C, Pastor-Carbonell B, Ribes-Gómez E, Gutiérrez-Lucas A, Tornari V, Papadakis V, Groves RM, Sirmacek B, Bonazza A, Ozga I, Vermeiren J, van der Zanden K, Föster M, Aswendt P, Borreman A, Ward JD, Cardoso A, Aguiar L, Alves F, Ropret P, María Luzón-Nogué J, Dietz C. Deterioration estimation of paintings by means of combined 3D hyperspectral data analysis. In: Proceedings of SPIE—the international society for optical engineering, vol. 8790. 2013. https://doi.org/10.1117/12.2020336.

  42. Akca D, Gruen A, Breukmann B, Lahanier C. High definition 3D-scanning of art objects and painting. In: Proc. Optical 3D measurement techniques conference, vol. 2. 2007. p. 50–8.

  43. Blais F, Cournoyer L, Beraldin J-A, Picard M. 3D imaging from theory to practice: the Mona Lisa story. In: Proc. SPIE 7060, current developments in lens design and optical engineering IX, 70600L. 2008.

  44. Breuckmann B. 3-dimensional digital fingerprint of paintings. In: Proc. 19th European signal processing conference (EUSIPCO 2011), Barcelona, Spain. 2011. p. 1249–53.

  45. Abate D, Menna F, Remondino F, Gattari MG. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques. In: ISPRS annals of the photogrammetry, remote sensing and spatial information science, volume II-5, 2014. ISPRS technical symposium, 23–25 June 2014, Riva del Garda, Italy. 2014. p. 1–8.

  46. Robson R, Bucklow S, Woodhouse N, Papadaki H. Periodic photogrammetric monitoring and surface reconstruction of a historical wood panel painting for restoration purposes. Int Arch Photogramm Remote Sens Spat Inf Sci. 2004;35(B5):395–400.

  47. D’Amelio S, Lo Brutto M. Close range photogrammetry for measurement of painting surface deformations. Int Arch Photogramm Remote Sens Spat Inf Sci. 2009;38(5):1–6.

  48. Barazzetti L, Remondino F, Scaioni M, Lo Brutto M, Rizzi A, Brumana R. Geometric and radiometric analysis of paintings. Int Arch Photogramm Remote Sens Spat Inf Sci. 2010;38(5):62–7.

  49. Brandi C. Theory of restoration. Basile G, editor; Rockwell D, translator. 1975.

  50. Wells JM, Jones TW, Danehy P. Polarisation and colour filtering applied to enhance photogrammetric measurements of reflective surfaces. In: 46th AIAA/ASME/ASCE/ASC structures, structural dynamics & materials conference, 18–21 April 2005, Austin, Texas. 2005. p. 1887–96.

  51. Paine DP, Kise JD. Aerial photography and image interpretation. Hoboken: Wiley; 2012.

  52. Menna F, Rizzi A, Nocerino E, Remondino F, Gruen A. High resolution 3D modelling of the Behaim globe. In: The international archives of the photogrammetry, remote sensing and spatial information sciences, volume XXXIX-B5, 2012, XXII ISPRS congress, 25 August–01 September 2012, Melbourne, Australia. 2012. p. 115–20.

  53. Abate D, Hermon S, Lotti S, Innocenti G. 3D scientific visualisation of 19th century glass replicas of invertebrates. In: 2017 IEEE 13th international conference on e-Science (e-Science), 2017. p. 533–41. https://doi.org/10.1109/escience.2017.87.

  54. Nicolae C, Nocerino E, Menna F, Remondino F. Photogrammetry applied to problematic artefacts. In: ISPRS annals of the photogrammetry, remote sensing and spatial information science, volume II-5, 2014. ISPRS technical symposium, 23–25 June 2014, Riva del Garda, Italy, 2014. p. 451–6.

  55. Guidi G, Gonizzi Barsanti S, Micoli LL. Image pre-processing for optimizing automated photogrammetric performance. In: ISPRS annals of the photogrammetry, remote sensing and spatial information science, volume II-5, 2014. ISPRS technical symposium, 23–25 June 2014, Riva del Garda, Italy, 2014. p. 145–52.

  56. Conen N, Hastedt H, Kahmen O, Luhmann T. Improving image matching by reducing surface reflections using polarising filter techniques. In: The international archives of the photogrammetry, remote sensing and spatial information sciences, volume XLII-2, 2018 ISPRS TC II mid-term symposium “towards photogrammetry 2020”, 4–7 June 2018, Riva del Garda, Italy. 2018. p. 267–74.

  57. Nocerino E, Menna F, Remondino F. Accuracy of typical photogrammetric networks in cultural heritage 3D modelling projects. In: The international archives of the photogrammetry, remote sensing and spatial information sciences, volume XL-5. 2014. p. 465-472.

  58. Fraser C. Limiting error propagation in network design. Photogramm Eng Remote Sens. 1987;53(5):487–93.

  59. Remondino F, Spera MG, Nocerino E, Menna F, Nex F. State of the art in high density image matching. Photogram Rec. 2014;29(146):144–66. https://doi.org/10.1111/phor.12063.

  60. OTB—Orfeo ToolBox. https://www.orfeo-toolbox.org/. Accessed 16 July 2018.

  61. CloudCompare. http://www.danielgm.net/cc/. Accessed 26 July 2018.

  62. Abate D. Built-heritage multi-temporal monitoring through photogrammetry and 2D/3D change detection algorithms. Stud Conserv. 2018. https://doi.org/10.1080/00393630.2018.1554934.

Authors’ contributions

DA: Lead researcher who carried out the research project and wrote this publication. The author read and approved the final manuscript.

Acknowledgements

The author wishes to thank the Byzantine Museum (Archbishop Makarios III Foundation) of Nicosia, led by Dr. Ioannis Eliades, where the icon analysed for this study is currently conserved. He would also like to express his great appreciation to the Department of Antiquities of Cyprus, in the person of its Director Dr. Marina Solomidou-Ieronymidou, for the permission to undertake the research presented, and especially to Ms. Dora Matar for the patience and support provided during the restoration process and imaging data collection.

Competing interests

The author declares no competing interests.

Availability of data and materials

The datasets generated and/or analysed during the current study are not publicly available due to restrictions on data sharing imposed by both the Cyprus Institute and the Department of Antiquities of Cyprus, but are available from the corresponding author on reasonable request.

Funding

Not applicable.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Corresponding author

Correspondence to Dante Abate.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Abate, D. Documentation of paintings restoration through photogrammetry and change detection algorithms. Herit Sci 7, 13 (2019). https://doi.org/10.1186/s40494-019-0257-y


Keywords