Supervised segmentation of RTI appearance attributes for change detection on cultural heritage surfaces

Abstract

This paper proposes a supervised segmentation method for detecting surface changes based on appearance attributes, focusing on cultural heritage metal surfaces. Reflectance Transformation Imaging (RTI) reconstruction coefficients (PTM and HSH) are explored for tracking changes over time on different data sets. Each acquisition is normalised to ensure the method’s robustness, allowing consecutive acquisitions with different RTI acquisition parameters. The proposed method requires expert labelling of groups of pixels representing individual classes. Afterward, the surface appearance is identified over time based on the estimated discriminant model. After segmentation, each detected category is assigned a single colour to present the results in a user-friendly colourmap visualisation. The method is user-dependent; the labelling of the pixels must be accurately defined based on the research question. The results were evaluated against human expertise in the conservation-restoration field, which is considered ground truth in this work. A case study with visibly segmentable characteristics was used to prove the concept and evaluate the invariance of the proposed method. Comparison with the segmentation of the visible characteristics shows very accurate segmentation for HSH (99%) and lower accuracy for PTM (80%), which is influenced by surface rotation. The method was then tested on metal surfaces undergoing accelerated corrosion or cleaning treatments. The results were promising for tracking changes based on segmentation. Equally promising is the possibility of qualitatively quantifying the degree of change by counting the pixels of a selected class. PTM and HSH results are comparable for matte surfaces; however, on highly specular surfaces, HSH appears to provide more detailed information and can therefore better depict the surface characteristics. Limitations of the application concern surface characteristics that exhibit neither topographic changes nor significant reflectance differentiation.

Introduction

Imaging technologies provide the tools for capturing surface information. However, with imaging science and the correct data acquisition and processing tools, it is possible to move beyond simple visualisation and analyse the captured images. Image segmentation allows digital images to be partitioned into multiple regions based on similarity in shape, colour, texture, and more. This similarity in an image can also be detected and visualised with human guidance and supervision. This finds applications in different image processing fields, from medical to industrial imaging, for identification [1], retrieval [2], recognition [3], and change detection [4]. Change detection is achieved by assessing a particular feature or a set of features over time. It provides essential information on the stability of a process across different time intervals or after specific actions. It can apply to many circumstances, e.g., the long-term effects of climate change. It aims at tracking, through comparison, information related to a particular surface characteristic. With supervised image segmentation, change detection is achieved by assessing the similarity of different data sets incorporating similarly identifiable features and their evolution over time.

Different methodologies for detecting changes have been evaluated in the past years. Several works show the application of image segmentation for retrieval and recognition [2, 3] as well as for detecting change [4, 5] on surfaces. Unsupervised image segmentation, such as auto clustering, K-means clustering [6], and edge-detection algorithms (graph cut and active contour) [7], has shown a significant impact in the field of image processing and in several application fields. Image segmentation using unsupervised principal component analysis (PCA) and supervised linear discriminant analysis (LDA) has been explored mainly in medical imaging [8, 9], material science [10], remote sensing [11], and machine defect classification [12]. Furthermore, in the analysis of image stacks, multispectral and hyperspectral image segmentation based on spectral wavelengths and incorporating deep learning [13, 14] has been highly developed for training on massive amounts of data. However, applying a supervised segmentation approach to the reflectance transformation imaging (RTI) technique remains a gap that can be explored and implemented.

RTI is a non-invasive imaging technique that has shown potential for the documentation of cultural heritage (CH). However, to date, only case-study-oriented approaches have explored its possibilities for change detection [15,16,17]. These case studies focus on RTI documentation and the visualisation of the relightable images while studying the surface normals. Some studies focus on surface normals, detecting change as the angular deviation of the normal at the nearest pixel before and after an imposed change. Corregidor et al. extended edge-detection analysis to RTI data for documenting damaged areas on coins [18], based on visualisations employing specular enhancement on two phases of data. Manfredi et al. proposed a methodology that compares topographic changes using the surface normals to characterise the change’s directionality on a mock-up painting before and after damage [19, 20]. However, they concluded that this approach is highly affected by the image registration process. Furthermore, their study explored several descriptors that could reveal only a portion of the surface information [20], hence showing a possibility of further using descriptors to detect and quantify surface changes. In a different approach, with applications mainly in industrial imaging, Nurit (2022) explored the possibilities of extracting statistical or geometric information directly from the raw RTI data. His research proposes a series of feature maps that provide surface information based on descriptors (i.e., mean, median, standard deviation, \(D_x\), \(D_y\), dip angle, etc.) [21]. This method succeeds in giving a specific portion of information based on the selected descriptor; however, the information provided is mono-dimensional and tied to the surface response for that descriptor. In contrast, the appearance attributes derived from the fitting models provide multidimensional information for each pixel, fitted over the different lighting angles [22]. Thus, a qualitative surface examination is possible through segmentation by considering these surface reconstruction coefficients of each pixel.

The novelty of this work lies in applying supervised segmentation to RTI data, aiming to isolate similar features that characterise different appearance attributes of a surface. To make the method robust, its invariance to RTI acquisition parameters (rotation, translation, illumination, scale, etc.) was tested, and data normalisation was adopted. The paper shows the application of the segmentation method, based on normalised polynomial texture mapping (PTM) and hemispherical harmonics (HSH) coefficients, to detecting changes on challenging metal objects. Therefore, the geometric and appearance characteristics of cultural heritage metal objects are exploited through their reflectance response at different lighting angles. The paper is organised as follows: it first states the theoretical background on RTI and the relevant reconstruction coefficients and then proposes the segmentation method. Before applying the method to the CH objects, we evaluate its accuracy and robustness by comparing it to existing image segmentation methods on RGB and RTI images using a validation dataset. Afterward, the article presents the results and the case studies where the segmentation method is applied. This section also shows the possibility of qualitative and quantitative change detection based on the segmentation results, along with a comparison with existing image segmentation methods. Finally, we conclude the paper with a discussion and future perspectives of the work.

Theoretical background

RTI is a multi light image collection (MLIC) technique that combines multiple images of a fixed scene under different lighting positions (\(\theta _i, \phi _i\)) from a fixed camera position (\(\theta _v, \phi _v\)) [23]. The goal is to create an array of lighting angles around the object, keeping the light at a fixed distance from the surface and covering a homogeneous hemispherical illumination of the surface. This technique makes it possible to enhance image visualisation by revealing the textural characteristics of the imaged surface. This tool has been broadly used for studying the technological and decorative characteristics of a variety of CH objects [24,25,26].

Existing modeling techniques (PTM and HSH)

In this research, the reconstruction coefficients used in RTI fitting models are explored as a means of segmenting the textural and chromatic appearance attributes captured by RTI data. Several fitting models exist, all aiming to obtain a homogeneous reconstruction of a discrete RTI dataset, such as PTM (polynomial texture mapping), HSH (hemispherical harmonics), DMD (discrete modal decomposition), RBF (radial basis functions), etc. These fitting models use surface reconstruction algorithms to visualise the geometric information of the surface collected from the different lighting positions. Thus, for each imaged surface, every pixel carries a specific behaviour attributed to its textural and colour characteristics, reconstructed through the fitting model from all acquired light positions.

For this work, the PTM and HSH coefficients are considered for the segmentation as they produce a fixed number of coefficients, 6 and 16, respectively, for each pixel. Furthermore, these coefficients have been selected as more representative of cultural heritage applications. Even though there is evidence that more accurate models exist [22, 27, 28], especially concerning specular surfaces, PTM and HSH allow a simpler evaluation of the proposed methodology, are openly available, and therefore address a larger audience. In other works, their performance has been compared with newer and more advanced models and assessed on surfaces with high gloss and specularity; in the latter cases, HSH has proven to perform better than PTM [22, 27, 28].

The PTM generates a polynomial regression yielding a 6-vector of coefficients (\(a_0\)–\(a_5\)) that approximates the angular reflectance. The coefficients are calculated per pixel from the discrete light positions obtained from the acquisition system [29] by fitting the second-degree polynomial of Eq. 1.

$$\begin{aligned} L(l_u,l_v) = a_0 + a_1l_u + a_2l_v + a_3l_ul_v + a_4l_u^2 + a_5l_v^2 \end{aligned}$$
(1)

where (\(l_u, l_v\)) are the projections of the normalised light vector onto the local basis (\(L_u, L_v\)) of a particular pixel at the spatial coordinates (u, v) of the studied surface, lying in the UV texture coordinate system.
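
As an illustration of Eq. 1, the six coefficients can be estimated per pixel by ordinary least squares over the acquired light positions. The following MATLAB sketch uses synthetic light directions and intensities with illustrative variable names; it is not the authors’ acquisition code.

```matlab
% Minimal sketch: least-squares fit of the six PTM coefficients (Eq. 1)
% for a single pixel, using synthetic data (illustrative only).
N  = 150;                                        % number of light positions
az = 2*pi*rand(N,1);  el = (pi/2)*rand(N,1);     % random light directions
lu = cos(el).*cos(az);  lv = cos(el).*sin(az);   % projections onto (Lu, Lv)
I  = 0.4 + 0.3*lu - 0.1*lv.^2 + 0.01*randn(N,1); % synthetic pixel intensities

A = [ones(N,1), lu, lv, lu.*lv, lu.^2, lv.^2];   % design matrix, N-by-6
a = A \ I;                                       % coefficients a0..a5

% The pixel can then be relit for an arbitrary direction (lu0, lv0):
lu0 = 0.2;  lv0 = -0.1;
L = [1, lu0, lv0, lu0*lv0, lu0^2, lv0^2] * a;
```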

The hemispherical harmonics \(H_l^m\) are derived from the spherical harmonics (SH) functions using shifted associated Legendre polynomials [29] (Eq. 2). HSH provides a more appropriate projection onto a set of basis functions whose shapes are close to the reflectance field. The 16 coefficients \(C_l^m\) are obtained as the projection of the reflectance function f onto each basis function (Eq. 3).

$$\begin{aligned} \tilde{P}_l^m(\cos \theta ) = P_l^m(2\cos \theta -1), \quad \theta \in \left[ 0,\frac{\pi }{2}\right] \end{aligned}$$
(2)
$$\begin{aligned} C_l^m(\theta _v,\phi _v) = \int _{0}^{2\pi }\int _{0}^{\frac{\pi }{2}}f(\theta _v,\phi _v,\theta _i,\phi _i)H_l^m(\theta _i,\phi _i)\sin \theta _id\theta _id\phi _i \end{aligned}$$
(3)

Materials

RTI data acquisition

The RTI acquisitions were collected using a custom-made RTI system (Fig. 1) developed at the ImViA Laboratory of the University of Burgundy in Dijon, France [21, 30, 31]. Data were acquired with a dome using an industrial, monochromatic camera with a CMOS sensor (Sony IMX304, resolution 4112(H) \(\times\) 3008(V), 12.4 MP). A single LED light source (6500 K) was used. The examined surface was positioned opposite the camera, and an average of 150 uniformly distributed light positions (lp), covering a hemisphere around the object, were selected for each image stack. The number of light positions was chosen as a trade-off between reconstruction quality and computational time, as suggested in [32]. Figure 2 shows the variations in the number of light sources used and the selection of uniform and non-uniform (ring) light distributions for the data acquisition. For the selection of acquisition parameters and data collection, a purpose-made user interface was used [21]. The PTM and HSH reconstruction coefficients were exported following Eqs. 1 and 3, implemented in the proprietary software MATLAB ®. For tracking changes over time, the same light positions and the same scene setup were used at each time interval.

Fig. 1

The RTI dome used for data acquisition (at ImViA Laboratory, France)

Fig. 2

The selection of light positions (white dots on the images) for collecting the RTI acquisitions from the examined surface

Validation data set: dominoes

In this work, the segmentation method was first tested on surfaces presenting distinctive appearance characteristics that can be visually segmented, to validate the proposed concept. A set of antique dominoes was used for this purpose. The main body of the dominoes is made of dark-coloured wood with engraved designs (Fig. 3), while the numbers are marked with white glossy paint. These objects exhibit high contrast in colour and texture, which allows the visual segmentation and classification of the surface characteristics:

  • Wood: dark-coloured, matte/diffusive texture, with the engraved design creating two levels of higher and lower relief

  • Number marks: engraved and covered with white paint, exhibiting a high-gloss/reflective surface.

Fig. 3

Selected ROI’s of an antique domino set

Different regions of interest (ROIs) presenting the same surface characteristics were selected for applying the method. One ROI of a domino was selected as the train data (Domino No. 4), and the resulting discriminant model was then evaluated on other ROIs of the same set (Domino No. 1, 3, 5). Three separate classes were selected, representing the visual segmentation criteria, and named:

  • Class 1_edge: the edges of the reliefs

  • Class 2_white: the numbers, covered with white paint

  • Class 3_black: the main body of the domino

Figure 4 represents the classes defined above on the surface of Domino No. 4.

Fig. 4

The classes defined on the surface of Domino No. 4

Computational methodology

This paper proposes a segmentation method in which selected groups of pixels and their corresponding calculated coefficients are supervised, i.e., assigned to user-defined classes. For this purpose, a supervised data set is prepared per case study, instructing the method to identify different surface appearance attributes (e.g., corrosion, metal) by matching the PTM or HSH reconstruction coefficients of each pixel through a discriminant model. The calculated coefficients were normalised to make the method robust and invariant. The change detection is based on the response of the normalised reconstruction coefficients of the surfaces at different time intervals, corresponding to different instances of the object’s condition. This multidimensional information was treated as multivariate dependent variables assigned to a single outcome. In the context of large data with multiple classes and appearance-based paradigms, LDA performs better than PCA [33]. Therefore, after PCA failed to provide adequate class separability, it was decided to apply LDA to the coefficients for the classification of the surface [34, 35].

The supervision of the pixels is prepared using the “LabelMe Image Annotation Tool” [36] for each examined category, based on the distinctive appearance characteristics at selected time intervals (e.g., before and after cleaning); this constitutes the training data. Using the defined pixels and their respective classes, a discriminant model was created to predict the appearance of the entire surface. Afterwards, the same trained model was used to examine the surfaces over time (the sample data), and the outcome of the segmentation was visualised using a colourmap representation. The complete pipeline of the proposed method is shown in Fig. 5.

Fig. 5

Pipeline of the proposed segmentation method

Normalisation

To compare and compute changes on a surface, at least two phases of data must be collected over time. This introduces the possibility of errors in positioning the object or of choosing different light sources, structures, or counts between acquisitions. To make the method robust to such acquisition errors, it was necessary to normalise the calculated coefficients for each pixel in both fitting models (PTM, HSH), considering a data set \({\textbf {D}}\) over all pixels of the image. The equation for the normalisation of the data set \({\textbf {D}}\) is shown in Eq. 4, for an arbitrary interval [a, b] with b > a, leaving the shape of the distribution unchanged. The coefficients are then rescaled, by stretching or squeezing the values onto the interval [\(-1\), 1], normalising the range between the \(D_{\min }\) and \(D_{\max }\) values of the calculated coefficients.

$$\begin{aligned} \hat{{\textbf {D}}} = a + [{\textbf {D}} - D_{\min }]\frac{(b-a)}{D_{\max }-D_{\min }} \end{aligned}$$
(4)

where \(\hat{{\textbf {D}}}\) is the normalised data set of \({\textbf {D}}\), \(D_{\min } = \min ({\textbf {D}})\) and \(D_{\max } = \max ({\textbf {D}})\)
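
A minimal MATLAB sketch of Eq. 4 is given below, assuming the target interval [a, b] = [\(-1\), 1] and taking \(D_{\min }\) and \(D_{\max }\) over the whole coefficient set (whether the scaling is applied globally or per coefficient is an implementation choice not detailed here).

```matlab
% Minimal sketch of Eq. 4: rescale a coefficient data set D to [-1, 1].
D = randn(1000, 6);                  % stand-in for PTM coefficients, one row per pixel
a = -1;  b = 1;                      % target interval [a, b], b > a
Dmin = min(D(:));  Dmax = max(D(:));
Dhat = a + (D - Dmin) * (b - a) / (Dmax - Dmin);   % normalised data set
% MATLAB's built-in rescale(D, -1, 1) performs the same global mapping.
```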

The results of the proposed method were evaluated using both the normalised (\(\hat{{\textbf {D}}}\)) and the non-normalised (\({\textbf {D}}\)) data sets. The comparison between the results obtained from the normalised and non-normalised data sets was made based on conservation-restoration expertise, which is the ground truth of this work. After normalisation of the data set, the visualisation of each defined class showed more distinct results than with the non-normalised data, as presented in the exemplary Fig. 6 for both the PTM and HSH coefficients. Hence, the normalisation of the coefficients was adopted to make the method insensitive to the acquisition system (scale, illumination, translation, etc.) and the object’s positioning.

Fig. 6

Before and after normalisation of data histogram and respective results from PTM and HSH coefficients

It was essential to normalise the coefficients in order to apply the training data to data sets obtained from a surface at various time intervals. Normalisation places acquisitions taken at different time intervals on the same scale with respect to the possible errors mentioned above, improving the comparison and precise quantification of changes over time.

Training

For the segmentation methodology, the normalised data were divided into train data (data used to create the discriminant model) and sample data (unknown surfaces to which the discriminant model is applied to predict the classification). After assigning the selected pixels to a class, their respective reconstruction coefficients were treated as the train data. Then, the discriminant models were created using the train data to segment the sample data. The PTM and HSH coefficients are considered as separate training sets to compare the degree of information carried by the different numbers of coefficients. Finally, the methodology was applied to the different case studies on the PTM and HSH coefficients to evaluate the segmentation results separately.

Supervision

A definitive selection is required to prepare the training data set. The selection of the classes is based on visual observation by conservation-restoration experts and on evaluating specific questions for each case study. One set is selected for training per case and then applied to the entire data collection. Using the “LabelMe Image Annotation Tool”, a representative number of pixels characterising the surface appearance feature corresponding to a research question (e.g., a gloss area) are selected for supervision. From the entire stack of RTI images, only one image is required to supervise the normalised reconstruction coefficients, assigning the locations of the selected pixels to a specific class. This image can be any lighting state from the stack in which the user is able to annotate the classes based on the research question. The reconstruction coefficients of the selected (annotated) pixels are exported and assigned a specific numeric value to indicate each category, since colour referencing cannot be applied in the calculations. An exemplary data set for HSH is shown in Fig. 7; the PTM (6-dimensional) supervised data was prepared similarly. The classified pixels and their respective coefficients are then used for creating a discriminant model.
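
A sketch of how such an annotation could be converted into training data is given below. It assumes the usual LabelMe JSON fields (shapes, label, points, imageHeight, imageWidth); the file name, class names, and the normalised coefficient array Dhat are placeholders, not the authors’ actual variables.

```matlab
% Sketch: turn a LabelMe polygon annotation into per-pixel class labels and
% gather the corresponding normalised coefficients as training data.
ann = jsondecode(fileread('domino4.json'));      % placeholder file name
H = ann.imageHeight;  W = ann.imageWidth;
Dhat = rand(H, W, 16);                           % stand-in for normalised HSH coefficients

labels = zeros(H, W);                            % 0 = unlabelled pixel
classNames = {'1_edge', '2_white', '3_black'};   % assumed class labels
for i = 1:numel(ann.shapes)                      % assumes a struct array of shapes
    pts = ann.shapes(i).points;                  % polygon vertices [x y]
    c   = find(strcmp(ann.shapes(i).label, classNames));
    if isempty(c), continue; end                 % skip labels outside the class list
    labels(poly2mask(pts(:,1), pts(:,2), H, W)) = c;   % numeric class id
end

K      = size(Dhat, 3);                          % 6 (PTM) or 16 (HSH)
coefs  = reshape(Dhat, [], K);                   % one row per pixel
idx    = labels(:) > 0;                          % keep only annotated pixels
Xtrain = coefs(idx, :);
Ytrain = labels(idx);
```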

Fig. 7

Supervision of the reconstruction coefficients (HSH) using LabelMe Image Annotation tool

Creating discriminant analysis models and prediction

The multiclass discriminant model was created [37] in MATLAB ®, where each class (B) generates data (A) according to a multivariate normal distribution. The model assumes that the observations (a) follow a Gaussian mixture distribution, with the same covariance matrix for each class but varying means. Under this assumption, the model computes each class’s mean and covariance parameters. The computation of the sample mean (\(\mu\)) of the data [38] for each class n is shown in Eq. 5. Let X be an M-by-N class membership matrix, with \(X_{mn}\) = 1 if observation m is from class n and \(X_{mn}\) = 0 otherwise.

The estimate of the class mean for data is

$$\begin{aligned} \hat{\mu _n} = \frac{\sum _{m=1}^{M}X_{mn}a_m}{\sum _{m=1}^{M}X_{mn}} \end{aligned}$$
(5)

The sample covariance (\(\hat{\sigma }\)) is calculated by subtracting the sample mean of each class from the observations of that class and taking the empirical covariance matrix of the result (Eq. 6). The classifiers for the observations were constructed using the following scheme for LDA.

The unbiased estimate of the pooled-in covariance matrix for the data is

$$\begin{aligned} \hat{\sigma } = \frac{\sum _{m=1}^{M} \sum _{n=1}^{N} X_{mn}(a_m - \hat{\mu _n}) (a_m - \hat{\mu _n})^T}{M-N} \end{aligned}$$
(6)

Three quantities, namely the posterior probability, the prior probability, and the cost, were additionally considered to minimise the overall classification cost when predicting the class, as shown in Eq. 7.

$$\begin{aligned} \hat{b} = \mathop {\arg \min }\limits _{b=1,\ldots ,N} \; \sum _{n=1}^{N}\hat{P}(n | a)\,C(b | n) \end{aligned}$$
(7)

where, \(\hat{b}\) is the predicted classification with N number of classes, \(\hat{P}(n|a)\) is the posterior probability of class n for observation a and C(b|n) is the cost of classifying an observation as b when its true class is n.

The prior probability P(n) of class n can be uniform, custom-set, or empirical. In this work, the prior probability is the number of training pixels of class n divided by the total number of training pixels. The posterior probability that an observation a belongs to class n is proportional to the product of P(n) and the multivariate normal density, as in Eq. 9. The density function of the multivariate normal distribution with 1-by-k mean \(\mu _n\) and k-by-k covariance \(\sigma _n\) at a 1-by-k point a is given in Eq. 8.

$$\begin{aligned} P(a|n) = \frac{1}{((2\pi )^k|\sigma _n|)^{\frac{1}{2}}} \exp (- \frac{1}{2}(a - \mu _n)\sigma _n^{-1} (a - \mu _n)^T) \end{aligned}$$
(8)

where \(|\sigma _n|\) is the determinant of \(\sigma _n\) and \(\sigma _n^{-1}\) is its inverse matrix. The posterior probability \(\hat{P}(n|a)\) is then calculated as in Eq. 9.

$$\begin{aligned} \hat{P}(n|a) = \frac{P(a|n)P(n)}{P(a)}, \quad \text {where } P(a) = \sum _{n=1}^{N}P(a|n)P(n) \end{aligned}$$
(9)
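
For clarity, Eqs. 8 and 9 can be evaluated directly; the sketch below uses mvnpdf from the Statistics and Machine Learning Toolbox with purely synthetic class parameters (all values illustrative).

```matlab
% Sketch of Eqs. 8-9 for one observation a (1-by-k) and N classes.
k = 6;  N = 3;
a     = randn(1, k);                      % observation (e.g. PTM coefficients)
mu    = randn(N, k);                      % class means, one row per class
Sigma = eye(k);                           % pooled covariance (LDA assumption)
prior = [0.2 0.5 0.3];                    % empirical priors P(n)

lik = zeros(1, N);
for n = 1:N
    lik(n) = mvnpdf(a, mu(n,:), Sigma);   % P(a|n), Eq. 8
end
post = lik .* prior / sum(lik .* prior);  % P(n|a), Eq. 9
[~, bhat] = max(post);                    % Eq. 7 under a zero-one cost
```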

The cost of classifying an observation as b when its true class is n is referred to as the true misclassification cost per class, C(b|n). In this work, the cost matrix was set to the default zero-one form when creating the classifier (Eq. 10).

$$\begin{aligned} C(b|n) = {\left\{ \begin{array}{ll} 0, & \text {if } b = n\\ 1, & \text {if } b \ne n \end{array}\right. } \end{aligned}$$
(10)
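
In practice, these quantities are handled by fitcdiscr in MATLAB’s Statistics and Machine Learning Toolbox. A minimal sketch with synthetic stand-in data follows; in the actual pipeline, Xtrain and Ytrain come from the supervision step and Xsample from another acquisition of the surface.

```matlab
% Sketch: create the linear discriminant model and classify sample data.
Xtrain  = [randn(200,6); randn(200,6) + 2];      % normalised coefficients (stand-in)
Ytrain  = [ones(200,1); 2*ones(200,1)];          % class ids from supervision
Xsample = randn(500,6) + 1;                      % coefficients of a later acquisition

mdl = fitcdiscr(Xtrain, Ytrain, ...
                'DiscrimType', 'linear', ...     % LDA: pooled covariance (Eq. 6)
                'Prior', 'empirical');           % P(n) from class frequencies
% The default zero-one cost matrix corresponds to Eq. 10.
pred = predict(mdl, Xsample);                    % minimises Eq. 7 per observation
```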

Segmentation and visualisation

The method was developed in MATLAB ® using the Statistics and Machine Learning Toolbox and multivariate normal density analysis. The selected pixels and their corresponding classes were supplied to create a discriminant model, which was then applied to the sample data to predict the class of each pixel of the entire surface. After predicting the classes for all the pixels, the results are mapped back onto the surface for visualisation. The results are visualised through a user-friendly colourmap in which each identified category of surface appearance is assigned a specific colour (see the results section for examples).
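
A minimal sketch of this visualisation step follows, assuming pred holds the per-pixel class predictions for an H-by-W surface (sizes and values are placeholders).

```matlab
% Sketch: map predicted class ids back onto the image grid and display them.
H = 100;  W = 120;
pred   = randi(3, H*W, 1);          % stand-in for the output of predict()
segMap = reshape(pred, H, W);       % one class id per pixel
imagesc(segMap);                    % user-friendly colourmap visualisation
colormap(lines(3));  colorbar;      % one colour per detected class
axis image off;
```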

Results on the dominoes

Evaluation of existing image segmentation methods

An initial evaluation was done using existing RGB image segmentation methods, and the calculated results are shown in Fig. 8. In contrast, thanks to various computational advances, RTI enables the viewer to evaluate the visual appearance of an object under various lighting situations, accentuating and disclosing the properties of the imaged object. Furthermore, the reconstruction coefficients encode the surface’s physical properties, making the segmentation of the defined classes more suitable than segmentation of RGB images based on intensity values.

Fig. 8

RGB image segmentation using existing methods

To assess the accuracy of the proposed method, we also tested supervised image segmentation based on the existing approach using pixel intensity values. For this evaluation, we considered the pixel-to-pixel mapping of the intensity values across the entire stack of images. These intensity values were supervised using the same labelled image used to supervise the RTI coefficients for Domino No. 4, as shown in Fig. 9.

Fig. 9

Supervision of the pixel intensities

Using the normalised intensity values as train data, a discriminant model was created and applied to the entire surface to evaluate the defined classes. However, the intensity-based model failed to retrieve the surface information as accurately as the reconstruction coefficients of the RTI method (Fig. 10).

Fig. 10

Results using the existing image segmentation method

Existing image segmentation based on the intensity values of the images is unable to retrieve complex textural information from the surfaces, such as, in this case, classes 1 and 3 (edge and black), which are very similar in terms of intensity distribution. Class 2 (white) is clearly distinct on the surface, so segmenting that class remains possible.

Deep learning was not an option in this work because of the limited amount of data. RTI data acquisition gathers more information from the surfaces through the varying light positions, and the appearance attributes encoded in the PTM and HSH coefficients provide multi-dimensional information in terms of geometry, texture, colour, etc. Hence, they can be used to supervise each class on the surface, which is what the proposed segmentation method of RTI appearance attributes does.

Results from the proposed method

The train data (Domino No. 4, Fig. 7) were used to evaluate the domino set with the proposed segmentation method. The calculated results are presented in Fig. 11.

Fig. 11

Application of the methodology on a set of dominoes

It is evident that the segmentation was feasible in this case and corresponds well to the visual segmentation. The same discriminant model was consecutively applied to all selected ROIs of the sample data (dominoes No.1, 3, 5, Fig. 3). The results of the segmentation are presented in Fig. 11, showing that the method was able to segment the defined classes from the surfaces both for the trained (Domino No. 4) and the sample (Domino No. 1, 3, 5) surface.

Comparing the segmentations obtained from PTM and HSH does not show significant differences. However, in certain data sets PTM shows a slight enlargement of the selected areas at the edges.

Evaluation of the proposed method

To validate the accuracy and robustness of the proposed method, it was tested against variations of the acquisition parameters. The train data for the dominoes was applied to RTI data sets collected with varying parameters, such as different light positions/structures and scale, as well as surface rotation (on Domino No. 4). The calculated results of the segmentation method are shown in Fig. 12. Normalising the reconstruction coefficients proved necessary for establishing the method’s robustness.

Fig. 12

Results from the segmentation method for the different invariance parameters

The percentage of white areas on the surface was calculated to assess the method’s accuracy with respect to the invariance parameters. Standard image processing can be applied here because the surface information is easily segmentable due to the high variance between the predominant surface colours. Thus, the proportion of the white regions relative to the overall surface was calculated for each invariance parameter. This was achieved by binarising the greyscale image of the dominoes. The corresponding pixel counts obtained from the HSH and PTM segmentations under the various invariance tests (change in light count, rotation, or uniform/non-uniform acquisition) were then compared to this standard image-processing result. Finally, the binarised image of the domino (Fig. 13) was used as the ground truth to produce more relevant results.
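
A sketch of this comparison is given below, with placeholder images standing in for the binarised domino photograph and for the predicted class map (class id 2 is assumed to be the white class).

```matlab
% Sketch: compare the white-area percentage of the binarised ground truth
% with the percentage of pixels assigned to the white class by the method.
gray   = rand(300, 300);                          % placeholder greyscale image
bw     = imbinarize(gray, graythresh(gray));      % Otsu-thresholded ground truth
pctRef = 100 * nnz(bw) / numel(bw);               % reference white-area percentage

segMap = randi(3, 300, 300);                      % placeholder predicted class map
pctSeg = 100 * nnz(segMap == 2) / numel(segMap);  % percentage of class 2_white
err    = abs(pctSeg - pctRef);                    % deviation for one invariance test
```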

Fig. 13

Binary image segmentation of the domino

Figure 14 depicts the evaluation of the proposed method, its capability of segmenting a surface based on supervised appearance attributes, and the standard error across the invariance tests. In general, the comparison shows high segmentation accuracy for both fitting models in most cases. Nevertheless, the HSH coefficients leave the method invariant to all tested changes, whereas PTM shows sensitivity to rotation. This case study results in a total accuracy of 99% for HSH and 80% for PTM.

Fig. 14

Evaluation of pixel count of the white gloss area from the ROIs of the Dominoes

Results on cultural heritage objects

Following the proof of concept, a series of more challenging surfaces was examined to evaluate the feasibility of applying the method in a generalised way. The main goal was to evaluate the possibility of documenting the condition of CH objects by classifying the main visible characteristics. In addition, following the condition documentation, the potential for monitoring surface changes (i.e., alteration over time, conservation-restoration treatments) was evaluated by classifying and quantifying the degree of change (based on the segmented data). The degree of change was also examined by counting the pixels of the same class before and after the change.

The proposed segmentation is evaluated using both the PTM and the HSH reconstruction coefficients per pixel; the train data are used to create the discriminant model, which is then applied to the sample data. Evaluation of the resulting segmentation is based on visual examination by a conservation-restoration expert.

Monitoring cleaning treatment

Cleaning causes changes to the surface appearance that, in the case of metal objects, are usually characterised by a change in colour, geometry, texture, and gloss. The goal was to classify the presence of corrosion (condition documentation), follow how this changed during cleaning, and examine the possibility of quantifying the degree of this change.

Two coins from different eras were selected as they have very different material properties. In both cases, the cleaning treatment was monitored by determining the surface change before, during, and after cleaning. The first case examined was the obverse side of a late Roman copper alloy coin covered with thick soiling encrustations (Fig. 15). The second case was the reverse side of a “1” Swiss centime, issued in 1946 and manufactured from a zinc alloy. This coin is covered mainly with thin soiling/corrosion encrustations and areas of thicker encrustations.

Fig. 15

Late Roman coin (obverse side) at different treatment steps: before cleaning (left), during cleaning (middle), after cleaning (right)

Fig. 16

Swiss coin (1946) (reverse side) at different treatment steps: before cleaning (left), during cleaning (middle), after cleaning (right)

The objects exhibit colour, texture, and relief differences before and after cleaning. These changes are easily documented in RGB images in the case of the late Roman coin but are more difficult to capture for the Swiss coin (Fig. 16). More specifically, the Roman coin has lighter-coloured and more textured encrustations than the original surface, which is smoother, darker, and glossier. The encrustations cover most of the surface’s relief, making the surface details illegible; however, after cleaning, the surface details become visible. In the case of the Swiss coin, a thin layer covers the entire surface, causing colour and textural alteration, along with some areas of thicker encrustations. As a result, after cleaning, the appearance of the surface is closer to the original metallic gloss of the coin.

The ROIs on the half-cleaned sides of the coins, shown in Figs. 17 and 18, were selected as train data since they include all the surface details (before and after cleaning), and the resulting models were applied to all the conservation and documentation steps. Therefore, the before- and after-cleaning acquisitions constituted the sample data. The conservation-restoration expert selected three classes in each case based on the surface characteristics mentioned above (Figs. 17, 18).

Fig. 17

Training on the ROI of the half-cleaned obverse face (Roman coin)

Fig. 18

Training on the ROI of the half-cleaned reverse face (Swiss coin)

Fig. 19

Classification of the presence of soiling encrustations and the relevant documentation mapping (Roman coin)

For both cases examined, the results of the segmentation of the two coins are similar despite the differences in their appearance attributes (Figs. 19, 20). Change is easily tracked since it results in different surface characteristics (texture, geometry, and colour) at the different cleaning stages.

Fig. 20

Classification of the presence of thick and thin encrustations and the relevant documentation mapping (Swiss coin)

Additionally, quantifying the degree of change was possible for the selected class (change in the appearance of the soiling encrustation, Fig. 21). The measured values show the effect of cleaning on the surface through the decreasing number of detected pixels between the different time intervals, each referring to a different action (before, during, and after cleaning).
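
A sketch of this pixel-count quantification across the treatment steps is shown below (the class maps and the tracked class id are placeholders for the actual segmentation outputs).

```matlab
% Sketch: percentage of the tracked "encrustation" class at each treatment step.
segBefore = randi(3, 200, 200);                  % placeholder class maps
segDuring = randi(3, 200, 200);
segAfter  = randi(3, 200, 200);
encrustId = 1;                                   % id of the tracked class

maps = {segBefore, segDuring, segAfter};
pct  = zeros(1, numel(maps));
for t = 1:numel(maps)
    pct(t) = 100 * nnz(maps{t} == encrustId) / numel(maps{t});
end
bar(pct);  xticklabels({'before', 'during', 'after'});
ylabel('% of encrustation class');
```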

Fig. 21

Quantification of changes during the cleaning of the Roman coin (% of soiling encrustation)

Fig. 22

Quantification of changes during the cleaning of the Swiss coin (% of thick + thin encrustation)

The total amount of detected surface change is similar for PTM and HSH. Nevertheless, PTM shows slightly more prominent areas of detected pixels around the edges, as indicated in the example of the dominoes (proof of concept). This is visible both in the visualisation maps and in the measured pixels (i.e., percentage of thick and thin encrustations, Fig. 22).

Concerning cleaning treatments of coins, the method demonstrates the possibility of classifying the different surface characteristics based on the reconstruction coefficients, thus mapping the surface condition and monitoring change.

Monitoring the evolution of corrosion

A critical aspect of conservation documentation is tracking corrosion over time on a surface. Visual recording, although important, cannot always detect minor changes. Therefore, a case study that resulted in surface topography changes was examined. The methodology was applied to artificially aged coupons (test metal plates). Coupons made of low-carbon steel were first corroded artificially, then cleaned and coated with transparent varnish, and then artificially corroded again to create filiform corrosion on their surface. The final (filiform) corrosion was induced to different degrees (Fig. 23).

Fig. 23

Low carbon steel coupons with different levels of filiform corrosion

The corrosion phenomenon under investigation creates characteristic filaments and corrosion spots on the surface; the aim was to identify the areas where the corrosion products have penetrated and grown over the coating layer and, if possible, quantify their presence. As shown in section "Evaluation of existing image segmentation methods", visual segmentation of the information from the RGB images is rather difficult, since the underlying corrosion makes the new corrosion difficult to distinguish.

In the case of corrosion monitoring, it was possible to segment and classify the areas where the corrosion penetrated the coating. Two classes were defined to train the reconstruction coefficients, as in Fig. 24. The filiform corrosion is not easily identifiable by visual observation, but the change in surface topography allows its detection with the proposed segmentation method (Fig. 25).

Fig. 24

Training on ROI of the coupons

Fig. 25

Classification of the presence of filiform corrosion penetrating the coating and the relevant documentation mapping (low carbon steel coupons)

Fig. 26

Quantification of evolution of filiform corrosion (% of corrosion)

By measuring the number of pixels of the “corrosion class,” it is possible to quantify and evaluate the percentage of the class on the surface; in total, four coupons with different degrees of filiform corrosion were examined. The method proved able to detect the “corrosion class”, resulting in accurate documentation mapping in the case of HSH. PTM coefficients, however, were unable to accurately identify the surface change, validating other studies that highlight the inadequacies of PTM reconstructions on highly specular and glossy surfaces (Fig. 26).

Segmentation of coloured surface

The last case examines the possibility of segmenting the surface based on colour differences. An early 20th-century metal box (Fig. 27) with printed colour decoration was selected. The ROI comprises different colours and visible metallic elements, and training classes were defined for each visible colour, as shown in Fig. 28. It must be noted that the printed colours do not create substantial differences in the surface topography.

Fig. 27

Colour-printed metal box (right: selected ROI)

Fig. 28

Training on ROI of the metal box

Fig. 29

Application of the methodology on colour printed metal box

The segmentation, in this case, shows the method’s limitation. Even though the colour differences are visibly separable, the method seems unable to properly separate specific colours or coloured areas of detail (Fig. 29). Most prominent is the inability to distinguish between the background (white) and the skin colour in the areas of considerable detail on the trees. This inability is assumed to be caused by two factors: the limited ability of the coefficients to separate classes solely on the basis of colour reflectance differences, and the lack of significant surface topographic changes. Although this work was dedicated solely to exploring segmentation using only the RTI reconstruction coefficients, this limitation could be overcome by combining the RGB information with the reconstruction coefficients for training.

Conclusion and discussion

Detecting changes on CH surfaces is highly important for the condition assessment of objects; in parallel, the ability to track and quantify changes provides invaluable information for monitoring objects. Segmenting information of interest (e.g., corroded areas) and identifying its change over time can be performed through image segmentation. Image segmentation based on visible surface characteristics is generally possible based on colour and contrast, but it incorporates only a small part of the object’s information.

The proposed methodology goes further by adding the surface’s micro-geometry to the segmentation process. Therefore, it allows adding classes related to geometric attributes (i.e., edges, topography) that cannot be captured with simple visual segmentation (i.e., single RGB images). It demonstrates the possibilities for further data processing of RTI acquisitions to classify selected surface parameters based on segmentation, using the differences of the reconstruction coefficients at the pixel level. The initial evaluation showed that it can provide accurate segmentation, with accuracy reaching 99% for HSH on surfaces with domino-like differences (e.g., class 2_white). When applied to more challenging surfaces (i.e., metal objects with high specularity), highly differentiated areas, especially those combining differences in colour and texture, are easily segmented; however, colour changes with similar topography and hue are challenging to separate.

The method is based on machine learning algorithms and requires supervision defined by the end-user, which provides the advantage of adapting to the segmentation needs. In general, HSH reconstruction coefficients provide better detail than PTM, especially in the case of high-gloss or specular surfaces. Additionally, the method becomes more robust and invariant to acquisition parameters through data normalisation, except for PTM under rotation. Nevertheless, careful data training is required per case study, irrespective of the surface material or research question. The same training, with all the classes supervised on the image, can be used for monitoring the same surface over time. However, one should be precise in defining the classes based on the particular research question for an examined surface, as undefined classes may prevent the assessment of specific minute changes that could be interesting to quantify, visualise, and document.

Changes can be quantified with the proper selection of the classes to be tracked at different time intervals. This qualitative quantification of change (i.e., alteration over time, change due to conservation-restoration treatments) can help in monitoring the degree of change (i.e., the evolution of corrosion) or the progress of conservation treatments (i.e., surface cleaning). Furthermore, the results depend strongly on the labelling of the images for each supervised group. Therefore, the quantification of the results should be considered an estimate rather than an absolute measure.

The proposed method can be used as a tool for condition documentation and surface monitoring. Given that the segmentation is user-defined, it can be adapted to the object’s needs. It is designed to provide a user-friendly visualisation through colour maps of a selected class that can be detected on the surface. This visualisation helps map where a change is happening and how far it extends.

Finally, the proposed methodology is built on open-source algorithms for the reconstruction of the RTI image stack and is not specific to the data acquisition system. Therefore, it can be applied to any system with access to raw RTI data. The supervision process of annotating the pixels is performed using the “LabelMe Image Annotation Tool”, which is an open-source tool. Therefore, even though built in MATLAB ®, the script can be made available to end-users. Future directions include extending the method to DMD coefficients, which are known to perform more accurately, especially in the case of specular surfaces. Furthermore, incorporating multispectral RTI data can lead to better colour segmentation. Expanding the use of the method to other application fields of the RTI technique can also be considered.

Availability of data and materials

The script for the developed method and the data used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CH:

Cultural heritage

RTI:

Reflectance transformation imaging

MLIC:

Multi light image collection

LP:

Light positions

ROI:

Region of interest

PTM:

Polynomial texture mapping

HSH:

Hemispherical harmonics

DMD:

Discrete modal decomposition

RBF:

Radial basis function

PCA:

Principal component analysis

LDA:

Linear discriminant analysis

References

  1. Norouzi A, Rahim MSM, Altameem A, Saba T, Rad AE, Rehman A, Uddin M. Medical image segmentation methods, algorithms, and applications. IETE Tech Rev. 2014;31(3):199–213.

  2. Chowdhary CL, Acharjya DP. Segmentation and feature extraction in medical imaging: a systematic review. Proc Comput Sci. 2020;167:26–36.

  3. Theologou P, Pratikakis I, Theoharis T. Unsupervised spectral mesh segmentation driven by heterogeneous graphs. IEEE Trans Pattern Anal Mach Intell. 2016;39(2):397–410.

  4. Radke RJ, Andra S, Al-Kofahi O, Roysam B. Image change detection algorithms: a systematic survey. IEEE Trans Image Process. 2005;14(3):294–307.

  5. Saha S, Martusewicz J, Streeton NL, Sitnik R. Segmentation of change in surface geometry analysis for cultural heritage applications. Sensors. 2021;21(14):4899.

  6. Shrivastava K, Gupta N, Sharma N. Medical image segmentation using modified k means clustering. Int J Comput Appl. 2014;103(16):12–6.

  7. Peters J, Ecabert O, Meyer C, Kneser R, Weese J. Optimizing boundary detection via simulated search with applications to multi-modal heart segmentation. Med Image Anal. 2010;14(1):70–84.

  8. Aggarwal T, Furqan A, Kalra K. Feature extraction and lda based classification of lung nodules in chest ct scan images. In: 2015 International conference on advances in computing, communications and informatics (ICACCI), IEEE; 2015. 1189–1193.

  9. Wu N, Li M, Chen L, Yuan Y, Song S. A lda-based segmentation model for classifying pixels in crop diseased images. In: 2017 36th Chinese control conference (CCC), IEEE; 2017. 11499–11505.

  10. Giansante L, Di Vincenzo D, Bianchi G. Classification of monovarietal italian olive oils by unsupervised (pca) and supervised (lda) chemometrics. J Sci Food Agric. 2003;83(9):905–11.

  11. Lobo A. Image segmentation and discriminant analysis for the identification of land cover units in ecology. IEEE Trans Geosci Remote Sens. 1997;35(5):1136–45.

  12. Malhi A, Gao RX. Pca-based feature selection scheme for machine defect classification. IEEETrans Instrum Meas. 2004;53(6):1517–25.

  13. Yang X, Ye Y, Li X, Lau RY, Zhang X, Huang X. Hyperspectral image classification with deep learning models. IEEE Trans Geosci Remote Sens. 2018;56(9):5408–23.

  14. Li S, Song W, Fang L, Chen Y, Ghamisi P, Benediktsson JA. Deep learning for hyperspectral image classification: an overview. IEEE Trans Geosci Remote Sens. 2019;57(9):6690–709.

  15. Boute R, Hupkes M, Kollaard N, Wouda S, Seymour K, ten Wolde L. Revisiting reflectance transformation imaging (rti): a tool for monitoring and evaluating conservation treatments. In: IOP Conference Series: Materials Science and Engineering, IOP Publishing; 2018, 364, p. 012060.

  16. Ono S, Matsuda Y, Mizuochi T. Development of a multispectral rti system to evaluate varnish cleaning. In: ICOM-CC 18th Triennial Conference 2017.

  17. Manrique Tamayo SN, Valcárcel Andrés JC, Osca Pons M. Applications of reflectance transformation imaging for documentation and surface analysis in conservation. Int J Conserv Sci. 2013;4:535–48.

  18. Corregidor V, Dias R, Catarino N, Cruz C, Alves LC, Cruz J. Arduino-controlled reflectance transformation imaging to the study of cultural heritage objects. SN Appl Sci. 2020;2(9):1–10.

  19. Manfredi M, Williamson G, Kronkright D, Doehne E, Jacobs M, Marengo E, Bearman G. Measuring changes in cultural heritage objects with reflectance transformation imaging. In: 2013 Digital Heritage International Congress (DigitalHeritage), IEEE; 2013, 1, pp. 189–192.

  20. Manfredi M, Bearman G, Williamson G, Kronkright D, Doehne E, Jacobs M, Marengo E. A new quantitative method for the non-invasive documentation of morphological damage in paintings using rti surface normals. Sensors. 2014;14(7):12271–84.

  21. Nurit M. Numérisation et caractérisation de l’apparence des surfaces manufacturées pour l’inspection visuelle. PhD thesis, University of Burgundy, 2022.

  22. Pitard G, Le Goïc G, Mansouri A, Favrelière H, Desage S-F, Samper S, Pillet M. Discrete modal decomposition: a new approach for the reflectance modeling and rendering of real surfaces. Mach Vis Appl. 2017;28(5):607–21.

  23. CHI: cultural heritage imaging: reflectance transformation imaging (RTI). 2002. https://culturalheritageimaging.org/Technologies/RTI/ Accessed March 2022.

  24. Earl G, Martinez K, Malzbender T. Archaeological applications of polynomial texture mapping: analysis, conservation and representation. J Archaeol Sci. 2010;37(8):2040–50.

  25. Mytum H, Peterson J. The application of reflectance transformation imaging (rti) in historical archaeology. Hist Archaeol. 2018;52(2):489–503.

  26. Min J, Jeong S, Park K, Choi Y, Lee D, Ahn J, Har D, Ahn S. Reflectance transformation imaging for documenting changes through treatment of joseon dynasty coins. Herit Sci. 2021;9(1):1–12.

  27. Pitard G, Le Goïc G, Mansouri A, Favrelière H, Pillet M, George S, Hardeberg J.Y. Reflectance-based surface saliency. In: 2017 IEEE international conference on image processing (ICIP), IEEE; 2017, 445–449.

  28. Dulecha TG, Fanni FA, Ponchio F, Pellacini F, Giachetti A. Neural reflectance transformation imaging. Vis Comput. 2020;36(10):2161–74.

  29. Pitard G, Le Goïc G, Favrelière H, Samper S, Desage S.-F, Pillet M. Discrete modal decomposition for surface appearance modelling and rendering. In: Optical measurement systems for industrial inspection IX, SPIE; 2015, 9525, pp. 489–498.

  30. Castro Y, Nurit M, Pitard G, Zendagui A, Le Goïc G, Brost V, Boucher A, Mansouri A, Pamart A, De Luca L. Calibration of spatial distribution of light sources in reflectance transformation imaging based on adaptive local density estimation. J Electron Imaging. 2020;29(4): 041004.

  31. ImViA Laboratory: the imaging and artificial vision laboratory (ImViA). 1996. https://imvia.u-bourgogne.fr/laboratoire Accessed March 2022.

  32. Zendagui A, Le Goïc G, Chatoux H, Thomas J.-B, Castro Y, Nurit M, Mansouri A. Quality assessment of dynamic virtual relighting from rti data: application to the inspection of engineering surfaces. In: Fifteenth International Conference on Quality Control by Artificial Vision, SPIE; 2021, 11794, pp. 94–102.

  33. Martínez AM, Kak AC. PCA versus LDA. IEEE Trans Pattern Anal Mach Intell. 2001;23(2):228–33.

  34. Balakrishnama S, Ganapathiraju A. Linear discriminant analysis-a brief tutorial. Inst Signal Inf Process. 1998;18(1998):1–8.

  35. Li W, Sun L, Zhang D-K. Text classification based on labeled-lda model. Chin J Comput (Chin Edn). 2008;31(4):620.

  36. Kentaro Wada: LabelMe:Image Polygonal Annotation with Python. 2011. https://github.com/wkentaro/labelme Accessed March 2022.

  37. Fisher RA. The use of multiple measurements in taxonomic problems. Ann Eugen. 1936;7(2):179–88.

  38. Guo Y, Hastie T, Tibshirani R. Regularized linear discriminant analysis and its application in microarrays. Biostatistics. 2007;8(1):86–100.

Acknowledgements

The authors would like to thank David Lewis, Ramamoorthy Luxman, Yuly Castro, Abir Zendagui, Marvin Nurit, Ph.D. candidates, and Gaëtan LE GOIC, Associate Professor at ImViA laboratory, University of Burgundy, France, for supporting this work.

Funding

This research is conducted within the “CHANGE” (Cultural Heritage Analysis for New Generations - Innovative Training Network) project, that has received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 813789.

Author information

Authors and Affiliations

Authors

Contributions

S.S. developed the proposed method, and A.S. prepared the materials. S.S. and A.S. designed the analysis, verified the results, and prepared the article draft. A.M. and R.S. reviewed the work and the article. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Sunita Saha.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Saha, S., Siatou, A., Mansouri, A. et al. Supervised segmentation of RTI appearance attributes for change detection on cultural heritage surfaces. Herit Sci 10, 173 (2022). https://doi.org/10.1186/s40494-022-00813-3


Keywords