Research article
Open Access
Unveiling the invisible: mathematical methods for restoring and interpreting illuminated manuscripts
Heritage Science volume 6, Article number: 56 (2018)
Abstract
The last 50 years have seen an impressive development of mathematical methods for the analysis and processing of digital images, mostly in the context of photography, biomedical imaging and various forms of engineering. The arts have been mostly overlooked in this process, apart from a few exceptional works in the last 10 years. With the rapid emergence of digitisation in the arts, however, the arts domain is becoming increasingly receptive to digital image processing methods, and it is therefore increasingly important to pay attention to this domain. In this paper we discuss a range of mathematical methods for digital image restoration and digital visualisation for illuminated manuscripts. The latter provide an interesting opportunity for digital manipulation because they traditionally remain physically untouched. At the same time they also serve as an example of the possibilities that mathematics and digital restoration offer as a generic and objective toolkit for the arts.
Introduction
The digital processing, analysis and archiving of databases and collections in the arts and humanities is becoming increasingly important. This is because of a myriad of possibilities that digitisation opens up that go well beyond the organisation and manipulation of the actual physical objects, allowing, for instance, the creation of digital databases that are searchable with respect to several parameters (keywords), the digital processing and analysis of objects that are non-destructive to the original object, and the application of automated algorithms for sorting newly found objects into existing digital databases by classifying them into predefined groups in the database. These possibilities go hand in hand with ever-growing advances in data science that are developing mathematical methodology for analysing and processing digital data. A large component of digital data in the arts and humanities is composed of digital images. Despite many developments of mathematical image analysis methods in applications like biomedicine, the physical sciences and various forms of engineering, the arts and humanities have been mostly overlooked as an application in need of bespoke mathematical image analysis methods. Still, a few examples in this context exist and encompass works on forgery detection [1], the digital restoration of paintings, with the Ghent Altarpiece [2,3,4,5,6,7] and Van Gogh’s Field with Irises [8,9,10] being prominent examples in these efforts, the digitally guided restoration of frescoes as done for the Mantegna frescoes [11, 12] and the Neidhart frescoes [13, 14], the algorithm-based analysis and classification of texture in paintings [15, 16], learned representations of artists’ styles and painting techniques [17, 18], and multimodal image registration and colour analysis in paintings [19,20,21,22,23], just to name a few.
In this work we discuss a range of mathematical methods for correcting and enhancing images of illuminated manuscripts. In particular, we consider automated and semi-automated models for digital image restoration based on partial differential equations, exemplar-based image inpainting and osmosis filtering, and their translation to the digital interpretation of illuminated manuscripts. Here, we refer to mathematical image processing as the task of digital image restoration (or reconstruction), that is, the digital processing of a given image to correct for its visual imperfections. Generally, this is done with the main intention of producing a final result where imperfections have been corrected in a visually least distracting way. This is the case for several imaging tasks such as image denoising, deblurring and also image inpainting.
Medieval and Renaissance illuminated manuscripts present a particular challenge, but also an opportunity to transform current understanding of European visual culture between the 6th and 16th century. Illuminated manuscripts are the largest and best preserved resource for the study of European painting before 1500. Nevertheless, the images in some manuscripts have been affected by wear and tear, degradation over time, iconoclasm, censorship or updating. Unlike the conservation of other painted artefacts, the conservation of illuminated manuscripts preserved in institutional collections is non-invasive, usually restricted to repairs of the binding and of torn parchment or paper, and rarely involves the consolidation of flaking pigments. Physical restoration and repair of the images themselves are generally ruled out. This minimal approach is due largely to the fact that, when compared to wall or easel paintings, the images in illuminated manuscripts are relatively small and their pigment layers are few and very delicate. It is not possible to remove overpainting without damaging or completely removing the original painting beneath. The removal of even the smallest sample or the restoration of even the smallest painted area would constitute a considerable change to the overall image. As a consequence, pigment losses are often not filled in, and overpainting added on top of the superficial layers can often not be removed to reveal the original images. Virtual restoration is thus the only way to recover damaged illuminations, whether by infilling paint losses, by removing overpainted layers, or indeed both. Bringing the images as close as possible to their original form would ensure both their accurate scholarly interpretation and their full appreciation by wider audiences.
Damaged or inaccurately restored illuminations can lead to the exclusion of seminal works of art from academic debates or to incomplete and misleading interpretations of the dating, origin and artists involved. While preserving the current state of the illuminations in line with conservation ethics, faithful digital restoration would serve as a reliable surrogate for multiple reconstructions, enabling research, teaching and wider appreciation of manuscripts.
The reliable processing of illuminated manuscripts requires a multidisciplinary collaboration such as the one the current work is based on. In what follows we discuss a range of new adaptive, semi-automated restoration methods that (a) reconstruct image structures using partial differential equations [13, 14, 24,25,26,27,28], (b) mimic human-expert behaviour, using texture and structure patches sampled from the intact part of the illuminated manuscript at hand and integrating them into exemplar-based inpainting approaches [29, 30] in order to provide a digital restoration that agrees with the available information and is pleasant to the eye, (c) exploit infrared imaging data, correlating the visible image content with its traces in the hidden layers of paint [31, 32], and (d) create new 3D interpretations of illuminated manuscripts through a new 3D conversion pipeline [33]. The prequel of this work is an article in the exhibition catalogue [32].
Organisation. In “Retrieving missing contents via image inpainting” section we propose a semi-supervised approach for the segmentation of damaged areas of colour-accurate images (in the following referred to simply as RGB images) of illuminated manuscripts and for the retrieval of missing information via a two-step image inpainting model. In “Looking through the layers via osmosis filtering” section we consider the mathematical model of image osmosis to integrate the overpainted visible image information on a manuscript with the hidden infrared information, so as to look through the layers of a restoration process. Finally, in “Creating a 3D virtual scene from illuminated manuscripts” section we present a mathematical pipeline to convert a 2D painting into a 3D scene by means of the construction of an appropriate depth map.
Retrieving missing contents via image inpainting
The problem of image inpainting can be described as the task of filling in damaged (or occluded) areas in an image f defined on a rectangular domain \(\Omega\) by transferring the information available in the intact areas of the image to the damaged areas. Over the last 30 years a large variety of mathematical models solving the image inpainting problem have been proposed, see, e.g., [28, 34] for a review. In some of them, image information is transferred into the damaged areas (the so-called inpainting domain, denoted by D in the following) by using local information only, i.e. by means of suitable diffusion and transport processes which interpolate image structures in the immediate vicinity of the boundary of D into the occluded region. Such techniques have been shown to be effective for the transfer of geometric image structures, even in the presence of large damaged areas [28]. However, because of their local nature, such methods do not make use of the entire information contained in the intact image regions. In particular, such methods take into account neither non-local image information in terms of patterns and textures nor image contents located far away from D. For this reason, non-local mathematical models exploiting self-similarities in the whole image have been proposed [29, 30, 35, 36]. Such models operate on image patches rather than single pixels. Small patches inside D are iteratively reconstructed by comparison with patches outside D with respect to a suitable distance. Missing patches are then reconstructed by copying and pasting a closest patch (or its centre pixel) from the intact part of the image. These models have proven impressively effective in a very large variety of applications and were rendered computationally feasible in recent years with the well-known PatchMatch algorithm [37].
The first step of any inpainting algorithm is the decomposition of the image domain into damaged and undamaged areas. This is an image segmentation problem, decomposing a given image into its constituting regions, cf. for instance [34]. Solving it can be very hard in the presence of fuzzy and irregular region boundaries and small-scale objects.
In the following we describe an algorithm which detects damaged areas in images with possibly large and non-homogeneous missing regions using few examples provided by the user. This is then used as a necessary initial step for the subsequent application of a two-stage inpainting procedure, based on total variation inpainting [38] and the exemplar-based image inpainting proposed in [36], for the reconstruction of image contents in the images of the illuminated manuscripts in Fig. 1. Our proposed segmentation is semi-supervised since user input is required for training, while the inpainting procedure is fully automated.
Description of the dataset
Our dataset is composed of two manuscripts made by William de Brailes in 1230–1250 and now part of the collection of the Fitzwilliam Museum in Cambridge (UK), see Fig. 1: Last Judgement in Fig. 1a and Christ in Majesty with King David playing the harp in Fig. 1b, of dimension \(196\times 123\) mm and \(213\times 135\) mm, respectively. The images are acquired with a Leaf Valeo 22 back utilising a Mamiya RB67 body, and the resulting RAW files are processed using Leaf’s own proprietary software, where distortions and aberrations are corrected. Colour accuracy is ensured by using a customised Kodak colour separation guide with greyscale (Q13 equivalent), and the images are exported in the Adobe 98 colour space. The final output results in very large .tif images (about \(4008\times 5344\) pixels and 47 MB each).
A semi-supervised algorithm for the detection of the damaged areas
For identifying the damaged areas in the image (mainly missing gold leaf) we propose in the following a two-step semi-supervised algorithm. Here, a classical binary segmentation model is used first for the extraction of a small training region, as described in “Chan-Vese segmentation” section, which subsequently serves as an input for a labelling algorithm which segments the whole inpainting domain based on appropriate intensity-based image features, see “Image descriptors: feature extraction” and “A clustering algorithm with training” sections.
Chan-Vese segmentation
In binary image segmentation one seeks to partition an image into two disjoint regions, each characterised by distinctive features. Typically, RGB intensity values are used to describe image contents, and mathematical image segmentation methods often compute the required segmented image as the minimiser of an appropriate functional.
Let f be the given image. We seek a binary image u of the form

\(u(x) = c_1\,\chi _{int(C)}(x) + c_2\,\chi _{ext(C)}(x),\qquad (1)\)
where C is a closed curve and \(c_1,\,c_2\) are constants. In this work, we consider the Chan-Vese functional for binary image segmentation [39], that is

\(\mathcal {F}(c_1,c_2,C) = \mu \,\mathrm {length}(C) + \nu \,\mathrm {area}(int(C)) + \lambda _1 \int _{int(C)} |f(x)-c_1|^2\,dx + \lambda _2 \int _{ext(C)} |f(x)-c_2|^2\,dx.\qquad (2)\)
The functional \(\mathcal F\) is minimised over the constants \(c_1\) and \(c_2\) and the contour C, i.e. over the u of the form (1). Here, \(\mu ,\,\nu ,\,\lambda _1,\,\lambda _2>0\) are positive parameters and \(int(C),\, ext(C)\) denote the inner and the outer part of C, respectively. In (2) the first and second term penalise the length of C and the area of the region inside C, respectively, giving control over the smoothness of C and the size of the regions. The two other terms penalise the discrepancy between the piecewise constant fit u in (1) and the given image f in the interior and exterior of C, respectively. By computing a minimum of (2) one retrieves a binary approximation u of f.
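To make the role of the fitting terms concrete, the following sketch (an illustrative Python re-implementation, not the code used in this work) alternates the two optimality conditions of (2) with the length and area terms dropped (\(\mu = \nu = 0\)): the constants \(c_1, c_2\) become region means, and each pixel joins the region with the smaller fidelity cost.

```python
import numpy as np

def chan_vese_binary(f, init_mask, lam1=1.0, lam2=1.0, n_iter=50):
    """Alternate the optimality conditions of the Chan-Vese energy with
    mu = nu = 0: c1, c2 are the means inside/outside the current region,
    and each pixel joins the region with the smaller fidelity cost."""
    mask = init_mask.astype(bool)
    c1 = c2 = 0.0
    for _ in range(n_iter):
        c1 = f[mask].mean() if mask.any() else 0.0
        c2 = f[~mask].mean() if (~mask).any() else 0.0
        new_mask = lam1 * (f - c1) ** 2 < lam2 * (f - c2) ** 2
        if np.array_equal(new_mask, mask):
            break  # converged: the partition no longer changes
        mask = new_mask
    return mask, c1, c2

# Toy image: a bright square (think 'gold leaf') on a dark background,
# initialised from a small user 'click' inside the square.
f = np.zeros((32, 32))
f[8:24, 8:24] = 0.9
f += 0.05 * np.random.default_rng(0).standard_normal(f.shape)
init = np.zeros_like(f, dtype=bool)
init[14:18, 14:18] = True
mask, c1, c2 = chan_vese_binary(f, init)
```

Without the length and area terms this reduces to a two-means thresholding, which is precisely why the full model (2) with \(\mu ,\nu >0\) is needed for noisy or textured regions.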
Despite being very popular and widely used in applications, the Chan-Vese model and its extensions present intrinsic limitations. Firstly, the segmentation result is strongly dependent on the initialisation: in order to get a good result, the initial condition needs to be chosen within (or sufficiently close to) the region one aims to segment. Secondly, due to the modelling assumption (1), the Chan-Vese model works well for images whose intensity is locally homogeneous. If this is not the case, the contour curve C may evolve along image information different from the one we want to detect. Images with a significant presence of texture, for instance, can exhibit such problems. Furthermore, the model is very sensitive to the length and area parameters \(\mu\) and \(\nu\), which may make the segmentation of very small objects in the image difficult.
For our application, we make use of the Chan-Vese model^{Footnote 1} to segment a subregion \(D_1\) of D that will serve as a training set for the classification described in the following two subsections. To do that, we ask the user (typically, an expert in the field) simply to click on a few pixels inside the inpainting domain D to identify a candidate initial condition for the segmentation model (2), which is then run to segment the subregion \(D_1\). In Fig. 2 we show the results of this approach with a superimposed mask of the computed region \(D_1\) for some details cropped from the original images.
Because of the intrinsic limitations of the Chan-Vese approach, we observe that the segmentation result is not satisfactory (see, for instance, the example in the first row of Fig. 2), since it generally detects with high precision only the largest uniform region around the user selection. To detect the whole inpainting domain D in this manner, the user would in principle have to provide many initialisation points, which may be very demanding in the presence of several disconnected and possibly tiny inpainting regions.
For this reason, we proceed differently and make use of a featurebased approach to use the area \(D_1\) as a training region for a clustering algorithm running over the whole set of image pixels. This procedure is described in the next two sections.
Image descriptors: feature extraction
In order to describe the different regions in the image in a distinctive way, we consider intensity-type features. Namely, for every pixel x in the image we apply non-linear colour transformations to compute the HSV (Hue, Saturation, Value), the geometric mean chromaticity GMCR [40], the CIELAB and the CMYK (Cyan, Magenta, Yellow, Key) values (see [41] for more details). Once this is done, we concatenate all these values and store them in a feature vector \(\varvec{\psi }\) of the form

\(\varvec{\psi }(x) = \left( \mathrm {HSV}(x),\,\mathrm {GMCR}(x),\,\mathrm {CIELAB}(x),\,\mathrm {CMYK}(x)\right) .\qquad (3)\)
For our purpose the feature vector (3), essentially based on RGB intensities, yielded precise segmentations. For more general segmentation purposes, one could add texture-based features and, if available, multispectral measurements such as infrared (IR) or ultraviolet (UV) images.
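As an illustration of how such a per-pixel descriptor can be assembled, the following Python sketch concatenates HSV, geometric-mean chromaticity and CMYK values for a single RGB pixel. This is not the paper's implementation: the CIELAB components are omitted for brevity and the GMCR computation is a simplified ratio form.

```python
import colorsys
import numpy as np

def pixel_features(rgb):
    """Concatenate HSV, geometric-mean chromaticity and CMYK values
    for one (r, g, b) pixel with channels in [0, 1]."""
    r, g, b = rgb
    hsv = colorsys.rgb_to_hsv(r, g, b)
    # Geometric-mean chromaticity (simplified): each channel divided by
    # the geometric mean of the three channels (eps guards against zeros).
    eps = 1e-8
    gm = ((r + eps) * (g + eps) * (b + eps)) ** (1.0 / 3.0)
    gmcr = (r / gm, g / gm, b / gm)
    # Standard RGB -> CMYK conversion.
    k = 1.0 - max(r, g, b)
    if k < 1.0:
        c = (1.0 - r - k) / (1.0 - k)
        m = (1.0 - g - k) / (1.0 - k)
        y = (1.0 - b - k) / (1.0 - k)
    else:  # pure black
        c = m = y = 0.0
    return np.array(hsv + gmcr + (c, m, y, k))

psi = pixel_features((0.8, 0.6, 0.2))  # a gold-like tone
```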
A clustering algorithm with training
Once the feature vectors are built for every pixel in the image, we use the training region \(D_1\), detected as described in “Chan-Vese segmentation” section, as a dictionary to drive the segmentation procedure extended to the whole image domain. We proceed as follows. First, we run a clustering algorithm over the whole image domain comparing the features defined in (3) in order to partition the image into a fixed number K of clusters. To do that, we use the well-known k-means algorithm.^{Footnote 2} After this preliminary step, we check which cluster has been assigned to the training region \(D_1\) and simply identify which pixels of the clustered image lie in the same cluster. By construction, this corresponds to finding the regions in the image ‘best fitting’ the training region in terms of the features defined in “Image descriptors: feature extraction” section, which is our objective. After a refinement step based on erosion/dilation of the extracted regions, so as to remove or fill in possibly misclassified pixels, we can finally extract the whole area D to inpaint. We report the results corresponding to Fig. 2 in Fig. 3a, b.
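The clustering-with-training step can be sketched as follows (illustrative Python with a plain Lloyd's k-means on toy one-dimensional features; the paper uses MATLAB's kmeans on the full feature vectors (3)): after clustering all pixels, the cluster hit by the training region \(D_1\) is taken as the inpainting domain D.

```python
import numpy as np

def kmeans(X, k, n_iter=30, seed=0):
    """Plain Lloyd's k-means on the rows of X; returns one label per row."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        dist = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(axis=0)
    return labels

# Toy features for 100 'pixels': 60 intact (around 0.1), 40 damaged (around 0.9).
rng = np.random.default_rng(1)
feats = np.concatenate([np.full(60, 0.1), np.full(40, 0.9)])[:, None]
feats = feats + 0.01 * rng.standard_normal(feats.shape)
labels = kmeans(feats, k=2)

# Training region D1: a few pixels the user marked as damaged.
train_idx = [65, 70, 80]
damaged_label = np.bincount(labels[train_idx]).argmax()  # majority cluster of D1
D = labels == damaged_label  # estimated inpainting domain
```

In the toy example two clusters suffice; on real feature vectors a much larger K (the paper uses \(K=35\)) is needed so that the damaged material is isolated in its own cluster.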
Inpainting models
Once an accurate segmentation of the damaged areas is provided, the task becomes the actual restoration of the image contents in D by means of the available information in the region \(\Omega \setminus D\). A standard mathematical approach to such an inpainting problem consists in minimising an appropriate functional \(\mathcal {E}\) defined over the image domain \(\Omega\), i.e. in solving

\(u^* \in \mathop {\mathrm {arg\,min}}\limits _{u}\ \mathcal {E}(u).\qquad (4)\)
A standard choice for \(\mathcal {E}\) in the case of local inpainting models is the functional

\(\mathcal {E}(u) = R(u) + \frac{\lambda }{2}\, \Vert \upchi _{\Omega \setminus D}\,(f-u)\Vert _2^2,\qquad (5)\)
where f denotes the given image to restore, \(\Vert \cdot \Vert _2\) is the Euclidean norm, \(\lambda\) an appropriately chosen positive parameter and \(\upchi _{\Omega \setminus D}\) denotes the characteristic function of the non-occluded image areas, so that for every pixel \(x\in \Omega\):

\(\upchi _{\Omega \setminus D}(x) = {\left\{ \begin{array}{ll} 1 &{} \text {if } x\in \Omega \setminus D,\\ 0 &{} \text {if } x\in D. \end{array}\right. }\)
The second term in (5) acts as a distance function between the given image f and the sought-after restored image u in the intact part of the image. The multiplication of \(f-u\) by the characteristic function \(\upchi _{\Omega \setminus D}\) implies that this term is simply zero for the points in D, where no information is available, while \(f-u\) has to be as small as possible for all the points in \(\Omega \setminus D\). The term R typically encodes local information (such as the gradient magnitude) and is responsible for the transfer of information inside D by means of possibly non-linear models [28, 34]. The transfer process is balanced against the trust in the data by the positive parameter \(\lambda\). A classical choice of a gradient-based inpainting model consists in choosing
\(R(v) = \int _\Omega |\nabla v(x)|\,dx,\qquad (6)\)

i.e. the Total Variation of v [38]. As mentioned above, such an image inpainting technique is not designed to transfer texture information. Furthermore, it fails in the inpainting of large missing areas. For our purposes we use (6) as an initial ‘good’ guess with which we initialise a different approach based on a non-local inpainting procedure, as described in the following section.
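A minimal numerical sketch of TV inpainting, i.e. explicit gradient descent on a smoothed version of (5) with (6), is given below (illustrative Python; the smoothing parameter eps, the step size tau and the toy image are our own choices, not those used in the paper).

```python
import numpy as np

def tv_inpaint(f, D, lam=1000.0, eps=1e-2, tau=2e-4, n_iter=4000):
    """Explicit gradient descent on the smoothed TV inpainting energy
    sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum_{outside D} (u - f)^2."""
    u = f.copy()
    chi = (~D).astype(float)  # 1 on the intact region, 0 on D
    for _ in range(n_iter):
        # forward differences (last row/column replicated, so zero there)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        nrm = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / nrm, uy / nrm
        # backward-difference divergence (negative adjoint of the gradient)
        div = np.zeros_like(u)
        div[:, 0] += px[:, 0]
        div[:, 1:] += px[:, 1:] - px[:, :-1]
        div[0, :] += py[0, :]
        div[1:, :] += py[1:, :] - py[:-1, :]
        u = u + tau * (div + lam * chi * (f - u))
    return u

# Toy example: a flat bright image with a small damaged hole.
f = np.ones((16, 16))
D = np.zeros_like(f, dtype=bool)
D[7:9, 7:9] = True
f[D] = 0.0               # lost content
u = tv_inpaint(f, D)     # the hole is diffused back towards 1
```

The flat toy example is the easy case for TV; the point of the sketch is the mechanics of (5)-(6), not the texture limitations discussed above.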
Exemplar-based inpainting
We describe here the non-local patch-based inpainting procedure studied in [30, 36] and carefully described in [42] from an implementation point of view.^{Footnote 3} In the following, we define for any point \(x\in \Omega\) the patch neighbourhood \(\mathcal {N}_x\) as the set of points in \(\Omega\) in a neighbourhood of x. Assuming that the patch neighbourhood has cardinality n, by patch around x we denote the 3n-dimensional vector \(P_{x} = (u(x_1), u(x_2),\ldots ,u(x_n))\), where the points \(x_i,\, i=1,\ldots ,n\), belong to the patch neighbourhood \(\mathcal {N}_x\). In order to measure the ‘distance’ between patches, a suitable patch measure d can be defined, so that \(d(P_{x},P_{y})\) stands for the patch measure between the patches around the two points x and y. We then define the Nearest Neighbour (NN) of \(P_{x}\) as the patch \(P_y\) around some point y minimising d.
For an inpainting application the task then consists in finding for each point x in the inpainting domain D the best-matching patch \(P_y\) outside D. Assuming that each NN patch can be characterised in terms of a shift vector \(\phi\) defined for every point in \(\Omega\) (i.e. assuming there exists a rigid transformation \(\phi\) which shifts any patch to its NN), the problem can be formulated as the minimisation problem

\(\min _{u,\,\phi }\ \sum _{x\in D} d\left( P_{x}, P_{x+\phi (x)}\right) .\qquad (7)\)
Heuristically, every patch in the solution of the problem above is constructed in such a way that in the damaged region D the patch has a correspondence (in the sense of the measure d) with its NN patch in the intact region \(\Omega \setminus D\). Following [42], we use the squared \(\ell ^2\) distance between patches:

\(d(P_{x},P_{y}) = \sum _{i=1}^{n} \Vert u(x_i) - u(y_i)\Vert _2^2.\qquad (8)\)
From an algorithmic point of view, solving the model involves two steps: the first consists in computing (approximately) the NN patch for each point in D, so as to provide a complete representation of the shift map \(\phi\). This can be computationally expensive for large images. In order to solve this efficiently, a PatchMatch [37] strategy can be applied. Afterwards a proper image reconstruction step is performed, where for every point in D the actual corresponding patch is computed. We refer the reader to [42] for full algorithmic details.
A crucial ingredient for a good performance of the exemplarbased inpainting algorithm [30, 36] is its initialisation. In particular, once the inpainting domain is known, a preprocessing step where a local inpainting model, such as the TV inpainting model (5) with (6), can be run to provide a rough, but reliable initialisation of the algorithm.^{Footnote 4}
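The reconstruction step can be illustrated by the following brute-force Python sketch (a drastically simplified stand-in for the algorithm of [42]: no PatchMatch acceleration, a single-channel image, and centre-pixel copy only): each damaged pixel repeatedly takes the centre value of its nearest-neighbour patch from the intact region, starting from a rough initialisation.

```python
import numpy as np

def exemplar_inpaint(u, mask_D, half=1, n_iter=5):
    """Each damaged pixel takes the centre value of its nearest-neighbour
    patch from the intact region (brute-force NN search, no PatchMatch)."""
    h, w = u.shape
    u = u.copy()
    # source patches: fully intact (2*half+1)^2 windows
    sources = [(i, j) for i in range(half, h - half) for j in range(half, w - half)
               if not mask_D[i - half:i + half + 1, j - half:j + half + 1].any()]
    targets = [(i, j) for i in range(half, h - half) for j in range(half, w - half)
               if mask_D[i, j]]
    for _ in range(n_iter):
        new = u.copy()
        for (i, j) in targets:
            P = u[i - half:i + half + 1, j - half:j + half + 1]
            best = min(sources, key=lambda s: (
                (u[s[0] - half:s[0] + half + 1, s[1] - half:s[1] + half + 1] - P) ** 2).sum())
            new[i, j] = u[best]  # copy the centre pixel of the NN patch
        u = new
    return u

# Toy example: vertical stripes with one damaged pixel, roughly initialised.
base = np.tile(np.array([0.0, 1.0]), (8, 4))   # 8x8 image, columns 0,1,0,1,...
D = np.zeros((8, 8), dtype=bool)
D[4, 4] = True
u0 = base.copy()
u0[4, 4] = 0.5          # rough initialisation (e.g. from TV inpainting)
u1 = exemplar_inpaint(u0, D)
```

Note how the NN search recovers the stripe pattern exactly, which is precisely the texture behaviour that local models such as (6) cannot reproduce.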
We report the results of the combined procedure in Fig. 4 and the overall workflow of the algorithm in the diagram in Fig. 5.
Model parameters
For the segmentation of the training region \(D_1\) within the inpainting domain D we use the activecontour MATLAB function, by which the Chan-Vese algorithm can be called. For this we fixed the maximum number of iterations to maxiter\(=1000\) and used the default value as a tolerance on the relative error between iterates as a stopping criterion. We used the default values for the parameters \(\mu\) and \(\nu\) in (2). The subsequent clustering phase was performed by means of the standard MATLAB kmeans function after specifying a total of \(K=35\) labels to assign. The use of such a large value for K turned out to be crucial for an accurate discrimination. The automatic choice of the value of K for this type of application is a matter of future research. The clustering was repeated 5 times to improve accuracy. Once the detection of the inpainting domain is completed, in order to provide a good initialisation to the exemplar-based model we use the TV inpainting model (4) with (6) with the value \(\lambda =1000\) and a maximum number of iterations equal to maxiter2\(=1000\), with a stopping criterion on the relative error between iterates depending on a default tolerance. Finally, we followed [42] for the implementation of the exemplar-based inpainting model: for this we specified 12 propagation iterations and tested different sizes for the patches. In order to avoid memory shortage, we restricted ourselves to patches of size \(5\times 5\), \(7\times 7\) and \(9\times 9\).
The numerical tests were performed on a standard MacBook Pro (Retina, 13inch, Early 2015), 2.9 GHz Intel Core i5, 8 GB 1867 MHz DDR3 using MATLAB 2016b.
Discussion and outlook
In this section we proposed a combined algorithm to retrieve image contents from the two images of illuminated manuscripts shown in Fig. 1, where very large regions have been damaged. First, our algorithm computes an accurate segmentation of the inpainting domain by means of a semi-supervised method exploiting distinctive features in the image. Then, taking the segmentation result as an input, the procedure is followed by an exemplar-based inpainting strategy (upon suitable initialisation) by which the damaged regions are filled.
The results reported in Figs. 4 and 6 confirm the effectiveness of the combined method proposed. In particular, when looking at the difference between standard local (TV) image inpainting methods and the exemplarbased one we immediately appreciate the higher reconstruction quality in the damaged regions, especially in terms of texture information. The method has been validated on several image details extracted from the entire images, and has been shown effective also for very large image portions with highly damaged regions.
In terms of computational times, the segmentations in Fig. 3 are obtained in approximately 15 min. The inpainting results in Fig. 4 are obtained in about 3 min for patches of size \(5 \times 5\) and about 7 min for patches of size \(7 \times 7\). Overall, the whole task of segmenting and inpainting the occluded regions takes approximately 20 min per image of size \(690 \times 690\). However, these results highly depend on the size of the image, the size of the inpainting domain and the size of the patches chosen.
Future work could address the use of different features for the segmentation of the inpainting domain with similar methodologies, such as for instance texture features [43]. Furthermore, at an inpainting level, we observe that the reconstruction of fine details in very large damaged regions (such as the strings of the harp in Fig. 6) is very challenging due to the lack of correspondence with similar training patches in the undamaged region. For solving this problem a combination of exemplarbased and local structurepreserving inpainting models could be used.
Looking through the layers via osmosis filtering
In the previous section the image content in the damaged areas of the illuminations was completely lost and could be estimated only from the information available in the rest of the picture. This, however, is not the only kind of degradation encountered in the process of restoration of illuminated manuscripts. In some cases parts of an illumination are painted over. In this section we discuss as such an example the illuminations from the primer of Claude de France which illustrate the story of Adam and Eve in the garden of Eden. The two figures were originally depicted naked, as described in the book of Genesis, but a later owner wanted them clothed, and additional veils, leaves or beast skins were added in the illumination, cf. Fig. 7. The use of infrared imaging, as shown for instance in Fig. 8, allows one to look through these added layers, unveiling hidden structural information underneath the painted layer. All the input colour images and their reflectograms are freely available on the Fitzwilliam Museum website^{Footnote 5} along with some more information about the manuscript, in particular the pigments used.
In this section we aim to fuse the details appearing in the near-infrared reflectogram (IR) with the colours of the visible colour image, in particular the skin tones, to create a digital version of the illuminations as they could have looked before overpainting. Since we only have access to one near-infrared reflectogram, cannot choose the wavelength and have no information on the pigments used, we find ourselves in one of the following three situations: (i) the added cloth is transparent in the IR; (ii) the added cloth appears in the IR but without texture; (iii) the added cloth and its texture appear in the IR. The fact that the original pigments can also be IR-transparent poses an additional challenge. For these different situations we use different methods, all based on the linear image osmosis model studied by Weickert et al. in [31].
In the following we first present the original parabolic linear osmosis equation studied in [31] and our slightly modified local elliptic formulation of osmosis [44]. Then we recall some of its common applications in image processing and finally apply our methods to digitally unveiling Adam and Eve in Claude de France’s primer in each of the different situations (i)–(iii) described above (cf. “IR transparent original pigments”, “Overpaint with IR transparent texture” and “Non IR transparent overpaint texture: adding an inpainting step” sections).
The osmosis model
The osmosis model has been introduced in [31] as a non-symmetric generalisation of diffusion filters and as a new tool for image processing problems such as seamless cloning and shadow removal. The original parabolic equation for this model is

\(u_t = \Delta u - \mathrm {div}(\mathbf {d}\, u) \quad \text {on } \Omega \times (0,T].\qquad (9)\)
Here u is the solution we are looking for and \(\mathbf {d}\) is a given vector field defined on the image domain \(\Omega\) with values in \({\mathbb {R}}^{2}\) that we call the drift field. Typically \(\mathbf d\) encodes information from the gradient of the desired solution u, thus it serves as a guide to the diffusion process. For a given positive image I, when \(\mathbf d = \mathbf {d}_I := \nabla I/I\), it turns out that I is a trivial steady state (i.e. a solution for \(u_t=0\)) of Eq. (9). Under this choice, the vector field \(\mathbf {d}_I\) is called the canonical drift field of I. Note that such a drift field is invariant to multiplicative changes of I.
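The canonical drift field and its multiplicative invariance are easy to check numerically (illustrative Python; central differences via np.gradient are our own discretisation choice):

```python
import numpy as np

def canonical_drift(I):
    """Canonical drift field d_I = grad(I)/I of a strictly positive image,
    discretised with central differences (np.gradient)."""
    gy, gx = np.gradient(I)
    return gy / I, gx / I

I = 1.0 + np.random.default_rng(0).random((16, 16))  # strictly positive image
dy1, dx1 = canonical_drift(I)
dy2, dx2 = canonical_drift(3.7 * I)  # multiplicatively rescaled image
```

Since \(\nabla (cI)/(cI) = \nabla I/I\), the two drift fields coincide; this invariance is what makes the shadow-removal application below possible.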
Equation (9) is typically solved on the whole image domain under appropriate homogeneous Neumann boundary conditions. When applied to cultural heritage imaging, this model has been successfully rendered computationally efficient by means of standard dimensional splitting techniques and applied, for instance, to Thermal Quasi-Reflectography (TQR) imaging and other similar applications in [45, 46].
In the following, we look directly for the steady state of the previous equation, i.e. we consider the elliptic equation

\(\Delta u - \mathrm {div}(\mathbf {d}\, u) = 0\qquad (10)\)
and solve it on a small subdomain D of the input image domain \(\Omega\) with mixed boundary conditions as in [44]. Restricting ourselves to a small domain has two main advantages: first, most of the image is guaranteed to be left untouched; secondly, the computational cost is much smaller. Moreover, having mixed boundary conditions allows for more flexibility in adapting (10) to the problem at hand. In particular, Dirichlet boundary conditions enforce the colour values on \(\partial D\) and a smooth transition of colour values across \(\partial D\), which is appropriate if the image does not feature discontinuities (i.e. image edges) at the boundary of D. Neumann boundary conditions, on the other hand, prevent any diffusion across the boundary, ensuring clear colour discontinuities, which is useful when the border of the mask runs along an edge between two different colours that appear the same in the IR.
Common applications of the model
The osmosis equation has been proposed for several tasks [31], the most common being shadow removal and seamless cloning as an alternative to Poisson editing [47]. All these tasks share the idea of manipulating the canonical drift field \(\mathbf {d}_I\) of one or more input images.
Shadow removal
The problem of shadow removal involves only one image: as the name suggests, the process takes as input an image with constant shadowed areas and produces a shadow-free image. A constant shadow can be thought of as a multiplicative change of the image intensities in the shadowed region. Since the canonical drift field is invariant to multiplicative changes, the presence of the shadow is only encoded in the drift field on the edge of the shadow. In the ideal case of a sharp shadow boundary, setting the drift field to zero there creates pure diffusion and results in a perfectly shadowless image [31].
Seamless cloning
Seamless cloning involves two input images that we will call the background image g and the foreground image f. This problem can be described as an improved copy-paste process where some information from f is copied into a subdomain D of g. Directly replacing the colour information of g by that of f in D leads to a rough result in which the boundaries of the pasted region are quite noticeable. Seamless cloning performs this copy-paste process in such a way that the boundaries of the pasted region are no longer noticeable and the transition from f to g is smooth and natural. To this end we create a drift field \(\mathbf {d}\) from the canonical drift fields \(\mathbf {d}_{f}\) and \(\mathbf {d}_{g}\) associated with f and g, respectively, so that

\[ \mathbf{d} = \begin{cases} \mathbf{d}_f & \text{in } D, \\ \mathbf{d}_g & \text{in } \Omega \setminus (D \cup \Omega_b), \\ \tfrac{1}{2}\left( \mathbf{d}_f + \mathbf{d}_g \right) & \text{on } \Omega_b, \end{cases} \]
where \(\Omega _b\) denotes the transition boundary. Once we have this composite drift field, we can solve the osmosis equation on the whole image domain with Neumann boundary conditions [31] or, alternatively, only on the subdomain D with Dirichlet boundary conditions [44]. Solving the equation on the whole image modifies the whole image, whereas solving it only on D leaves the background image g untouched outside of D.
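On a discrete pixel grid the composite field can be assembled face by face. The sketch below (Python, our own variable names; only the horizontal drift component is shown, the vertical one being analogous) classifies each face as inside D, outside D, or on the transition boundary, averaging the two drifts on the boundary in the spirit of [31]:

```python
import numpy as np

def drift_x(v):
    # Horizontal canonical drift on vertical pixel faces:
    # d = 2 * (v_right - v_left) / (v_right + v_left).
    return 2.0 * (v[:, 1:] - v[:, :-1]) / (v[:, 1:] + v[:, :-1])

def composite_drift_x(f, g, mask):
    # mask is True on the pasted subdomain D. A face is "inside" when both of its
    # neighbouring pixels lie in D, "outside" when neither does; the remaining
    # faces form the transition boundary, where the two drifts are averaged.
    df, dg = drift_x(f), drift_x(g)
    inside = mask[:, 1:] & mask[:, :-1]
    outside = ~mask[:, 1:] & ~mask[:, :-1]
    d = np.where(inside, df, dg)
    boundary = ~inside & ~outside
    d[boundary] = 0.5 * (df[boundary] + dg[boundary])
    return d
```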
Applications to illuminated manuscripts
In an ideal case, the added pigments do not appear in the IR while the colours to be restored are perfectly encoded in the IR. The problem is then reduced to a simple seamless cloning application with Dirichlet boundary conditions: the drift field of the colour image is replaced by that of the infrared image on the subdomain to be restored. Unfortunately, such an ideal case is uncommon. For the illuminations of the primer, we encounter rather different scenarios. When the added cloth is IR transparent or has no texture in the IR, the osmosis equation suffices to obtain a satisfying result. When the texture of the added cloth appears in the IR, the osmosis equation is no longer enough and we have to add an inpainting step to our method. We describe this in greater detail in the following.
IR transparent original pigments
In Fig. 8, the IR, along with a careful examination of the colour image, reveals the existence of an original fig leaf under the added leaves of the overpaint. Here the overpaint is IR transparent, so this should be a simple seamless cloning problem with the colour image as the background and the infrared as the foreground image. Yet the original fig leaves and the skin of Adam and Eve are hard to distinguish in the IR. If we simply follow the seamless cloning method, we get back not only the skin colour but also the fig-leaf colour from the small parts left untouched in the colour image; because the two appear the same in the IR, some diffusion occurs across the edges between the skin and the fig leaves. To prevent this, we enforce Neumann boundary conditions along these edges. The results with and without the use of Neumann boundary conditions (represented as red lines in the mask) are presented in Fig. 8.
Overpaint with IR transparent texture
In Fig. 9, the added cloth on Adam is not IR transparent, but it has little texture discernible in the IR and the original drawings appear clearly by transparency under it. The overpaint therefore looks like a shadow, both in the IR and in the solution obtained with the method of the previous “IR transparent original pigments” section. We thus combine seamless cloning with mixed boundary conditions and the shadow removal method: we replace the canonical drift field of the colour image by that of the IR in the region of interest, and then set the drift field to zero on the edge of the overpaint appearing in the IR. This method is illustrated in Fig. 9; the white lines of the mask are the areas where the drift field is set to zero. In this figure we observe some transparent texture from the overpaint (over Adam’s hip and at the bottom of Eve’s veil). As expected, this texture appears in the final result.
Non-IR-transparent overpaint texture: adding an inpainting step
In the case of Fig. 10, the IR adds some useful information to the colour image, as shown by the result obtained using the method from the previous “Overpaint with IR transparent texture” section, but a large amount of the added skirt texture, visible in the IR, is also present. To get rid of this unwanted texture, we set the drift field to zero on the area corresponding to Adam’s skin and manually segment the lines we want to keep. Note that this leads to a complete loss of texture in this region. For a more natural-looking result, we want some texture for the skin. While we cannot recover the original texture from our inputs, the untouched part of the illumination provides examples of texture for Adam’s skin. This information is enough to use the exemplar-based inpainting algorithm described in the “Exemplar-based inpainting” section, using as initialisation our result with missing texture. The final result on Adam’s skin probably has little in common with the original painting, but it appears natural enough to help give a better idea of the illumination in its original state.
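The greedy patch-copy idea behind exemplar-based inpainting can be illustrated with a bare-bones Python sketch. This is a drastic simplification of the algorithms of [29, 48], without priority terms, multiscale processing or the NL-medians aggregation actually used in our experiments; names and the test texture are ours:

```python
import numpy as np

def exemplar_inpaint(img, mask, patch=3):
    # Greedy exemplar-based fill: repeatedly pick the missing pixel with the most
    # known neighbours, find the best-matching fully known patch elsewhere in the
    # image (SSD on the known part of the neighbourhood), and copy its centre.
    # Missing pixels are assumed to lie away from the image border.
    img = img.astype(float).copy()
    known = ~mask
    r = patch // 2
    h, w = img.shape
    sources = [(i, j) for i in range(r, h - r) for j in range(r, w - r)
               if known[i - r:i + r + 1, j - r:j + r + 1].all()]
    while not known[r:h - r, r:w - r].all():
        # most-constrained missing pixel first
        cand = [(known[i - r:i + r + 1, j - r:j + r + 1].sum(), i, j)
                for i in range(r, h - r) for j in range(r, w - r) if not known[i, j]]
        _, i, j = max(cand)
        nb = known[i - r:i + r + 1, j - r:j + r + 1]
        target = img[i - r:i + r + 1, j - r:j + r + 1]
        si, sj = min(sources, key=lambda s: (
            ((img[s[0] - r:s[0] + r + 1, s[1] - r:s[1] + r + 1] - target)[nb] ** 2).sum()))
        img[i, j] = img[si, sj]
        known[i, j] = True
    return img
```

On a periodic texture such as stripes, the known surroundings of a hole match a fully known patch elsewhere exactly, so the copied centre reproduces the texture, which is the behaviour exploited when borrowing skin texture from the untouched part of the illumination.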
Preprocessing and parameters
As we have just seen, such a complex restoration process requires significant user decisions: the user must provide the mask containing the subdomain to be restored, the edges along which Neumann boundary conditions should be applied, and the subdomain edges where the drift field should be set to zero.
For our experiments we used the discretisation proposed in [31]. The linear system was then solved using the MATLAB UMFPACK V5.4.0 LU solver. Obtaining the numerical solutions of the osmosis equation took at most 15 s, with input images of size \(901\times 1201\), \(1001\times 1201\) and \(952\times 1248\) for Figs. 8, 9 and 10, respectively. For Fig. 10 we only show a crop of our result, of size \(359\times 483\). For the inpainting step of Fig. 10, we used the implementation of the exemplar-based inpainting algorithm from [48]^{Footnote 6} with the NL-medians method, \(9\times 9\) patches, two scales and 4 iterations.
The numerical tests were performed on a standard MacBook Pro (Retina, 13-inch, 2017) with a 3.5 GHz Intel Core i7 and 16 GB of 2133 MHz LPDDR3 memory, using MATLAB 2017b.
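For readers who want to reproduce the "assemble a sparse system once, factorise with LU, then solve" workflow outside MATLAB, the pattern looks as follows in Python with SciPy. Note that SciPy's `splu` wraps SuperLU rather than UMFPACK, and we use a plain 2D Laplacian as a stand-in for the discrete osmosis matrix, which has the same five-point sparsity pattern but drift-dependent off-diagonal weights:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Stand-in system: 2D Laplacian with homogeneous Dirichlet BCs on an n-by-n grid.
n = 50
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()

lu = spla.splu(A)     # direct sparse LU factorisation (SuperLU)
b = np.ones(n * n)
u = lu.solve(b)       # fast back-substitution; reusable for repeated right-hand sides
```

Reusing the factorisation across solves is what makes repeated fine-tuning of the mask cheap once the operator is fixed.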
Discussion and future work
In this section we proposed a method to digitally remove overpaint from an illumination using infrared information. Although we do not claim that our result perfectly corresponds to the original state of the illumination, we believe that it nonetheless offers an idea of that state. For our applications the results are mostly satisfying, especially when the added pigments do not appear in the IR or when the addition does not have too much texture visible in the IR. As the process requires some important user decisions, it is preferable to have input from an expert: from the IR alone we can only make educated guesses, and only outside expert information, from examination under a microscope for example, allows us to know which pigments have been overpainted. The method is fast enough to allow fine-tuning by the user, as the mask can be repeatedly improved depending on the result. The quality of the output is highly dependent on the infrared wavelength and on the pigments used for both the original painting and the overpaint.
Future work should address these difficulties and test the method on a larger dataset. An easy improvement would be to use an IR image with the same resolution as the colour image, to prevent the blur effect that we currently observe. For the mask creation phase, a more automated segmentation step could be inserted to provide a first guess. In this work we have only used the visible image and a single IR image. Better results may be obtained by using several IR images whose wavelengths are chosen depending on the pigments used; in such a situation, the expert would only have to specify which IR image should be used for each area.
Creating a 3D virtual scene from illuminated manuscripts
In recent years, certain museums and companies have taken a step beyond using digital technology to restore historic artwork, and have instead created 3D or animated versions of historic artwork that can only be experienced digitally. For example, the British Museum’s Hotung Gallery recently created a 3D version of the 1623 painting “Reading in the Autumn Mountains” (originally painted during the Ming dynasty by the artist Xiang Shengmo). A video in which the viewer flies through the 3D painting can be found on their website [49]. Another example, shown at the Taipei Flora Expo in 2010/2011, features a Song dynasty painting that was converted into an animation [50, 51]. In this case, the animated painting was displayed on a specially designed screen, twenty feet wide and more than 360 feet long, mounted on the wall of the exhibition centre. Finally, the Shanghai-based company Motion Magic has created 3D versions of the paintings of Vincent van Gogh, which viewers can walk around inside after putting on virtual reality goggles [52, 53]. The result of these efforts is both a new kind of art and a new way of interacting with art. This trend is likely to strengthen as virtual reality becomes more mainstream and the demand for VR content increases.
In this section, we demonstrate the potential of these approaches by converting an illumination from the manuscript Annunciation by Simon Bening, Fitzwilliam Museum, MS 294b, Flanders, Bruges (1522–1523), as well as the painting The Scream by Edvard Munch, into stereo 3D (see Figs. 13 and 14). We do so using a 3D conversion pipeline originally developed for the conversion of Hollywood films. There, one is given the video shot from camera position \(p \in \mathbb {R}^3\) and orientation \(O \in SO(3)\) (corresponding to, for example, the left eye view), and the objective is to generate a plausible reconstruction of the video as it would appear from a perturbed position and orientation \(p+\delta p \in \mathbb {R}^3\), \(O+\delta O \in SO(3)\) (corresponding to the view from the other eye). In some cases p and O, along with other relevant camera parameters such as field of view, may be given; in other cases, they must be estimated. In our case the process is the same, except that we have a manuscript (or painting) rather than a video. This introduces a subtle difference, however. When converting a video shot with a real camera, although we might not know the associated camera parameters, we at least know that they exist; here, because the input is drawn by a human, their existence is not guaranteed. In particular, depending on the artist, the drawing may or may not obey the laws of perspective. This is particularly noticeable in the case of The Scream (see Fig. 16).
Overview of a 3D conversion pipeline
Here we briefly go over the 3D conversion pipeline used in this paper. The steps of the pipeline are illustrated in Figs. 11 and 12. For more details, please see [33] or [28, Ch. 9.4].

1.
Generate a rough but plausible 3D model of the scene, including a virtual camera with plausible parameters (position, orientation, field of view, possibly lens distortion, etc.) placed within it. The 3D models do not have to be perfect, and are typically made a little larger than the objects they correspond to, because they will be “clipped” in step 3. See Fig. 11a, where we show rough 3D models used for the Virgin Mary and the Angel Gabriel.

2.
Generate accurate masks for all objects in the scene. This is typically done by hand, but could also be done with the help of segmentation algorithms that are then touched up. See Fig. 11b, where we show masks for the Virgin Mary and Angel Gabriel.

3.
Transform the camera into a projector, which is used for two purposes. First, the masks from the previous step are projected onto the rough 3D geometry from step 1 and used, much like a cookie cutter, to “clip” the geometry, throwing away the portion that is unneeded. See Fig. 11c, where we illustrate this for the 3D models of Mary and Gabriel. Second, the original image is used as a texture by projecting it onto the clipped geometry, as in Fig. 11d.

4.
Add one or more new virtual cameras to the scene. If the original camera is taken to be either the right or the left eye, then one additional virtual camera corresponding to the other eye is needed. Sometimes, however, the original camera position is taken to be halfway between the two eyes, so that two virtual cameras (corresponding to the left and right eyes) are needed. These cameras are used to render the 3D scene from one or more new viewpoints, in order to create a stereo pair.

5.
Because the new camera(s) will typically see bits of background previously hidden behind foreground objects in the original view, inpainting of the occluded areas is required. This is typically done using a toolbox of inpainting algorithms whose results are then touched up by hand. In our example, inpainting was done in Photoshop, using a combination of Content-Aware Fill and manual copy-pasting of patches. See Fig. 12a, b, where we show the rendering of Annunciation from a new view, including in (a) the areas originally occluded by Mary and Gabriel, and in (b) the result after inpainting these areas. In reality, as this scene contains many more 3D objects than just Mary and Gabriel, what is shown in Fig. 12a is just a sampling of the inpainting problems that need to be solved.
Steps 1, 2 and the first half of step 3 can be thought of as generating a depth map for the image. The rough geometry generated in step 1 provides the smooth component of the depth map, while the masks generated in step 2 define the depth discontinuities, which are imposed on the geometry by the “clipping” in step 3. Because the human eye is most sensitive to depth discontinuities, these have to be very accurate, whereas the 3D models do not. For example, in the conversion of Fig. 13a, the Virgin Mary is modelled using just a few simple geometric primitives, including an ellipsoid for her body, a sphere for her head, a cylindrical halo and a cone for the bottom of her dress. This is illustrated in Fig. 11a, where the geometry of the Angel Gabriel (also consisting of simple geometric primitives) is shown as well.
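The way a depth map drives the new view can be made concrete with a minimal depth-image-based-rendering sketch in Python (our own illustrative routine with made-up camera parameters, not the production pipeline of [33]): each pixel is shifted horizontally by a disparity inversely proportional to its depth, and the pixels that nothing maps to are exactly the disoccluded holes that the inpainting step must fill.

```python
import numpy as np

def render_new_view(image, depth, baseline=0.05, focal=500.0):
    # Shift each pixel horizontally by its disparity = focal * baseline / depth
    # (hypothetical camera parameters). Output pixels that nothing maps to stay
    # NaN: these are the disoccluded holes to be inpainted afterwards.
    h, w = image.shape
    out = np.full((h, w), np.nan)
    cols = np.arange(w)
    for r in range(h):
        disp = np.round(focal * baseline / depth[r]).astype(int)
        new_c = cols + disp
        # draw far pixels first so that nearer pixels (larger disparity) win
        for i in np.argsort(disp):
            if 0 <= new_c[i] < w:
                out[r, new_c[i]] = image[r, i]
    return out, np.isnan(out)
```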
Results and future work
The results of our 3D conversion of Annunciation are presented in Fig. 13, where we show the original manuscript (assumed to be the right eye view) side by side with the reconstructed left eye view. Similarly, Fig. 14 shows our results for the 3D conversion of The Scream. Please see the video files provided in Additional files 1 and 2 for animated versions of our results. The conversion of Annunciation illustrates a limitation of the pipeline we have used: it does not handle partially transparent objects properly. In this case, bits of background in the original right eye view are visible through the halos of both the Virgin Mary and the Angel Gabriel. In particular, in the original right eye view, a bit of Mary’s bed is visible through her halo. When rendered from the new left eye vantage point, we should now see the window through her halo, but instead we continue to see the bed. See Fig. 15 for a close-up of this defect. To overcome this, one could modify the pipeline in the “Overview of a 3D conversion pipeline” section to first decompose semi-transparent objects into two images (in this case, the pure halo and the background). This is something we would like to investigate in the future.
The conversion of The Scream illustrates a nuance arising in the 3D conversion of paintings, namely that paintings may not obey the laws of perspective. In this case, due to the failure of perspective, it is not possible to extrapolate the railing of the bridge into the occluded area behind the screaming figure without introducing a bend or “kink”. This is illustrated in Fig. 16, where we also show the “kink” we had to introduce into the 3D model of the bridge in order to make the 3D conversion of this painting possible.
Conclusion
An adequate mathematical analysis and processing of images arising in the arts and humanities needs to meet special requirements:

There is often particular domain expertise which any analysis should ideally make use of. For instance, when digitally restoring an image, the integration of related images, such as paintings by the same artist, could be taken into account. In what we have discussed, this concept is used to the extent that a dictionary of characteristic structures in the undamaged part of the illuminations was created and used to fill in the lost contents in the damaged regions; compare Figs. 4 and 6. This could be driven much further by expanding the dictionary with illuminations, or details of illuminations, by the same artist.

The results achieved in Figs. 8, 9 and 10 show a possible use-case for scientific imaging in art restoration and art interpretation. Indeed, we believe that integrating different types of scientific imaging, such as infrared imaging, is likely to benefit image analysis methods, which should therefore be designed to exploit such data.

Explainability of results is crucial. There is clearly a balancing act between hand-crafted analysis that captures expert knowledge and a black-box, data-driven image analysis approach. In particular, the latter should ideally have an interpretable mathematical representation that gives rise to new conclusions. In this paper we have solely considered model-based, and hence explainable, solutions to art restoration and interpretation problems. The growing emergence of deep learning solutions to various image analysis tasks provides an alternative approach to these problems, at the moment, however, without a proper explanation.

Relevant characteristics are often hidden in very fine details of the artwork, like a brushstroke in a painting. Capturing these fine details in a digital format results in high-resolution images that an image analysis method should be capable of processing. This means there is a demand for computationally efficient image analysis methods.

Digital processing and manipulation of artwork opens up a myriad of possibilities for analysing and processing, but also for experiencing, understanding and reinterpreting, artwork. As an example we have shown 3D conversion and its possible use-cases in the presentation of art, cf. Fig. 13.
With the above in mind, we have discussed a selected subset of mathematical approaches and their possible use-cases in the restoration and interpretation of illuminated manuscripts. These approaches are by no means perfect yet, and there is plenty of room for improvement; compare our discussion in the “Discussion and outlook”, “Discussion and future work” and “Results and future work” sections.
Notes
 1.
For our computations we used the built-in MATLAB Chan-Vese segmentation code.
 2.
The popular k-means algorithm is classically used to partition a set of N observations (in our case, the image pixels) into K clusters such that each observation is assigned to the cluster with the closest mean value. Each resulting cluster is labelled in order to distinguish it from the others.
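As an illustration of this footnote, a compact Python version of the procedure (Lloyd's algorithm with a simple deterministic initialisation of our own; the actual experiments use MATLAB's implementation):

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Lloyd's algorithm: alternate between assigning each observation to its
    # nearest centroid and recomputing each centroid as its cluster mean.
    # Initialise with k evenly spaced observations (a simple deterministic choice).
    centroids = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids
```

Applied to the RGB values of the image pixels, each resulting label plays the role of a colour cluster as described above.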
 3.
The code is freely available at Image Processing OnLine (IPOL) Journal: https://doi.org/10.5201/ipol.2017.189.
 4.
The code is freely available at Image Processing OnLine (IPOL) Journal: https://doi.org/10.5201/ipol.2012.gtvi
 5.
 6.
The code is freely available at IPOL: https://doi.org/10.5201/ipol.2015.136
References
 1.
Polatkan G, Jafarpour S, Brasoveanu A, Hughes SM, Daubechies I. Detection of forgery in paintings using supervised learning. ICIP. 2009. https://doi.org/10.1109/ICIP.2009.5413338.
 2.
Ruzic T, Cornelis B, Platisa L, Pizurica A, Dooms A, Philips W, Martens M, De Mey M, Daubechies I. Virtual restoration of the Ghent Altarpiece using crack detection and inpainting. ACIVS. 2011;6915:417–28. https://doi.org/10.1007/978-3-642-23687-7_38.
 3.
Anitha A, Brasoveanu A, Duarte ME, Hughes SM, Daubechies I, Dik J, Janssens K, Alfeld M. Restoration of X-ray fluorescence images of hidden paintings. Signal Process. 2013;93:592–604.
 4.
Cornelis B, Ruzic T, Gezels E, Dooms A, Pizurica A, Platisa L, Cornelis J, Martens M, De Mey M, Daubechies I. Crack detection and inpainting for virtual restoration of paintings—the case of the Ghent Altarpiece. Signal Process. 2013;93(3):605–19. https://doi.org/10.1016/j.sigpro.2012.07.022.
 5.
Pizurica A, Platisa L, Ruzic T, Cornelis B, Dooms A, Martens M, Dubois H, Devolder B, De Mey M, Daubechies I. Digital image processing of the Ghent Altarpiece: supporting the painting’s study and conservation treatment. IEEE Signal Process Mag. 2015;32(4):112–22. https://doi.org/10.1109/msp.2015.2411753.
 6.
Ružić T, Pižurica A. Context-aware patch-based image inpainting using Markov random field modeling. IEEE Trans Image Process. 2015;24(1):444–56.
 7.
Yin R, Cornelis B, Fodor G, Ocon N, Dunson DB, Daubechies I. Removing cradle artifacts in X-ray images of paintings. SIAM J Imaging Sci. 2016;9(3):1247–72.
 8.
Kirchner E, van der Lans I, Ligterink F, Hendriks E, Delaney J. Digitally reconstructing Van Gogh’s Field with Irises near Arles. Part 1: varnish. Color Res Appl. 2017;43(2):150–7. https://doi.org/10.1002/col.22162.
 9.
Kirchner E, van der Lans I, Ligterink F, Geldof M, Ness Proano Gaibor A, Hendriks E, Janssens K, Delaney J. Digitally reconstructing Van Gogh’s Field with Irises near Arles. Part 2: pigment concentration maps. Color Res Appl. 2018;43(2):158–76.
 10.
Kirchner E, van der Lans I, Ligterink F, Geldof M, Megens L, Meedendorp T, Pilz K, Hendriks E. Digitally reconstructing Van Gogh’s Field with Irises near Arles. Part 3: determining the original colors. Color Res Appl. 2018;43:311–27.
 11.
Fornasier M, Toniolo D. Fast, robust and efficient 2D pattern recognition for reassembling fragmented images. Pattern Recognit. 2005;38(11):2074–87. https://doi.org/10.1016/j.patcog.2005.03.014.
 12.
Fornasier M, March R. Restoration of color images by vector valued BV functions and variational calculus. SIAM J Appl Math. 2007;68:437–60.
 13.
Baatz, W, Fornasier M, Markowich P, Schönlieb CB. Inpainting of Ancient Austrian Frescoes. In: Proceedings of bridges, 2008. p. 150–56.
 14.
Baatz W, Fornasier M, Markowich P, Schönlieb CB. Binary Based Fresco restoration. In: Proceedings of bridges, 2009. p. 337–38.
 15.
Abry P, Wendt H, Jaffard S. When Van Gogh meets Mandelbrot: multifractal classification of painting’s texture. Signal Process. 2013;93(3):554–72. https://doi.org/10.1016/j.sigpro.2012.01.016.
 16.
Abry P, Roux SG, Wendt H, Messier P, Klein AG, Tremblay N, Borgnat P, Jaffard S, Vedel B, Coddington J. Multiscale anisotropic texture analysis and classification of photographic prints: art scholarship meets image processing algorithms. IEEE Signal Process Mag. 2015;32(4):18–27.
 17.
van Noord N, Postma E. A learned representation of artistspecific colourisation. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2017. p. 2907–15.
 18.
Van Noord N, Hendriks E, Postma E. Toward discovery of the artist’s style: learning to recognize artists by their artworks. IEEE Signal Process Mag. 2015;32(4):46–54.
 19.
Remondino F, Pelagotti A, Mastio AD, Uccheddu F. Novel data registration techniques for art diagnostics and 3D heritage visualization. 2009.
 20.
Conover DV, Delaney JK, Ricciardi P, Loew MH. Automatic controlpoint selection for image registration using disparity fitting. Med Imag. 2012. https://doi.org/10.1117/12.912471.
 21.
Davari A, Lindenberger T, Häberle A, Christlein V, Maier AK, Riess C. Image registration for the alignment of digitized historical documents. CoRR (abs/1712.04482). 2017.
 22.
Henriques F, Bailao A, Bordalo R, Le Gac A, Goncalves A, Cardeira L, Vieira E, Candeais A. Mapping lacunae for retouching paintings with computer graphics software. In: Proceedings of the 4th meeting on retouching of cultural heritage. 2017.
 23.
Geldof M, Proaño Gaibor AN, Ligterink F, Hendriks E, Kirchner E. Reconstructing Van Gogh’s palette to determine the optical characteristics of his paints. Heritage Sci. 2018;6(1):17. https://doi.org/10.1186/s40494-018-0181-6.
 24.
Bertalmio M, Sapiro G, Caselles V, Ballester C. Image inpainting. In: Proceedings of the 27th annual conference on computer graphics and interactive techniques. New York: ACM Press; 2000. p. 417–24.
 25.
Masnou S, Morel JM. Level lines based disocclusion. In: ICIP’98 international conference on image processing. IEEE Computer Society; 1998. p. 259–63.
 26.
Bornemann F, März T. Fast image inpainting based on coherence transport. J Math Imaging Vis. 2007;28(3):259–78. https://doi.org/10.1007/s10851-007-0017-6.
 27.
Burger M, He L, Schönlieb CB. Cahn–Hilliard inpainting and a generalization for grayvalue images. SIAM J Imaging Sci. 2009;2(4):1129–67. https://doi.org/10.1137/080728548.
 28.
Schönlieb CB. Partial differential equation methods for image inpainting. Cambridge Monographs on Applied and Computational Mathematics. Cambridge: Cambridge University Press; 2015. https://doi.org/10.1017/CBO9780511734304.
 29.
Criminisi A, Perez P, Toyama K. Region filling and object removal by exemplarbased image inpainting. IEEE Trans Image Process. 2004;13(9):1200–12. https://doi.org/10.1109/TIP.2004.833105.
 30.
Arias P, Facciolo G, Caselles V, Sapiro G. A variational framework for exemplar-based image inpainting. Int J Comput Vision. 2011;93(3):319–47. https://doi.org/10.1007/s11263-010-0418-7.
 31.
Weickert J, Hagenburg K, Breuß M, Vogel O. Linear osmosis models for visual computing. In: Energy minimization methods in computer vision and pattern recognition. Berlin: Springer; 2013. https://doi.org/10.1007/978-3-642-40395-8_3.
 32.
D’Autume M, Panayotova S, Ricciardi P, Schönlieb CB. Digital and mathematical reconstructions. In: Panayotova S, editor. Colour: the art and science of illuminated manuscripts. London: Springer; 2016. p. 171–4.
 33.
Hocking LR, MacKenzie R, Schönlieb CB. Guidefill: GPU accelerated, artist guided geometric inpainting for 3D conversion of film. SIAM J Imaging Sci. 2017;10(4):2049–90. https://doi.org/10.1137/16m1103737.
 34.
Chan TF, Shen J. Image processing and analysis: variational, PDE, wavelet, and stochastic methods. Philadelphia: Society for Industrial and Applied Mathematics; 2005. https://doi.org/10.1137/1.9780898717877.
 35.
Efros AA, Leung TK. Texture synthesis by non-parametric sampling. In: Proceedings of the IEEE international conference on computer vision (ICCV). IEEE; 1999. p. 1033–8.
 36.
Newson A, Almansa A, Fradet M, Gousseau Y, Pérez P. Video inpainting of complex scenes. SIAM J Imaging Sci. 2014;7(4):1993–2019. https://doi.org/10.1137/140954933.
 37.
Barnes C, Shechtman E, Finkelstein A, Goldman DB. PatchMatch: a randomized correspondence algorithm for structural image editing. ACM Trans Graph. 2009;28(3). https://doi.org/10.1145/1531326.1531330.
 38.
Shen J, Chan TF. Mathematical models for local nontexture inpaintings. SIAM J Appl Math. 2002;62(3):1019–43. https://doi.org/10.1137/S0036139900368844.
 39.
Chan TF, Vese LA. Active contours without edges. IEEE Trans Image Process. 2001;10(2):266–77. https://doi.org/10.1109/83.902291.
 40.
Finlayson GD, Drew MS, Lu C. Entropy minimization for shadow removal. Int J Comput Vision. 2009;85(1):35–57. https://doi.org/10.1007/s11263-009-0243-z.
 41.
Wyszecki G, Stiles WS. Color science: concepts and methods, quantitative data and formulae. Wiley Series in Pure and Applied Optics. New York: Wiley; 2000.
 42.
Newson A, Almansa A, Gousseau Y, Pérez P. Nonlocal patchbased image inpainting. Image Process Line. 2017;7:373–85. https://doi.org/10.5201/ipol.2017.189.
 43.
Varma M, Zisserman A. A statistical approach to texture classification from single images. Int J Comput Vision. 2005;62:61–81. https://doi.org/10.1007/s11263-005-4635-4.
 44.
d’Autume M, MeinhardtLlopis E, Morel JM. A flexible solution to the osmosis equation for seamless cloning and shadow removal. In: IEEE international conference on image processing. 2018.
 45.
Daffara C, Parisotto S, Ambrosini D. Multipurpose, dual-mode imaging in the \(3{-}5\,\mu m\) range (MWIR) for artwork diagnostics: a systematic approach. Opt Lasers Eng. 2018;104:266–73. https://doi.org/10.1016/j.optlaseng.2017.10.006.
 46.
Parisotto S, Calatroni L, Daffara C. Digital cultural heritage imaging via osmosis filtering. arXiv preprint. 2018.
 47.
Pérez P, Gangnet M, Blake A. Poisson image editing. ACM Trans Graph. 2003;22(3):313–8.
 48.
Fedorov V, Facciolo G, Arias P. Variational framework for nonlocal inpainting. Image Process Line. 2015;5:362–86. https://doi.org/10.5201/ipol.2015.136.
 49.
British Museum. Hotung Gallery. 2018. http://www.fgreatstudio.com/portfolio/britishmuseumhotung/.
 50.
Taipei Flora Expo. 2011. https://www.youtube.com/watch?v=p5mu2LuDVfI.
 51.
Jiang R, Pan Z, Liu G, Shen C. Animating and interacting with ancient Chinese painting: Qingming festival by the riverside. In: International conference on culture and computing (Culture Computing), 2011. p. 3–6.
 52.
New frontiers in art appreciation. 2016. https://www.shine.cn/archive/sunday/art/Newfrontiersinartappreciation/shdaily.shtml.
 53.
Starry Night Flythrough Video. 2016. https://www.youtube.com/watch?v=dSixcSx5h0.
Authors’ contributions
The work of LC focused in particular on the use of clustering and image inpainting techniques to produce the results in Sect. "Retrieving missing contents via image inpainting" and on the drafting and the proofreading of the whole manuscript at its initial stage and during the revision process. MA has contributed to the manuscript with her work on osmosis and the overall writing of the manuscript. RH was responsible for the portion of the paper devoted to 3D conversion of manuscripts and paintings. PR and SP selected the manuscript images and contributed data on their origin and production. SP worked on the actual implementation of the segmentation and inpainting methods in “Retrieving missing contents via image inpainting” and the writing of the manuscript. CBS was responsible for the overall content and structure of the manuscript, contributed to several technical parts of the image processing and the writing of the manuscript. All authors read and approved the final manuscript.
Acknowledgements
Luca Calatroni acknowledges the support of the Fondation Mathématique Jacques Hadamard (FMJH). Simone Parisotto acknowledges the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016516/1 for the University of Cambridge Centre for Doctoral Training, the Cambridge Centre for Analysis. Carola-Bibiane Schönlieb acknowledges support from the Engineering and Physical Sciences Research Council (EPSRC) grant EP/K009745/1, the Leverhulme Trust project ‘Breaking the non-convexity barrier’, the EPSRC grant EP/M00483X/1, the EPSRC centre EP/N014588/1, the Alan Turing Institute TU/B/000071, CHiPS and NoMADS (Horizon 2020 RISE project grants), the Isaac Newton Institute, and the Cantab Capital Institute for the Mathematics of Information.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file
40494_2018_216_MOESM1_ESM.avi
40494_2018_216_MOESM2_ESM.avi
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Keywords
 Mathematical image reconstruction
 Image inpainting
 Image osmosis
 3D visualisation