
Validation of a photogrammetric approach for the objective study of early bowed instruments

Abstract

Some early violins have been reduced during their history to fit imposed morphological standards, while more recent ones have been built directly to these standards. We propose an objective photogrammetric approach to differentiate between a reduced and an unreduced instrument, whereby a three-dimensional mesh is studied geometrically by examining 2D slices. Our contribution is twofold. First, we validate the quality of the photogrammetric mesh through a comparison with reference images obtained by medical imaging, and conclude that sub-millimetre accuracy is achieved. Then, we show how quantitative and qualitative features such as contour lines, the channel of minima and a measure of asymmetry between the upper and lower surfaces of a violin can be automatically extracted from the validated photogrammetric meshes, allowing us to highlight differences between instruments.


1 Motivation

1.1 Historical context and previous work on violin classification

The morphology of today’s violin family differs greatly from that of the instruments built between the late 16th and the mid-18th century. After 1750, in order to meet the standards imposed by famous orchestras and conservatories, many early violins (Footnote 1) were reduced. Figure 1 shows on the left a reduced violin from the first half of the 18th century and an estimation of its original dimensions [1]. We illustrate two types of reduction on the right: re-cutting the top and bottom parts of the violin body (also called the sound box), and removing a slice of wood along the axis of the instrument to reduce its width. As historical testimonies about this process are imprecise, a common issue for today’s musicologists, organologists and luthiers is to determine whether an early violin has been reduced and, if so, to quantify the alterations it has undergone. This problem has been little studied but is nevertheless important because it shapes our understanding of pre-1750 music. A detailed historical account of this issue can be found in [2]. It is therefore desirable to evaluate the violin geometry in a completely objective way, which is the problem we address.

Fig. 1

Reduced violin vs. its estimated original dimensions (left) [1]. Reduction of the height of the sound box vs. reduction of the width (right)

Our aim is to detect differences between reduced and unreduced violins on the basis of 3D models representing the instruments, since the violins themselves are the best witnesses of their morphological evolution (rather than written sources, for instance). Two particular cases of reduced violins attributed to Andrea Amati bear a painted heraldic shield. A complete study [3] of the modified coat of arms (using notably X-ray fluorescence spectroscopy and historical knowledge) made it possible to identify precisely how the instruments were reduced. Unfortunately, this approach is rather unique and cannot be generalised, as most violins are devoid of pictorial ornamentation.

To the best of our knowledge, no additional work on the quantification of the reduction of early violins has ever been carried out. We can however mention studies related to the violin, such as the 2D classification [4] describing the morphological evolution of the violin body over 400 years of history (depending on time, luthier style, geographic area, etc.). This work, performed on top-view pictures of more than 9000 instruments, aimed to isolate and study the contours of the violins. These contours were represented with elliptical Fourier descriptors and then classified using Principal Component Analysis (PCA) and linear discriminants. The study showed that violin shapes tend to cluster into four major groups based on factors such as dimensions, curvatures and bout placement. An extension [5] has been proposed to a larger and publicly available database, namely the Musical Instrument Museums Online (MIMO, Footnote 2), which offers information on numerous instruments held in public museums. From the violin images, the authors derived a set of measurements that reflect relevant geometric features of the instruments. The application of PCA uncovered similarities between violin makers and their respective copyists, as well as among luthiers belonging to the same family lineage, in the context of a historical narrative.

Other researchers have used deep learning and convolutional neural networks (CNN) for stylistic recognition of historical violins [6, 7]. The CNNs had to automatically determine whether an instrument was made by Antonio Stradivari or not (binary classification). Photos were given as input, focusing on either the violin body, the head or both. Once again, an exclusively 2D approach was implemented, whereas our goal is to consider a 3D model.

1.2 Previous work on 3D violin models

None of the aforementioned works are concerned with violin reduction. However, in contrast to the literature on instrument reduction, several studies have been performed on the reconstruction of 3D models of violins. We can first mention the use of laser scanning on violins by Antonio Stradivari and Giuseppe Guarneri ‘del Gesù’ to recreate 3D models and assess their quality by comparing numerical and true measurements [8,9,10,11]. Several research works have also been conducted with medical X-ray computed tomography (CT) scans, again with the goal of establishing accurate models and making measurements on instruments [12,13,14,15], or of performing a modal analysis on a Stradivarius instrument [16,17,18]. Even more accurate models have been obtained with high resolution industrial CT scans (\(\mu\)-CT scans) to faithfully reproduce cultural heritage objects from museums [19,20,21] or to develop isogeometric models, with an application to a vibro-acoustical simulation for a violin bridge [22, 23]. We can also mention the creation of 3D models through UV fluorescence with the use of a Kinect device [24], through neutron imaging [13] and, finally, through photogrammetry [25, 26]. Photogrammetry consists of digitally recreating a 3D object based on 2D photographs. We focus on this last technique, which is the most accessible.

The advantages of photogrammetry are that it is non-invasive to instruments and that it is a mobile technique [20]. A study in which measurements were made on a photogrammetric 3D model of a violin and then compared to a synthetic version of that violin showed that the reconstructed surface matched the model with an average error of a few hundredths of mm [25], encouraging confidence in photogrammetry. Furthermore, this technique has already made its mark in several other areas. In a medical context, it offers an alternative to scanners for patients who are too sensitive to radiation [27, 28]. This use was validated by comparison to CT scans using statistical tools such as Bland-Altman plots [29] or Student’s t-test [28]. We aim to validate it here with CT scans by means of geometric properties instead of statistical ones.

While several studies have recreated 3D models of violins to analyse their morphology (measurements of the instruments, analysis of vibro-acoustic deformations, study of wood thickness, etc.), none of them have addressed the issue of the reduction of the sound boxes through time. Furthermore, as we would also like to assess quantitatively how the instruments were reduced, accurate models are important.

In this paper, we study two instruments (Footnote 3), one of which is strongly suspected to be reduced. Our main tools are 3D photogrammetric and CT scan meshes, whose acquisition is described in Sect. "Mesh acquisition". We validate the use of the photogrammetric meshes by estimating how accurate they are with respect to CT meshes in Sect. "Mesh validation". Finally, in Sect. "Geometric analysis of the plates", we use the validated photogrammetric meshes to highlight the contour lines of the violins, their minima channel and the asymmetry between the upper and lower surfaces of their body, allowing us to illustrate differences between a reduced and an unreduced instrument.

2 Mesh acquisition

Both studied instruments will be referenced by their luthier’s name: Hofmans (Footnote 4), which is believed to be reduced, and Cuypers (Footnote 5), which is not. As the necks have been replaced over time [30], we focus exclusively on the upper and lower surfaces of their body, respectively called the ‘sound board’ (not to be confused with the sound box) and the ‘back’. In this section, we first describe the two methods with which we acquire our meshes, and then show how to isolate the sound board or back of the instruments for fair comparisons between representations.

2.1 Photogrammetric mesh

We have benefited from the valuable help of Iona Thys, photographer at the Royal Museums of Art and History (Brussels), to create the photogrammetric models. About 160 photos of each instrument were taken with a Nikon D850 camera and a 60 mm lens. The two violins were placed on an automatic turntable, rotated through \({360}^{\circ }\) and photographed every \({10}^{\circ }\) from three different perspectives (heights). Each picture contains \(8256 \times 5504\) pixels (\(\approx\) 50–60 μm per pixel) and is about 20 MB. The software that creates the 3D model must be provided with enough information, in particular overlapping pictures. The main challenges encountered during our photogrammetry campaign are related to lighting, since photogrammetry cannot reconstruct varnished, reflective or transparent surfaces. Hence, a light tent was used to provide indirect, soft lighting and avoid strong reflections due to the violin varnish, see Fig. 2 (left). Both instruments were photographed lying down on the turntable and upright on a wooden stick (again at different heights), as can be seen in Fig. 2 (centre, right).

Fig. 2

Setup and photographed violin (left: light tent, centre: laid down, right: upright)

Once all the pictures were taken, their background was eliminated with Adobe Photoshop (Footnote 6). This procedure aims to delineate each instrument with a mask, which helps the meshing process to better detect the key points of the violin and its contour. Each masked photo is also double-checked and adjusted in case the automated masking procedure has failed and retained some artefacts. The mask of each violin picture and the original images are then sent to the photogrammetry software Agisoft Metashape (Footnote 7) to create the meshes. When creating the model, Metashape takes into account the relative measurements of the violin, but the software is unable to calculate the actual dimensions of the instruments. Thanks to the RadiAnt DICOM Viewer software (Footnote 8), we can measure distances on the CT scans. By averaging a few typical distances, we scale up the photogrammetric mesh to make it correspond to the actual dimensions of the violin. We will explain in Sect. "Registration between photogrammetric and CT representations" how we have corrected this manual scaling. Eventually, the sound board and the back are separated (more on this in Sect. "Contour delineation") and the sound holes are delineated and removed manually. The resulting sound board and back meshes contain about 400k–500k vertices.
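
The scaling step just described amounts to a single multiplicative factor. A minimal sketch is given below; the function name and the arrays of reference distances are hypothetical, not part of our published pipeline.

```python
import numpy as np

def scale_to_ct(vertices, ct_distances_mm, photo_distances):
    """Scale photogrammetric vertices (N x 3) so that a few reference distances,
    measured on the CT slices (in mm) and on the unscaled photogrammetric model,
    agree on average. Both distance lists are hypothetical measurements."""
    k = np.mean(np.asarray(ct_distances_mm) / np.asarray(photo_distances))
    return k * vertices
```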

2.2 CT scan mesh

Both violins were scanned at the University Hospital Saint-Luc (UCLouvain, Brussels-Woluwe), which produced \(512 \times 512\) pixel slices with an overlap rate of \(50\%\) (around 2300 slices with 0.67 mm thickness for Hofmans and 1600 slices with 0.9 mm thickness for Cuypers). The medical images were then converted into meshes using the ITK-SNAP (Footnote 9) software, based on the contour segmentation algorithm detailed in [31]. As with the photogrammetric meshes, the sound board and the back were separated. In addition, the part of the mesh corresponding to the inner walls was removed manually. Indeed, unlike photogrammetry, which only acquires the outer surface of an object, CT scans capture the inner surfaces as well. The resulting sound board and back meshes contain about 330k–430k vertices.

The CT scans of the two instruments studied here were available thanks to the work conducted in [12]. Unfortunately, despite the good accuracy they provide, the use of medical scanners is somewhat restrictive. First, all the instruments brought from the museum to the hospital are historical artefacts which need to be insured. Then, the number of instruments that can be scanned is limited and the scanners themselves must remain available in case of medical emergency. For obvious ethical reasons, it is difficult to find a time slot for this type of research when a patient’s health may be at stake. Moreover, two technologists must be present: the first adapts the scanner settings to the density and material of the violin wood while the second handles the scanner itself. Finally, some instruments carry pathogens and cannot be scanned at all. As we plan to extend our research to a corpus of about forty more instruments, all these reasons led us to consider a simpler way to proceed and motivated us to focus on photogrammetry, which we validate against the CT scan information we already own.

2.3 Contour delineation

Before validating our photogrammetric meshes by comparing them to the CT scan meshes, we need to make sure that we are dealing with similar digital representations of the objects. We are mainly interested in the sound board and back of the violin, and have developed an automatic method to ‘delineate’ these two surfaces from a complete instrument mesh, which is not a trivial problem.

We first pre-process our mesh by manually removing the neck in the MeshLab (Footnote 10) editing software. Then, when only the body of the instrument remains, we orient it with respect to the principal axes of the frame using Principal Component Analysis (PCA). This aims to align the (approximate) plane of symmetry (more on this in Sect. "Symmetry plane between the sound board and back") between the sound board and the back with the Oxy plane (i.e. orthogonal to the z axis), and the left-right plane of symmetry with the Oxz plane (i.e. orthogonal to the y axis). Finally, we manually delimit a mesh that roughly represents the sound board or back. In addition to the sound board/back we want to delineate, this mesh contains a surface that extends over the lateral parts of the violin (which are called the ribs). We then automatically process this overhanging surface to achieve a refined contour isolation. Figure 3 illustrates the contour resulting from our algorithm (displayed with the Plotly library [32]). In purple we can see the manually delineated mesh extending over the ribs, and in blue the refined contour that delimits the sound board. As our method is the same for the sound board and the back surfaces, we will use the general term ‘plate’ for both in the following steps.

Step 1:

We use the orientation of the above-mentioned PCA to compute the ‘extreme points’ that will serve as a starting point for the contour isolation. These extreme points come from vertical cross sections of the plate taken every millimetre (orthogonal to either the x or y axis), as we can see in Fig. 4 (left). We use the Python package Meshcut (Footnote 11) to compute the planar cuts. These cuts are polylines whose vertices are the intersections between the cutting plane and the edges of the mesh. From each of these cross sections, orthogonal to the horizontal plane, we keep the extreme points, which are the two most distant points along the cut axis (we make an exception at the level of the manually removed neck to keep four points instead of two).
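
A minimal sketch of this step is shown below, assuming the plate is already PCA-aligned and that meshcut exposes a cross_section(verts, faces, plane_origin, plane_normal) call; the function name and parameters are ours, and the exception made at the neck is omitted.

```python
import numpy as np
import meshcut

def extreme_points(verts, faces, cut_axis=0, step=1.0):
    """Step 1 sketch: cut the plate every `step` mm orthogonally to `cut_axis`
    and keep, on each cut, the two most distant points along the other axis."""
    other = 1 - cut_axis
    points = []
    for c in np.arange(verts[:, cut_axis].min(), verts[:, cut_axis].max(), step):
        plane_orig = np.zeros(3)
        plane_orig[cut_axis] = c
        plane_normal = np.zeros(3)
        plane_normal[cut_axis] = 1.0
        polylines = meshcut.cross_section(verts, faces, plane_orig, plane_normal)
        if not polylines:
            continue
        cut = np.vstack(polylines)               # vertices of the planar cut
        points.append(cut[np.argmin(cut[:, other])])
        points.append(cut[np.argmax(cut[:, other])])
    return np.array(points)
```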

Step 2:

The extreme points derived in Step 1 are located on the edges of the original mesh but are not necessarily mesh vertices. Since we want to delineate the sound board by connecting only actual vertices of the initial mesh, we map each extreme point onto its nearest neighbour (NN) among the mesh vertices. Figure 4 (right) shows these nearest neighbours. They are computed efficiently with the Fast Library for Approximate Nearest Neighbours (FLANN) [33].

Step 3:

We then consider the manually delineated mesh as a graph and gather all nearest neighbours into a single closed loop. We first reorder them with a Travelling Salesman Problem (TSP) solver (Footnote 12) and then link them with a shortest path algorithm (Footnote 13), using the Euclidean distance as the distance metric between vertices. If two consecutive nearest neighbours (according to the TSP order) are not adjacent on the graph of the manually delineated mesh, we insert all intermediate vertices between them in the contour in order to obtain a connected loop that isolates the plate. The added vertices can be seen in Fig. 4 (right).
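
A possible sketch of this step with networkx is given below, assuming the manually delineated mesh is available as a weighted graph G (edge weights being Euclidean edge lengths); note that networkx's TSP approximation works on shortest-path distances and returns the tour expanded in G, so intermediate vertices between non-adjacent neighbours are inserted automatically.

```python
from networkx.algorithms import approximation as approx

def closed_contour(G, nn_vertices):
    """Step 3 sketch: order the nearest-neighbour vertices and link them into a
    single closed loop on the mesh graph G (edge weight = Euclidean length)."""
    return approx.traveling_salesman_problem(G, nodes=list(nn_vertices),
                                             weight='weight', cycle=True)
```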

Step 4:

The final step consists in removing the vertices and faces lying ‘outside’ of the closed loop contour determined in Step 3, in order to keep only the ‘inner mesh’. Once the contour has been calculated, we remove all its vertices and edges from the whole graph, and keep the largest connected component, which corresponds to the ‘inner mesh’ of the plate, and finally add back the closed loop contour.
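
Step 4 translates directly into a graph operation; a short sketch with networkx (variable and function names are ours):

```python
import networkx as nx

def inner_plate_vertices(G, contour):
    """Step 4 sketch: drop the contour from the mesh graph, keep the largest
    remaining connected component (the 'inner mesh') and add the contour back."""
    H = G.copy()
    H.remove_nodes_from(contour)
    inner = max(nx.connected_components(H), key=len)
    return inner | set(contour)
```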

Fig. 3

Top: isolated sound board contour (blue) and manually delineated mesh extending over the ribs of the violin (purple). Bottom: zoom on the contour at the level of the ribs

Fig. 4

Contour delineation process. Left: Extreme points computed on a wide grid (Step 1). Right: points and shortest path on the mesh. Red: extreme points (Step 1), filled blue: nearest neighbours (Step 2), empty blue: added intermediate vertices (Step 3), dashed gray: shortest paths between two consecutive nearest neighbours (Step 3)

3 Mesh validation

Here we validate the quality of our photogrammetric meshes by comparing them to the CT scan meshes, and more particularly by comparing their vertices. First, we select a specific metric to register the two point clouds, based on the average distance between corresponding vertices, and show that we obtain a sub-millimetre precision. Then, we compare this metric with other registration techniques present in the literature. Afterwards, we comment on and interpret the errors that arise from these metrics and, finally, we show that we can simplify the mesh to speed up computations without losing too much accuracy.

3.1 Registration between photogrammetric and CT representations

We now have well-defined plates and proceed to quantify the similarity between a photogrammetric and a CT scan mesh of the same instrument. To do so, we compare the corresponding point clouds of the plates, whose contours were isolated following the procedure detailed in Sect. "Contour delineation". Those representations contain 330k to 480k vertices. We consider the CT mesh to be a priori the most accurate, and therefore use it as the reference mesh. The classical Hausdorff distance between two sets does not fit our purpose, as it does not quantify the overall similarity but only focuses on the worst-case distance between corresponding vertices. Instead, following [34], we introduce a specific distance metric between two meshes, denoted s and p, based only on the positions of their vertices:

$$\begin{aligned} D(s,p) = \frac{1}{N_s}\sum _{i=1}^{N_s}\left\| {\varvec{s_i}} - {\varvec{p_{nn(i)}}}\right\| \end{aligned}$$
(D)

where

$$\begin{aligned} {\varvec{p_{nn(i)}}} = {\arg \min }_{{\varvec{p_j}} \in p} \left\| {\varvec{s_i}} - {\varvec{p_j}} \right\| . \end{aligned}$$
(NN)

This metric D is the average Euclidean distance between each vertex \({\varvec{s_{i}}} \in {\mathbb {R}}^3\) of the CT cloud s (which contains \(N_s\) points) and its nearest neighbour \({\varvec{p_{nn(i)}}} \in {\mathbb {R}}^3\) within the photogrammetric cloud p (which contains \(N_p\) points). However, as the two point clouds are not aligned, we first need to identify the optimal translation, rotation and scaling factor that produce the minimum average distance as defined in D. We therefore optimise the three parameters \({\varvec{X}}\in {\mathbb {R}}^3\), \({\varvec{\theta }}\in {\mathbb {R}}^3\) and \(K \in {\mathbb {R}}\) (seven variables in total) describing respectively the translation, rotation and scaling (Footnote 14) that the photogrammetric point cloud has to undergo in order to best match the CT point cloud, as in Fig. 5 (left), and we solve:

$$\begin{aligned} \min _{({\varvec{X}},{\varvec{R_{\theta }}},K)} D\left( s,{\hat{p}}\left( {\varvec{X}},{\varvec{R_{\theta }}},K\right) \right) \end{aligned}$$
(MinD)

where \({\hat{p}}\) denotes the p cloud after the transformation has been applied, meaning that for each vertex \({\varvec{p_{j}}} \in {\mathbb {R}}^3\) of the photogrammetric cloud p a rigid body transformation (RBT) is applied:

$$\begin{aligned} {\varvec{{\hat{p}}_j}} = K \left( {\varvec{R_\theta }}{\varvec{p_j}} + {\varvec{X}} \right) \end{aligned}$$
(RBT)

with \({\varvec{R_\theta }}\) the rotation operator for a rotation sequence \(\theta _1 \rightarrow \theta _2 \rightarrow \theta _3\)

$$\begin{aligned} \hspace{-.15cm} {\varvec{R_\theta }} = \left( \begin{array}{ccc} \cos \theta _3 &{} \sin \theta _3 &{} 0 \\ -\sin \theta _3 &{} \cos \theta _3 &{} 0 \\ 0 &{} 0 &{} 1 \end{array}\right) \left( \begin{array}{ccc} \cos \theta _2 &{} 0 &{} -\sin \theta _2 \\ 0 &{} 1 &{} 0 \\ \sin \theta _2 &{} 0 &{} \cos \theta _2 \end{array}\right) \left( \begin{array}{ccc} 1 &{} 0 &{} 0 \\ 0 &{} \cos \theta _1 &{} \sin \theta _1 \\ 0 &{} -\sin \theta _1 &{} \cos \theta _1 \end{array}\right) . \end{aligned}$$

In simpler terms, MinD is the minimum distance between the CT scan mesh and the photogrammetric mesh, which has undergone a rigid body transformation to best match the position, orientation and size of the CT scan mesh. In NN, we still efficiently compute the nearest neighbour of each of the \(N_s\) vertices \({\varvec{s_{i}}}\) among the \(N_p\) transformed vertices \({\varvec{{\hat{p}}_{j}}}\) with FLANN. The minimisation MinD is performed with the Powell method [35] and its Python implementation scipy.optimize.fmin_powell (Footnote 15). The total computation time is about one hour on a standard laptop. Both clouds were first oriented using the principal axes from Principal Component Analysis (PCA), while their relative positions and scaling were adjusted manually (see Sects. "Photogrammetric mesh" and "CT scan mesh" for the scaling). Results of the matching problem are presented in Table 1.
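
For concreteness, a condensed Python sketch of the minimisation MinD is given below. It uses scipy's KD-tree as a stand-in for FLANN and rebuilds the tree at every evaluation, which is what makes the full optimisation slow; variable names and the initial guess are illustrative only.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.optimize import fmin_powell

def rotation(theta):
    """Rotation operator R_theta for the sequence theta1 -> theta2 -> theta3."""
    t1, t2, t3 = theta
    rx = np.array([[1, 0, 0], [0, np.cos(t1), np.sin(t1)], [0, -np.sin(t1), np.cos(t1)]])
    ry = np.array([[np.cos(t2), 0, -np.sin(t2)], [0, 1, 0], [np.sin(t2), 0, np.cos(t2)]])
    rz = np.array([[np.cos(t3), np.sin(t3), 0], [-np.sin(t3), np.cos(t3), 0], [0, 0, 1]])
    return rz @ ry @ rx

def D(params, s, p):
    """Average distance from each CT vertex to its nearest transformed photogrammetric vertex."""
    X, theta, K = params[:3], params[3:6], params[6]
    p_hat = K * (p @ rotation(theta).T + X)        # rigid body transformation (RBT)
    dists, _ = cKDTree(p_hat).query(s, k=1)        # nearest neighbours (NN)
    return dists.mean()

# s, p: (N, 3) arrays of CT and photogrammetric vertices, PCA-aligned and pre-scaled
# x0 = np.concatenate([np.zeros(3), np.zeros(3), [1.0]])   # no shift, no rotation, K = 1
# opt_params = fmin_powell(D, x0, args=(s, p), ftol=1e-5)  # Powell minimisation of D
```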

Fig. 5

Matching problem (left) and average distance D when varying a single angle \(\theta _2\) before optimisation (right)

Table 1 Average distance [mm] between the CT and photogrammetric sound board clouds, optimal angles \([^{\circ }]\) and scaling factor K [/]

We achieve an average sub-millimetre accuracy between both representations. Angles in the optimal alignment range from \({0.024}^{\circ }\) to \({0.794}^{\circ }\), indicating that the initial orientation obtained from PCA was relatively accurate, especially since the distance D is very sensitive to these angles. We see in Fig. 5 (right) that when we vary a single angle (here \(\theta _2\), before optimisation), this distance increases.

3.2 Alternative registrations

An alternative to our approach is to directly use a point cloud registration algorithm, such as the Iterative Closest Point (ICP) [36, 37]. This algorithm has the advantage of being faster, but as it only optimises the alignment over a subsample of the points rather than the whole cloud, it could be less accurate than our method described in Sect. "Registration between photogrammetric and CT representations". We compared the accuracy obtained with our method to that of the SimpleICP implementation (Footnote 16) proposed in [38]. Rather than using a classical point-to-point distance [37] between corresponding vertices, this ICP algorithm uses a point-to-plane distance [36], whose convergence has proven to be faster [39]. The squared point-to-plane distance between two meshes is:

$$\begin{aligned} D^2_{plane}(s,p) = \frac{1}{N_s}\sum _{i=1}^{N_s} \left| ({\varvec{s_i}}-{\varvec{p_{nn(i)}}})^T \cdot {\varvec{n_i}} \right| ^2\quad \left({D^2_{plane}}\right) \end{aligned}$$

with \({\varvec{p_{nn(i)}}}\) the nearest neighbour of each vertex \({\varvec{s_i}}\) as defined in NN and \({\varvec{n_i}}\) the normal vector at each vertex \({\varvec{s_i}}\) of the CT scan mesh. The normal vector at each vertex can be estimated using a principal component analysis of the covariance matrix of the coordinates of neighbouring points [38, 40]. Figure 6 illustrates the difference between the two error metrics. Incidentally, it shows that the point-to-plane metric is never larger than the point-to-point metric.
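
As a sketch, the PCA-based normal estimate and the resulting point-to-plane distance can be written as follows (a plain numpy/scipy version, assuming the clouds are already registered; the neighbourhood size k is an arbitrary choice of ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Normal at each point: eigenvector of the smallest eigenvalue of the
    covariance matrix of its k nearest neighbours."""
    _, idx = cKDTree(points).query(points, k=k)
    normals = np.empty_like(points)
    for i, neighbours in enumerate(idx):
        centred = points[neighbours] - points[neighbours].mean(axis=0)
        _, eigvec = np.linalg.eigh(centred.T @ centred)   # ascending eigenvalues
        normals[i] = eigvec[:, 0]
    return normals

def point_to_plane_squared(s, p, normals_s):
    """Squared point-to-plane distance D^2_plane between the CT cloud s and the cloud p."""
    _, nn = cKDTree(p).query(s, k=1)
    residuals = np.einsum('ij,ij->i', s - p[nn], normals_s)
    return np.mean(residuals ** 2)
```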

Fig. 6

Comparison between point-to-point and point-to-plane approaches

Once again, we want to minimise this distance and therefore optimise the seven parameters of the rigid body transformation that the photogrammetric point cloud has to undergo to best match the CT point cloud. We then solve:

$$\begin{aligned} \min _{({\varvec{X}},{\varvec{R_{\theta }}},K)} D^2_{plane}\left( s,{\hat{p}}\left( {\varvec{X}},{\varvec{R_{\theta }}},K\right) \right) \quad {\left({MinD^2_{plane}}\right)} \end{aligned}$$

with \({\hat{p}}({\varvec{X}}, {\varvec{R_{\theta }}}, K)\) the photogrammetric mesh after a rigid body transformation RBT.

We notice that there is no scaling factor K in the SimpleICP implementation, and thus tried a second ICP run after applying a fixed external scaling factor (obtained with our point-to-plane implementation). In addition, as the point-to-plane approach minimises an average of squared distances, we also tried minimising a squared version of the point-to-point metric D, which is nothing more than minimising the Mean Square Error (MSE) of the matching:

$$\begin{aligned} D^2(s,p) = \frac{1}{N_s}\sum _{i=1}^{N_s}\left\| {\varvec{s_i}} - {\varvec{p_{nn(i)}}}\right\| ^2 \quad\quad {\left(D^2 \right)}\end{aligned}$$

and

$$\begin{aligned} \min _{({\varvec{X}},{\varvec{R_{\theta }}},K)} D^2\left( s,{\hat{p}}\left( {\varvec{X}},{\varvec{R_{\theta }}},K\right) \right) \quad\quad {\left( MinD^2 \right)},\end{aligned}$$

with \({\hat{p}}({\varvec{X}}, {\varvec{R_{\theta }}}, K)\) the photogrammetric mesh after a rigid body transformation RBT.

The results of optimising all the objective functions described above are shown for both instruments in Tables 2 and 3. The left-hand column shows which metric was used to calculate the optimal parameters, and each row displays the value of the three considered metrics \(\left( D, D^2 \text { and } D^2_{plane} \right)\) for each set of optimised parameters. Note that a square root is applied to the \(D^2\) and \(D^2_{plane}\) metrics to obtain a distance in mm, so that they become more comparable to the D metric. However, all three metrics express slightly different measures of similarity between meshes, and none of them can be considered a priori superior to the others. The main point of the comparison we make here is to show that they all behave similarly and lead to comparable final measures of accuracy.

Table 2 Optimal distances [mm] for Hofmans’ instrument
Table 3 Optimal distances [mm] for Cuypers’ instrument

The first three rows in Tables 2 and 3 show that each metric is, as expected, smallest under its own optimisation criterion. Interestingly, the four methods considered with scaling provide almost identical results, e.g. optimising the point-to-plane distance almost provides the optimal result for the point-to-point distance, and vice versa. The optimised angles and scaling (not shown here) vary very little from one metric to another. As the metrics provide similar results in the final point cloud registration, we chose to continue working with the point-to-point distance D that was introduced first.

Then, we observe that all our errors are sub-millimetre, which again confirms that our photogrammetric approach is accurate. Furthermore, we see that the point-to-point distance D (arithmetic mean of the errors) is smaller than its quadratic counterpart \(\sqrt{D^2}\) (quadratic mean of the errors, also called RMSE). This makes sense because optimising the (R)MSE is more sensitive to outliers than optimising the arithmetic mean of the errors, and the Arithmetic Mean (AM)–Quadratic Mean (QM) inequality guarantees that \(AM \le QM\).

Next, we focus on the point-to-plane distance \(\sqrt{D^2_{plane}}\) (expressed in mm). We first see that, as expected, it is smaller than the point-to-point RMSE \(\sqrt{D^2}\), also in mm. We compare our approach, which allows scaling, to SimpleICP, which does not, and immediately see that this K factor greatly improves the results. Nevertheless, when we provide an external scaling factor to SimpleICP, the two results become quite comparable. SimpleICP has the advantage of being faster than our method because it is based on a sample of points and not the whole cloud. On the other hand, it does not allow scaling, which was necessary in our case. Moreover, this algorithm is based on ‘artificial’ normals \({\varvec{n_i}}\), which correspond to information initially absent from our mesh. In any case, we have performed an exhaustive validation.

Are the above metrics the most appropriate for interpreting the results? We have used two types of approaches: point-to-point, which matches a point with a point of the other cloud, and point-to-plane, which matches a point with an infinite plane of the other cloud. A proposal for a third metric would be to compute the point-to-face distance that matches a point with a face of the other mesh, namely a finite planar section (i.e. a polygon). It is easy to see that the values of this metric always lie between the other two, and it would probably provide a more intuitive definition of the distance between two meshes. Moreover, it would explicitly rely on the use of faces, which are explicit elements of the mesh but are not used in the point-to-point metric. However, projecting a large number of points onto a corresponding face appears to be too computationally expensive for this purpose.

3.3 Error assessment and validation

The average error between the CT and photogrammetric point clouds lies in the sub-millimetre range for both instruments, which is rather small. The distribution of the point-to-point distances from the vertices of the CT mesh to the nearest photogrammetric vertices can be observed using heat maps and histograms in Fig. 7 (left: Hofmans, right: Cuypers), showing very good agreement throughout the plates, and very few distances larger than 2 mm (respectively \(0.10 \%\) and \(0.19 \%\) for the Hofmans and Cuypers instruments). We conclude from the small average errors in Table 1 and from our histograms and heat maps in Fig. 7 that our photogrammetric approach is validated with respect to medical scans. More comparisons to strengthen our validation can be found in Appendix A.

Fig. 7

Distribution of point-to-point distances [mm] from CT point cloud to the nearest neighbour in photogrammetric cloud (left: Hofmans, right: Cuypers)

3.4 Simplification

The meshes validated in Sects. "Registration between photogrammetric and CT representations", "Alternative registrations" and "Error assessment and validation" are very dense and therefore will slow down all calculations we perform on them. Before turning to the geometric analysis of the instruments in Sect. "Geometric analysis of the plates", we study a simplification procedure that would offer a trade-off between computational speed and accuracy.

MeshLab includes a simplification process based on the Quadric Edge Collapse Decimation algorithm [41]. Very generally, the algorithm iteratively calculates the contraction of vertex pairs which cause the least possible error (with a quadric error metric), contracts the minimum cost pair and repeats. The procedure ends when a prespecified number of faces is reached. Concretely, for two given vertices \(v_1\) and \(v_2\) on the same edge to be contracted, the new vertex \({\bar{v}}\) lies somewhere on that edge connecting \(v_1\) and \(v_2\). Thus, a simplified mesh no longer shares exactly the same vertices as the original mesh.
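
We ran the decimation through MeshLab's interface, but the same kind of simplification can be scripted; the sketch below uses Open3D's quadric decimation as a stand-in, and the file names are hypothetical.

```python
import open3d as o3d

# Hypothetical file names; Open3D's quadric decimation plays the role of
# MeshLab's Quadric Edge Collapse Decimation in this sketch.
mesh = o3d.io.read_triangle_mesh("hofmans_soundboard_photogrammetry.ply")
for target_faces in (500_000, 200_000, 100_000):
    simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target_faces)
    o3d.io.write_triangle_mesh(f"hofmans_soundboard_{target_faces}_faces.ply", simplified)
```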

Fig. 8 shows the increment in absolute error for the point-to-point (left) and point-to-plane (right) metrics when comparing the CT scan mesh and several versions of the simplified photogrammetric meshes (in terms of number of faces). The error between the CT mesh and the original full photogrammetric mesh is considered to be the reference, i.e. it corresponds to an increment of 0 mm (also displayed as a horizontal dashed line).

Fig. 8

Increment of the absolute error with respect to the original sound board meshes for point-to-point (left) and point-to-plane metrics (right)

Noticeably, the relationship between the error increment and the number of faces is not linear. It seems that a simplification to 500k faces (about 250k vertices) leads to barely any increase in error, while further decreasing the number of faces results in a more noticeable increase. The fact that the point-to-point error is relatively stable when the number of faces is 500k or greater could also be a consequence of the definition of that metric, which includes some irreducible error due to the mismatch between vertices even when taken from the same surface (see the end of Appendix A for a discussion).

Observe that these point-to-point and point-to-plane error metrics (see Sects. "Registration between photogrammetric and CT representations" and "Alternative registrations") are sensitive to the exact positions of the vertices on the surface, which are repeatedly modified by the Quadric Edge Collapse Decimation algorithm, while the general shape of the surface can be relatively unaffected. Therefore we consider below another metric to assess the impact of the simplification process.

We propose to sample the meshes on a regular horizontal grid (in the sense of the PCA plane of the CT violin), with nodes equally spaced every 1 mm × 1 mm. Thus, for each node on the grid, we draw a vertical normal, identify the face through which this normal intersects the mesh and calculate its \(z-\)coordinate based on the 3-dimensional plane equation of the intersected face. We compared the vertical differences between the Cuypers CT scan sound board and the original photogrammetric mesh of the sound board (970k faces, 490k vertices), and then its simplifications to 250k, 100k and 50k vertices (about two, five and ten times less dense than the original photogrammetric mesh, respectively). We used the parameters of the rigid body transformation found in Sect. "Registration between photogrammetric and CT representations" (see Table 1) to make the four photogrammetric sound boards match the reference CT sound board before calculating the vertical differences, which are summarised in Table 4. As we see almost no difference between the CT scan and the photogrammetric simplifications, we decided to compare these simplifications to each other (see Table 5). Note that for this second comparison, the vertical distances were calculated in the PCA plane computed on the photogrammetric violins, and not from the CT scan. We see that, in contrast to the errors observed for the point-to-point criterion, the simplification leads to an extremely small vertical error. This made us reconsider the observable kink in Fig. 8 at the level of 500k faces. It may be due to the enlargement of the mesh faces (triangles) that occurs when simplifying the photogrammetric meshes, rather than actually indicating a poor quality mesh. Considering this new metric, which is certainly more relevant for studying the impact of the simplification, we ultimately decided to select photogrammetric meshes of 100k vertices to analyse the geometry of the instruments in Sect. "Geometric analysis of the plates". This number of vertices seems to be a good trade-off between computational speed (about five times faster than the full photogrammetric mesh) and accuracy.
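
The grid sampling can be sketched with trimesh's ray casting, as below; the vertical ray plays the role of the 'vertical normal' of the text and the intersected face provides the z-coordinate. The 1 mm step comes from the text, everything else is illustrative.

```python
import numpy as np
import trimesh

def sample_heights(mesh, step=1.0):
    """Sample the z-coordinate of a plate on a regular 1 mm x 1 mm horizontal grid
    by casting vertical rays from above; NaN where the ray misses the surface."""
    (xmin, ymin, _), (xmax, ymax, zmax) = mesh.bounds
    xs = np.arange(xmin, xmax, step)
    ys = np.arange(ymin, ymax, step)
    X, Y = np.meshgrid(xs, ys)
    origins = np.column_stack([X.ravel(), Y.ravel(), np.full(X.size, zmax + 1.0)])
    directions = np.tile([0.0, 0.0, -1.0], (X.size, 1))
    hits, ray_idx, _ = mesh.ray.intersects_location(origins, directions,
                                                    multiple_hits=False)
    Z = np.full(X.size, np.nan)
    Z[ray_idx] = hits[:, 2]
    return X, Y, Z.reshape(X.shape)

# e.g. X, Y, Z_ct = sample_heights(trimesh.load('cuypers_ct_soundboard.ply'))
#      vertical differences are then Z_ct - Z_photo on the same grid
```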

Table 4 Vertical distances [mm] between the CT mesh and various simplified photogrammetric meshes of the Cuypers sound board on a regular grid (linear interpolation). The original photogrammetric sound board without simplification contains about 490k vertices
Table 5 Vertical distances [mm] between the original photogrammetric mesh (about 490k vertices) and three simplified photogrammetric meshes of the Cuypers sound board on a regular grid (linear interpolation)

4 Geometric analysis of the plates

In this section we highlight several characteristics that can help distinguish a reduced instrument from an unreduced one, namely the contour lines, the asymmetry between the back and the sound board and the minima channel. Our results are based on the geometric analysis of the photogrammetric meshes obtained and validated in Sect. "Mesh validation". A preliminary report on this analysis can be found in [42]. In order to calculate the three aforementioned characteristics, we first need to compute the plane of symmetry of the violin between the back and the sound board, which will serve as a reference for the computations.

4.1 Symmetry plane between the sound board and back

We explained in Sect. "Contour delineation" that we oriented the body of the violin with a PCA before delineating the contours of the sound board and back. However, as the point cloud of the body contains the ribs and some artefacts, the plane of the PCA does not exactly match what can be considered the natural horizontal plane of symmetry between the sound board and back, although they are close. We therefore propose a way to correct the orientation of this symmetry plane, which is crucial because we will use it to calculate the contour lines, quantify the asymmetry and identify the channel of minima. This reorientation does not put us at odds with the validation performed in Sect. "Mesh validation" as we are only applying a rotation operator to our mesh. Thus, the optimised angles will differ slightly from those in Table 1 but the overall result leads to the same average distance D. Furthermore, as the orientation of this symmetry plane is very close to that of the plane originally identified by the PCA, we consider that the contour isolation proposed in Sect. "Contour delineation" (which depended on the PCA orientation) is still coherent and entirely suitable for our analysis.

We must however clearly mention that the ‘plane of symmetry’ of a violin is a misnomer. There is no exact planar symmetry between the sound board and the back. First, the ribs are generally smaller near the ends of the sound board and back than on the rest of the body. Also, wood ages and warps over time. What we define here as a plane of symmetry is the closest notion to an ideal symmetry and best conceptualises something that does not actually exist.

We identify this plane of symmetry using the individual orientations of both the sound board and the back, by calculating the best plane that passes through each of the two surfaces and ‘averaging’ them, namely defining the plane of symmetry as the plane bisecting the dihedral angle between the planes of the sound board and the back. We then rotate the meshes to make this average plane of symmetry parallel to the horizontal plane \(\Pi \equiv z=0\) and we finally adjust its offset (see later in this section).

We compute the two planes best approximating the sound board and the back with an orthogonal regression, which does not favour any direction (and removes any influence from the initial axes computed by PCA). We actually consider three (Footnote 17) options for this procedure before averaging those planes. The orthogonal regressions are therefore performed:

  • on all the vertices from the plates, considered as two independent meshes (‘Two meshes’).

  • on the vertices of the contour of the plates (as computed in Sect. "Contour delineation") (‘Two contours’).

  • on the vertices of the contour of the plates (as computed in Sect. "Contour delineation"), with the raised part of the sound board removed manually (‘Two contours (manual)’). Indeed, because of our delineation process, the contour of the sound board contains a raised part that may bias the regression (see Fig. 9).

For each of these three configurations, we calculated the angle between the average plane of symmetry (before rotation) and the horizontal plane \(\Pi \equiv z=0\) for Hofmans’ instrument. The results are given in Table 6. All three angles are similar and indicate that a realignment of the violins was necessary. Because the values are so close, we finally retained the configuration that made the most sense to us, namely the third option ‘Two contours (manual)’. Indeed, the wooden board used by the luthiers to build the sound board and the back is flat on its inner side. Thus, the manually corrected contours best characterise what we mean by plane of symmetry and ‘horizontality’. Moreover, the angle provided by this approach is almost the average of the other two. We also mention that the values in Table 6 are nearly identical when computed with a linear regression on the z-values (least squares) rather than an orthogonal regression.
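
A compact sketch of the plane fitting and averaging used here (orthogonal regression via SVD, then the bisecting plane) is given below; the input arrays for the two corrected contours are hypothetical.

```python
import numpy as np

def fit_plane(points):
    """Orthogonal regression: returns the centroid and the unit normal of the
    best-fitting plane (direction of least variance of the point cloud)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1]

def symmetry_plane(contour_soundboard, contour_back):
    """'Average' plane bisecting the dihedral angle between the two fitted planes."""
    c_sb, n_sb = fit_plane(contour_soundboard)
    c_b, n_b = fit_plane(contour_back)
    if np.dot(n_sb, n_b) < 0:                 # make both normals point to the same side
        n_b = -n_b
    n_sym = (n_sb + n_b) / np.linalg.norm(n_sb + n_b)
    return 0.5 * (c_sb + c_b), n_sym
```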

Fig. 9

Contour of the sound board (green) with highlight on the raised part to be manually removed (red)

Table 6 Orthogonal regression on three configurations and angle between the average symmetry plane (before rotation) and the horizontal plane for Hofmans’ instrument

We now apply a rotation so that the average plane of symmetry is horizontal. Then, we adjust its offset. To do so, we compute the \(z-\)values of the sound board and the back on a horizontal regular grid with nodes equally spaced every 1 mm × 1 mm. As in Sect. "Simplification", we draw a vertical normal for each node i of the grid, and identify the points through which this normal intersects the surfaces. We denote these intersection points \(sb_i\) for the sound board and \(b_i\) for the back respectively. We then compute, for each node i of the grid, the mean point \(z_i = \frac{sb_i+b_i}{2}\) located at equal distance from the point \(sb_i\) of the sound board and its corresponding point \(b_i\) on the back. If one of the two points \(sb_i\) or \(b_i\) is not defined at a node i of the grid, we do not calculate \(z_i\) at this node (this happens for example at the sound holes, which are empty on the sound board but not on the back). We then compute the offset of the horizontal plane by averaging all midpoints, \(z_{sym} = {\bar{z}} = \frac{1}{N_g} \sum _{i=1}^{N_g} z_i\), where \(N_g\) is the total number of valid nodes on the grid, i.e. those for which \(z_i\) is defined. Finally, now that the offset is calculated, we translate the meshes along the z-axis by \(z_{sym}\) so that the symmetry plane matches the plane \(\Pi \equiv z=0\). The shifted sound board and back points then become \(sb_{i,shift} = sb_i - {\bar{z}}\) and \(b_{i,shift} = b_i - {\bar{z}}\). Figure 10 shows a 2D example of the calculation of the offset of the plane of symmetry \(z_{sym}\) before the shift.
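
Once the two plates have been sampled on the common grid (cf. the ray-casting sketch in Sect. "Simplification"), the offset and the shifted surfaces reduce to a few lines; Z_sb and Z_b are hypothetical gridded heights, with NaN where a surface is undefined.

```python
import numpy as np

def symmetry_offset(Z_sb, Z_b):
    """Offset z_sym as the mean of the midpoints between facing samples of the
    sound board and the back; returns the offset, the shifted surfaces and the midpoints."""
    z_mid = 0.5 * (Z_sb + Z_b)            # NaN wherever one of the two is undefined
    z_sym = np.nanmean(z_mid)             # average over the valid grid nodes only
    # the asymmetry of Eq. (A) below is then simply a = 2 * (z_mid - z_sym)
    return z_sym, Z_sb - z_sym, Z_b - z_sym, z_mid
```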

Fig. 10

Computation of the offset of the plane of symmetry

4.2 Contour lines

We compute horizontal sections of the four surfaces (two sound boards and two backs) every 2 mm, based on the symmetry plane defined in Sect. "Symmetry plane between the sound board and back". The four sets of contour lines are represented according to the same relative convention: the level closest to the plane of symmetry is in dark blue, and the altitude range extends up to 24 mm from this closest level. The sound board is to be seen as a ‘hill’ while the back is to be seen as a ‘valley’. In addition, positive contour lines (sound board) are represented with continuous lines and negative lines (back) are dashed. Figure 11 shows, especially in the zoomed area (red frame, refinement every mm), that the contour lines are rounder on the unreduced Cuypers and sharper on the Hofmans. We suppose that this sharpness is due to a slice of wood removed along the main axis of the violin (width reduction), as illustrated in Fig. 1 (right). A similar behaviour is also observed for the back of both instruments in Fig. 12. Finally, it is worth noting that the contour lines at the bottom of the Hofmans back are almost perpendicular to the main axis of the instrument.
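
Given the gridded heights relative to the symmetry plane, the contour lines can be drawn with matplotlib; a minimal sketch, where X, Y and Z_sb are the hypothetical grid arrays of a sound board after the shift of the previous section:

```python
import numpy as np
import matplotlib.pyplot as plt

# Levels every 2 mm, starting from the level closest to the symmetry plane.
levels = np.nanmin(Z_sb) + np.arange(0.0, 26.0, 2.0)
contours = plt.contour(X, Y, Z_sb, levels=levels, cmap='viridis')
plt.gca().set_aspect('equal')
plt.clabel(contours, fmt='%.0f')
plt.show()
```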

Fig. 11

Contour lines of the Hofmans (left) and Cuypers (right) sound boards [mm]

Fig. 12

Contour lines of the Hofmans (left) and Cuypers (right) backs [mm]

4.3 Asymmetry between sound board and back

Interestingly, when a violin is reduced, the sound board and back do not necessarily follow the same reduction pattern. Hence we are interested in studying the asymmetry between the two surfaces facing each other. To do so, we compute the vertical differences between the sound board and the back on a horizontal regular grid with nodes equally spaced every 1 mm × 1 mm. We reuse the values \(sb_{i,shift}\) and \(b_{i,shift}\) (or equivalently \(sb_i\), \(b_i\), \(z_i\) and \({\bar{z}}\)) from Sect. "Symmetry plane between the sound board and back" and we calculate the asymmetry \(a_i\) at any point on the grid as the difference in the distances of the sound board and the back from the horizontal plane, which is a signed quantity:

$$\begin{aligned} \begin{aligned} a_i&= sb_{i,shift} - |b_{i,shift}| \\&= (sb_i - {\bar{z}}) - |b_i-{\bar{z}}| \\&= 2(z_i-{\bar{z}}) \end{aligned} \end{aligned}$$
(A)

assuming that \(sb_{i,shift} > 0\) and \(b_{i,shift} < 0\) (or equivalently, \(sb_i > {\bar{z}}\) and \(b_i < {\bar{z}}\)). The asymmetry \(a_i\) is not defined at nodes i for which either \(sb_i\) or \(b_i\) is not defined.

Fig. 13 shows the topography of the vertical distances and their (absolute) distribution. A positive value means that the sound board is further from the plane of symmetry than the back and a negative value means that the back is further from the plane of symmetry than the sound board. We also provide histograms of the distribution of the absolute values of all distances for each instrument.

Fig. 13

Heat map of the asymmetry between the sound board and the back (top) and distribution of the absolute values of the vertical distances (bottom) for the Hofmans (left) and Cuypers (right). For the heat maps, a positive value means that the sound board is further from the plane of symmetry than the back and a negative value means that the back is further from the plane of symmetry than the sound board

We immediately see that the difference between the sound board and the back of the Hofmans instrument is much more pronounced than that of the Cuypers. The distances go up to almost 7 mm for the reduced violin while they stop at 2.5 mm for the unreduced one.

Analysing two instruments is not sufficient to draw general conclusions, but we believe that this technique provides interesting and relevant insights about the presence of a reduced violin.

4.4 Channel of minima

The sound board and the back of a violin feature a ‘channel of minima’ running close to their outer contour. To identify this channel, we first interpolate the vertices of the contour of the sound board or the back (obtained with the procedure in Sect. "Contour delineation" and reoriented in Sect. "Symmetry plane between the sound board and back") using cubic splines. Then, we compute a large number of cross-sections through the mesh (separately for the sound board and back). These sections are orthogonal to the symmetry plane from Sect. "Symmetry plane between the sound board and back" and chosen to be perpendicular to the tangents of the sound board contour (computed from the cubic spline interpolation), as shown in orange in Fig. 14 (left). In each cross-section, minima are identified as the points with the lowest \(z-\)height among those close to the tangent point. They can be seen in green in Fig. 14 (right). The channel of minima of the backs shows a similar behaviour.
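
A rough Python sketch of this procedure is shown below: a closed cubic spline is fitted to the contour, tangents are obtained from its derivative, and in each cross-section strip the lowest point is retained. The strip width and sampling density are arbitrary choices of ours, not the parameters actually used.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def channel_of_minima(contour_xy, plate_vertices, half_width=15.0, n_samples=400):
    """Fit a closed cubic spline to the plate contour, then keep, near each contour
    sample, the lowest vertex lying in a thin strip perpendicular to the tangent."""
    tck, _ = splprep([contour_xy[:, 0], contour_xy[:, 1]], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    cx, cy = splev(u, tck)                       # points on the spline
    dx, dy = splev(u, tck, der=1)                # tangent directions
    minima = []
    for px, py, tx, ty in zip(cx, cy, dx, dy):
        t = np.array([tx, ty]) / np.hypot(tx, ty)
        rel = plate_vertices[:, :2] - np.array([px, py])
        # cross-section strip: close to the plane through (px, py) normal to t,
        # and not too far from the contour point itself
        in_strip = (np.abs(rel @ t) < 0.5) & (np.linalg.norm(rel, axis=1) < half_width)
        if in_strip.any():
            section = plate_vertices[in_strip]
            minima.append(section[np.argmin(section[:, 2])])   # lowest z in the section
    return np.array(minima)
```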

Fig. 14

Top view (left) and cross-section (right) of the Hofmans sound board. The interpolation of the contour with cubic splines is in brown in the left figure

The ‘raw’ channel of minima is shown in Fig. 15 (top), exhibiting clear differences between the instruments. Indeed, we see that, in a reduced violin, the distance from the channel to the contour tends to decrease in some areas close to the top and the bottom of the sound board. Note however that the apparent recess in the channel at the bottom of the Hofmans sound board is due to the lower nut (the small ebony rim over which the strings pass), and not to the actual channel, which has disappeared at this point (see [2] for a detailed explanation). The spline approximation of the channel displayed in Fig. 15 (bottom) shows a more realistic trace. We finally mention that a similar behaviour is also observable for the back surfaces.

Fig. 15

Channel of minima for the Hofmans (left) and Cuypers (right) sound boards. Raw data (top) and spline approximation (bottom)

5 Conclusion and future work

We proposed a geometric approach for the objective study of early violins. This research was motivated by the fact that historical testimonies about the reduction of violin bodies through time are imprecise and surprisingly neglected in musicological literature. However, in order to understand the morphology of the violin family in the Baroque period, it is essential to bear this parameter in mind, given the scarcity of instruments preserved in their original state.

We based our geometric approach on photogrammetric meshes, validated with sub-millimetre accuracy by comparison to reference CT scans. The accuracy of photogrammetry is similar to that obtained by [25] for violin reconstruction, but it has been validated here against a physical representation of the instrument rather than a synthetic version of it. Our accuracy is also similar to that obtained in [28, 29], which also compared photogrammetry to CT scans (of human skulls), but we validate it here with geometric rather than statistical tools. The main steps in our mesh validation process include the delineation of the plates, a careful registration of the photogrammetric and CT scan meshes with appropriate metrics, and a study of the simplification of the meshes.

After confirming the validity of our photogrammetric approach, we compute morphological characteristics such as contour lines, asymmetry and channel of minima which permit the objective study of instruments. Musicians, luthiers and music lovers rarely realise how much the 17th and 18th century violins, violas and cellos that circulate on the art market today are altered. Often these instruments are unreliable witnesses to the era in which they were made, despite the aura that usually surrounds them.

The three aforementioned features allow us to characterise whether or not instruments were reduced. To the best of our knowledge, this type of objective approach, based on the geometric analysis of three-dimensional meshes, has not been considered yet in the literature. As the comparison was made for only two instruments, it would be somewhat risky to attempt to generalise our results immediately to all violins, whether or not they were reduced. However, the conclusions are encouraging. In the future, we plan to apply our techniques to a larger collection of approximately forty instruments including violins, violas and cellos. We hope that this corpus will allow us to detect automatically features of reduced instruments, using clustering or classification techniques applied to appropriate mathematical representations of the surface of the sound boards and backs.

Despite the fact that we start from three-dimensional data (meshes and point clouds), some parts of our geometric analysis rely on two-dimensional techniques (contour isolation, use of cross-sections). Developing an exclusively three-dimensional processing pipeline is left for future research; it would allow us to consider other features such as the location of inflection points and the true shape of the minima channel as a three-dimensional curve. Our ultimate goal would be to predict what the original dimensions of the reduced instruments were, by quantifying the removed crescent of wood at the top and bottom of the sound box and/or the slice of wood along the axis, and comparing them with unreduced violins.

Availability of data and materials

The data and materials supporting the conclusions of this article are available from the corresponding author on reasonable request.

Notes

  1. In this article, the term ‘violin’ will be used generically to refer to all instruments in the family. Prior to 1750, their sizes and names were not standardised, making it impossible to differentiate between violins in the proper sense, violas and cellos as we know them today. The procedure we have developed is applicable to all sizes of instruments.

  2. https://mimo-international.com/MIMO/.

  3. Today, both instruments are referred to as violas. However, the original name of the reduced instrument is uncertain. More on this can be found in [2, pp. 109–112 and pp. 125–126].

  4. Hofmans Matthys IV, inv. no 2846, Antwerp, before 1679 (Musical Instruments Museum Brussels).

  5. Cuypers Johannes Th., inv. no 2833, The Hague, 1761 (Musical Instruments Museum Brussels).

  6. https://www.photoshop.com/en.

  7. https://www.agisoft.com.

  8. https://www.radiantviewer.com/.

  9. http://www.itksnap.org/pmwiki/pmwiki.php.

  10. https://www.meshlab.net/.

  11. https://github.com/julienr/meshcut.

  12. https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.approximation.traveling_salesman.traveling_salesman_problem.html.

  13. https://networkx.org/documentation/stable/reference/algorithms/shortest_paths.html.

  14. As the scaling has been previously performed manually (see Sect. "Photogrammetric mesh"), we insert this factor K to correct the potential error induced by this operation.

  15. We set an objective function tolerance of \(10^{-5}\) as a convergence criterion.

  16. https://github.com/pglira/simpleICP.

  17. We also tried to apply an orthogonal regression on the whole cloud (both the sound board and the back as a single cloud), but it did not lead to convincing results. Indeed, because the two surfaces contain different number of vertices, the regression is biased towards the larger point cloud.

References

  1. Moens K. Les voix médianes dans l’orchestre français sous le règne de Louis XIV : les instruments conservés comme source d’information. In: Duron J, Gétreau F, editors. L’orchestre à cordes sous Louis XIV : instruments, répertoires, singularités. 2015. pp. 119–38.

  2. Ceulemans A-E, Beghin P, Fisette P, Glineur F, Thys I. Baroque violas with reduced soundboxes: an evaluation method. Galpin Soc J. 2023;76(LXXVI):109–26.

  3. Radepont M, Échard J-P, Ockermüller M, de la Codre H, Belhadj O. Revealing lost 16th-century royal emblems on two Andrea Amati’s violins using XRF scanning. Herit Sci. 2020;8(1):1–12.

  4. Chitwood DH. Imitation, genetic lineages, and time influenced the morphological evolution of the violin. PLoS ONE. 2014;9(10): 109229.

  5. Peron T, Rodrigues FA, Costa LDF. Pattern recognition approach to violin shapes of MIMO database. arXiv:1808.02848 [Preprint] 2018. http://arxiv.org/abs/1808.02848. Accessed 8 Aug 2022.

  6. Dondi P, Lombardi L, Porta M, Rovetta T, Invernizzi C, Malagodi M. What do luthiers look at? An eye tracking study on the identification of meaningful areas in historical violins. Multimed Tools Appl. 2019;78(14):19115–39.

  7. Dondi P, Lombardi L, Malagodi M, Licchelli M. Stylistic classification of historical violins: a deep learning approach. In: International Conference on Pattern Recognition. Springer; 2021. pp. 112–125.

  8. Dondi P, Lombardi L, Malagodi M, Licchelli M. Measuring Stradivari ‘Cremonese’ (1715) by 3D modeling. In: IMEKO International Conference on Metrology for Archeology and Cultural Heritage. MetroArcheo; 2016. pp. 29–33.

  9. Dondi P, Lombardi L, Malagodi M, Licchelli M. 3D modelling and measurements of historical violins. Acta IMEKO. 2017;6(3):29–34.

  10. Fioravanti M, Goli G, Carlson B. Structural assessment and measurement of the elastic deformation of historical violins: the case study of the Guarneri ‘del Gesù’ violin (1743) known as the ‘Cannone’. J Cult Herit. 2012;13(2):145–53.

  11. Fiocco G, Gonzalez S, Invernizzi C, Rovetta T, Albano M, Dondi P, Licchelli M, Antonacci F, Malagodi M. Compositional and morphological comparison among three coeval violins made by Giuseppe Guarneri “del Gesù” in 1734. Coatings. 2021;11(8):884.

  12. Lothaire R. Characterization of violins: a digital tool at the service of organology. Master’s thesis, Ecole polytechnique de Louvain, UCLouvain. 2019.

  13. Kirsch S, Mannes D. X-ray CT and neutron imaging for musical instruments: a comparative study. Science. 2015;55:188–96.

  14. Stanciu MD, Mihălcică M, Dinulică F, Nauncef AM, Purdoiu R, Lăcătuş R, Gliga GV. X-ray imaging and computed tomography for the identification of geometry and construction elements in the structure of old violins. Materials. 2021;14(20):5926.

  15. Frohlich B, Sturm G, Hinton J, Frohlich E. The secrets of the Stradivari string instruments. A non-destructive study of music instruments from the Smithsonian Institution, the Library of Congress, and private collections. A pilot study of seven violins made by Antonio Stradivari in Cremona, Italy, between 1677 and 1709. Ann Arbor, MI, USA, Washington, DC; March 2009.

  16. Pyrkosz M, Van Karsen C, Bissinger G. Converting CT scans of a Stradivari violin to a FEM. In: Structural Dynamics, Volume 3: Proceedings of the 28th IMAC, A Conference on Structural Dynamics. Springer; 2010. pp. 811–20.

  17. Pyrkosz M, Van Karsen C. Comparative modal tests of a violin. Exp Tech. 2013;37(4):47–62.

  18. Pyrkosz MA, Van Karsen C. Coupled vibro-acoustic model of the Titian Stradivari violin. In: Topics in Modal Analysis I, Volume 7: Proceedings of the 32nd IMAC, A Conference and Exposition on Structural Dynamics. Springer; 2014. pp. 317–32.

  19. Plath N, Kirsch S. Post-processing of musical instrument 3D-computed tomography data for conservational applications. In: Proc. WoodMusICK FP 1302 Cost Action Conf. 2017.

  20. Plath N. 3D imaging of musical instruments: methods and applications. Comput Phonogr Arch. 2019;321–34.

  21. Fuchs T, Wagner R, Kretzer C, Scholz G, Bär F, Kirsch S, Wolters-Rosbach M, Fischeidl K. MUSICES - musical instrument computed tomography examination standard: results of the measurements and guidelines derived therefrom. Gothenburg, Sweden; 2018.

  22. Marschke S, Ring W, Wohlmuth B. Modeling, identification, and optimization of violin bridges. PAMM. 2018;18(1):201800405.

  23. Marschke S, Wunderlich L, Ring W, Achterhold K, Pfeiffer F. An approach to construct a three-dimensional isogeometric model from \(\mu\)-CT scan data with an application to the bridge of a violin. Comput Aided Geom Des. 2020;78: 101815.

  24. Dondi P, Lombardi L, Rocca I, Malagodi M, Licchelli M. Multimodal workflow for the creation of interactive presentations of 360 spin images of historical violins. Multimed Tools Appl. 2018;77(21):28309–32.

  25. Pinto L, Roncella R, Forlani G. Photogrammetric survey of ancient musical instruments. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, ISPRS Congress, Vol. XXXVII, Part B5. 2008. pp. 309–14.

  26. Motte L. Conception d’un outil mathématique/informatique pour la classification d’instruments à archet. Master’s thesis, Ecole polytechnique de Louvain, UCLouvain. 2017.

  27. Liu C, Artopoulos A. Validation of a low-cost portable 3-dimensional face scanner. Imaging Sci Dent. 2019;49(1):35–43.

  28. Ho OA, Saber N, Stephens D, Clausen A, Drake J, Forrest C, Phillips J. Comparing the use of 3D photogrammetry and computed tomography in assessing the severity of single-suture nonsyndromic craniosynostosis. Plast Surg. 2017;25(2):78–83.

  29. Donato L, Cecchi R, Goldoni M, Ubelaker DH. Photogrammetry vs CT scan: Evaluation of accuracy of a low-cost three-dimensional acquisition method for forensic facial approximation. J Forensic Sci. 2020;65(4):1260–5.

  30. Stowell R. Violin technique and performance practice in the late eighteenth and early nineteenth centuries. Cambridge: Cambridge University Press; 1990.

  31. Yushkevich PA, Piven J, Cody Hazlett H, Gimpel Smith R, Ho S, Gee JC, Gerig G. User-guided 3D active contour segmentation of anatomical structures: Significantly improved efficiency and reliability. Neuroimage. 2006;31(3):1116–28.

  32. Plotly Technologies Inc. Collaborative data science. https://plot.ly

  33. Muja M, Lowe DG. Fast approximate nearest neighbors with automatic algorithm configuration. In: VISAPP (1). 2009. pp. 331–40.

  34. Tang J, Wu G, Xu B, Gong Z. Fast mesh similarity measuring based on CUDA. In: 2010 IEEE International Conference on Progress in Informatics and Computing. 2010. vol. 2, pp. 911–15. https://doi.org/10.1109/PIC.2010.5687883

  35. Powell MJ. An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput J. 1964;7(2):155–62.

  36. Chen Y, Medioni G. Object modelling by registration of multiple range images. Image Vis Comput. 1992;10(3):145–55.

  37. Besl PJ, McKay ND. Method for registration of 3-D shapes. In: Schenker PS, editor. Sensor fusion IV: control paradigms and data structures, vol. 1611. Bellingham: SPIE; 1992. pp. 586–606.

  38. Glira P, Pfeifer N, Briese C, Ressl C. A correspondence framework for ALS strip adjustments based on variants of the ICP algorithm. Photogramm Fernerkund Geoinform. 2015;2015(4):275–89.

  39. Rusinkiewicz S, Levoy M. Efficient variants of the ICP algorithm. In: Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling. 2001. pp. 145–52. https://doi.org/10.1109/IM.2001.924423

  40. Shakarji CM. Least-squares fitting algorithms of the NIST algorithm testing system. J Res Natl Inst Stand Technol. 1998;103(6):633.

  41. Garland M, Heckbert P.S. Surface simplification using quadric error metrics. In: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, 1997. pp. 209–16.

  42. Beghin P. A digital tool at the service of organology: validation of a photogrammetric approach. Master’s thesis, Ecole polytechnique de Louvain, UCLouvain. 2021.

Acknowledgements

The authors thank Iona Thys (Royal Museums of Art and History, Brussels) for her photogrammetric work, and Alain Vlassenbroek, Emmanuel Coche, Etienne Danse and the University Hospital Saint-Luc (UCLouvain, Brussels-Woluwe) for their help with violin CT scans. We also thank Jean-Philippe Echard (Musée de la Musique, Paris) for the interesting discussions we shared.

Funding

This work was supported in part by the Fonds de la Recherche Scientifique - FNRS and the Fonds Wetenschappelijk Onderzoek - Vlaanderen under EOS Project 30468160.

Author information

Contributions

AEC and PF proposed the study of bowed instruments through data acquisition methods. AEC is responsible for the historical motivation and context. FG coordinated the computational aspects. PB performed all computations and wrote the manuscript. All authors took part in the review and revisions of the article, and approved the final manuscript.

Corresponding author

Correspondence to Philémon Beghin.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A

Comment on the error assessment and validation

In Sect. "Error assessment and validation" we compared the CT and photogrammetric point clouds of two instruments, obtained mean errors of 0.301 mm and 0.215 mm (point-to-point metric) and observed that the heat maps showed an excellent agreement between the two representations. As an additional experiment, and to better interpret these average errors, we performed two comparisons between crossed instruments, i.e. CT Cuypers vs. photogrammetric Hofmans and CT Hofmans vs. photogrammetric Cuypers.

As the two instruments being compared have different sizes, we did not include the scaling factor K in the matching problem (see MinD and RBT): it would not make sense to favour the results with an artificial transformation. However, for a fair comparison between the CT and photogrammetric representations, we applied the scaling of Table 1, so that the size of each photogrammetric instrument still corresponds to its size in the CT representation. Specifically, the photogrammetric Hofmans and Cuypers were scaled by factors of 1.024 and 1.029, respectively.

Comparing two different violins does not make much sense in the study of musical instruments and should reveal a poor correspondence. The average point-to-point distances for the two crossed comparisons are nevertheless 1.062 mm and 1.290 mm, which may seem surprisingly small given that the two instruments are different and one of them has been reduced. Two observations explain this. First, the heat maps and histograms in Fig. 16 clearly indicate a much poorer match than in Fig. 7. Second, even if the photogrammetric mesh described the sound board perfectly, its vertices cannot be expected to lie exactly at the same locations as the vertices of the CT mesh, as illustrated in Fig. 17 (left). The average edge lengths of our CT meshes are 0.59 mm (Hofmans) and 0.51 mm (Cuypers). Assuming that the average distance between two independent meshes of the same object cannot be significantly smaller than about one third of that average edge length (\(\approx\) 0.20 mm, see Fig. 17, right), the D metric computed earlier between matched meshes was likely overestimating the actual error. We therefore conclude that the matches of both the Hofmans and the Cuypers instruments in Sect. "Error assessment and validation" were excellent, and that the average error between mismatched instruments reported in this appendix is indeed significantly larger than the average error between meshes of the same instrument.
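For reference, a point-to-point comparison of this kind can be reproduced along the following lines. This is a minimal sketch only: the use of scipy's cKDTree for nearest-neighbour queries, the variable names and the way the scaling factor is applied are illustrative assumptions, and the two clouds are assumed to be already registered (e.g. with ICP); it does not reproduce the exact pipeline of the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_point_to_point_distance(ct_points, photo_points, scale=1.0):
    """Mean distance [mm] from each CT vertex to its nearest neighbour in the
    photogrammetric cloud, after rescaling the latter about its centroid."""
    centroid = photo_points.mean(axis=0)
    photo_scaled = centroid + scale * (photo_points - centroid)
    tree = cKDTree(photo_scaled)                # fast nearest-neighbour search
    distances, _ = tree.query(ct_points, k=1)   # one nearest neighbour per CT vertex
    return distances.mean(), distances          # mean error + data for the histogram

# Illustrative usage with random stand-in arrays (replace with real, registered vertices):
rng = np.random.default_rng(1)
ct_cloud = rng.uniform(0, 350, (10000, 3))      # CT mesh vertices [mm]
photo_cloud = rng.uniform(0, 350, (12000, 3))   # photogrammetric vertices [mm]
mean_d, all_d = mean_point_to_point_distance(ct_cloud, photo_cloud, scale=1.024)
print(f"mean point-to-point distance: {mean_d:.3f} mm")
```

Scaling about the centroid keeps the two clouds aligned while correcting their relative size; the returned distances can then be binned into histograms such as those of Figs. 7 and 16.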

Fig. 16
figure 16

Distribution of point-to-point distances [mm] from the CT point cloud to the nearest neighbour in the photogrammetric cloud (left: CT Cuypers vs. photogrammetric Hofmans, right: CT Hofmans vs. photogrammetric Cuypers)

Fig. 17
figure 17

Example of two acquisition techniques that both detect points belonging to exactly the same surface but sample it at different locations (left). Representation of the four comparison errors (CT scan vs. photogrammetry) with respect to an unreachable accuracy of about 0.2 mm (right)

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Beghin, P., Ceulemans, AE., Fisette, P. et al. Validation of a photogrammetric approach for the objective study of early bowed instruments. Herit Sci 11, 170 (2023). https://doi.org/10.1186/s40494-023-00979-4

Keywords