
Enhancing traditional museum fruition: current state and emerging tendencies

Abstract

Galleries, libraries, archives, and museums are nowadays striving to implement innovative approaches to adequately use and distribute the wealth of knowledge found in cultural heritage. A range of technologies can be used to enhance the viewing experience for visitors and to boost the expertise of museologists, art historians, scholars, and audience members. The present work aims to provide an overview of current methods and of the most pertinent studies addressing the use of innovative technologies for enhancing the fruition of artefacts in traditional museums, in an effort to improve the public experience and education. For all the technologies discussed, the paper focuses on the main results obtained in the literature and on their possible implementation in the museum context. The overview demonstrates the liveliness of the world of research in the field of technologies for the digital development of museums, and how many technologies commonly used in industry are increasingly finding their way into the cultural sphere.

Introduction

According to [1], the Global Museum Market is experiencing worldwide growth and is estimated to be valued at $9.3 billion in 2024, after several years of declining revenues and reduced visitor numbers.

In fact, the COVID-19 pandemic and the subsequent lockdowns had a significant impact on museums, since visitors stopped interacting directly with exhibitions, and only in the last two years has the market returned to a positive progression. In this period of economic uncertainty, museums faced the challenge of maintaining a connection with their audiences by looking more keenly at new ways to leverage their art collections with the help of technology [2]. Therefore, they actively expanded their digital offerings, introducing new formats such as podcasts, digital guided visits, and online mediation. These initiatives were aimed at bridging the physical distance and enabling museums to stay in touch with the public.

More generally, the key to museums’ resilience and growth lay in their capacity to invest in innovation that, combined with creativity, can offer visitors a compelling experience. Accordingly, museums supported new Information and Communication Technology (ICT) tools to improve access to existing and new digital content, through meaningful narratives and personalised, story-led interpretation. In the post-pandemic phase, these efforts started to produce positive effects on the museum economy, drawing visitors back to the museum and boosting their engagement and interest [1].

According to [2], ICT tools were, and still are, crucial for creating solutions that museums can adopt to enhance the visiting experience. As stated in [3], such tools allow museums to: (1) establish a multimodal interactive environment for the active adoption and research of museum values; (2) establish a new standard of cooperation and interaction among museum educators and improve their professional development; (3) form visual culture as the fundamental component in the development of individual creativity.

Therefore, museums “not only represent existing exhibitions, or repositories of collections or archives” [3]; they also serve as a framework for narrative and educational tools that, using modern communication technologies, explain the context of the displayed objects or collections and interact with the user through personalization based on their experience, intentions, and level of understanding. This encompasses the implementation of methodologies “to combine historical and cultural information about an artwork with other features like its physical properties (for example, 3D shapes, visible colour), hidden data revealed by Infrared (IR) and X-ray imaging, augmented or virtual itineraries, and storytelling by implementing technology-based tools and software platforms” [4].

Accordingly, it is not surprising that several studies are currently being conducted in the scientific literature to enhance the experience provided by traditional museums by leveraging a number of technologies, mostly based on ICT. Such an assertion can be demonstrated by analysing the trend of research in this important field, as explained below.

On November 14, 2023, the authors of this paper ran the following query on the SCOPUS database, using the keywords (and synonyms) “ICT”, “museum(s)”, “digital museum(s)”, “enhancement”, “technology”, “3D”, and “digital transition”. The query is as follows:

Query: “(ALL (ICT) AND ALL (museums) AND ALL (enhancement) OR ALL (technologies) OR ALL (3d) OR ALL (encounter) OR ALL (digital AND transition))”

As shown in Fig. 1, a total of 4857 works were published on these subjects. The trend indicates an exponential rise beginning in 2008, with a small reduction of works in 2018 (statistics for 2023 are not yet consolidated, as the query was run in November of that year).

Fig. 1  Results of the database search for the query: “(ALL (ICT) AND ALL (museums) AND ALL (enhancement) OR ALL (technologies) OR ALL (3d) OR ALL (encounter) OR ALL (digital AND transition))”

Most of the research is conducted either in accordance with the European Commission’s directives or within the framework of the Horizon 2020 programme. The United Kingdom, the United States, and Italy are the most productive nations, although the grouping of European nations dominates in this area. Computer Science and Social Science are the major disciplines involved.

To be included in the state-of-the-art review, studies had to meet all the following criteria:

  1.

    The study must be empirical, which implies that empirical data is gathered and analysed. Consequently, theoretical studies, meta-analyses, secondary data analyses, and simulated results (from simulation models) were not accepted.

  2.

    Studies conducted in classrooms, simulated outdoor learning environments, or cultural heritage sites were excluded, since the study environment had to be a museum or science centre.

  3.

    The full text of the literature must be available.

A further refinement of the search is performed by using the “refine research” tab in Scopus. In detail, the refinement is carried out by searching the most common keywords declared in the overall set of 4857 papers. Since multiple papers can share two or more keywords, it is not useful to show the percentage of papers dealing with a particular topic. However, it is possible to retrieve the top terms adopted by authors in the fields of interest.

One of them, for instance, is the set of terms related to 3D technologies (i.e., 3D imaging, 3D scanning, 3D printing). Using such keywords, 1262 papers emerge from the search, as depicted in Fig. 2. Again, the three major players are the UK, the USA, and Italy, albeit in a slightly different order.

Fig. 2  Trend of the research dealing with 3D imaging technologies and Reverse Engineering for enhancing fruition of artworks in traditional museums

Taking for granted that the terms “ICT”, “Cultural Heritage (CH)” and “Museums” are of general interest, the main keywords can be grouped together into a set of macro-areas of interest, as depicted in Table 1. It must be noted that the number of occurrences of the keywords sums up to a number greater than the overall number of papers retrieved using the SCOPUS database, since the same keyword can be repeated in several papers. Moreover, common terms such as “paintings”, “sculptures”, “algorithms”, “education” and “teaching” are excluded from the classification task.

Table 1 Macro-areas grouping together main keywords related to the main technologies for enhancing the fruition of artefacts in traditional museums

Considering the aforementioned classification, the present paper aims to provide an overview of current methods and of the most pertinent studies addressing the use of the innovative technologies defined in the macro-areas of Table 1 for enhancing the fruition of artworks in traditional museums, in an effort to improve the public experience and education. For all the technologies discussed, the paper focuses on the main results obtained in the literature and on their possible implementation in the context of museums.

Furthermore, this work aims to outline how the state of the art is changing as technology advances and to offer considerations on future directions for expanding the possibilities of museums.

The overview demonstrates the liveliness of the world of research in the field of technologies for the digital development of museums and how many technologies commonly used in industry are increasingly finding their way into the cultural sphere. From an examination of the technologies proposed in this work, a clear picture emerges of systematic innovation, often supported by European bodies and organisations, which may hopefully in the future lead to the realisation of virtuous circles of basic and applied research dedicated to cultural heritage. Finally, this work aims to stimulate researchers in the field to exchange ideas, methodologies, technologies, and good practices for the development of systems that improve the user experience when approaching museums.

The paper is structured as follows: the "State of the art" section provides the current state of the art of the main technologies implemented for enhancing the fruition of traditional museums. Such an analysis focuses on the most relevant technologies spread all over the world and puts an emphasis on the rising topic of Artificial Intelligence (AI) for Cultural Heritage. Section 3 provides an insightful view of future trends for these technologies and how they will probably impact museums in the next few years.

State of the art

As previously stated, the present paper seeks to give readers a close-up view of the following methodologies and tools currently employed to enhance traditional museum activities: 2D and 3D imaging technologies, 3D models made from 2D artworks, annotated 3D models to support metadata, watermarking, virtual/augmented reality methodologies, gamification methodologies, AI for Cultural Heritage, and storytelling methodologies. For each of these strategies, the essential literature studies are presented. The overview also provides several suggestions for how these technologies may advance in the near future.

2D Imaging technologies

High-resolution images are necessary to communicate information about virtual exhibitions in virtual museums. As is common knowledge, the resolution of digital photos affects how detailed they are. Because of their dependence on available bandwidth, high-resolution photographs have historically been challenging to store and transmit over networks. For this reason, image servers adopted an imaging architecture that could offer the user scalability and options for interactivity by storing many image resolutions in a single file [5]. Even though internet connections are now faster and more dependable, this architecture is still in use. The Google Art Project, which was created in partnership with some of the most prominent art institutions in the world to allow anyone to discover and examine works of art online in detail, is one of the most pertinent examples of this type of architecture. Using gigapixel technology, more than 45,000 works of art have been scanned in collaboration with more than 250 institutions [6]. For art historians and curators, gigapixel photography enables precise documentation and examination of works of art. Additionally, because this form of image may be used to construct virtual representations, anybody with an Internet connection can see the artwork. Visitors can fully immerse themselves in the artwork and perceive many subtleties that would otherwise escape the naked eye. Outside Google, very few organizations specialize in acquiring gigapixel images of artworks; this is mainly due to the technological difficulty and to the need for specialized equipment. According to [7], “some examples include the French state organization Centre de Recherche et de Restauration des Musées de France (C2RMF) [7], the Italian firm Haltadefinizione [8] and the Spanish Madpixel [9]”.
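To make the idea of multi-resolution image serving concrete, the sketch below builds a simple tile pyramid from a high-resolution scan, in the spirit of the single-file, multi-resolution architectures mentioned above. The tile size, number of levels, and file layout are illustrative assumptions, not the format used by any specific image server.

```python
from pathlib import Path
from PIL import Image

Image.MAX_IMAGE_PIXELS = None  # allow opening very large (gigapixel-class) scans

def build_tile_pyramid(src_path, out_dir, tile=256, levels=5):
    """Write a simple multi-resolution tile pyramid: level 0 is the full-resolution
    scan, and each subsequent level halves the image size (illustrative layout)."""
    img = Image.open(src_path).convert("RGB")
    for level in range(levels):
        level_dir = Path(out_dir) / f"level_{level}"
        level_dir.mkdir(parents=True, exist_ok=True)
        w, h = img.size
        for y in range(0, h, tile):
            for x in range(0, w, tile):
                box = (x, y, min(x + tile, w), min(y + tile, h))
                img.crop(box).save(level_dir / f"{x}_{y}.jpg", quality=90)
        # halve the resolution for the next, coarser pyramid level
        img = img.resize((max(1, w // 2), max(1, h // 2)), Image.LANCZOS)

# build_tile_pyramid("painting_gigapixel.tif", "tiles")  # hypothetical file names
```

A viewer then only requests the tiles covering the current viewport at the current zoom level, which is what allows interactive exploration of gigapixel images over a normal connection.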

Apart from Google's experience, several studies in the literature report on procedures for acquiring high-resolution images of paintings in museums. For monitoring, electronic recording, and display purposes, many operators employ high-resolution orthophotos of paintings. Common, undistorted digital pictures are considered more than sufficient for creating digital collections in museums, but specialized applications have more demanding resolution and dimension requirements [10]. These two factors are crucial, especially when it comes to painting restoration and monitoring purposes [11]. The focal point of the typical approach is a master orthophoto that serves as the primary reference for the collection of subsequent multiple photos. Its objective is to create a picture with gigapixel resolution that is also quantifiable (a measurement made in the image equals the equivalent measurement on the painting). These pictures are enlarged to closely match the image of the orthophoto master, like tiles in a mosaic.

The outcome is a photograph that is extremely detailed, reveals the finer aspects of the artwork, and accurately captures the painting's actual size. As a result, the success of the entire technique depends on how accurate the orthophoto is. The orthophoto is created starting from an undistorted image of the painting using metric references (points or planes), often produced by exploiting perspective characteristics [12, 13]. The so-called “cross-ratio” method is one of the most often used techniques for extracting metric data from images [14, 15]. This approach determines the plane on which the subject (in this example, the painting) lies by using a sequence of proportions (the cross-ratio, see Eq. 1) between known points placed along convergent lines (with at least three points on each line).

$$\frac{\overline{AC} \cdot \overline{BD}}{\overline{BC} \cdot \overline{AD}} = \frac{\overline{A'C'} \cdot \overline{B'D'}}{\overline{B'C'} \cdot \overline{A'D'}}$$
(1)

Consider the four points A, B, C, and D in Eq. 1 as being aligned, with D being the point on the painting surface. On the equation's left side, where the coordinates are defined in terms of the physical world, distances are expressed in metres; on the right side, where the coordinates are defined in the image reference system, they are expressed in pixels. It is possible to locate A', B', C', and D' by using a purposely constructed tool consisting of a table, on which a series of convergent lines with matching points are drawn, and a laser line projector. In this tool, the laser line projector is positioned so that the projected laser plane is perpendicular to the table. The tool is then placed between the camera and the artwork so that it is entirely framed in the picture. The intersection between the “table” and the painting's surface is highlighted by the laser line. This makes it feasible to apply the cross-ratio and proceed with the retrieval of the points and of the plane, since each convergent line of the support and the painting surface can be readily located in the image. Table 2 shows a summary of the most relevant technologies related to 2D acquisition.
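As a worked example of Eq. 1, the following sketch solves for the unknown physical distance AD once the three reference distances on the tool and the four pixel positions measured along the same image line are known. The point names, sample distances, and pixel values are illustrative assumptions.

```python
def solve_ad(ab, ac, bc, a_px, b_px, c_px, d_px):
    """Solve Eq. 1 for the physical distance AD, given the physical distances
    AB, AC, BC on the tool (metres) and the pixel coordinates of A', B', C', D'
    measured along the same line in the image."""
    # cross-ratio computed on the image side (pixel distances)
    k = (abs(c_px - a_px) * abs(d_px - b_px)) / (abs(c_px - b_px) * abs(d_px - a_px))
    # Eq. 1 with BD = AD - AB gives AC*(AD - AB) = k*BC*AD  ->  AD = AC*AB / (AC - k*BC)
    return ac * ab / (ac - k * bc)

# hypothetical sample values: A, B, C at 0 m, 0.10 m, 0.25 m along the line,
# and the corresponding pixel positions of A', B', C', D' on the image line
print(solve_ad(ab=0.10, ac=0.25, bc=0.15,
               a_px=120.0, b_px=310.0, c_px=590.0, d_px=905.0))  # ~0.42 m
```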

Table 2 Tools, challenges, and advantages related to the use of 2D imaging

The use of gigapixel images (and/or orthophoto-based digital twins of artworks) is undoubtedly helpful to improve the fruition of paintings both on site and within a virtual environment. Advanced digitization techniques allow the user to enlarge images without losing resolution thus making it possible to zoom in on the works and reveal even details that are not visible to the naked eye.

Finally, digital acquisition provides artistic creations with a secure refuge. Ultra-high-quality digital photographs are crucial for conservators and restorers to monitor the conservation state of artworks both before and after restoration, as well as to assist in the creation of conservation strategies.

Therefore, the use of this technology is nowadays spreading for enhancing the fruition of several collections such as, for instance, the frescoes of Luca Signorelli in the Cathedral of Orvieto and a selection of Perugino's masterpieces, including some of the most famous works in the National Gallery of Umbria as well as the celebrated Marriage of the Virgin, which is held in the Musée des Beaux-Arts of Caen, France.

3D imaging technologies and reverse engineering

The digital and/or physical 3D reconstruction of artworks for use in museums or to create virtual collections is a common idea in the scientific literature [16,17,18,19,20,21,22,23,24]. The fact that the Europeana Tech community is working on 3D digitization processes and publication pipelines is clear evidence that the 3D digitization of cultural assets has only recently become a standard practice. The technique used in all published processes, which include 3D data collection and 3D model reconstruction, is almost the same. However, depending on the historical or artistic source, different outcomes can be expected, as stated in [17]. In the context of architectural history, such 3D models are important for the long-term preservation of, research on, and public access to physical, intangible, and digital cultural material, because they make research and presentation easier. 3D reconstruction, instead, is intended to support data collection in the broader context of cultural heritage, such as through digitization, data retrieval from database records with the transfer of knowledge, and the reconstruction, replication, and production of artifacts, as well as the examination of visual humanities issues, such as a collection of intricate figurative paintings. The investigation and evaluation of sources also rely on three-dimensional reconstruction.

As in the case of studies into the Vitruvian system of architectural orders [18], there are occasions when the emphasis is on plans and systems rather than on a specific object. In such cases, archetypes are commonly derived using 3D reconstruction techniques. Such a process raises the question of the fidelity of reconstructed digital models, i.e., how closely the digital twin resembles the original artwork. According to [18], “the relationship between the artwork and its digital 3D representation depends on data quality assessment, visualization, historical preparation processes, conceptualization, and contextualization”.

The ability to digitize objects quickly and easily at high resolution, while also gathering exact surface and distance information, makes 3D imaging/scanning one of the best technologies for recording reality. This is particularly true if the artwork is a sculpture, a bas-relief, or a building. Such artworks may be digitally recreated thanks to 3D recording, which also makes it possible to build enormous archives of important objects. The main advantage of 3D scanning for the preservation of cultural and historical assets is the capacity to precisely recreate the dimensions and volumetric representation of scanned items.

A 3D model of a three-dimensional piece of art can be obtained in a variety of ways, as is well known. The most important methods include computed tomography, photogrammetry, laser scanning, structured light scanning, and RGB-D imaging. Photogrammetry is a method for producing 3D models from several images of an item taken from various angles [19]. In such a method, the 3D coordinates of points on an object's surface are determined from overlapping images whose camera position and orientation (the so-called exterior orientation) are known. The surveying industry produced the initial improvements in photogrammetry to model terrain [11], but the field quickly expanded to include the study of architectural sites. Agisoft Metashape, Bentley ContextCapture, and RealityCapture were the three digital photogrammetry processing software programs assessed in a recent study [20], which showed their efficacy despite minor variations in overall performance. The authors in [21] provide an overview of optical 3D measuring sensors and 3D modelling approaches, together with their constraints and potentials, needs and requirements. Although this study is relatively old, especially considering the remarkable advancements of the last five years, it describes methods that were cutting edge 15 years ago and raises issues that are still a challenge in this field today: first, it is critical to choose the right methodology (i.e., sensor, hardware, and software) and data processing procedure. The right production workflow should then be designed to ensure that the finished product meets all the required technical specifications. Data processing should be sped up with as much automation as practical, but accuracy must always come first.
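As a minimal illustration of the photogrammetric principle described above (recovering 3D coordinates from overlapping images with known exterior orientation), the sketch below triangulates a single point from two calibrated views using a linear DLT formulation. The projection matrices and pixel coordinates are made-up values, not data from any of the cited studies.

```python
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices (intrinsics @ [R|t]).
    uv1, uv2: pixel coordinates (u, v) of the same point in each image."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back from homogeneous coordinates

# hypothetical calibrated cameras one metre apart, both looking along the z axis
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 5.0, 1.0])
uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
print(triangulate_point(P1, P2, uv1, uv2))  # ~ [0.2, 0.1, 5.0]
```

Production photogrammetry pipelines such as those evaluated in [20] add automatic feature matching, bundle adjustment, and dense reconstruction on top of this basic geometric principle.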

In structured light 3D scanning, a series of structured light patterns is projected onto an item; a line of illumination, created when a narrow band of light is projected onto a three-dimensionally shaped surface, can be used to mathematically reconstruct the geometry of the surface from viewpoints other than the projector's. Since it can capture a vast number of samples at once, pattern projection, consisting of many stripes at once or of arbitrary fringes, is a faster and more flexible method. The fundamental idea behind laser scanning is the emission of a laser signal from an emitter and the acquisition of the return signal by a receiver. During the receiving phase, the scanner uses distance-calculating algorithms that depend on the kind of equipment. Several 3D scanners are on the market, typically used for industrial applications. Among them, the Romer Absolute Arm (Hexagon, Stockholm, Sweden), the FARO ScanArm (FARO Technologies Inc., Lake Mary, USA), and the Minolta Vivid (Konica Minolta, Tokyo, Japan) are the most renowned professional devices. Other interesting devices, providing low-cost solutions for 3D acquisition (but also lower-resolution 3D point clouds with respect to professional 3D scanners), are the so-called RGB-D devices. Among the plethora of devices available on the market, the Intel® RealSense™ sensor family is one of the most promising close-range options (Intel, Santa Clara, California, USA). Depth camera systems are capable of capturing millions of surface points in seconds, reporting them as raw point clouds or polygonised meshes. The trustworthiness of such systems is crucial to determine whether the acquired data meet the requirements of the specific application (fitness for purpose). The purpose-specific requirements are also important in defining test methods that highlight the actual strengths and weaknesses of the systems.
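To give an idea of how an RGB-D device turns raw depth frames into the point clouds mentioned above, the sketch below back-projects a depth image through a pinhole camera model. The frame size and intrinsic parameters are placeholder values, not those of any specific RealSense model.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in metres) into an Nx3 point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # discard pixels with no depth reading

# hypothetical 640x480 frame and intrinsics: a flat surface 1.5 m from the sensor
depth = np.full((480, 640), 1.5)
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```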

For 3D laser scanners, the distance between the laser emission and its reception is calculated either in terms of "time of flight" (TOF), when the computation is based on the travel time of the emitted signal, or in terms of "phase difference" (phase-shift based), when it is based on comparing the phases of the emitted and return signals. Data may be acquired at speeds of up to a million points per second, since the body and the mirror move quickly. Some widely known devices based on TOF and phase shift are produced by FARO (FARO Technologies Inc., Lake Mary, USA).
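For reference, the two ranging principles reduce to simple relations; a minimal sketch follows, where the modulation frequency and the measured values are illustrative.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Time-of-flight ranging: the signal travels to the target and back."""
    return C * round_trip_time_s / 2.0

def phase_shift_distance(phase_rad, modulation_freq_hz):
    """Phase-shift ranging: distance within one ambiguity interval of the
    amplitude-modulated signal (half the modulation wavelength)."""
    return C * phase_rad / (4.0 * math.pi * modulation_freq_hz)

print(tof_distance(66.7e-9))            # ~10 m for a ~66.7 ns round trip
print(phase_shift_distance(1.0, 10e6))  # ~2.4 m for a 1 rad shift at 10 MHz
```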

Whatever method is used, the result is a “3D points cloud of the scanned object, which can then be processed further using specialized software programs capable of reconstructing the 3D geometry of the scanned object in terms of surfaces” [21]. Accordingly, it is possible to build a 3D model of the original artwork or architectural/archaeological site. Two examples, related to previous works by some of this paper's authors, are the reconstruction of the Brancacci Chapel (Firenze, Italy) and the reconstruction of the Statue of the Penitent Magdalene by Donatello, Museo dell’Opera del Duomo, Firenze, Italy (see Fig. 3). The Digital Humanities project Florence as It Was (http://florenceasitwas.wlu.edu), which intended to recreate the architectural and decorative look of late Medieval and early Modern structures, is another pertinent illustration. Such a project combines 3D generated representations of the artworks that were mounted inside buildings throughout the fourteenth and fifteenth centuries with 3D point cloud models of the buildings (i.e., actual structures like chapels, churches, etc.).
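A minimal sketch of this point-cloud-to-surface processing step, assuming the Open3D library and a point cloud file exported by the scanning software (the file names and parameter values are placeholders):

```python
import open3d as o3d

# load the raw scan (hypothetical file exported by the scanning software)
pcd = o3d.io.read_point_cloud("artwork_scan.ply")

# light clean-up: downsample and remove isolated outlier points
pcd = pcd.voxel_down_sample(voxel_size=0.002)  # 2 mm grid
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# normals are required by Poisson surface reconstruction
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))

# rebuild a triangle mesh describing the scanned surface
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
o3d.io.write_triangle_mesh("artwork_mesh.ply", mesh)
```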

Fig. 3  Reconstruction of the Brancacci Chapel (Firenze, Italy) and of the Statue of the Penitent Magdalene by Donatello (Museo dell’Opera del Duomo, Firenze, Italy) [10]

Additionally, in this project, the crucial steps outlined in the optimized workflow are based on conducting art historical research to identify the original artworks in each building, using 3D scanning (e.g., LiDAR) to obtain 3D data, using high-resolution photogrammetry to capture artworks, and producing point clouds that can be further modified.

A crucial point to keep in mind when dealing with the 3D acquisition of artworks is that frequently uncontrollable metric errors are involved in the creation of three-dimensional virtual models using optical technologies when enormous objects are reconstructed with small, high-resolution 3D imaging devices. There are no established options for controlling and enhancing metric accuracy, which is a major challenge within Cultural Heritage [22]. To address this problem, research is working to integrate several acquisition approaches, such as 3D range camera systems with optical tracking techniques, or 3D reality-based models created from image fusion with range-based techniques.
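One common ingredient of such integration is the rigid registration of scans coming from different devices into a common reference frame. Below is a minimal sketch using point-to-point ICP as implemented in Open3D; the file names, the initial guess, and the correspondence threshold are assumptions.

```python
import numpy as np
import open3d as o3d

# hypothetical scans of the same artwork from two different acquisition systems
source = o3d.io.read_point_cloud("scan_structured_light.ply")
target = o3d.io.read_point_cloud("scan_laser.ply")

threshold = 0.01   # max correspondence distance (1 cm), an assumed value
init = np.eye(4)   # rough pre-alignment, e.g. from markers or manual picking

result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print(result.fitness, result.inlier_rmse)  # overlap ratio and residual error
source.transform(result.transformation)    # bring the source scan into the target frame
```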

As a final remark, it is important to highlight the relevance that 3D scanning could have in engaging students and young people in CH. Using existing low-cost 3D acquisition tools (which are very user-friendly and widely available on the market), “it was possible to create an organized system of a production cycle that begins with the museum, involves the visitor, has them return to the museum, and then moves to the community”, as stated in [23]. In such a way, young people and students become an active part of the process of gaining knowledge in the field, acting as “digital 3D invaders” engaged in a bottom-up, social media-based (Facebook, Twitter, Instagram) cultural heritage enhancement. Table 3 lists tools, challenges and advantages related to the use of different 3D scanning technologies.

Table 3 Tools, challenges, and advantages related to the use of 3D imaging

Another aspect to be considered is the integration of 3D models with Virtual Reality (VR) and Augmented Reality (AR). Integration into a virtual environment, such as a VR museum, is useful to provide information layers over static content, such as prints, or over real-world settings, such as actual locales. Using VR headsets, desktop PCs, or mobile devices, visitors may experience an immersive and engaging way to explore a museum through 360° virtual museum tours. Because VR is helping museums address two of their biggest contemporary challenges, authenticity and new museology, its function in the museum setting is becoming more and more significant. Stated differently, modern museums must: (1) offer a genuine experience; and (2) improve the experience of their users by offering edutainment, i.e., the fusion of entertainment and education. VR helps address these concerns, since it allows users to enjoyably learn about collections and to perceive virtual pictures of objects as if they were real [24].

The market has successfully adopted this VR approach after numerous applications demonstrated its efficacy. As expected, VR headsets have attracted a lot of attention in CH. These headsets operate on the principle of stereo vision and user-tracked displays, enabling richer and more immersive viewing experiences. These tools should be made available as an optional viewing mode of virtual museums, since they will significantly affect computer and mobile interfaces.

Devices such as the Oculus Rift, Sony's Project Morpheus, the HTC Vive (developed with Valve), FOVE VR, the Avegant Glyph, and the Razer OSVR went through years of beta testing, and consumer versions were only recently made available on the market. Google Cardboard, Samsung Gear VR, and the Zeiss VR One are examples of other technologies that have already developed hardware and are well known in the market. Additional wearable technologies augment and display content that combines virtual and real-world situations using small displays positioned in front of specialized eyeglasses. Google Glass, Microsoft HoloLens, Sony SmartEyeglass, Epson Moverio, Vuzix M100, Optinvent ORA, and many other products are among the competitors.

Some interesting examples of 3D virtual tours using Oculus Rift, are the tour of the Santa Maria della Scala Museum Complex in Siena, Italy [25] and the virtual tours developed by The British Museum, Museo del Prado, and Vatican City.

It is worth mentioning that most of these solutions are still in the beta stage and are currently expensive, which prevents their mass adoption. They will mature and become accessible, just as VR headsets did. In addition to these techniques, tracked 3D glasses and a tracked input device (stylus pen) offer the most complete 3D experience for this type of VR vision. Even though some of these techniques seem dated, it is feasible that they will advance thanks to the auto-stereo and tracked stereoscopic devices mentioned above. These techniques continue to be a crucial and practical way to deliver VR 3D material.

With AR, a user of a smartphone or tablet may point the device at a specific location and see a still scene come to life: AR technology overlays layers of virtual material on the real world. Three qualities define augmented reality, according to [26]: (a) merging real and virtual objects into the real environment; (b) fostering cooperation between real and virtual items; and (c) enabling real-time interaction between real and virtual objects. Thanks to these features, AR is becoming a useful tool for visitors to obtain more information when they view exhibitions. An important review of methods based on this technique is given in [27]. Table 4 shows tools, challenges and advantages related to the adoption in museums of different VR/AR technologies.
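A minimal sketch of the real-time registration step behind such overlays, assuming a printed square marker of known size placed next to an exhibit and OpenCV's pose-estimation routines; the marker size, camera intrinsics, and the detected corner coordinates are placeholder values.

```python
import numpy as np
import cv2

# camera intrinsics (would normally come from calibration) - placeholder values
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist = np.zeros(5)

# 3D corners of a 10 cm square marker lying on the exhibit plane (marker frame, metres)
marker_3d = np.array([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]], dtype=np.float32)

# 2D corners of the same marker detected in the camera frame (hypothetical, pixels)
marker_2d = np.array([[300, 220], [420, 230], [410, 350], [290, 340]], dtype=np.float32)

# recover the camera pose relative to the marker
ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, K, dist)

# project a virtual anchor point offset from the marker plane into the image;
# a caption or a 3D model would be rendered at this pixel location every frame
anchor_3d = np.array([[0.05, 0.05, -0.05]], dtype=np.float32)
anchor_2d, _ = cv2.projectPoints(anchor_3d, rvec, tvec, K, dist)
print(anchor_2d.ravel())
```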

Table 4 Tools, challenges, and advantages related to the use of VR/AR technologies

Summing up, two main aspects are beneficial for traditional museums willing to implement 3D technologies. The immediate benefits of 3D scanning include virtual examination and research: the objects can be brought into the virtual workroom with essentially no impact on their physical integrity. According to some research, studying in museums with AR/VR support fosters higher-order thinking abilities in students, including inquiry, critical thinking, and creative thinking [28]. Visitors are more satisfied and enjoy themselves more thanks to these technologies, and wearable technology helps to customize their educational experience. Additionally, by giving students the opportunity to experience different historical situations firsthand, multisensory enhanced museum spaces can improve empathy.

Computer-based reconstruction in archaeology

The exhibition of finds, whether in museums, onsite, or on the Web, is one of the essential activities recommended by UNESCO, ICCROM and ICOMOS (1994) “for the preservation of the authenticity and integrity of archaeological excavations and finds”. This activity becomes particularly relevant when not only the 3D renderings of archaeological objects are shared and displayed, but also all the high-level information associated and associable with them. To this end, this section reviews computer-based methods for extracting high-level information from the low-level information obtainable from discretized models of archaeological objects. Methods applied to archaeological ceramics will be considered: these finds are particularly interesting since they are the most frequent in archaeological excavations and offer crucial details about the history, culture, and art of a location.

For this purpose, the published methods are grouped as follows:

  • Fragment features processing:

    • Axis identification

    • Profile evaluation

    • Feature segmentation and recognition

    • Dimensional features evaluation

  • 3D Vessel reconstruction from its fragments

Fragment features processing

Identifying semantic and morphological features on archaeological sherds is essential for sharing information about human practices in various cultural contexts, such as the economy, daily life, and the material expression of religious beliefs. Typically, archaeological potteries are handmade objects thrown at the wheel, so they can be schematized with an axis of symmetry, a representative profile, and a set of non-axially symmetric elements such as handles, ribs, and decorations. Hence, the evaluation of the axis of symmetry is a fundamental preliminary activity affecting subsequent analyses. In Cultural Heritage applications considering discrete geometric models of finds, this process is complicated since it is based on the elaboration of information that is:

  • Of low level, such as the points’ coordinates and the triangles’ normals.

  • Limited in the case of fragments with small sizes.

  • Blurred by errors due to handmade production at the wheel and by defects due to extensive wear from weathering, encrustations, chipping, and other damage.

Given these restrictions, the published symmetry-axis estimation algorithms examine several axially symmetric surface features using discrete 3D models. Recently, a few computer-based techniques have been put forward to automatically assess the representative profile of ceramic sherds. Researchers have made this attempt to go beyond the standard archaeological procedures, which rely on hand-drawn sketches by archaeologists and have low reproducibility and repeatability.

Also in this case, in the presence of poor and noisy information, none of the investigated methods always guarantees a reliable result. The encoding of archaeologists' knowledge in their conceptual categorization of ceramics represents a major challenge for researchers developing automatic methods for segmenting semantic and morphological features. The implementation of robust rules starting from a codebook defining this articulated knowledge is not trivial, because the features to be recognized:

  • Are not associable with analytical surfaces.

  • Are generally damaged and worn, blurring their geometric properties.

These difficulties explain the few automatic algorithms available in the literature. An automatic method should first divide the axisymmetric part (ASP) from the non-axisymmetric one (NASP) and then recognize the characteristic features of each.

The automatic evaluation of the dimensional features of archaeological ceramics used by archaeologists is linked to the results of feature segmentation. The results proposed by the authors in [29,30,31] show that, at the state of the art, the implementation of methods that automatically assess dimensional features from the 3D model can be based on the methods developed in [32, 33]. Tables 5 and 6 show, respectively, a comparison between different axis-based methods and the essential aspects of the methods published in the literature for representative profile detection.

Table 5 Comparison of axis detection methods
Table 6 Essential aspects of methods published in the literature for representative profile detection
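To make the notion of a representative profile concrete, the following sketch extracts a radial profile from a sherd's point cloud once the symmetry axis is known. It assumes the axis has already been aligned with the z axis, the bin size is an arbitrary choice, and this is not one of the published methods cited above.

```python
import numpy as np

def radial_profile(points, bin_height=0.002):
    """Estimate the representative profile of an axially symmetric sherd:
    points is an Nx3 array with the symmetry axis aligned to z; for each
    height bin, the mean distance from the axis is returned."""
    radii = np.hypot(points[:, 0], points[:, 1])
    heights = points[:, 2]
    bins = np.floor((heights - heights.min()) / bin_height).astype(int)
    profile = []
    for b in np.unique(bins):
        mask = bins == b
        profile.append((heights[mask].mean(), radii[mask].mean()))
    return np.array(profile)  # (height, radius) pairs describing the profile curve

# hypothetical sherd: a portion of a vessel wall whose radius grows with height
z = np.random.uniform(0.0, 0.05, 5000)
r = 0.06 + 0.4 * z + np.random.normal(0, 0.0005, z.size)  # wall with slight noise
theta = np.random.uniform(0.0, 1.2, z.size)               # only a fragment of the full circle
pts = np.column_stack([r * np.cos(theta), r * np.sin(theta), z])
print(radial_profile(pts)[:5])
```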

3D vessel reconstruction from its fragments

Because archaeological ceramics are often discovered in fragments, their assembly is valuable for investigation, categorization, and display. By examining ornamentation, technical traits, and colour through visual analysis, and form and size through the graphic depiction of the finds, archaeologists can identify fragments that might belong to the same vessel. Archaeologists then proceed with assembly within each cluster of fragments. Testing the joins, matching the sherds, and temporarily fastening the joins are all steps in this laborious operation. The sherds' surfaces are damaged by chipping and erosion, their number is unknown, and some are missing because they have been destroyed or have not yet been found, which complicates this time-consuming task [32]. In any case, these activities require a skilled operator, and since they can introduce degradation, they are avoided in the case of fragile and precious fragments. Several methods have been proposed to automatically perform the operations mentioned above in a virtual environment.

Regarding clustering, Biasotti et al. [33] “identify the essential similarity criteria used in traditional studies” and apply the concept of compatibility to the comparison of fragments with databases. Instead of extracting a clear-cut answer, the authors use compatibility to show the significance of each piece of information for reasoning. This technique, while improving upon the state of the art for automated clustering, still has certain drawbacks; for example, it cannot handle fragment similarity when the original model is absent or for small fragments.

The reassembly of an object from its fragments in the literature is called mosaicking. The more recent published computer-based methods can be classified as follows:

  • semi-automatic methods [43]: the operator identifies the first assembly solution that is refined by tools available in commercial software;

  • automatic methods:

    • local methods [36, 38,39,40,41,42]: aligning only two fragments at a time; the pottery reassembly is performed incrementally;

    • global methods [43,44,45]: the pottery reassembly is performed by considering all its fragments.

The analysis of the semi-automatic methods proposed by Kotoula [37] shows that these methods, typically, even when based on commercial software, have limitations: they are difficult to use for non-experienced users and offer limited functionality in managing specific Cultural Heritage features. Table 7 lists the essential aspects of methods for the automatic reassembly of archaeological ceramics.

Table 7 Essential aspects of methods published in the literature for archaeological ceramics automatic reassembly
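In the spirit of the local, pairwise methods listed above, the following deliberately simplified sketch matches two fracture-edge polylines by comparing curvature signatures. It is not an implementation of any of the cited algorithms, and the sampling density and window length are arbitrary choices.

```python
import numpy as np

def curvature_signature(curve, n=200):
    """Resample a 2D fracture-edge polyline to n points and return a discrete
    curvature signature (signed turning angle between consecutive samples)."""
    d = np.diff(curve, axis=0)
    t = np.concatenate([[0], np.cumsum(np.hypot(d[:, 0], d[:, 1]))])
    s = np.linspace(0, t[-1], n)
    pts = np.column_stack([np.interp(s, t, curve[:, 0]), np.interp(s, t, curve[:, 1])])
    v = np.diff(pts, axis=0)
    ang = np.arctan2(v[:, 1], v[:, 0])
    return np.diff(np.unwrap(ang))

def best_match_offset(sig_a, sig_b, window=50):
    """Slide a window of fragment B's signature (reversed and negated, since matching
    break edges are traversed in opposite directions) along fragment A's signature
    and return the offset with the smallest squared difference."""
    sig_b = -sig_b[::-1]
    best = (np.inf, 0)
    for off in range(len(sig_a) - window):
        err = np.sum((sig_a[off:off + window] - sig_b[:window]) ** 2)
        best = min(best, (err, off))
    return best  # (error, offset along fragment A's edge)
```

In a real pipeline, a candidate match found this way would be refined by a rigid alignment of the two fragment surfaces before the next fragment is added incrementally.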

Additive manufacturing

Another key aspect to consider when creating engaging exhibitions is the possibility of 3D-printing replicas that can be touched by visitors or even used to replace original artworks, for example when the latter are damaged or destroyed. This is made possible by using additive manufacturing (AM) methods to produce the replicas.

As is well known, additive manufacturing (AM), commonly known as 3D printing or Rapid Prototyping, is a computer-controlled manufacturing method that builds three-dimensional objects by depositing materials, often in layers. In several technical sectors, AM techniques such as binder jetting, directed energy deposition, material extrusion, powder bed fusion, sheet lamination, vat photopolymerization, and wire arc have been employed [52]. Accompanied by their own standards, such methods are successfully employed for CH. The New York Metropolitan Museum of Art has significantly encouraged people to engage with its collections online: visitors can take pictures of museum exhibits to use as the basis for their own digital models. The Met has even provided instructions for doing this in a booklet that is available online. The booklet points readers toward online tutorials, suggests which software to use, and also covers how to buy a 3D printer or kit or how to use a 3D printing service.

Employing 3D printing to conserve cultural treasures and relics is both promising and difficult. Due to physical deterioration, theft, and demolition, society continues to lose priceless relics from the past. 3D printing provides a novel way to preserve these items and make them accessible to future generations. To illustrate this point, two archaeologists from the Harvard Semitic Museum recreated a ceramic lion in 2012 using 3D modelling and printing (see https://www.wired.com/2012/12/harvard-3d-printing-archaelogy/).

The original sculpture was destroyed three thousand years ago, during an assault on the historic Mesopotamian city of Nuzi; the fragments are kept in the museum's collection. These were painstakingly photographed from hundreds of different perspectives, and a computer model was then built from the photographs. The incompleteness of the fragments resulted in certain holes in the model, so the archaeologists also had to use scans of complete statues found in the same area. After digitally reassembling the lion, they were able to produce a 3D printed replica for display.

A further example is the restoration of Iraq's demolished Nimrud Lion (see Fig. 4) performed by Promo Design (PIN s.c.r.l., Prato, Italy).

Fig. 4  3D printed replica of the Nimrud Lion

Additionally, accurate replicas can be constructed for exhibitions where the original work cannot be transported, as in the case of Michelangelo's David, a replica of which was displayed at the Dubai Expo in 2021.

Finally, AM-based models of artworks can be utilized to enhance visitor engagement or for teaching purposes [53]. 3D imaging, combined with additive manufacturing, makes it possible to take irreplaceable artefacts seen only in museums and “put them in the hands” of learners. Some important museums that have worked on this topic are the Smithsonian and the British Museum.

Reproductions are possible for pieces that must be handled carefully. This makes it possible to examine things closely without endangering the originals. Items that are too delicate to show can be kept safely in storage while a copy is used in their place. Even replicas of damaged artefacts are possible. Before printing a "fixed" model, fragments are scanned and digitally pieced back together. These can be shown side by side in museums to give visitors a better idea of the object's previous appearance.

Overall, AM offers various benefits for preservation over traditional manual restoration and digital archiving. Most notably, AM can create a realistic duplicate that people can “feel” by touching. Moreover, beyond replicating artefacts, AM offers designers a fresh approach to manufacture goods that have a distinct connection to cultural history. Through these creative endeavours, AM may contribute to bridging the “old” and the “new” and to bringing the museum's cultural heritage experience into people's everyday life. Accordingly, it appears that 3D printing will play a significant role in the fields of research, documentation, preservation, and education, in addition to the area of object reconstruction. Furthermore, it has the capacity to provide these applications in a way that is both inclusive and accessible.

According to [50], of the materials currently used in AM, which include ceramics, metals and polymers, the latter are probably going to represent the biggest challenge to conservation. Tensile strength, impact resistance, and the effects of short-term curing and aging on these qualities were the primary areas of focus. For most technologies, the impact of build parameters has been investigated, and anisotropy has been highlighted in several studies. Given that it may eventually cause deformation, this might pose a significant conservation concern. Additional study is necessary to fully grasp the role anisotropy may play in the deterioration of Rapid Prototyping products.

From 2D artworks to 3D models

Most of the research in computer vision has traditionally been focused on finding 3D information in 2D pictures, photos, or paintings. It was via this research that the problem of 3D reconstruction from a single 2D picture first took form. The two pioneering works on this topic are those of Horry et al. [51] and Hoiem et al. [52]. In [51], a brand-new technique is presented that makes it simple “to create animations from a single 2D image or photo of a scene”. Named TIP (Tour Into the Picture), it “gives a new type of visual effect for making various animations” by means of a purposely devised user interface. The user can interactively process 2D images by adding a set of virtual vanishing points for the scene, segmenting foreground objects from the background, and semi-automatically modelling the background scene and the foreground objects in a polyhedron-like form. Finally, a virtual camera can be placed in the model to animate the 3D scene. The authors in [52] propose a fully automated approach for building a 3D model from a single image. The model is made up of multiple texture-mapped planar billboards and has the complexity of a typical children's pop-up book illustration. The main insight is that, instead of attempting to recover exact geometry, the authors statistically model geometric classes based on their orientations in the image. This method divides the areas of the input image into the major groups of “ground”, “sky”, and “vertical”; these labels are then used to “cut and fold” the image into a pop-up model using a few simple assumptions. Due to the intrinsic ambiguity of the problem and the statistical nature of the method, the algorithm is not expected to work on every image, but it works remarkably well for a range of scenes taken from ordinary people's photo albums.

These outstanding contributions, along with related approaches, aim to construct a 3D virtual representation of the scene in which elements are virtually separated from one another. A more interesting way to transition from 2D artwork to 3D models is to create digital bas-reliefs, especially when a 3D printed prototype can make the 3D information accessible to people with visual impairments. The literature features some significant works dealing with relief reconstruction from single photographs, particularly those dealing with coins and commemorative medals. To simplify the 3D reconstruction, Shape from Shading (SFS) based techniques [55] are combined with non-photorealistic rendering [53] in [54] to automatically create bas-reliefs from single photos of human faces. In [56], volume information is used to convert the input image into a flat bas-relief. Typical elements of the input image include logos, coats of arms, human faces, and figures that stand out from the background of the image. These techniques can extract 3D information from 2D photographs, but their principal uses are in the development of logos, coats of arms, and numismatics. Additionally, they rely significantly on SFS techniques, which start from shaded pictures and perform 3D reconstruction using reduction strategies. Therefore, the main effort needed to improve these techniques is to reduce the computational time, as done, for instance, in [57], where a novel method to retrieve shaded object surfaces interactively is proposed (see Fig. 5).

Fig. 5  a Synthetic shaded image of the Matlab® peaks function; b retrieved surface using the approach provided in [52]

The proposed approach intends to recover the expected surface using easy-to-set boundary constraints, such that most of the human-computer interaction takes place before the surface retrieval. The method, which has been tested on several case studies, shows potential for satisfactorily recreating scenes under both frontal and lateral illumination.

Bas-relief reconstruction features have also been integrated into commercial applications like Autodesk ArtCAM® and JDPaint.

In such software packages, users may “inflate” the surface bounded by the object outlines and employ a vector representation of the item to be rebuilt. Therefore, such methods may be applied to produce figures that are volumetrically isolated from the backdrop while being compressed in depth, such as those produced by embossing a copper plate.

In addition, substantial interaction is required to generate an accurate surface reconstruction; specifically, vectorizing the subject's contours is inadequate for complex structures like faces, since each component that must be inflated must be both delineated and vectorized.

Facial features such as the lips, cheeks, nose, eyes, brows, and others must be sketched by hand. Working with paintings requires a lengthy procedure, since they typically contain numerous subjects that are hidden in the background (or have a backdrop that detracts from the primary subjects).

To overcome these difficulties and create models that aesthetically mimic sculptor-made bas-reliefs from paintings, several strategies have been developed so far. The most pertinent techniques are discussed in [58,59,60], where tactile bas-reliefs are produced using several techniques such as Shape from Shading, perspective- and volume-based scene reconstruction, and rapid prototyping.

In detail, the authors in [58] proposed a computer-assisted approach for producing tactile reproductions of paintings that may be utilized as a teaching aid during guided tours of museums or galleries. The approach enables “an artist to swiftly create the desired form and generate data suitable for fast prototyping machines to produce the physical touch tools, starting from high-resolution pictures of original paintings”. Laser-cut layered depth diagrams, which also emphasize depth relations, are used to communicate the different elements of the artwork and their spatial arrangement. In [55], four different translation processes and computer-based technologies are tested to identify the most successful translation strategy for giving blind persons a correct understanding of graphical artworks. Giorgio Morandi's and Fernando Botero's interpretations of the iconographic subject of the “still life” were selected as case studies to test the response of blind and visually impaired people (see Fig. 6).

Fig. 6  a Exploration of bas-relief replicas of “still life” paintings by Botero and Morandi created by researchers in [55]; b a zoom on the hands of a blind person during the tactile exploration of the two replicas

Using vanishing point identification, foreground-background segmentation, and polygonal scene reconstruction, the authors of [56] provide a method for obtaining a 3D representation of a painted scene with a single-point perspective. They specifically proposed four different computer-based ways for the semi-automatic creation of haptic 3D models from RGB digital images of paintings.

The findings of this study add fresh knowledge to the field of 3D reconstruction oriented to visually impaired users and make clear which approach must be used to create an accurate recreation of a two-dimensional work of art.

Building on the research in [59], the authors in [60] predict the location of the horizon and automatically create a rough, scaled 3D model from a single shot by labelling each image pixel as ground, vertical, or sky. Their main goal was to provide a systematic process for the semi-automatic production of 2.5D models from paintings. Several ad hoc techniques were used to solve many of the basic problems that come up when dealing with the artistic representation of a scene.

To produce a reliable reconstruction of the scene and of the subjects/objects envisioned by the artist in a painting, these systematic methodologies concentrate on an interactive computer-based modelling procedure that includes the following tasks:

  1)

    Preliminary image processing operations on the digital image of a painting. This phase mainly focuses on segmenting the scene's objects and fixing image distortion. To create a high-resolution image that retains shading, since this information is to be used for virtual model reconstruction, a digital copy of the source image to be reconstructed as a bas-relief is generated using a suitable image capture method and lighting. The different elements of the scene, such as people, clothes, buildings, and other elements, are appropriately identified after the image has been recorded. This process, known as “segmentation”, can be completed using any of the methods outlined in the literature [55].

  2)

    Perspective geometry-based scene reconstruction. To organize the segments of the starting picture into a coherent 2.5D scene, it is important to define the attributes of the regions after segmentation. This is needed so that the subjects of the scene may be positioned in space geometrically and consistently while still being characterized in terms of flat regions, as required by the volumetric information retrieval approach devised. In the literature, there are several techniques for creating 3D models from perspective scenes that have proven quite successful in resolving this problem (see, for example, [58]). Most of them, and in particular the technique described in [59], may be employed effectively to complete this task. This type of spatial reconstruction may be carried out by using the findings of [59] in conjunction with the layered depth diagrams created using the approach outlined in [58]. In contrast to similar approaches in the literature, the proposed method is able to model oblique planes, i.e., planes represented by trapezoids whose vanishing lines do not converge in the vanishing point. The process begins by creating an RCS (Reference Coordinate System). The “vanishing point coordinates on the image plane \(V=\)(\({x}_{V}{, y}_{V}\)) are computed thus allowing the definition of the horizon \({l}_{h}\) and the vertical line through V, called \({l}_{v}\)”. After finding the vanishing point, the RCS is built by placing the x axis on the image plane parallel to the horizon (pointing right), the y axis on the image plane perpendicular to the horizon, and the z axis perpendicular to the image plane (according to the right-hand rule); the origin is taken in the bottom left corner of the image plane. Thus, when examining a simple perspective-painted scene, the following types of planes may be identified: vertical planes, perpendicular to the image plane and whose normal is parallel to the x axis; horizontal planes, perpendicular to the image plane and whose normal is parallel to the y axis (among them, it is possible to define the “main plane” corresponding to the ground or floor of the virtual 2.5D scene); and oblique planes, i.e., all other planes not included in the perspective view. After the planes have been located and assigned to one of the categories, a virtual flat-layered model may be constructed by giving each plane a suitable height map. The foreground, i.e., the virtual scene element closest to the observer, is represented by a white value, while the backdrop, at z = 0, is represented by a black value. Since the main plane should, in principle, extend from the foreground to the horizon line, its grayscale depiction was generated using a gradient, represented by a linear graded ramp running between two grey levels: “the level \({G}_{0}\) corresponding to the nearest point \({p}_{0}\) of the plane in the scene (with reference to an observer) and the level \({G}_{1}\) corresponding to the farthermost point \({p}_{1}\). Consequently, to the generic point \(p\in [{p}_{0},{p}_{1}]\) of the main plane is assigned the gray value \(G\) given by the following relationship” [54]:

    $$G = {G}_{0}+\left(\left|p-{p}_{0}\right|\cdot {S}_{grad}\right)$$
    (2)

where \({S}_{grad}= \frac{{G}_{0} }{\left|{p}_{0}-V\right|}\) is the slope of the linear ramp (a minimal numeric sketch of this ramp generation is given after this list).

  3)

    Volume reconstruction. Once the height map of the scene and the spatial distribution of the depicted figures have been set up, the volume of each painted subject must be determined so that the observer can discern its genuine quasi-three-dimensional shape. As previously said, to achieve this goal it is necessary to transform all the information gleaned from the painting into shape details. First, a straightforward user-guided image-processing approach is used to recreate any objects in the picture that resemble primitive geometry. The final projected geometry may be assigned to a simple form like a cylinder or sphere. The user is requested to select the clusters using a GUI. Each selected cluster thus stands for a single blob (i.e., a region with constant pixel values), making it simple to compute the geometrical properties of each cluster, such as its centroid, major and minor axis lengths, perimeter, and area. Based on such values, and using well-known geometric relationships adopted in blob analysis, it is easy to distinguish between a shape that is approximately circular (i.e., a shape that must be reconstructed in the form of a sphere) and an approximately rectangular one (i.e., a shape that must be reconstructed in the form of a cylinder). After being identified as a certain shape, the cluster is uniformly subjected to a gradient. If an object is only partially visible in the scene, it must be manually classified by the user. The user must then choose at least two points that define the primary axis of a cylinder, while for spheres they must choose two points that roughly define the diameter and a point that is roughly at the centre of the circle. The greyscale gradient is automatically computed once these inputs are given.
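The following minimal sketch illustrates the linear ramp assigned to the main plane, in the spirit of Eq. 2; the image size, grey levels, and horizon row are placeholder values, and the height map is expressed in the usual 0-255 greyscale range rather than in metric units.

```python
import numpy as np

def main_plane_ramp(height, width, v_row, p0_row, g0, g1):
    """Grayscale height map of the 'main plane' (ground/floor): a linear ramp
    from g0 at the nearest row p0_row (image bottom) to g1 at the horizon row
    v_row, following the spirit of Eq. 2."""
    ramp = np.zeros((height, width), dtype=np.uint8)
    for row in range(v_row, height):
        # fraction of the way from the nearest point p0 up to the horizon
        frac = (p0_row - row) / float(p0_row - v_row)
        ramp[row, :] = round(g0 + frac * (g1 - g0))
    return ramp

# hypothetical 480x640 painting with the horizon at row 200, nearest ground point
# at the bottom row, bright foreground and dark background
height_map = main_plane_ramp(height=480, width=640, v_row=200, p0_row=479, g0=230, g1=20)
```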

The response given by a panel of end users demonstrated the technology’s ability to generate models that reproduce, through a tactile language, works of art that are otherwise often completely inaccessible [60].

The technique was used to create a 2.5D replica (tactile bas-relief) of Masolino da Panicale’s “The Healing of the Cripple and the Raising of Tabitha” in the Brancacci Chapel of the Church of Santa Maria del Carmine in Florence, Italy (see Fig. 7). The reconstruction is mostly carried out using SFS-based methods for subjects that cannot be reproduced with simple geometries (such as Tabitha). This choice was made because, in order to accurately depict on the flat surface of the canvas the various grayscales of the real form under scene illumination, the artist often creates the three-dimensional illusion of a subject using the chiaroscuro technique. This supports the hypothesis that the only significant information available for reconstructing the volume of a painted figure is the brightness of each individual pixel. While SFS approaches prove effective in extracting 3D information (for instance, a height map) from synthetic images (i.e., images generated starting from a predefined normal map), the performance of most techniques on real-world photographs is currently insufficient. Moreover, since paintings are handmade works of art, many details of the depicted scene (such as silhouettes and tones) are not perfectly rendered in the image, and painters commonly depict diffused light, since the direction of the light is difficult to predict and painted surfaces are not always completely diffusive.

Fig. 7
figure 7

Bas-relief recreation of Masolino da Panicale’s “The Healing of the Cripple and the Raising of Tabitha”, Brancacci Chapel of the Santa Maria del Carmine church, Florence, Italy [56]

Because of these limitations, solving the SFS problem for real-world photographs becomes much more challenging. For these reasons, the reviewed research suggests a streamlined method in which the height map \({Z}_{final}\) of all the subjects in the image is created by combining three separate height maps: (1) “rough shape” \({Z}_{rough}\); (2) “main shape” \({Z}_{main}\); and (3) “fine details shape” \({Z}_{detail}\):

$${Z}_{final}={\lambda }_{rough}{Z}_{rough}+{\lambda }_{main}{Z}_{main}+{\lambda }_{detail}{Z}_{detail}$$
(3)

It must be remembered that, since the final solution is obtained by summing several contributions, a set of simplifying hypotheses that hold for each individual height map can be adopted when computing it.
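A minimal sketch of the blending step in Eq. (3) is given below. The weighting coefficients are illustrative placeholders; in the reviewed method they would be tuned for each subject, and the three partial height maps are assumed to be already computed and co-registered.

```python
import numpy as np

def blend_height_maps(z_rough, z_main, z_detail,
                      l_rough=0.6, l_main=0.3, l_detail=0.1):
    """Weighted sum of the three partial height maps (Eq. 3)."""
    z_final = l_rough * z_rough + l_main * z_main + l_detail * z_detail
    # Normalize to the 0-255 range expected for a grayscale depth image
    z_final -= z_final.min()
    return (255 * z_final / (z_final.max() + 1e-9)).astype(np.uint8)
```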

The developed method was also widely disseminated among specialists working in the field of cultural heritage, with special mention for specialists from the Musei Civici Fiorentini (Florence Civic Museums, Italy) and from Villa la Quiete (Florence, Italy), as well as the Italian Union of Blind and Visually Impaired People in Florence (Italy).

Following their recommendations, the authors created a variety of bas-reliefs of well-known works of art from the Italian Renaissance, including “The Annunciation” by Beato Angelico (see Fig. 8), on display at the Museo di San Marco (Florence, Italy), some figures from the “Mystical marriage of Saint Catherine” by Ridolfo del Ghirlandaio (see Fig. 9), and the “Madonna with Child and Angels”.

Fig. 8
figure 8

Prototype of “The Annunciation” by Beato Angelico placed on the upper floor of the San Marco Museum (Florence), next to the original fresco [56]

Fig. 9
figure 9

Prototype created for “Madonna with Child and Angels” by Niccolò Gerini that resembles the Maddalena and the Child figures taken from the “Mystical marriage of Saint Catherine” by Ridolfo del Ghirlandaio [56]

Main findings in the field of 2.5D model retrieval from paintings are listed in Table 8.

Table 8 Most relevant methods for the reconstruction of digital bas-reliefs starting from paintings

As mentioned above, the main aim of 2.5D reconstruction starting from paintings is to help museums manufacture bas-reliefs resembling a painted scene, so that blind people can access inherently two-dimensional works of art. The reviewed methods allow a semi-automatic reconstruction of a painted scene and outline several methodologies for creating a digital bas-relief, which can eventually be manufactured using AM technologies.

Table 9 shows the main solutions arising from 2.5D reconstruction starting from painted images.

Table 9 Main solutions arising from 2.5D reconstruction starting from painted images

Watermarking

In the CH world, access to digital contents that reproduce artworks is tightly related to copyright protection. According to [61], the management of digital rights for the generated 3D models has come under scrutiny as three-dimensional modelling and digitization methods are used more frequently in the cultural heritage field. Therefore, even though the issue of digital rights management protecting data from theft and misuse has previously been addressed for a variety of other information types (software code, digital 2D images, audio, and video files), there is a need to develop technological solutions specifically devoted to protecting interactive 3D graphics content.

A true innovation in this field may be “the adoption of digital watermarking to link copyright information on the newly proposed type of cultural data (visible and invisible annotated 3D representations of artworks), as it enables the rapid exchange of crucial information to promote access to and sharing of European cultural knowledge” [62]. The development of new geometric data processing techniques is particularly relevant for 3D watermarking, since geometric data carries inherent curvature and topology and has no implicit ordering (unlike the regular sampling of an image). However, 3D watermarking introduces a new class of issues that were not present in the image and video cases, so it is not a straightforward 2D-to-3D extension. Compared with image and video media, a 3D model may also be subject to more intricate and sophisticated attacks. Adapting conventional image and video watermarking algorithms to this type of media is therefore quite challenging. As a result, only a small number of algorithms for concealing sensitive information (for IPR, authentication, and other purposes) within a 3D model have been established, even though many strategies and methods to embed copyright information in images and videos have been developed and tested with good results.

Transparency, robustness, and capacity are the primary requirements of a generic watermarking system, according to the academic literature [63]. Transparency means that “the original image shouldn't be harmed by the inserted watermark.” Robustness is the ability of the watermark to withstand different attacks, whether unintentional (such as cropping, compression, or scaling) or intentional (i.e., intended to destroy the watermark). “Capacity is the maximum amount of data that can be stored in digital data to guarantee accurate watermark recovery” [64].
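These three requirements can be illustrated with a deliberately naive sketch, not one of the schemes cited in this section: a least-significant-bit watermark for a grayscale image maximizes transparency and offers a capacity of one bit per pixel, but it is fragile, i.e., it has essentially no robustness against compression or resampling. All names in the snippet are illustrative.

```python
import numpy as np

def embed_lsb(image, bits):
    """image: 2D uint8 array; bits: iterable of 0/1 with len(bits) <= image.size."""
    flat = image.flatten()                      # returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b          # overwrite the least significant bit
    return flat.reshape(image.shape)

def extract_lsb(image, n_bits):
    """Recover the first n_bits embedded by embed_lsb."""
    return [int(v & 1) for v in image.flatten()[:n_bits]]
```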

Due to the distinctive features of each form of data, “specific algorithms and implementations had to be created. Audio, video, stereoscopic video, pictures, and 3D data have all been carefully examined in literature” [64, 65].

Artificial intelligence in the CH field

Several daily services, such as online shopping and streaming of music and video, employ artificial intelligence (AI). In museums, artificial intelligence has been applied in several applications, both visible to visitors and hidden from view.

Indeed, according to the EU Briefing PE 747.120—April 2023 [66], AI has unexpectedly entered the CH scene with both promising and surprising applications, including the ability to reconstruct works of art, finish a great musician's unfinished composition, identify the author of an ancient text, and provide architectural details for potential architectural reconstructions.

Within AI, Computer Vision (CV) is an enabling technology, since it acts as a powerful artificial sense to extract information from images: about places, objects, people, etc. It can be used to automatically understand both the contextual behaviour and the situational conditions of people, so as to provide the right information at the right time and place, e.g., to improve user interactions in museums. In this regard, [67] presents a system that performs artwork recognition and gesture recognition using computer vision, allowing interaction between visitors and artworks in an exhibition (see Fig. 10).

Fig. 10
figure 10

Scheme of the system presented in [67]: users have a wearable system that provides an ego-vision video stream processed by a central server that recognizes gestures and framed artworks

A CV system based on neural networks for artwork recognition on mobile devices has been proposed in [68], as shown in Fig. 11; the goal is to implement a smart audio guide that, also using machine learning techniques for audio event recognition and user movement analysis, is able to engage with the user at the most appropriate moment, e.g., when he is paying attention to an artwork and not when he is moving around or engaged in a conversation.

Fig. 11
figure 11

Examples of interfaces of the system presented in [68]. On the left, the user is listening to the description of the artwork; in the centre, the user is reviewing an item in the history; on the right, the user is speaking with someone and not focusing on any artwork

CV can help to manage digital collections, considering both high-quality archive materials and images from web and social media.

Convolutional neural networks have been proposed in [69] to recognize artworks in collections of heterogeneous image sources. More recently multimodal neural networks like CLIP have improved the results in this context [70], adding zero-shot capabilities.
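A hedged sketch of how such a multimodal model could be used for zero-shot artwork tagging is given below, here through the Hugging Face `transformers` wrappers for CLIP. The checkpoint name, candidate labels, and file name are illustrative assumptions, and a production system would typically fine-tune or adapt the model to the collection at hand.

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("artwork.jpg")
labels = ["a Renaissance fresco", "a marble sculpture", "an impressionist painting"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)   # zero-shot label scores
print(dict(zip(labels, probs[0].tolist())))
```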

Generative AI models have been used to restore missing parts of the “Night Watch” painting by Rembrandt [71], and to perform inpainting of damaged areas (automatically detected through segmentation) as in [72]. AI can also be used to revamp and restore iconographic materials such as postcards and videos from historical archives. Deep neural networks have been proposed for the colorization and restoration of B/W photos [73] and for the restoration and colorization of films [74], eliminating scratches by exploiting the temporal coherence of neighbouring frames (see Fig. 12). A method to recover videos from damaged analog archives has been presented in [75].

Fig. 12
figure 12

Example of restoration of old films using the method proposed in [74]; the top row shows the input, the bottom row the results of the restoration

AI and Computer Vision can help to improve the planning of an exhibition by evaluating how visitors interact with it; this technology can also be applied to cultural sites. In [76] the authors have shown that tools for facial expression recognition can be successfully used as alternatives to self-administered questionnaires for the measurement of customer satisfaction, evaluating this approach at a heritage site using a commercial tool (see Fig. 13).

Fig. 13
figure 13

Diagram of the system presented in [77], which uses egocentric visitor localization to aid the user and augment his visit (left) and to provide useful information to the site manager (right)

The authors of [77] take into account the issue of localizing visitors in a cultural site from egocentric (first-person) images, i.e., obtained from a wearable device; the theory is that “localization information can be useful both to assist the user during his visit (by suggesting where to go and what to see next, for example) and to provide behavioural information to the manager of the cultural site” (e.g., how much time has been spent by visitors at a given location? What has been liked most?). The authors have released a dataset that can help future researchers in the field, as well as the AI models used to recognize the locations of the site. It is interesting to note that one of the devices used to capture the images of the dataset is a Microsoft HoloLens, which can be used also for A/R applications. The dataset has been expanded to include object recognition and retrieval tasks in [78].

The Mnemosyne system (see Fig. 14), installed at the Bargello Museum in Florence [78, 79], employs visitor observation through cameras, tracking visitors’ movements within a museum hall and how long they spend viewing each artwork, to build a list of the artworks that each visitor is most interested in. Then, using user re-identification, these favourite artworks are used to provide tailored information and targeted recommendations of other items of interest on an interactive table.
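The underlying idea of turning observed dwell times into recommendations can be sketched as follows. This is an illustrative toy example, not the Mnemosyne implementation: it assumes a precomputed artwork-to-artwork similarity matrix and simple normalized dwell times as the interest profile.

```python
import numpy as np

def recommend(dwell_seconds, similarity, artwork_ids, top_k=3):
    """dwell_seconds: (n,) observed viewing times; similarity: (n, n) matrix."""
    interest = dwell_seconds / (dwell_seconds.sum() + 1e-9)   # normalized interest profile
    scores = similarity.T @ interest                          # propagate interest to similar items
    scores[dwell_seconds > 0] = -np.inf                       # do not re-suggest what was already seen
    best = np.argsort(scores)[::-1][:top_k]
    return [artwork_ids[i] for i in best]
```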

Fig. 14
figure 14

The tabletop of the Mnemosyne system [78, 79] installed in the “Sala di Donatello” of the Bargello Museum

From the studies reviewed above, it is evident that AI is considerably changing audience engagement, both inside and outside a museum’s four walls. Although the public is drawn to visually appealing AI applications when they engage with visitors, the technology can be even more useful when applied to museum operations. Several technologies employ AI to make choices and enhance museums for both staff and visitors, including websites, chatbots, and analytics tools. How such technologies are adapted to institutions' public purposes and maintain their worth in the public domain is a crucial concern both for the museums developing them and for the technologies themselves. AI has a lot to offer, but it must be ensured that moral obligations are met and that the rights of audiences are upheld.

Gamification

Gamification, which is the application of game design features beyond the typical environment of games, has emerged as one of the key methods for socializing and enhancing communication with people in several fields, including cultural heritage. In this context, computer vision approaches can help boost user engagement [80, 81]. In the learning and education fields, gamification is growing in relevance, in view of the high effectiveness of ‘learning by doing’: students can be immersed in complex scenarios that are not representable in easier ways. Thanks to gamification, it is also possible to improve students’ problem-solving ability, to stimulate cooperation by exploiting tools they are already familiar with, and to enhance long-term memory thanks to recurring references in the game. Gamification paradigms, and the related tools, can greatly help in developing story-led interpretations, allow scalability to more artworks while lowering the burden on content creators, and eventually foster the production of engaging materials in a relatively short time.

In fact, gamification has been implemented at several museums worldwide. To cite a few examples, the Petrosains Museum in Kuala Lumpur, Malaysia, carried out early experiments in developing systems for engaging stories about the science and technology of the petroleum business. The Nintendo DS Louvre Guide, which features GPS and 3D images created especially for the museum, was developed in 2012 by Nintendo and the Louvre. More than “700 photographs, 30 + hours of audio commentary, high-resolution images, 3D models, and video commentary” were included in the guide.

Visitors are asked to mimic a sculpture’s pose through the “Strike a pose” application created for the ArtLens Exhibition [82], and they receive feedback on how accurately they did so. Visitors had the option of sharing their poses, seeing others’ poses, and trying out new stances. The user is instructed to mimic a sculpture's unusual attitude after viewing a photograph of it. To determine how successfully the visitor captured the sculpture's stance, a Kinect sensor assesses how closely their pose resembles the original and calculates a percentage; the better the match, the higher the percentage. The skeleton-matching software measures how well each sculpture matches the poses of museum visitors using a library of human-generated skeleton data obtained from Kinect data. Visitors can view other visitors’ photos, attempt a different stance, and send their own image capture [83] (see Fig. 15).
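The skeleton-matching idea can be sketched in a few lines of code. This is an illustrative example, not the ArtLens implementation: joints are assumed to be (x, y) points coming from a body-tracking sensor such as Kinect, and both the joint names and the scoring rule are assumptions of the sketch.

```python
import numpy as np

def pose_match_score(visitor, reference):
    """visitor, reference: dict joint_name -> (x, y); returns a rough match percentage."""
    common = sorted(set(visitor) & set(reference))
    v = np.array([visitor[j] for j in common], dtype=float)
    r = np.array([reference[j] for j in common], dtype=float)
    for skeleton in (v, r):                       # normalize for translation and scale
        skeleton -= skeleton.mean(axis=0)
        skeleton /= (np.linalg.norm(skeleton) + 1e-9)
    distance = np.linalg.norm(v - r)              # 0 means identical pose
    return max(0.0, 1.0 - distance) * 100.0
```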

Fig. 15
figure 15

Example of “Strike a pose” use [83]

The “Make a Face” app instead focuses on faces, matching visitors’ expressions with a piece of art from the museum’s collection using facial recognition and landmark detection. Visitors see a portrait and determine the sentiment of the subject before matching their facial expression to another image. Facial recognition software instantly correlates a visitor’s facial expression with pieces of art in the CMA's collection. The system records the visitor's expression while measuring the nodal points of the face, the separation between the eyes, the contour of the cheekbones, and other recognizable traits. To identify a match, these nodal points are then compared against the nodal points computed from a database of 189 artwork images. The matching faces are gathered into strips in the form of a photo booth, and these strips are subsequently exhibited on the Beacon close to the gallery's entrance. Additionally, users have the option of emailing their “photo strip” to themselves and sharing it with others [83], as depicted in Fig. 16. Given the difficulty of implementing such approaches on mobile devices at the time of their creation, the “Strike a pose” and “Make a face” applications are both available as museum installations.
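The matching step behind this kind of face-to-artwork pairing can be sketched as a nearest-neighbour search over precomputed landmark vectors. Landmark extraction itself (e.g., with a library such as dlib or MediaPipe) is omitted; the 189-entry database size comes from the text above, while the function interface and the use of Euclidean distance are assumptions of this sketch.

```python
import numpy as np

def best_artwork_match(visitor_landmarks, artwork_landmarks, artwork_ids):
    """visitor_landmarks: (d,) vector; artwork_landmarks: (189, d) matrix of artwork faces."""
    diffs = artwork_landmarks - visitor_landmarks
    distances = np.linalg.norm(diffs, axis=1)     # distance to every artwork face
    best = int(np.argmin(distances))
    return artwork_ids[best], float(distances[best])
```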

Fig. 16
figure 16

Example of “Make a Face” use [83]

Although the two aforementioned methods do not provide a reward to the user, they are based on a social activity that not only stimulates the user's interest in the artwork, but also conveys the message to acquaintances and friends who may be intrigued first by the app and, more importantly, then by the work itself.

Gamification can lead to short-term motivation, but it may not last long. User-centred design, which states that “a game has to give experiences of competence, autonomy and relatedness to the players” [83], must be incorporated into the design process. These components of game design ought to make sense to the user and influence players’ perceptions in a favourable way, and they need to be connected to an event or an activity. Research indicates that the platform(s) used to involve viewers in serious games should be user-friendly and participatory; both elements lead to higher degrees of cognitive involvement. When using the platform, visitors' cognitive engagement increases, which improves learning outcomes and their overall visit satisfaction. As a result, there is a greater chance that guests will visit the communications museum again. These outcomes support the implementation of the gamification platform in the telecommunications museum; for this reason, it seems a promising effort that might be expanded upon and adapted for use in other types of museums or institutions that preserve cultural material.

Language as a tool for enhancing artworks fruition

As already mentioned, GLAM actors require new techniques and modern technologies to deploy and disseminate the amount of knowledge found in cultural material. With reference to digitized items, these must be able to tell a story to be valorised. This means that museums must be able to deliver insightful, detailed, and didactic content to both general and specialized audiences by providing content that can pique the interest of an increasingly digital audience and raise awareness of the patrimony of cultural institutions worldwide.

Most literature in this context share the idea that digital storytelling is the key approach to engage the visitors [84,85,86]. Digital storytelling, implying a creative use of digital (meta)data, and “narrative metadata” (specific descriptors related to the possible employment in specific scenarios), is a powerful tool to develop engaging “encounters” which satisfy these prospects (depending on the target audiences, the developed “encounters” will be customized for specific backgrounds and expectations). In many areas, what was formerly largely an authoritative voice addressing the public through publications and exhibition displays has drastically changed into a multifaceted experience that encourages engagement and conversation with visitors.

This holds true for both traditional and Virtual Museums (VM), if not more so. A VM is a digital entity that combines elements of a physical museum to enhance, augment, or supplement the museum experience. Virtual museums retain the authoritative status granted by the International Council of Museums (ICOM) in its definition of a museum, and they can function as the digital equivalent of a physical museum or as autonomous entities [87]. Like a traditional (physical) museum, a VM can be built around particular artifacts, like in an art museum or a museum of natural history, or it might be made up of online displays made from primary or secondary materials, like in a scientific museum. Additionally, a VM may be defined as a typical museum’s mobile or online services (e.g., exhibiting digital replicas of its collections or exhibitions). To create an engaging experience within a VM, digital storytelling is required to build multimedia strategies to involve new audiences without alienating regular visitors; therefore, this has been a direct reaction to a more diversified audience. With storytelling, museum curators will be able to build interactive and engaging experiences without worrying about technical or presentation issues, allowing them to focus on content creation.

From a technical point of view, digital storytelling requires several ICT-based components (a minimal scene-description sketch is given after this list):

  • an authoring platform, i.e., a web application where professional users find the necessary tools to create scenes, arrange the story narrative through these scenes, include content, and set the transitions between them.

  • A set of tools for managing different kinds of objects and services (annotated 3D models, 2D content, videos, maps, avatars, etc.).

  • A WYSIWYG (“What you see is what you get”) visual scene editor with a set of properties to configure its behaviour and look, like position, size, event listener, source and much more, depending on the characteristics of the artwork.

  • A database where information is stored (content repository).

  • A set of graphical user interfaces and/or devices for human–machine interaction.
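As a purely illustrative example of the kind of data such components could exchange, a scene might be described with a small declarative structure like the one below. Field names, asset paths, and event names are assumptions of this sketch, not an existing storytelling API.

```python
# Hypothetical scene description handled by an authoring platform and a
# WYSIWYG scene editor; all names and values are illustrative placeholders.
scene = {
    "id": "room_1_intro",
    "background": "assets/room_panorama.jpg",
    "objects": [
        {"type": "3d_model", "source": "assets/vase.glb",
         "position": [0.2, 0.0, 1.5], "annotation": "Attic red-figure vase"},
        {"type": "audio", "source": "assets/narration_en.mp3",
         "trigger": "on_enter"},
    ],
    "transitions": [
        {"event": "object_clicked", "target_scene": "room_1_detail"},
    ],
}
```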

While all these technical tools are easily implementable (even if considerable work is required to create the contents), interestingly only a few methods in the literature consider linguistic or language-based tools and methods [90]. Consequently, only a few works deal with the implementation of linguistic tools in museums and/or exhibitions.

From a more linguistic point of view, in an Italian context, the locution “virtual museum” was introduced around the 1980s, and is a polysemous expression that has taken on, over the years, “an increasingly elusive meaning, especially due to the considerable changes in the scenario produced by the continuous technological transformations of communication and information” [88].

This terminology refers first to “the virtual reconstruction (navigable or not, immersive or not) of a monument or a more or less extensive site” [89] which can be used on site to improve and implement the museum itinerary in terms of both reading and interpretation of the work and, more generally, of the narrative that one wants to convey within the museum.

Examples include the Domus Aurea, the ancient house of the Roman emperor Nero, of which only a small part located on the Colle Oppio, in the city of Rome, is still accessible to tourists. Placed inside the Parco Archeologico del Colosseo (https://colosseo.it/area/domus-aurea/), the Domus, thanks to the use of innovative multimedia technologies and interventions (e.g., video mapping, immersive reality, projections, virtual reality installations), can now be virtually visited by the user in almost its entirety. In this specific case, the creation of a virtual museum offers a cognitive and emotional contribution, corroborating the storytelling of the archaeological route through the virtual reconstruction of the historical site.

In this case there is no need for virtual or physical reconstruction of artworks, since the reproduced works are actually visible in physical museums (a different case would be if the virtual museum were to compensate for the loss or destruction of an object or an archaeological site), but there is however the function "of enabling the reading and interpretation of the works […]; a compromise that could be useful as a preparatory (or post-paratory) to viewing the real work, especially in the very frequent case where the museum hosting it lacks adequate tools for such functions" [91]. Again, storytelling plays a significant role in this situation; one may choose to employ a variety of narrative techniques to convey the history of the new museum rather than just arranging the pieces according to importance or chronological order. Instead, one could create educational and enjoyable pathways for the visitor. An example of such a virtual museum comes from the Italian marketing agency “Digital to Asia”, which, together with the Italian production company “Way Experience”, created a virtual museum containing the works of Leonardo da Vinci. The museum has been made available on the Chinese platform “Alipay”.

The concept of “website accompanied by the physical museum” is another definition of “virtual museum”. It is typically regarded as “a web site that shows all or a substantial part of the exhibition of the real museum, with more or less rich complements concerning the works and collections” [91]. The visitor can use such a museum type for one of two purposes: either to explore the museum in person or virtually while they are there (see, for example, https://www.museoegizio.it/scopri/tour-virtuali/), or to conduct research in the collection database (see, for example, https://collezioni.museoegizio.it). If the first objective is chosen, the virtual museum can support the narrative of the museum by providing access to relevant multimedia material.

Whatever the type of museum (traditional or virtual), it seems clear that storytelling is one of the main tools that the museum must adopt to engage the visitor. For storytelling to be effective, further consideration must be given to the language used, in addition to the creative design of the museum route and all elements of the exhibition and setting. Accordingly, the exhibition panels and captions serve as the gold standard for evaluating the level of care devoted to language in museums. However, at present, this textual typology has not been sufficiently considered by linguistic studies, not least because the drafting of captions and panels has often been entrusted in the first instance to museum experts, writers, and communication experts; in other words, the involvement of the linguist is almost always missing. Although the language used in captions and panels is frequently assessed, from a textual and linguistic standpoint this sort of material is often inconsistent: exhibition panels frequently make use of specialized terminology (without explanatory notes), present complex sentence structures (with heavy use of subordination), and lack a clear information hierarchy, to mention just a few textual and linguistic shortcomings. Further efforts are therefore required to improve these aspects.

Advances beyond the state of the art

The techniques and related technologies covered in the previous sections represent the most up-to-date state of the art in the field of geometry retrieval for CH and of 2D and 3D modelling. Based on the substantial body of work carried out by scholars worldwide, several potential future enhancements of these approaches can be proposed.

2D Imaging technologies

Digital pictures of gigapixel resolution can be challenging to record due to physical issues such as light diffraction, which acts as a barrier and restricts the sharpness that an optical device and a digital sensor can achieve. Additionally, advances in digital sensor resolution have already outpaced the optical resolution that lenses can deliver [92], which is bounded by light diffraction and is no longer comparable to the resolution of the best digital sensors available today. To exploit the sensor's effective resolution, the optical-sensor assembly would need to be far larger than it already is, which is unthinkable for the future development of traditional cameras.

Multi-shot panorama capture is a useful technique for overcoming diffraction limits and obtaining gigapixel pictures with conventional cameras. To create a higher-resolution image, picture-stitching software must be able to join many photographs taken from the same viewpoint with appropriate overlap between them. To provide a seamless stitch between images and eliminate parallax errors, a panoramic head is required to fix the optical centre (or no-parallax point) of the lens while rotating the camera to gather the various photographs that will make up the final image.
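A minimal sketch of the stitching step with OpenCV's high-level stitching pipeline is given below; the file names are placeholders, and in practice the shots must share the same no-parallax point and overlap sufficiently, as discussed above.

```python
import cv2

# Overlapping shots taken from the same no-parallax point (placeholder file names).
images = [cv2.imread(name) for name in ("shot_01.jpg", "shot_02.jpg", "shot_03.jpg")]

stitcher = cv2.Stitcher_create()            # high-level OpenCV stitching pipeline
status, panorama = stitcher.stitch(images)
if status == 0:                             # 0 corresponds to Stitcher::OK
    cv2.imwrite("stitched_panorama.jpg", panorama)
```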

Unfortunately, there are several issues with photographing medium-sized works of art because of the small depth of field that long focal length lenses offer.

Moreover, paintings on canvas that need to be restored and/or presented in digital museums are not flat, which introduces a further aspect to consider. Knowledge of the 3D structure of the paintings can therefore enrich the professional documentation of an artwork to be maintained in digital collections or museums.

Additionally, the substrate is rarely perfectly flat, and additional paint and varnish layers can affect the surface topography. Paint may have been employed to create a 3D impression, so the textured appearance can be intentional, or it may have resulted from drying, hardening, or degradation. Therefore, the creation of techniques to deal with high-resolution 3D scanning data of non-rigid objects, like paintings, is believed to be a crucial next step to enable the comparison of painting data over time. For instance, in [93], the surface topology of Girl with a Pearl Earring by Johannes Vermeer (about 1665) was captured using multi-scale optical coherence tomography, 3D scanning based on fringe-encoded stereo imagery (at two resolutions), and 3D digital microscopy. In [92], a portable, inexpensive device that can consistently acquire a canvas' 3D geometry is presented; it addresses the two main problems of cross-ratio approaches, namely that they require extensive manual point identification, which makes them slow and error-prone, and that the painting surface, especially for the paintings on canvas or wood panels that most often need to be maintained and/or documented for digital museums, is far from flat. In [92], 3D triangulation is used in conjunction with a pre-calibrated consumer single-lens reflex (SLR) camera to deliver high-precision 3D measurements of thousands of points on the surface. Using the same camera and laser used to triangulate the 3D scene, 2D data for the orthophoto can also be collected.

3D Imaging technologies and reverse engineering

Although the use of 3D technologies is becoming increasingly common in the CH field, 3D devices present some drawbacks and technological challenges that must be considered during their use. In detail:

  • When there is insufficient illumination, photogrammetry applications are limited by the difficulty of matching points between images with low contrast, especially for uniformly textured surfaces. Furthermore, in the presence of occlusions (e.g., canopy covers), measurement precision decreases, mostly due to light-ray blockage and the technique's inability to project its own light source.

  • Specular reflections and ambient light are the main difficulties for structured light systems. Low ambient light is required, since the volumetric 3D image is created using grey levels. Additionally, the light projector needs to be of excellent quality, because results can frequently be harmed by projector defocusing (unless binary coded patterns are used), and the projector must be calibrated, adjusted, and focused on the entire scene. It should also be noted that any vibration or movement can lead to 3D representations that are warped and have inaccurate geometric measurements. In the future, 3D reconstruction will require powerful processing units for complex calculations.

  • The sensor is one of the most limiting factors for speed and overall performance in laser-camera systems, and the acquisition system is heavily reliant on its properties. Additionally, the resolution of such systems is decreased by the inherent noise of laser speckle.

  • Due to their low cost and rising performance, RGB-D cameras are increasingly being used for 3D scanning. Although several studies [92] on the performance of RGB-D cameras suggested some reliability when using such tools as 3D scanners, they were not designed for this use. Because of their low resolution, this technology has fundamental limitations in the 3D acquisition of artworks. Future developments of this technology, led by Intel (which currently dominates the market), will enable more precise reconstructions.

Regarding the availability of datasets of 3D models, the information provided at this time is very inconsistent. Consumers frequently cannot tell the difference between 3D objects that can be directly manipulated and simple pictures or videos of 3D models.

Experts are therefore currently working to create better standards for data producers and aggregators. This would facilitate the accurate tagging of 3D content and encourage the availability of more useful 3D content for users to find and appreciate. Not by chance, a task force within Europeana is working to improve the availability of this content for use in research, education, and the creative industries, as well as the support for 3D cultural heritage.

The development of FAQs and recommendations for 3D creators and CH institutions about uploading 3D media online (both in the context of Europeana and in other databases) will be the focus of future research in this area. Future work must also address how 3D content can be linked and embedded, the techniques to identify suitable viewers, and the 3D media formats that should be included in collections.

Recently, 3D online representations of cultural items have been created using WebGL technology. 3DCOFORM, 3DICONS, and RE@CT are a few of the recent EU projects in this area to receive funding. For a variety of digital cultural artifacts, the 3DCOFORM project proposes and implements an open standard of 3D model annotation based on X3D. They performed demonstrations both live and online. The metadata of the 3D model can be used by the virtual navigation interface to concentrate on areas of the cultural artifact that are of interest. To build on the success of 3DCOFORM, the 3DICONS project digitized larger environments, such as structures and archaeological sites. Accurate 3D reconstruction and texture estimation of real-world models were the objectives of this investigation. The perspective camera, which can be controlled with the mouse and arrow keys, might be used to navigate the virtual models. By recording actor performance in 3D video, RE@CT presents a novel production method for developing interactive characters of filmic quality. The virtual navigation algorithms used in earlier cultural heritage initiatives had a critical flaw in that they did not incorporate cinematographic camera techniques and meticulously designed camera tracks. These navigational methods are currently present in certain virtual reality interfaces and computer games [94].

For data formats intended to store cultural assets, this has not been done yet. Future studies should therefore focus on finding a solution to this problem. To bridge the gap between entertainment, education, and scientific study, 3D hyperlink navigation and user communication in a virtual environment are further aspects to investigate [95].

The learning outcomes measured in the studies conducted so far on AR/VR techniques and devices differ, but they often involve knowledge acquisition, thinking abilities, affective outcomes, etc.

The authors in [96], for instance, discovered that students using AR achieved noticeably higher knowledge gains than those who did not, even when given little time for study at a science museum. According to some research, studying in museums with AR/VR support fosters higher-order thinking abilities in students, including inquiry [97] and critical and creative thinking [98]. Other studies have looked at the motivation and feelings of students. According to the authors in [27], wearable technology makes visitors feel more satisfied and enjoy their time, while also personalizing their educational experience. Furthermore, a multimodally enhanced museum environment can foster empathy by giving students the opportunity to experience different historical situations from a first-hand perspective.

Other research, however, discovered drawbacks to AR/VR-assisted museum education. For instance, the authors in [99] discovered that, compared with the conventional pen-and-paper learning approach, playing AR games at a science centre did not always enhance social interaction or learning performance. Furthermore, students reported feeling queasy and lightheaded after using VR equipment [100]. Certain devices, including head-mounted displays (HMDs), also caused physical discomfort to visitors [101]. Therefore, it is still necessary to synthesize and clarify whether, and to what degree, AR and VR applications can aid visitors in learning at museums.

Computer-based reconstruction in archaeology

The most recent investigations into computer-based techniques for pottery analysis and reconstruction highlighted the following areas for improvement, so that such techniques can become a regular procedure for organizing material for virtual museums.

  • Axis of symmetry: since in the case of coarsely discretized models with small angular spanning and high noise level (typical of damaged scanned sherds) none of the investigated methods always guarantees a reliable result, researchers' efforts will be addressed to developing strategies based on fuzzy combinations of several properties of axially symmetric surfaces.

  • Representative profile: to make the determination of the representative profile more robust, new methods could be based on choices made from the analysis of dimensional features and no longer on simple averages over several sections.

  • Features recognition: to significantly reduce the operator supervision in evaluating the results, the computer-based analysis method could be improved by implementing original strategies [102,103,104] to determine automatically, for each fragment, the corresponding required input parameters.

  • Dimensional features: from the available segmentation results, researchers' efforts will be addressed to develop new dimensional features, exploiting new pieces of knowledge that can be extracted from a discrete 3D model of the ceramic finds.

  • Mosaicking: considering the results available from the archaeological ceramics' segmentation and features evaluation methods, new functions to be minimized could be developed with the addition of new constraints, such as thicknesses and radius values at certain profile heights.

Additive manufacturing

As previously mentioned, there are many outstanding efforts already being implemented at museums dealing with 3D technology. There are, however, a few potential disadvantages to consider. Most prominently, there are problems with copyright and authenticity. The technology makes quick copies with ease, which implies a potential for the mass fabrication of copies, potentially “harming” the original. Not surprisingly, there is a great deal of public interest in how copyright law will handle and account for 3D printing technology. Even though the protection of the visual appearance of objects is a complex area of law, it appears evident that a 3D dataset produced from an artistic work still covered by copyright would be seen as a derivative work of the original and would therefore violate copyright.

However, there is an issue with how the 3D-printed object is classified. Since the dataset is based on an artwork, one could conclude that it satisfies the description of a sculpture in Section 4 of the CDPA 1988 [105]. However, the concept of a sculpture in legal proceedings has never been entirely clear, despite various judgements seeking to define it [106]. Furthermore, even if a museum may be entitled to hold the 3D dataset, it is expected to utilize the digital file to distribute information and use it for instruction in a public context. However, as Koller et al. [107] noted, a museum would also be wary, since it would lose control over how the object is shown, and there is a risk that the model might be stolen if it were made available online. It should also be considered that the materials used differ from the originals: although it might be a tremendous opportunity, 3D replicas of artifacts are typically created with less expensive materials, which means that they might not faithfully reproduce the original's essence.

The variety of AM materials and techniques is always expanding and changing. Systems that are especially “green” may become ubiquitous due to the drive for low-carbon manufacturing. On the other hand, the impact of the recyclability of RP materials on the long-term stability of the final product has received very little attention. Due to its biodegradability, which has a positive effect on the environment, poly(lactic acid) has become more and more popular in FDM® and SLS®. However, this might result in even more challenging conservation issues.

Overall, there is a lengthy list of advantages to employing this modern technology. It enables museums to open their holdings to a wider audience. Visitors who are blind or have vision problems can experience exhibits in new ways. Curators may have the chance to bring 3D models to retirement homes, hospitals, schools, and more, thus reaching people who might not otherwise visit the museum. Research, conservation, and teaching can all benefit from this technology. A museum collection can gain significantly from the use of 3D printing when combined with already available technologies and knowledge. However, there is still a long way to go before 3D printing technology is fully developed in this field. Its intimate ties to sculpture are influenced by a variety of factors, including print size and materials. Visitors can now get even closer to the treasures than before; however, a replica inevitably lacks part of the original's authenticity. Additionally, the costs and lead times for producing the 3D prototypes still need to be considered.

Future developments in this technology will make printed materials more widely available and accelerate the 3D printing process. Curators at museums will benefit from having more printed replicas available, which will aid in research, education, and conservation.

From 2D artworks to 3D models

The retrieval of 3D models from paintings is subject to some limitations, including those relating to tasks that cannot be completed automatically (such as scene segmentation) and those relating to how the scene is "interpreted" (for instance, self-occlusions in the painted scene result in missing information in the 3D reconstruction). Regarding scene segmentation, the rise of AI can help to automate this process. Thanks to their ability to autonomously extract high-level features from pictures, Artificial Neural Networks (ANNs) have achieved remarkable success in image segmentation problems in numerous fields in recent years. Convolutional Neural Networks (CNNs), among the most popular deep learning algorithms used today to segment medical images, are a good example of this.

In a recent work [108], just to cite an example, image segmentation is accomplished by means of such techniques with an accuracy higher than 96%. However, at present there are still few studies focused on the segmentation of pictorial scenes; therefore, more efforts on this topic are expected soon.
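A hedged sketch of how an off-the-shelf CNN segmentation model could be applied to a painted scene is given below. The pretrained weights target everyday photographic categories, so for pictorial scenes the model would normally be fine-tuned on annotated paintings; this snippet only illustrates the inference step, and the file name is a placeholder.

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(weights="DEFAULT").eval()   # older torchvision: pretrained=True
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("painting.jpg").convert("RGB")
with torch.no_grad():
    output = model(preprocess(image).unsqueeze(0))["out"]
mask = output.argmax(dim=1)[0]        # per-pixel class labels for the painted scene
```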

Current techniques for 3D (or 2.5D) reconstruction should be improved by creating an interactive and specialized tool capable of: (1) quick image segmentation; (2) modelling and positioning segmented objects in a consistent scene (for example, according to perspective, if any); (3) reconstructing hidden parts (with consistent geometry); and (4) providing colour and texture information for the 3D reconstructed surfaces. The creation of 3D scanning-based approaches to aid the exploration of bas-reliefs is also a crucial feature to consider, as stated in [109] and in [110]. In particular, [110] describes a first-attempt method to let blind persons (BP) explore bas-reliefs tactilely. Along with several strategies, algorithms, and details on the physical design, a thorough and consistent hardware architecture is presented in this work; the hardware configuration includes a hand-tracking system based on the Kinect® sensor and an audio device. In addition, a few design options are provided to analyse the benefits and drawbacks of the system, based on experimental testing related to the device location (see Fig. 17). Despite the good premises of this work, the scientific literature has not yet achieved this ambitious purpose, and not only because of technological difficulties. Helping a user (such as a blind person) explore an artwork requires gradual assistance in gathering information and organizing it into a "mental scheme" that becomes progressively complete and detailed; this assistance cannot be limited to a description of the artwork scene and/or of the touched areas.

Fig. 17
figure 17

Framework proposed in [110] for enhancing the tactile experience of artworks for visually impaired people

Therefore, as was also mentioned in the Section devoted to Storytelling, the development of a system capable of automatically providing spoken information about touched locations should be regarded as a step forward in this field.

Such an automatic verbal guide could increase the user's autonomy during the exploration, enabling him to take control of the experience and explore the artworks with greater freedom (e.g., automatically calculating the amount of time needed for a thorough appreciation, allowing the hands to move freely, pausing to think, etc.).

Watermarking

Recent works have examined the challenge of 3D watermarking for CH. The authors of [111] outline a customized approach to supply chain management and CAD model ownership using integrated signal processing and cryptography techniques. The described approach generates unique IDs for 3D works using frequency-domain transformations and non-fungible tokens (NFTs), which are permanent on open distributed ledgers. The NFTs have the dual functionality of (a) permitting ownership transfer and (b) verifying the owner of a CAD model; they are created using smart contracts on the Ethereum blockchain. The authors in [112] describe the development of a robust, semi-public, invisibly blind watermarking system to defend 3D models from data compression attacks. The idea was implemented using Autodesk's 3DS Max program, which generated a 3D polygonal mesh model. To produce the watermark on the mesh's vertices, a 3-bit sequence was embedded in the binary file of the mesh, in the chunk containing the vertices list and in its zero bits. The authors of [113] provide a non-blind watermarking approach for 3D point clouds. The method relies on the graph Fourier transform, a signal processing tool that can handle data distributed over domains of arbitrary shape. In this work, the bits are embedded in the colour information associated with each cloud point, rather than in the model's spatial coordinates as in earlier published studies on point cloud watermarking.
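The spirit of graph-Fourier-domain embedding can be conveyed with a simplified sketch, which is not the actual scheme of [113]: the colour signal of a small point cloud is projected onto the eigenbasis of the graph Laplacian of a k-nearest-neighbour graph, a few mid-band coefficients are perturbed to carry the bits, and the colours are transformed back. Function and parameter names are assumptions, and the full eigendecomposition used here is only practical for small clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def embed_spectral(points, colors, bits, strength=0.5, k=8):
    """points: (n, 3) coordinates; colors: (n, 3) colours; bits: short 0/1 sequence."""
    n = len(points)
    # Build a symmetric k-nearest-neighbour adjacency matrix
    _, idx = cKDTree(points).query(points, k=k + 1)
    W = np.zeros((n, n))
    for i, neighbours in enumerate(idx[:, 1:]):      # skip the point itself
        W[i, neighbours] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W                   # graph Laplacian
    _, U = np.linalg.eigh(L)                         # graph Fourier basis (columns)
    spectrum = U.T @ colors                          # forward transform of the colour signal
    start = n // 4                                   # mid-frequency band (assumption)
    for j, b in enumerate(bits):
        spectrum[start + j] += strength if b else -strength
    return U @ spectrum                              # watermarked colours
```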

A crucial future development will be the creation of more trustworthy algorithms that can merge the raw-data watermarking techniques now in use for 2D, 3D, audio, and video content into a blind and reliable watermarking scheme for annotated, multi-textured 3D models. By embedding watermarks into this new form of data, it will be feasible to guarantee the reliable distribution and publication of such models for everyone, from anywhere and at any time.

Artificial intelligence in the CH field

Multimodal neural networks like OpenAI CLIP [114], capable of dealing with multiple types of inputs such as text and images, are becoming increasingly used because of their capabilities in zero-shot tasks and their transferability to different domains. In [115] CLIP has been trained to perform fine-grained art classification, and in [116] a novel method based on CLIP and textual inversion has addressed the problem of zero-shot composed image retrieval, i.e., an application of content-based image retrieval where a user searches images based on a query image and some natural language text describing the desired changes to it.

The proposed approach eliminates the need for training datasets that are expensive to create, an issue that particularly affects the Cultural Heritage domain, which often lacks the large-scale datasets needed to research deep learning approaches. Visual Question Answering is a task that can be solved using multimodal data processing approaches, e.g., to create chatbots capable of answering questions regarding the visual content of the image of an artwork while considering contextual information, e.g., about the artist. To train deep learning systems capable of addressing this task, it is necessary to develop large datasets, like the recent VISCOUNTH dataset [117]. In this context, techniques like Large Language Models (LLMs), popularized by tools like ChatGPT, can be used, as recently shown in [118] and [119].

Gamification

Recent advancements in mobile computing, such as the introduction of neural network accelerators in mobile phones, coupled with the development of neural network architectures designed to run on such devices, make it possible to move gamification approaches from installations and workstations toward end users’ devices. This makes it possible to follow the Bring-Your-Own-Device (BYOD) approach also for this type of application. Examples of such applications are the Strike-a-pose and Face-fit applications presented in [120], which, differently from the applications presented in the Sect. "Gamification", are web-based and can also run on mobile phones. The first application challenges users to replicate a set of poses of artworks; once all the poses have been matched, the program enables the user to create a video, suitable for social sharing, that shows the matching procedure and the overall interactive experience as it took place at the museum. In the second application, users are asked to place their faces over some renowned artists' pictures, matching the same head pose and facial expression, creating a new image that may once more be shared on social media.

Language as a tool for enhancing artworks fruition

Recent research conducted by the Universities of Florence, Roma Tre, and La Sapienza has shown, through the measurement of psychophysiological and behavioural responses triggered by the reading of captions accompanying works of art, that people receive greater gratification, in terms of cognitive and emotional involvement, when the viewing of the work of art is accompanied by well-crafted explanatory texts [121].

Based on these expectations, the linguistic perspectives for the future envision a reformulation of the writing of captions and exhibition panels (for both tangible and intangible works) that could go in the direction of: (1) simplification strategies; (2) the creation of texts differentiated according to sociolinguistic parameters (for example, explanatory panels reserved for an expert audience or for a children's audience). The project, in fact, is part of the partnership PE5—CHANGES “Cultural Heritage Active Innovation for Next-Gen Sustainable Society”, Spoke 4 “Virtual Technologies for Museums and Art Collections”, which, among other activities, deals with the aforementioned topic with reference to the tangible and intangible cultural heritage of a number of Italian museums, and of which the working group of the University of Florence is a member, composed in particular of Prof. Giorgio Bacci, Prof. Marco Bertini, Prof. Marco Biffi, Prof. Rocco Furferi and researchers Kevin De Vecchis and Fabrizio Federici.

For now, it is worth mentioning that such an approach, with its emphasis on simplification and accessibility, both emerges in various studies (e.g., [122, 123]) and can be found in [124], which provides a set of guidelines for communication in museums with reference to internal signage, captions, and panels. If the term “incentive factors” is restricted to those relevant to linguistics (mostly connected to lexicon, syntax, and textuality), then the following aspects (quoted from [121], pp. 49 ff.) should be considered:

  • minimise the effort: reducing the number of words per paragraph (e.g., by dividing the text into small paragraphs or by using bulleted lists), associating text with graphs, diagrams, or pictures, placing text close to the object to which it refers, paying attention to the contrast between text and background, etc.;

  • start with the most important information, do not put it at the bottom;

  • express one idea per sentence and one theme per paragraph, whenever possible;

  • simplify the language: making the content easier for non-specialists to understand and defining technical terms when it is necessary to use them;

  • “entering” the theme to closely adhere to the topic at hand, avoiding side topics and overly complex ideas;

  • avoid information overload: limiting the number of captions and panels to be read in a room as a whole and the number of lines of text per medium;

  • avoid academic, formal, and impersonal writing style: adopt rather a conversational style.

Even more precisely:

  • use the same syntactic structure in the sentence as in spoken language;

  • express one main concept per sentence;

  • use about 45 characters per line, dividing the text into short paragraphs of 4–5 lines maximum;

  • ideally express the subject at the beginning of the phrase and use the active form of verbs;

  • avoid using redundant adverbs, complicated grammatical structures, and subordinate phrases;

  • prepare one or more alternative versions of the text, also in summarised form, suitable for several target audiences (e.g., a Braille version for the blind, an English version, etc.);

  • explain technical/specialist words (“i.e.”, “it means…”);

  • dissolve acronyms;

  • use symbols in a shrewd manner, providing interpretative keys to them;

  • translate foreign words into the language of the Country hosting the Museum;

  • avoid using abbreviations in the plural;

  • use nouns sparingly in ‘-ing’, preferring the use of the verb in the infinitive.

These strategies, some of which need to be revised (e.g., the point “use the same syntactic structure in the sentence as in spoken language” is not correct from a linguistic point of view), represent a fundamental starting point from which further linguistic guidelines can be developed (e.g., use of comprehensible terms and basic vocabulary words; use of glosses of specialized terms; attention to gendered language; textual differentiation according to audience etc.).

A further development in the direction of refining linguistic tools (one of the main goals of the research group PE5—CHANGES “Cultural Heritage Active Innovation for Next-Gen Sustainable Society”, Spoke 4) revolves around a broadening of the concept of Augmented Reality, in a free zone between the virtual and the physical museum. When considering AR, attention is mostly concentrated on the virtual completion of a physical reality: the virtual reconstruction of the missing portion of an archaeological artifact, the integration of a portion of the urban or rural landscape, the reconstruction at an archaeological site of what the urban situation must have been at a given time, etc. [125]. However, museum reality, or even guided tours in archaeological sites or cities of art, can also consist of the integration of other types of information, also extendable to virtual exhibitions or virtual urbanistic and geographic reconstructions.

Captions and panels in a museum can be complemented by a repository of in-depth material: texts, recordings, audio-visuals, databases, online language and encyclopaedic dictionaries, all of which can expand the content on display. The most intriguing prospects are those that go not only in the direction of expanding information but above all in the direction of diversifying content according to different target groups. It is indeed obvious that, when faced with a Greek vase, the interaction is different for a primary school child, for a science graduate, and for an archaeologist. This distinction is particularly crucial for linguistic content, for which geographic origin, educational attainment, age, and other sociolinguistic factors must be taken into consideration. Materials “augmenting” the exhibits in a museum, or associated with historical or architecturally significant buildings, can easily be made available to anybody with a mobile phone through a QR code, satisfying the actual demands of users, who will be able to access an information management system that takes their profiles into account. Interaction is of course even easier when moving from physical to virtual reality.
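
A minimal sketch of the kind of profile-aware delivery described above is given below: the QR code on a panel encodes only an exhibit identifier, and the information management system selects the text variant matching the visitor's declared profile. The exhibit identifier, the audience profiles, and the caption variants are hypothetical and serve only to illustrate the idea of diversifying content for different target groups.

```python
# Hypothetical content store: one exhibit, several audience-specific caption variants.
CONTENT = {
    "greek_vase_017": {
        "child": "This big vase was used to mix wine with water at parties.",
        "general": ("A red-figure krater (c. 440 BC) used to mix wine and water "
                    "during the symposium."),
        "specialist": ("Attic red-figure calyx-krater, c. 440 BC; attribution and "
                       "parallels are discussed in the catalogue entry."),
    },
}

def resolve_qr(exhibit_id: str, profile: str = "general") -> str:
    """Return the caption variant for a scanned exhibit, falling back to the general text."""
    variants = CONTENT.get(exhibit_id, {})
    # A real system would also handle the visitor's language and log the request
    # to refine their profile over time.
    return variants.get(profile) or variants.get("general", "No content available.")

print(resolve_qr("greek_vase_017", profile="child"))
print(resolve_qr("greek_vase_017", profile="specialist"))
```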

Much more could be done to calibrate the information according to the mood and attention level of users, which can be detected and measured with specialized sensors and targeted software [126,127,128], some of which are also applicable to remote users and thus to visitors of virtual exhibitions. This also makes it possible to calibrate the language variety used and to adjust its characteristics according to the reader's or listener's level of frustration. With the help of sensors and the profiling of users who have access to this kind of AR, information can also be managed in relation to specific disabilities [129].
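
The sketch below illustrates, under assumed thresholds and labels, how frustration and attention estimates coming from an emotion-recognition pipeline such as those surveyed in [126,127,128] could be mapped to different registers of the same caption; the numeric thresholds and the three text variants are invented for the example and are not taken from the cited works.

```python
# Hypothetical caption registers for a single artwork.
VARIANTS = {
    "plain": "This painting shows a storm at sea. The small boat is in danger.",
    "standard": ("The painting depicts a storm at sea; the fishing boat in the "
                 "foreground is about to capsize."),
    "rich": ("The canvas stages a tempest whose diagonal composition and livid palette "
             "dramatise the imminent capsizing of the fishing boat."),
}

def pick_variant(frustration: float, attention: float) -> str:
    """Map estimated frustration and attention (both in [0, 1]) to a text register."""
    if frustration > 0.6 or attention < 0.3:
        return VARIANTS["plain"]      # simplify when the visitor seems to struggle or drift
    if attention > 0.7 and frustration < 0.2:
        return VARIANTS["rich"]       # offer denser content to highly engaged visitors
    return VARIANTS["standard"]

# Example call with values that, in a real deployment, would come from the
# facial-expression or physiological sensing module.
print(pick_variant(frustration=0.7, attention=0.4))
```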

Conclusions

Galleries, libraries, archives, and museums are increasingly incorporating innovative methods and equipment to disseminate the wealth of information contained in cultural heritage. It is no coincidence that the scientific literature currently features numerous studies aimed at enhancing and expanding the experience provided by museums.

In addition to providing an overview of the major research on the use of innovative technologies, the current study also generated some ideas about how the state of the art is evolving as technology advances. The main objective of this work was to draw the interest of CH professionals towards contributions that may increase their understanding of technical solutions and, in turn, support the work of researchers active on this topic.

Given the enormous amount of research in the field, this work may be a useful starting point for practitioners and researchers to understand the primary technologies developed to date, even though it does not provide an in-depth survey of all the topics of interest. The authors also intend to stimulate an extensive debate on the topics discussed in this paper among the larger scientific community, so that modern technologies can be adopted in the cultural heritage sector more regularly. For these reasons, it would be interesting to continue this research in the future, also investigating potential applications of other innovative technologies that are not yet used in, or useful to, cultural heritage but may become so.

There are significant limitations in this work, especially considering the range of technologies reviewed. First of all, only the most recent technologies (and techniques) supporting the enhanced utilization of cultural assets are examined. For instance, when looking at additive manufacturing techniques, not all potential technologies have been thoroughly analysed; instead, a few global best practices have been considered. Another aspect that is only hinted at, but not studied in depth, relates to copyright issues in 3D printing. Naturally, copying is allowed in situations where permission is given explicitly or implicitly. Therefore, there would be no copyright infringement if the operator of the 3D printer had the copyright owner's explicit permission to print the design. However, this rarely applies to museums. Museums have the right not to allow three-dimensional copies of exhibited works, which means that any research in the field must inevitably follow the lead of the major institutions. While copyright issues are a major problem in 3D printing, as the industry grows, other important legal issues will also need to be resolved. Future questions include whether someone who uses a 3D printer to make an object will be liable for its manufacture and how industry-specific laws will relate to general-purpose 3D printing. As the business develops, more legal concerns will undoubtedly surface, just as in any other growing sector.

The same applies to AI-based technologies. Artificial Intelligence is a broad field of research that encompasses wide-ranging topics such as machine learning, deep learning, artificial neural networks, computational statistics, and machine vision. A comprehensive analysis of all these topics lies beyond the scope of the present work. It is preferable, in this instance as well, to concentrate on the most recent museum experiences, including the inventive ones developed by some of the paper's authors.

The ethical concerns surrounding the use of intelligent applications for museums, virtual collections, and visitor engagement are another topic that is merely touched upon in the study but not addressed in depth. AI, for instance, raises multiple risks and ethical dilemmas related to security, privacy, the workforce, and regulation. Additionally, the use of generative AI technology may give rise to a number of new business risks, including plagiarism, false information, copyright violations, and harmful material.

With regard to gamification and storytelling, the current review does not fully consider the social and educational effects of the approaches mentioned. It is the authors' opinion that the focus on these topics will grow significantly in the next decade. Effective impact storytelling combines data, evidence, and personal stories into powerful tools for promoting constructive change and inspiring people to support a museum's cause; therefore, social aspects cannot be neglected when designing such experiences.

In the near future, the authors will be working on projects primarily associated with the Italian PNRR (National Recovery and Resilience Plan), specifically in relation to Spoke 4 (“Virtual Technologies for Museums and Art Collections”) and PE5—CHANGES (“Cultural Heritage Active Innovation for Sustainable”). In particular, future work will address the implementation of ICT-based technologies for the virtualization of CH, using 3D scanning and modelling, additive manufacturing, AI, and linguistic tools. Therefore, most of the currently adopted technologies will be deployed. However, in the authors' opinion, this work will require a major effort to involve researchers from all over the world and to include a number of stakeholders who can contribute their experience. The future of museums lies in consistently improving the visitor experience while retaining the conceptual, artistic, and historical elements and simultaneously involving individuals of varied abilities, backgrounds, emotions, and interests.

Availability of data and materials

Not applicable.

References

  1. Global Museum Market (2023 Edition): Analysis By Source of Revenue, Museum Type (Art, History and Culture, Natural, Others), By Age Group, By Region, By Country: Market Insights and Forecast (2019–2029). https://www.researchandmarkets.com/report/museum. Accessed 6 Jan 2024.

  2. Shehade M, Stylianou-Lambert T. Virtual reality in museums: exploring the experiences of museum professionals. Appl Sci. 2020;10(11):4031. https://doi.org/10.3390/app10114031.

  3. Lester P. Is the virtual exhibition the natural successor to the physical? J Soc Arch. 2006;27(1):85–101. https://doi.org/10.1080/00039810600691304.

  4. Dattolo A, Luccio FL. Visualizing Personalized Views in Virtual Museum Tours, In: Conference on Human System Interactions, Krakow, Poland, 2008, pp. 109–114. https://doi.org/10.1109/HSI.2008.4581418. 2008.

  5. Cossairt OS, Miau D, Nayar SK. Gigapixel computational imaging. In: 2011 IEEE International Conference on Computational Photography (ICCP), pp. 1–8. https://doi.org/10.1109/ICCPHOT.2011.5753115. 2011.

  6. C2RMF. https://c2rmf.fr. Accessed 12 Jan 2022.

  7. Haltadefinizione. https://www.haltadefinizione.com. Accessed 21 Jan 2023.

  8. Madpixel. https://www.madpixel.es. Accessed 20 Nov 2023.

  9. Gruen A. Development and status of image matching in photogrammetry. Photogrammetric Record. 2012;27(137):36–57. https://doi.org/10.1111/j.1477-9730.2011.00671.x.

  10. Governi L, Furferi R, Volpe Y, Puggelli L, Vanni N. Tactile exploration of paintings: An interactive procedure for the reconstruction of 2.5D models. In: 2014 22nd Mediterranean Conference on Control and Automation, MED 2014 (pp. 14–19). https://doi.org/10.1109/MED.2014.6961319. 2014.

  11. Markevicus T, Olsson N, Carfagni M, Furferi R, Governi L, Puggelli L. IMAT project: From innovative nanotechnology to best practices in art conservation. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7616 LNCS). https://doi.org/10.1007/978-3-642-34234-9_83. 2012.

  12. Pintus R, Pal K, Yang Y, Weyrich T, Gobbetti E, Rushmeier H. A survey of geometric analysis in cultural heritage. Comput Graphics Forum. 2016;35(1):4–31. https://doi.org/10.1111/cgf.12668.

  13. Bimber O, Coriand F, Kleppe A, Bruns E, Zollmann S, Langlotz T. Superimposing pictorial artwork with projected imagery. In ACM SIGGRAPH 2006 Courses on SIGGRAPH ’06 (p. 10). New York, New York, USA: ACM Press. https://doi.org/10.1145/1185657.1185805. 2006.

  14. Wu X, Tang N, Liu B, Long Z. A novel high precise laser 3D profile scanning method with flexible calibration. Opt Lasers Eng. 2020;132: 105938. https://doi.org/10.1016/j.optlaseng.2019.105938.

  15. Xie Z, Xu K, Shan W, Liu L, Xiong Y, Huang H. Projective feature learning for 3D shapes with multi-view depth images. Comput Graphics Forum. 2015;34(7):1–11. https://doi.org/10.1111/cgf.12740.

  16. Sylaiou S, Mania K, Paliokas I, Pujol-Tost L, Killintzis V, Liarokapis F. Exploring the educational impact of diverse technologies in online virtual museums. Int J Arts Technol. 2017;10(1):58–84. https://doi.org/10.1504/IJART.2017.083907.

  17. Münster S, Friedrichs K, Hegel W. 3D reconstruction techniques as a cultural shift in art history? Int J Digital Art History. 2019;3:39–59.

  18. Muenster S. Digital 3D Technologies for humanities research and education: an overview. Appl Sci. 2022;12(5):2426. https://doi.org/10.3390/app12052426.

  19. Nicolae C, Nocerino E, Menna F, Remondino F. Photogrammetry applied to problematic artefacts. Int Arch Photogramm Remote Sens Spat Inf Sci. 2014;40:451–6.

  20. Kingsland K. Comparative analysis of digital photogrammetry software for cultural heritage. Digital Appl Archaeol Cultural Herit. 2020;18: e00157. https://doi.org/10.1016/j.daach.2020.e00157.

  21. Remondino F. Heritage recording and 3D modeling with photogrammetry and 3D scanning. Remote Sensing. 2011;3(6):1104–38. https://doi.org/10.3390/rs3061104.

  22. Barone S, Paoli A, Razionale AV. 3D virtual reconstructions of artworks by a multiview scanning process. In: 2012 18th International Conference on Virtual Systems and Multimedia (pp. 259–265). IEEE. https://doi.org/10.1109/VSMM.2012.6365933. 2012

  23. Inzerillo L, Santagati C. Crowdsourcing cultural heritage: from 3D modeling to the engagement of young generations. In: Digital Heritage. Progress in Cultural Heritage: Documentation, Preservation, and Protection: 6th International Conference, EuroMed 2016, Nicosia, Cyprus, October 31–November 5, 2016, Proceedings, Part I 6 (pp. 869–879). Springer International Publishing. 2016.

  24. Lee H, Jung TH, Tom DMC, Chung N. Experiencing immersive virtual reality in museums. Inf Manag. 2020;57(5):103229. https://doi.org/10.1016/j.im.2019.103229.

  25. Fineschi A, Pozzebon A. A 3D virtual tour of the Santa Maria della Scala Museum Complex in Siena, Italy, based on the use of Oculus Rift HMD. In: 2015 International Conference on 3D Imaging (IC3D) (pp. 1–5). IEEE. 2015; https://doi.org/10.1109/IC3D.2015.7391825.

  26. Chen C-A, Lai H-I. Application of augmented reality in museums – Factors influencing the learning motivation and effectiveness. Science Progress. 2021; 104 (3_suppl), 00368504211059045. https://doi.org/10.1177/00368504211059045

  27. Zhou Y, Chen J, Wang M. A meta-analytic review on incorporating virtual and augmented reality in museum learning. Educ Res Rev. 2022;36: 100454. https://doi.org/10.1016/j.edurev.2022.100454.

  28. Wen Y, Wu L, He S, Ng NHE, Teo BC, Looi CK, Cai Y. Integrating augmented reality into inquiry-based learning approach in primary science classrooms. Educ Technol Res Dev. 2023;71:1631–51. https://doi.org/10.1007/s11423-023-10235-y.

  29. Di Angelo L, Di Stefano P, Morabito AE. A robust method for axis identification. Precis Eng. 2015;39:194–203. https://doi.org/10.1016/j.precisioneng.2014.08.008.

  30. Di Angelo L, Di Stefano P. Axis estimation of thin-walled axially symmetric solids. Pattern Recogn Lett. 2018;106:47–52. https://doi.org/10.1016/j.patrec.2018.02.022.

  31. Di Angelo L, Di Stefano P, Pane C. An automatic method for pottery fragments analysis. Measurement. 2018;128:138–48. https://doi.org/10.1016/j.measurement.2018.06.008.

  32. Willis AR, Cooper DB. Computational reconstruction of ancient artefacts: from ruins to relics. IEEE Signal Process Mag. 2008;25(4):65–83. https://doi.org/10.1109/MSP.2008.923101.

  33. Biasotti S, Thompson EM, Spagnuolo M. Context-adaptive navigation of 3D model collections. Comput Graph. 2019;79:1–13. https://doi.org/10.1016/j.cag.2018.12.004.

  34. Pottmann H, Peternell M, Ravani B. An introduction to line geometry with applications. Comput Aided Des. 1999;31(1):3–16. https://doi.org/10.1016/S0010-4485(98)00076-1.

  35. Cao Y, Mumford D. Geometric structure estimation of axially symmetric pots from small fragments, In: Proceedings of Signal Processing, Pattern Recognition, and Applications, IASTED International Conference, June 25–28, 2002, Crete, Greece.

  36. Karasik A, Smilansky U. 3D Scanning technology as a standard tool for pottery analysis: practice and theory. J Archaeol Sci. 2008;35(5):1148–68. https://doi.org/10.1016/j.jas.2007.08.008.

  37. Han D, Hahn HS. Axis estimation and grouping of rotationally symmetric object segments. Pattern Recogn. 2014;47(1):296–312. https://doi.org/10.1016/j.patcog.2013.06.022.

  38. Rasheed NA, Nordin MJ. Reconstruction algorithm for archaeological fragments using slope features. ETRI J. 2020;42(3):420–32. https://doi.org/10.4218/etrij.2018-0461.

  39. Huang QX, Flöry S, Gelfand N, Hofer M, Pottmann H. Reassembling fractured objects by geometric matching. In ACM SIGGRAPH 2006 Papers, 2006; 569–578. https://doi.org/10.1145/1179352.1141925

  40. Kampel M, Sablatnig R. 3D Puzzling of Archaeological Fragments, In Proceedings of the 9th Computer vision Winter Workshop, 4–6 February 2004, Piran, Slovenia, 31–40. 2004.

  41. Wang J, Qian W, Liu H, Ji K. Quantitative analysis of pottery from the Tianma-Qucun site based on 3D scanning and computer technology. Archaeol Anthropol Sci. 2019;11(10):5645–56. https://doi.org/10.1007/s12520-019-00900-w.

  42. Hlavackova-Schindler K, Kampel M, Sablatnig, R. Fitting of a Closed Planar Curve Representing a Profile of an Archaeological Fragment, In: Proceedings of VAST 2001 Virtual Reality, Archeology, and Cultural Heritage, November 28–30 2001, Athens, Greece, 2001; 263–269. https://doi.org/10.1145/584993.585034

  43. Palmas G, Pietroni N, Cignoni P, Scopigno R. A computer-assisted constraint-based system for assembling fragmented objects. In: Proceedings of the 2013 Digital Heritage International Congress (DigitalHeritage), 28 Oct.-1 Nov. 2013, Marseille, France, 1: 529–536. https://doi.org/10.1109/DigitalHeritage.2013.6743793. 2013.

  44. Zheng SY, Huang RY, Wang Z, Li J. Reassembling 3d thin fragments of unknown geometry in cultural heritage. ISPRS Annal Photogramm Remote Sensing Spatial Inf Sci. 2014;2(5):393–9. https://doi.org/10.5194/isprsannals-II-5-393-2014.

  45. Willis AR, Cooper DB. Bayesian assembly of 3D axially symmetric shapes from fragments. In: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004. (Vol. 1, pp. I-I). IEEE. https://doi.org/10.1109/CVPR.2004.1315017. 2004.

  46. Stamatopoulos MI, Anagnostopoulos CN. 3D digital reassembling of archaeological ceramic pottery fragments based on their thickness profile. arXiv preprint arXiv:1601.05824. https://doi.org/10.48550/arXiv.1601.05824. 2016.

  47. Kotoula E. Semiautomatic fragments matching and virtual reconstruction: a case study on ceramics. Int J Conserv Sci. 2016;7(1):71–86.

  48. Rasiya G, Shukla A, Saran K. Additive manufacturing-a review. Mater Today Proceed. 2021;47(19):6896–901. https://doi.org/10.1016/j.matpr.2021.05.181.

  49. Colorado HA, Mendoza DE, Valencia FL. A combined strategy of additive manufacturing to support multidisciplinary education in arts, biology, and engineering. J Sci Educ Technol. 2021;30:58–73. https://doi.org/10.1007/s10956-020-09873-1.

  50. Coon C, Pretzel B, Lomax T, Strlič M. Preserving rapid prototypes: a review. Herit Sci. 2016;4(1):1–16. https://doi.org/10.1186/s40494-016-0097-y.

  51. Horry Y, Anjyo KI, Arai K. Tour into the picture: using a spidery mesh interface to make animation from a single image. In: Proceedings of the 24th annual conference on Computer graphics and interactive techniques (pp. 225–232). 1997.

  52. Hoiem D, Efros AA, Hebert M. Automatic photo pop-up. In: ACM SIGGRAPH 2005 Papers (pp. 577–584). 2005.

  53. Wu J, Martin RR, Rosin PL, Sun XF, Langbein FC, Lai YK, Liu YH. Making bas-reliefs from photographs of human faces. Comput Aided Des. 2013;45(3):671–82. https://doi.org/10.1016/j.cad.2012.11.002.

  54. To HT, Sohn BS. Bas-relief generation from face photograph based on facial feature enhancement. Multimedia Tools Appl. 2017;76:10407–23. https://doi.org/10.1007/s11042-016-3924-y.

  55. Zhang R, Tsai PS, Cryer JE, Shah M. Shape-from-shading: a survey. IEEE Trans Pattern Anal Mach Intell. 1999;21(8):690–706. https://doi.org/10.1109/34.784284.

  56. Governi L, Furferi R, Puggelli L, Volpe Y. Improving surface reconstruction in shape from shading using easy-to-set boundary conditions. Int J Comput Vision Robotics. 2013;3(3):225–47. https://doi.org/10.1504/IJCVR.2013.056041.

  57. Governi L, Carfagni M, Furferi R, Puggelli L, Volpe Y. Digital bas-relief design: a novel shape from shading-based method. Computer-Aided Design Appl. 2014;11(2):153–64. https://doi.org/10.1080/16864360.2014.846073.

  58. Carfagni M, Furferi R, Governi L, Volpe Y, Tennirelli G. Tactile representation of paintings: an early assessment of possible computer-based strategies. In: Progress in Cultural Heritage Preservation: 4th International Conference, EuroMed 2012, Limassol, Cyprus, October 29–November 3, 2012. Proceedings 4 (pp. 261–270). Springer Berlin Heidelberg. 2012.

  59. Volpe Y, Furferi R, Governi L, Tennirelli G. Computer-based methodologies for semi-automatic 3D model generation from paintings. Int J Comput Aided Eng Technol. 2014;6(1):88–112. https://doi.org/10.1504/IJCAET.2014.058012.

  60. Furferi R, Governi L, Volpe Y, Puggelli L, Vanni N, Carfagni M. From 2D to 2.5 D i.e., from painting to tactile model. Graph Models. 2014;76(6):706–23. https://doi.org/10.1016/j.gmod.2014.10.001.

  61. Koller D, Frischer B, Humphreys G. Research challenges for digital archives of 3D cultural heritage models. J Comput Cultural Herit. 2010;2(3):1–17. https://doi.org/10.1145/1658346.1658347.

  62. Shu J, Qi Y, Cai S, Shen X. A novel blind robust digital watermarking on 3d meshes. In: Second Workshop on Digital Media and its Application in Museum and Heritages (DMAMH 2007) (pp. 25–31). IEEE. 2007.

  63. Panchal UH, Srivastava R. A comprehensive survey on digital image watermarking techniques. In: 2015 Fifth International Conference on Communication Systems and Network Technologies (pp. 591–595). IEEE. 2015.

  64. Delmotte A, Tanaka K, Kubo H, Funatomi T, Mukaigawa Y. Blind watermarking for 3-d printed objects using surface norm distribution. In: 2018 Joint 7th International Conference on Informatics, Electronics & Vision (ICIEV) and 2018 2nd International Conference on Imaging, Vision & Pattern Recognition (icIVPR) (pp. 282–288). IEEE. 2018.

  65. Cabezos-Bernal PM, Rodriguez-Navarro P, Gil-Piqueras T. Documenting paintings with gigapixel photography. J Imaging. 2021;7(8):156. https://doi.org/10.3390/jimaging7080156.

  66. Pasikowska-Schnass M, Lim Y-S. European Parliamentary Research Service (EPRS), Members' Research Service, PE 747.120, April 2023.

  67. Baraldi L, Paci F, Serra G, Benini L, Cucchiara R. Gesture recognition using wearable vision sensors to enhance visitors’ museum experiences. IEEE Sens J. 2015;15(5):2705–14. https://doi.org/10.1109/JSEN.2015.2411994.

  68. Seidenari L, Baecchi C, Uricchio T, Ferracani A, Bertini M, Bimbo AD. Deep artwork detection and retrieval for automatic context-aware audio guides. ACM Trans Multimedia Comput Commun Appl. 2017;13:1.

  69. Del Chiaro R, Bagdanov AD, Del Bimbo A. Webly-supervised zero-shot learning for artwork instance recognition. Pattern Recogn Lett. 2019;128(1):420–6. https://doi.org/10.1016/j.patrec.2019.09.027.

  70. Baldrati A, Agnolucci L, Bertini M, Del Bimbo A. Zero-Shot Composed Image Retrieval with Textual Inversion. arXiv preprint arXiv:2303.15247. 2022; https://doi.org/10.48550/arXiv.2303.15247

  71. Feder T. Q&A: Robert Erdmann brings modern computation to centuries-old art, 2022. https://pubs.aip.org/physicstoday/online/29656/Q-038-A-Robert-Erdmann-brings-modern-computation.

  72. Gupta V, Sambyal N, Sharma A, Kumar P. Restoration of artwork using deep neural networks. Evol Syst. 2021;12:439–46. https://doi.org/10.1007/s12530-019-09303-7.

  73. Wan Z, Zhang B, Chen D, Zhang P, Wen F, Liao J. Old photo restoration via deep latent space translation. IEEE Trans Pattern Anal Mach Intell. 2022;45(2):2071–87. https://doi.org/10.1109/TPAMI.2022.3163183.

  74. Wan Z, Zhang B, Chen D, Liao J. Bringing old films back to life. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. (pp. 17694–17703).

  75. Agnolucci L, Galteri L, Bertini M, Del Bimbo A. Restoration of Analog Videos Using Swin-UNet. In: Proceedings of the 30th ACM International Conference on Multimedia. 2022. (pp. 6985–6987).

  76. González-Rodríguez MR, Díaz-Fernández MC, Gómez CP. Facial-expression recognition: an emergent approach to the measurement of tourist satisfaction through emotions. Telematics Inform. 2020;51: 101404. https://doi.org/10.1016/j.tele.2020.101404.

  77. Ragusa F, Furnari A, Battiato S, Signorello G, Farinella GM. Egocentric visitors localization in cultural sites. J Comput Cult Herit. 2019;12(2):1–19. https://doi.org/10.1145/3276772.

  78. Ragusa F, Furnari A, Battiato S, Signorello G, Farinella GM. EGO-CH: dataset and fundamental tasks for visitors behavioral understanding using egocentric vision. Pattern Recogn Lett. 2020;131:150–7. https://doi.org/10.1016/j.patrec.2019.12.016.

  79. Baecchi C, Ferracani A, Del Bimbo A. User profiling and context understanding for adaptive and personalised museum experiences. DigitCult-Scientific J Digital Cult. 2019;4(2):15–28. https://doi.org/10.4399/97888255301482.

  80. Cesaria F, Cucinelli AM, De Prezzo G, Spada I. Gamification in cultural heritage: a tangible user interface game for learning about local heritage. In: Kremers H, editor. Digital cultural heritage. Cham: Springer; 2020. https://doi.org/10.1007/978-3-030-15200-0_28.

  81. Bonacini E, Giaccone SC. Gamification and cultural institutions in cultural heritage promotion: a successful example from Italy. Cultural Trends. 2022;31(1):3–22. https://doi.org/10.1080/09548963.2021.1910490.

  82. https://www.clevelandart.org/artlens-gallery/artlens-exhibition, Accessed on 16/11/2023.

  83. Bozzelli G, Raia A, Ricciardi S, De Nino M, Barile N, Perrella M, Palombini A. An integrated VR/AR framework for user-centric interactive experience of cultural heritage: the arkaevision project. Digital Appl Archaeol Cultural Herit. 2019;15: e00124. https://doi.org/10.1016/j.daach.2019.e00124.

  84. Podara A, Giomelakis D, Nicolaou C, Matsiola M, Kotsakis R. Digital storytelling in cultural heritage: audience engagement in the interactive documentary new life. Sustainability. 2021;13(3):1193. https://doi.org/10.3390/su13031193.

  85. Lacet D, Van Zeller M, Martins P, Morgado L. Digital storytelling approaches in virtual museums: umbrella review of systematic reviews. J Digital Media Interact. 2022;5(13):23–44. https://doi.org/10.34624/jdmi.v5i13.29215.

  86. Sylaiou S, Dafiotis P. Storytelling in virtual museums: engaging a multitude of voices. In: Liarokapis F, Voulodimos A, Doulamis N, Doulamis A, editors. Visual computing for cultural heritage. Cham: Springer Series on Cultural Computing; 2020. https://doi.org/10.1007/978-3-030-37191-3_19.

  87. Agostino D, Arnaboldi M. From preservation to entertainment: accounting for the transformation of participation in Italian state museums. Account Hist. 2021;26(1):102–22. https://doi.org/10.1177/1032373220934893.

  88. Galluzzi P. “Museo virtuale” [Virtual Museum]. In: XXI secolo. Roma: Istituto della Enciclopedia Italiana. https://www.treccani.it/enciclopedia/museo-virtuale_%28XXI-Secolo%29/ (in Italian).

  89. Pietroni E, Adami A. Interacting with virtual reconstructions in museums: the etruscanning project. J Comput Cult Herit. 2014;7(2):9,1-29. https://doi.org/10.1145/2611375.

  90. Blunden JJ. The Language with Displayed Art(efacts): Linguistic and sociological perspectives on meaning, accessibility and knowledge-building in museum exhibitions (Doctoral dissertation), Faculty of Arts and Social Sciences. Sydney: University of Technology Sydney; 2016.

  91. Antinucci F, Comunicare nel museo [Communicating in the museum], Roma-Bari, Editori Laterza, 2014, in Italian.

  92. Carfagni M, Furferi R, Governi L, Santarelli C, Servi M, Uccheddu F, Volpe Y. Metrological and critical characterization of the Intel D415 stereo depth camera. Sensors. 2019;19(3):489. https://doi.org/10.3390/s19030489.

  93. Elkhuizen WS, Callewaert TW, Leonhardt E, Vandivere A, Song Y, Pont SC, Dik J. Comparison of three 3D scanning techniques for paintings, as applied to Vermeer’s ‘Girl with a Pearl Earring.’ Heritage Science. 2019;7(89):1–22. https://doi.org/10.1186/s40494-019-0331-5.

  94. Vlahakis V, Ioannidis M, Karigiannis J, Tsotros M, Gounaris M, Stricker D, Almeida L. Archeoguide: an augmented reality guide for archaeological sites. IEEE Comput Graphics Appl. 2002;22(5):52–60. https://doi.org/10.1109/MCG.2002.1028726.

  95. Gherardini F, Santachiara M, Leali F. 3D virtual reconstruction and augmented reality visualization of damaged stone sculptures. IOP Conf Ser Mater Sci Eng. 2018;364(1):012018. https://doi.org/10.1088/1757-899X/364/1/012018.

  96. Yoon S, Anderson E, Lin J, Elinich K. How augmented reality enables conceptual understanding of challenging science content. J Educ Technol Soc. 2017;20(1):156–68.

  97. Hsiao HS, Chang CS, Lin CY, Wang YZ. Weather observers: a manipulative augmented reality system for weather simulations at home, in the classroom, and at a museum. Interact Learn Environ. 2016;24(1):205–23. https://doi.org/10.1080/10494820.2013.834829.

  98. Guazzaroni G. Emotional mapping of the archaeologist game. Comput Hum Behav. 2013;29(2):335–44. https://doi.org/10.1016/j.chb.2012.06.008.

  99. Savela N, Oksanen A, Kaakinen M, Noreikis M, Xiao Y. Does augmented reality affect sociability, entertainment, and learning? A field experiment. Appl Sci. 2020;10(4):1392. https://doi.org/10.3390/app10041392.

  100. Al-khalifah A, McCrindle R. Student Perceptions of Virtual Reality as an Education Medium. In E. Pearson & P. Bohman (Eds.), Proceedings of ED-MEDIA 2006--World Conference on Educational Multimedia, Hypermedia & Telecommunications (pp. 2749–2756). Orlando, FL USA: Association for the Advancement of Computing in Education (AACE). Retrieved January 6, 2024 from https://www.learntechlib.org/primary/p/23395/.

  101. Hammady R, Ma M, Strathern C, Mohamad M. Design and development of a spatial mixed reality touring guide to the Egyptian museum. Multimedia Tools Appl. 2020;79:3465–94. https://doi.org/10.1007/s11042-019-08026-w.

  102. Di Angelo L, Di Stefano P, Guardiani E, Pane C. Automatic shape feature recognition for ceramic finds. J Comput Cult Herit. 2020;13(3):1–21. https://doi.org/10.1145/3386730.

  103. Di Angelo L, Di Stefano P, Pane C. Automatic dimensional characterisation of pottery. J Cult Herit. 2017;26:118–28. https://doi.org/10.1016/j.culher.2017.02.003.

  104. Di Angelo L, Di Stefano P, Morabito AE, Pane C. Measurement of constant radius geometric features in archaeological pottery. Measurement. 2018;124:138–46. https://doi.org/10.1016/j.measurement.2018.04.016.

  105. Dworkin G, Taylor RD. Blackstone’s guide to the copyright, designs and patents act 1988: the law of copyright and related rights. New York (NY), USA: Blackstone Press; 1989.

  106. Bradshaw S, Bowyer A, Haufe P. The intellectual property implications of low-cost 3D printing. ScriptEd. 2010;7(1):5. https://doi.org/10.2966/scrip.070110.5.

  107. Koller D, Turitzin M, Levoy M, Tarini M, Croccia G, Cignoni P, Scopigno R. Protected interactive 3D graphics via remote rendering. ACM Transactions on Graphics (TOG). 2004;23(3):695–703. https://doi.org/10.1145/1015706.1015782.

  108. Li Y. Optimization of artistic image segmentation algorithm based on feed forward neural network under complex background environment. J Environ Public Health. 2022;2022:9454344. https://doi.org/10.1155/2022/9454344.

  109. Di Stefano F, Chiappini S, Gorreja A, Balestra M, Pierdicca R. Mobile 3D scan LiDAR: a literature review. Geomat Nat Haz Risk. 2021;12(1):2387–429. https://doi.org/10.1080/19475705.2021.1964617.

  110. Buonamici F, Carfagni M, Furferi R, Governi L, Volpe Y. Are we ready to build a system for assisting blind people in tactile exploration of bas-reliefs? Sensors. 2016;16(9):1361. https://doi.org/10.3390/s16091361.

  111. Mouris D, Tsoutsos NG. NFTs for 3D models: sustaining ownership in industry 4.0. IEEE Consum Electron Mag. 2022;2022:1–4. https://doi.org/10.1109/MCE.2022.3164221.

  112. Wang JT, Yang WH, Wang PC, Chang YT. A novel chaos sequence based 3d fragile watermarking scheme. In: 2014 International Symposium on Computer, Consumer and Control, Taichung, Taiwan, 2014; 745–748, https://doi.org/10.1109/IS3C.2014.198.

  113. Ferreira FA, Lima JB. A robust 3D point cloud watermarking method based on the graph Fourier transform. Multimedia Tools Appl. 2020;79(3–4):1921–50. https://doi.org/10.1007/s11042-019-08296-4.

  114. Xu J, Hou J, Zhang Y, Feng R, Wang Y, Qiao Y, Xie W. Learning open-vocabulary semantic segmentation models from natural language supervision. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2935–2944). 2023.

  115. Conde MV, Turgutlu K. CLIP-Art: Contrastive pre-training for fine-grained art classification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 3956–3960). 2021.

  116. Baldrati A, Bertini M, Uricchio T, Del Bimbo A.2022. Exploiting CLIP-Based Multi-modal Approach for Artwork Classification and Retrieval. In: International Conference Florence Heri-Tech: The Future of Heritage Science and Technologies (pp. 140-149). Cham: Springer International Publishing

  117. Becattini F, Bongini P, Bulla L, Bimbo AD, Marinucci L, Mongiovì M, Presutti V. VISCOUNTH: a large-scale multilingual visual question answering dataset for cultural heritage. ACM Trans Multimedia Comput Commun Appl. 2023;19(6):193,1-2020. https://doi.org/10.1145/3590773.

  118. Bongini P, Becattini F, Del Bimbo A. Is GPT-3 all you need for visual question answering in cultural heritage? In: Karlinsky L, Michaeli T, Nishino K, editors. Computer vision—ECCV 2022 workshops. ECCV 2022. Lecture notes in computer science. Cham: Springer; 2023. https://doi.org/10.1007/978-3-031-25056-9_18.

  119. M Farella, G Chiazzese and GL Bosco, “Question Answering with BERT: designing a 3D virtual avatar for Cultural Heritage exploration,” 2022 IEEE 21st Mediterranean Electrotechnical Conference (MELECON), Palermo, Italy, 2022, pp. 770-774, doi: https://doi.org/10.1109/MELECON53508.2022.9843028

  120. Donadio, M. G., Principi, F., Ferracani, A., Bertini, M., Del Bimbo, A. (2022). Engaging Museum Visitors with Gamification of Body and Facial Expressions. In Proceedings of the 30th ACM International Conference on Multimedia (pp. 7000–7002). https://doi.org/10.1145/3503161.3547744

  121. Castellotti S, D’Agostino O, Mencarini A, Fabozzi M, Varano R, Mastandrea S, Baldriga I, Del Viva MM. Psychophysiological and behavioral responses to descriptive labels in modern art museums. PLoS ONE. 2023;18(5): e0284149. https://doi.org/10.1371/journal.pone.0284149.

  122. Direzione Generale Musei. Migliorare il racconto museale. Approfondimenti per la redazione di didascalie e pannelli [Improving the museum narrative. Insights for the drafting of captions and panels]. http://musei.beniculturali.it/wp-content/uploads/2019/07/Approfondimenti-per-la-redazione-di-didascalie-e-pannelli.pdf. Accessed 16 Nov 2023.

  123. Miglietta A.M., I pannelli esplicativi nei musei scientifici: alcuni spunti di riflessione [Explanatory panels in science museums: some food for thought], “Museologia Scientifica Memorie”, 8/2011: 107–110.

  124. Cristina Da Milano, Erminia Sciacchitano, Linee guida per la comunicazione nei musei: segnaletica interna, didascalie e pannelli [Guidelines for communication in museums: internal signage, captions and panels], Quaderni della valorizzazione, ns. 1, 2015, in Italian.

  125. Grosman L. Reaching the point of no return: the computational revolution in archaeology. Annu Rev Anthropol. 2016;45:129–45. https://doi.org/10.1146/annurev-anthro-102215-095946.

  126. Zhang W, Qiu F, Wang S, Zeng H, Zhang Z, An, R, Ding Y. (2022). Transformer-based multimodal information fusion for facial expression analysis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2428–2437).

  127. Chaudhari A, Bhatt C, Krishna A, Mazzeo PL. ViTFER: facial emotion recognition with vision transformers. Appl Syst Innov. 2022;5(4):80. https://doi.org/10.3390/asi5040080.

  128. Canal FZ, Müller TR, Matias JC, Scotton GG, de Sa Junior AR, Pozzebon E, Sobieranski AC. A survey on facial emotion recognition techniques: A state-of-the-art literature review. Inf Sci. 2022;582:593–617. https://doi.org/10.1016/j.ins.2021.10.005.

  129. Puggelli L, Furferi R, Governi L, Santarelli C, Volpe Y. ARTE–augmented readability tactile exploration: the tactile bas-relief of Piazza San Francesco painting. In: International Conference Florence Heri-Tech: The Future of Heritage Science and Technologies. Cham: Springer International Publishing; 2022. p. 113–26.

Funding

This work is supported by the Spoke 4 within the Italian National Research Programme (NRP)—PE05 (CHANGES), CUP: B53C22004010006. This work is also partially supported by the European Commission under European Horizon 2020 Programme, grant number 101004545—ReInHerit.

Author information

Contributions

RF and MB coordinated the work. RF wrote the Introduction, Sects. 1.2, 1.4, and 1.5, and Sects. "2D Imaging technologies", "3D Imaging technologies and Reverse Engineering", "Additive manufacturing", and "From 2D artworks to 3D models". LD wrote paragraphs 1.3 and 2.3. MB wrote Sects. "Artificial Intelligence in the CH field", "Gamification", and 2.8. MB and KD wrote Sect. "Language as a tool for enhancing artworks fruition". All authors wrote the conclusions. All authors revised the manuscript.

Corresponding author

Correspondence to Rocco Furferi.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Furferi, R., Di Angelo, L., Bertini, M. et al. Enhancing traditional museum fruition: current state and emerging tendencies. Herit Sci 12, 20 (2024). https://doi.org/10.1186/s40494-024-01139-y
