
Standardization of digitized heritage: a review of implementations of 3D in cultural heritage


The value of three-dimensional virtual objects has been proven in a great variety of applications, their flexibility allowing for a substantial range of uses. Cultural heritage has made use of such objects for many years already, and the number of users continues to grow as acquisition methods and implementations become more approachable. Nonetheless, there are still many apparent issues with exploiting the full benefits of 3D data in the field, ranging from lack of knowledge to missing infrastructure and incoherent workflows. This review aims to underline the current limitations in implementing 3D workflows for various cultural heritage purposes. 45 projects and institutions are reviewed, along with the most prominent guidelines for workflows and ways of implementing the 3D data on the web. We also cover how each project manages its data and makes it accessible to the public. Prominent and recurring issues with standardization, interoperability, and implementation are highlighted and scrutinized. The review concludes with a discussion of the current utilizations of 3D data for cultural heritage purposes, along with suggestions for future developments.


Three-dimensional capture of an object's shape and appearance has many applied and theorized uses, and current technology has made the acquisition of such data more approachable for both experts and novices than it used to be. By using different non-contact techniques one is able to capture the coordinates of different parts of an object in 3D space, which can be used to visualize the object in several different ways. This approach is applicable to any object size as long as one is able to collect images of good quality or maintain line-of-sight with the object during acquisition, and is extensively researched and applied in the medical field [1], construction [2], and indeed cultural heritage (CH). This review creates an overview and critical analysis of the current application and implementation of 3D data for CH objects of any kind, independent of purpose or approach.

Collection of 3D data can be done by a variety of methods using a variety of tools, and has been extensively explored and described in prior reviews [3, 4]. Although these methods follow many of the same principles, revolving around an imaging system and the post-processing of its data, differences in their specifications and functionality cause some to be better suited than others depending on the characteristics of the object or the demands of the project [5]. Additionally, the long post-processing stage necessary in any 3D workflow consists of many steps, each of which uses tools that might introduce alterations to the data to serve a specific purpose. The final results of a 3D documentation process are therefore equally dependent on the post-processing and application as on the acquisition. The resulting data can vary from point clouds to triangulated meshes to fully textured and optimized 3D models, depending on which step the producer regards as final.

Development of easy-to-use 3D data acquisition tools has led to the CH field adopting these methods more readily. A prime example is the subsequent increase in virtual museums [6], which are collections of 2D and 3D objects of various interests that can be accessed through electronic media [7]. As an application, visualization in such a way is perhaps the most obvious result of 3D data, and fulfils the objective of making the objects accessible to an online audience. The traditional museum approach is also familiar to audiences, and follows the ’see but don’t touch’ rule-set. Utilizing this approach brings the objects to the digital space, but makes limited use of the other opportunities provided by 3D models, for example the possibility of looking at cross-sections of the objects, or inspecting an object under different lighting conditions.

While the concept of virtual museums has been around for a long time, extensive work is still being done by the likes of the Virtual Multimodal Museum (VIMM)Footnote 1 and The European Museum AcademyFootnote 2 to aid in decision-making, development, and standardization of virtual museums in the GLAM sector (galleries, libraries, archives, museums) across the digital space. This is an attempt to keep up with the rapid development of 3D acquisition and visualization techniques, as well as the subsequent flood of data. Some prominent virtual museums are the Virtual Museums of Małopolska,Footnote 3 and Digitalt Museum.Footnote 4 Additionally, a few museums have taken the step towards providing digital viewing of their objects without developing a whole virtual museum platform: National Museums Scotland,Footnote 5 The Louvre Museum,Footnote 6 and The British MuseumFootnote 7 are some examples.

The virtual museum is perhaps the application which most equally represents elements from the traditional heritage conservation perspective and the computer science perspective, and museum researcher Suzanne Keene captures an important aspect of moving towards CH in the digital space with the quote: “We used to build collections of objects. Now we can make collections of information, too” [8]. While her comment regards several things, including metadata and semantics, 3D objects might be appended with additional information as well: cross-sections from X-ray data, annotations, and different textures are a few examples. But it is important to remember that 3D objects are not tangible objects in themselves, but rather visualizations of information about tangible objects. The visualizations of CH in virtual museums are only based on the information we are able to acquire in an acquisition process. Careful ethical considerations must then be made on how to visualize and communicate this to an audience, without introducing conjecture or misinterpretation in the presentation. This is perhaps especially true in the museum setting, but has equal relevance for any application of reality-based 3D methods. Heritage objects curated by museum organizations follow standardization guidelines provided by organizations like ICOM [9] and ICOMOS [10], and while work is being done on implementing 3D in virtual museums in a standardized way [11,12,13], there are still many gaps in knowledge and ethically justifiable workflows. As such, the virtual museum is an example of the general digitization process of the CH field, but the contemporary and state-of-the-art projects and implementations reviewed here often take several steps away from the traditional museum setting.

Application of 3D data of CH objects is only partially covered by museums, whether physical or virtual. Standalone projects, non-profit organizations, research institutions, production companies, and private persons contribute a significant portion of the 3D data that is available online. Furthermore, the data of this varied field is being used for many applications other than just digital viewing of the object, even though this is an obvious, popular, and easily implemented utilization. As this varied ensemble might not be bound to an institution or larger organization, they might not adhere to any sort of museum or heritage standards or regulations when digitizing CH, and are therefore free to do “what they want” in order to get the best results. What can be classified as the “best result” again depends on the application, and literature and projects often list the many platforms 3D data can be applied to.

Some recurring proposed and tested applications for 3D models of CH objects are: visualization and dissemination [14,15,16], simulation [17, 18], education and training [19,20,21], and research [22,23,24]. Additionally, audience interaction is a factor explored in implementations that feature gamification principles. At a glance, these are very broad suggested implementations that differ greatly in their context and content. Papers and projects that utilize 3D for CH often state that it might lead to new knowledge and insights [16], but this application is in most cases still in a very early phase. New developments in the utilization of 3D also give virtual reality (VR), augmented reality (AR), and mixed reality (MR) increased uses and applications in many fields, but these similarly lack designated guidelines for ethical implementation and requirements in terms of quality and semantics, especially for CH objects that are sensitive to conjecture.

Currently, a lot of work is focused on making the application of 3D data for CH more concrete, researching and developing both standards and workflows that should help institutions apply 3D to their own collections in a more uniform way. The previously mentioned reports and research papers agree that CH institutions still lack knowledge regarding 3D implementation and processing, and that closing this gap is required to make the full utilization of 3D for CH a reality.

There is, on the other hand, no shortage of research conducted on the use of 3D in the CH sector. New applications being investigated include conservation [25], change monitoring [26, 27], visualization [28], additive manufacturing [29, 30], BIM (Building Information Modeling), also known as HBIM (Heritage Building Information Modeling) [31, 32], and dissemination methods [33], just to mention a few. Recent investments and grants like CHANGE,Footnote 8 the n-Dame Heritage ERC project,Footnote 9 Data Service for Complex 3D Data in the Arts and Humanities,Footnote 10 JPI CH,Footnote 11 and PerceiveFootnote 12 also signify that this field will continue to grow in the future.

This paper reviews results from contemporary and state-of-the-art projects, and how they make use of 3D for CH purposes, to see if the initially imagined potential and value of 3D digitization has been met, surpassed, or limited. Additionally, we take a close look at the various frameworks in which 3D data is presented, and whether interoperability might be a concern between projects. Longevity and use of the end results are also scrutinized, in an attempt to evaluate the impact that these projects might have had based on the results they present. The reviewed projects, institutions, guidelines, and tools have been selected based on their recurring appearance in academic papers as well as their visibility when browsing for the subject online. We deem that since these projects are the most visible, they might also be the most influential for new projects in the future.

Section "Prior reviews and existing projects" presents various prior reviews on 3D CH data, and takes a close look at some of the most relevant selected projects. Workflow proposals for data acquisition and processing are presented and scrutinized in Sect. "Workflows for acquisition and processing of 3D data", and Sect. "Heterogeneous data and interoperability issues" presents some apparent issues with utilizing 3D CH data. In this section we also highlight a few options for solutions that are not as visible as the reviewed projects. A discussion of the presented data follows in Sect. "Discussion", before we summarize and provide conclusions in Sect. "Conclusion".

Prior reviews and existing projects

Previous reviews of applications of 3D data in the CH sector have mostly taken specific approaches, for example acquisition methods [3, 34,35,36], data fusion [37, 38], and documentation approaches [39,40,41]. Most of the papers in these reviews are very technology-oriented, and are most often concerned with the acquisition technology or post-processing of the data. As such, they primarily lean towards the discipline of computer science and the development of technology and theories to produce 3D data. This leaves a gap regarding the application of the end results, as the theorized utilization is rarely described after acquisition is completed. This trend is also found in many guidelines and proposed workflows for 3D and CH, which likewise mostly regard the acquisition of data.

One of the latest additions to these reviews is the final study report from the EU Horizon 2020 funded VIGIE 2020/654,Footnote 13 led by Cyprus University of Technology. This study has sought to map current formats and standards used for measuring the quality of current 3D digitization, and proposes different measures that should be implemented to ensure high quality of the resulting data. In summary, they again found that there is a great variety in approaches, tools, formats, and knowledge bases, highlighting the urgent need for standardization of workflows to increase interoperability of 3D data in the CH sector. This is similar to the conclusions of prior studies. Furthermore, they increasingly emphasized the evaluation of the quality of the 3D object, and how it relates to the complexity of the tangible object. Both ’quality’ and ’complexity’ are hard to define, but they interpret them as a measure of 15 different parameters. Quality is evaluated by: the materials of the object, structural health monitoring, 2D image data, 3D geometric data, texture, scale, and spectral characteristics. Complexity is evaluated by: team, environment, software, hardware, pre-processing, stakeholder’s requirements, object, and project. Note that each of these parameters has several sub-parameters.

As their selected parameters are not all numerically quantifiable, the final quality metric relies on a subjective evaluation. This would require expertise in both the tangible object and the 3D process to arrive at a good holistic evaluation, where interdisciplinary discussions would be essential. Variation in knowledge is a fundamental problem faced in this standardization process, as there is little common ground between CH and computer science to build on. This observation is also made in other investigations [42] and fields [35], which emphasize the same problem. A proposed app called DAPMS (Data Acquisition Process Management System) is under development as a result of the study, which could make it easier for other institutions to provide evaluations of the proposed parameters of the acquisition and processing of their 3D data.
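To illustrate why such a parameter-based evaluation remains subjective, the sketch below aggregates hypothetical expert scores over the 15 VIGIE-style parameters. The parameter names follow the study report, but the 1–5 scale, the flat averaging, and the function names are illustrative assumptions only, not part of DAPMS or any standard.

```python
# Hypothetical sketch: combining per-parameter expert scores into one
# holistic rating. The 1-5 scale and unweighted mean are assumptions.
QUALITY_PARAMS = ["materials", "structural_health", "2d_image_data",
                  "3d_geometric_data", "texture", "scale", "spectral"]
COMPLEXITY_PARAMS = ["team", "environment", "software", "hardware",
                     "pre_processing", "stakeholder_requirements",
                     "object", "project"]
ALL_PARAMS = QUALITY_PARAMS + COMPLEXITY_PARAMS  # 15 parameters in total

def holistic_score(scores):
    """Average expert scores (1-5) over all 15 parameters.

    Raises ValueError if a parameter is missing or out of range,
    mirroring the point that every parameter needs an explicit,
    human-provided evaluation before any aggregate is meaningful.
    """
    for param in ALL_PARAMS:
        value = scores.get(param)
        if value is None:
            raise ValueError(f"missing evaluation for '{param}'")
        if not 1 <= value <= 5:
            raise ValueError(f"score for '{param}' must be 1-5")
    return sum(scores[p] for p in ALL_PARAMS) / len(ALL_PARAMS)
```

Even with such an aggregation in place, each input number is still a subjective judgment, which is precisely the interdisciplinary bottleneck described above.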

Several prior studies have also used this approach: at the University of Novi Sad, Serbia, researchers sought to develop a system which assists in the selection of a 3D digitization method based on the object characteristics [43]. While their approach of describing the object and desired data does not fall under the same ’quality’ and ’complexity’ umbrella terms, the selected parameters are very similar. Similar parameters for method selection are also listed by Pavlidis et al. [40] in their “9-Criteria Table”, and Guillaume and Schenkel’s “Best Practice Checklists for 3D Museum Model Publication” [44] also emphasizes the complexity that these parameters introduce.

But while all these papers highlight aspects which would affect the quality of a 3D data acquisition process for CH, they provide little in terms of standards or quality evaluation tools for each of them. As previously mentioned, many of the parameters depend entirely on subjective evaluation or project objectives. Suggested tools therefore do not necessarily help the team collecting the data make objective decisions about the parameters, but rather list the parameters they would have to make subjective decisions about. And, as the nature of 3D data is alterable, the end result of any 3D project could vary greatly, even when utilizing similar methods. It is again also clear that most of the research regards the acquisition of 3D data, while the application of said data gets much less attention from a research perspective. Data is collected, the models are produced, and they are visualized using one of the many 3D viewers available. Rarely do papers tackle the issue of ’what happens next?’, leaving the application and storage of the data, which might be a cause of the interoperability issues in the first place, to individual institutions [45].

A better source for reviewing this issue is the many current 3D digitization projects for CH. These may or may not have connections to research, but are nonetheless cited as results of research methods. As with papers, their purpose and objective varies, resulting in great variation in what data is available. In an attempt to categorize some of the projects, we have made a distinction in this paper between what we regard as data collectors and data repositories out in the field.

Data collectors

Classified as data collectors are the projects which, in addition to visualizing CH data in 3D, also carry out their own data acquisition. Prominent, non-profit data collectors are CyArk,Footnote 14 ZamaniProject,Footnote 15 ScanTheWorld,Footnote 16 GlobalDigitalHeritage,Footnote 17 and Arc/K.Footnote 18 Some of these projects also develop applications for viewing the 3D data on their website, like CyArk’s narrated virtual tours and the Zamani Project’s plans and sections viewer. Note that these projects are primarily designed for audience viewing of the data, and do not necessarily contribute quantifiably relevant research data.

The Zamani Project is one of the longest-running CH digitization projects. It was founded in 2001, and posted its latest fieldwork in November 2021. Consisting of researchers from the University of Cape Town, its fundamental objective is the documentation of heritage sites, and the analysis, communication, and training that can be derived from this. At the time of writing, they have documented over 250 structures and sites across Africa and South-east Asia, and provide media ranging from 3D models, cross sections, and point clouds to GIS, available from UCT’s repository.Footnote 19 CyArk is another of the longer-running 3D projects for CH, having been founded in 2003 and worked at over 200 sites in over 40 countries. Its objective is similar to that of the Zamani Project, and it provides open access to all of its 3D data through Open Heritage 3D.Footnote 20 Both of these projects make use of well-known tools and methods to acquire their 3D data, with semi-transparency regarding tool specifications and post-processing stages. Regardless of object quality, the phenomenon of varying geometric resolution in their 3D data is clearly apparent when looking through their collections, not to mention the technological developments made in the last 20 years, which render some of their data clearly aged. Heritage objects belonging to the same heritage site are digitized differently by the two projects, as is apparent in 3D models of temples from the city of Bagan, Myanmar: “Eim Ya Kyaung” was modeled by CyArkFootnote 21 and “Temple 1085” by the Zamani Project.Footnote 22 Under inspection, both geometry and texture resolution are noticeably different between the objects. So while their objectives and acquisition tools are very similar, their end results and applications would be evaluated differently by an objective metric like those proposed by VIGIE 2020/654 and the University of Novi Sad [43].

In addition to non-profit organizations, there are also several production companies that make a business of generating 3D data, with a specialization in CH. Production companies that non-exclusively work with CH, like Rigsters,Footnote 23 RaizeNewMedia,Footnote 24 Calidos,Footnote 25 Quixel,Footnote 26 7Reasons,Footnote 27 and Overhead4D,Footnote 28 are prominent in their contribution to the 3D CH data that is available on the web. Private companies are traditionally more secretive with their acquisition tools and methods, which in many cases are also custom and proprietary. This might also cause issues in quality evaluation by other institutions if insufficient metadata and paradata are collected.

As another example of how 3D acquisition can be very different even when done by professionals, we will visually compare two models of the same object made by two other producers. The Al-Khazneh temple in Petra, Jordan has been modeled by both the company RaizeNewMediaFootnote 29 and the Zamani Project.Footnote 30 Both have posted their models on Sketchfab, a 3D viewer that allows for inspection of a model’s geometry, textures, and UV projections. A comparison of geometry can be seen in Fig. 1, and with color textures in Fig. 2.

Fig. 1 Left: ’Petra’ Model by RaizeNewMedia. Right: ’Petra’ Model by Zamani Project

Fig. 2 Left: ’Petra’ Model by RaizeNewMedia. Right: ’Petra’ Model by Zamani Project

A visual inspection of this example makes it hard to evaluate which model is of higher quality, as one seemingly features better geometry while the other features better colors. The Zamani Project utilized a laser scanner for their acquisition, while RaizeNewMedia does not mention what tools they use. The Zamani Project laser scan was part of the ’Siq Stability Project’Footnote 31 by the Italian Ministry of Foreign Affairs, used to monitor slope instability in the sandstone cliffs around the heritage site. The visualization we see is just a simplification of the raw data used for the monitoring, where color is a non-contributing factor to the project objectives. For this purpose, the 3D model is of high enough quality for its designated application. While RaizeNewMedia does not state its modeling objective, the model might have been made for a more general application. It might therefore be more flexible in its uses, but with less conservation or research merit. As such, we cannot evaluate the quality of 3D models for CH from a single metric, as the creation purpose channels the raw data into a specific form and produces different end results. The form and resolution of the data directly correlate with its data size and format, which in turn decide how interoperable it is. 3D data repositories and 3D viewers differ in what scales and formats of data they are able to process, and users must therefore make specific choices limiting what data they are able to include.

Data repositories

Different from the data collectors, we have the data repositories. Data repositories are applications, websites, databases, or institutions whose purpose is to store 3D data of CH objects collected by a variety of users. While some also produce data themselves, it is not their primary objective. Some of these repositories could in many ways be compared to virtual museums, where users can freely browse and inspect 3D models without downloading them locally, but without the designated museum approach. Others are purely databases for management and long-term data storage, and offer no online visualization whatsoever. Some examples of current data repositories are Google Arts and CultureFootnote 32, EuropeanaFootnote 33, Golden AgentsFootnote 34, TARAFootnote 35, Open ArchivesFootnote 36, Open Heritage 3DFootnote 37, AioliFootnote 38, NextcloudFootnote 39, Texas Data RepositoryFootnote 40, Smithsonian 3DFootnote 41, MorphoSourceFootnote 42, Historic Environment ScotlandFootnote 43, LOCKSSFootnote 44, REKREIFootnote 45 (formerly known as Project Mosul), and MorbaseFootnote 46.

Common among all of these is that they allow users from varied institutions to upload data to their platform. They are not necessarily aligned to a specific project, but some are limited to certain types of CH objects: MorphoSource primarily focuses on biological skeletal material, and Golden Agents focuses on items from the Dutch Golden Age. Few, if any, seem to feature any sort of quality control or accuracy requirement for uploading 3D CH data, perhaps due to the lack of universally agreed-upon quality standards for objects of different sizes. The resolution of the objects then depends on the data acquisition and processing of each individual object. They do, however, feature requirements for metadata, providing some parameters for evaluation for users who want to download or utilize the data for different purposes. Of the reviewed repositories, Europeana seems to be the most developed in terms of having requirements for making collection data available through their network. They have strict requirements for metadata formats and licensing status, but leave the quality assessment of the published data up to each institution. Common metadata standards are CARARE,Footnote 47 LIDO,Footnote 48 METS,Footnote 49 and EDM,Footnote 50 where EDM is specifically developed for uploading CH data, including 3D, to the Europeana platform. In terms of licensing of the data and availability for the audience, a review by McCarthy and Wallace [46] summarizes how many CH institutions implement the GLAM open access policies. Apart from metadata schemas and licensing, the infrastructures of the repositories are all very different.
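The kind of metadata gatekeeping described above can be sketched as a minimal pre-upload check. Note that this is an illustrative simplification only: the field names, the required set, and the license list below are our assumptions, not the actual EDM schema (which is RDF-based and far richer) or any repository's real validation rules.

```python
# Illustrative sketch of a repository-style metadata check.
# REQUIRED_FIELDS and OPEN_LICENSES are simplified assumptions,
# not taken from EDM, LIDO, CARARE, or METS.
REQUIRED_FIELDS = ["title", "creator", "rights", "format", "provider"]
OPEN_LICENSES = {"CC0", "CC BY", "CC BY-SA", "Public Domain"}

def validate_record(record):
    """Return a list of problems; an empty list means the record passes.

    Mirrors the pattern described in the text: metadata completeness and
    licensing status are checked, but the 3D data quality itself is not.
    """
    problems = [f"missing field: {field}" for field in REQUIRED_FIELDS
                if not record.get(field)]
    rights = record.get("rights")
    if rights and rights not in OPEN_LICENSES:
        problems.append(f"unrecognized licensing status: {rights}")
    return problems
```

A record with all required fields and an open license would pass, while one lacking a title or carrying a closed license would be flagged; nothing in such a check says anything about geometric or texture quality, which is exactly the gap noted above.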

In a survey done by the team at PURE3D in 2021 to understand the requirements for 3D web infrastructures among 3D CH data collectors, few respondents answered that they had a plan for preserving their data in the future other than uploading to one of these repositories [47]. Following the assumption that the application of acquired 3D data has not yet been properly developed in the field, this short-term view on the acquired data further hints at this limited utilization. Their survey concludes with a priority list for new and current data repositories and 3D viewers specifically designed for CH data, based on the responses to their survey. A great number of the desired features are familiar to any academic environment, such as ID generation for projects and objects, citation styles for 3D objects, and peer review for model uploads or the ability to filter by this. Implementation of such features would make it easier to distinguish between research data and commercial/creative data, and allow for a scientifically grounded approach to 3D CH on the web. Other desired research functions include measurement tools in the 3D viewers, multi-object viewing, and scripting implementation. This could permit a more objective evaluation of a repository’s content, and allow the quality of different 3D models to be reviewed against each other.

In Table 1 we provide an overview of the projects and institutions selected for this review, specifying their features and category as well as each project’s objective as stated on its website.

Table 1 Overview of reviewed Organizations and Projects

A prior review of workflows and documentation approaches proposes that one of the reasons for the current ’disorganization’ of 3D CH data is that the research method of 3D data collection for CH has not been recognized as academic in nature [48], but rather leans towards being a more creative and artistic application within the CH academic field. While computer graphics is a recognized and well-established discipline, the previously mentioned challenges of ethics, conjecture, and quality control apparent in CH digitization have yet to receive the same treatment. This might be due to the lack of specialized educational or research programs for the discipline, and its close relation to creative computer graphics through shared software and workflows. Approachable software and audience interest also generate a lot of publicly created data which blends together with the data originating from research, making distinctions hard without quantifiable methods and standardized parameters. While specialized 3D data from the research projects mentioned in Sects. "Introduction" and "Prior reviews and existing projects" are obviously scientific in nature, it might be hard for an uneducated eye to distinguish them without any evaluation tools. As such there is significant overlap between the two fields, but few specialized approaches to tackle the specific issues within this overlap. There are currently several institutions that exclusively produce 3D CH data from a research perspective; Visual Computing Lab,Footnote 51 3DOM,Footnote 52 Darklab,Footnote 53 Digital Heritage Lab,Footnote 54 and Cultlab 3DFootnote 55 are a few. While general and specialized research is important, these institutions could also increasingly move towards educated workflows that do not narrow their data to a specific research question, but rather apply their research to more universal data, similar to what is being developed by other CH institutions, so that it can be fairly scrutinized. Only when the data have some universal similarities can they be evaluated for quality by a standardized metric, but we immediately recognize the limitations this would put on the research that could be conducted.

There are a myriad of guides and workflows for processing 3D data the ’optimal way’, in relation to CH, research, entertainment, and general use. In the next section, we take a look at how different institutions propose to make a 3D object of high quality, and how they evaluate this.

Workflows for acquisition and processing of 3D data

Similar to the wide range of application purposes for 3D data, existing workflows for 3D data acquisition encompass a great number of uses. While some acquisition workflows are designed with CH in mind, they might not differ much from creative-purpose workflows. As such, most proposed workflows for CH revolve around data management, good practices, and general-purpose solutions. Examples of institutions that contribute guidelines for 3D workflows are The European Commission,Footnote 56 Riksantikvarieämbetet,Footnote 57 Federal Agencies Guidelines Initiative,Footnote 58 Europeana,Footnote 59 The London Charter,Footnote 60 Cultural Heritage Imaging,Footnote 61 and the International Image Interoperability Framework (IIIF) Consortium.Footnote 62 In Table 2 we provide an overview of the features each proposal addresses. This includes features like suggested tools for different objects, acquisition and management processes, documentation practices, quality evaluation, and final data implementation.

Table 2 Overview of reviewed guidelines and their content

As an example of acquisition guidelines, we will take a closer look at the method of photogrammetry. Photogrammetry is one of the most utilized methods for 3D model creation, and is approachable for novices as it does not require expensive hardware or software. Arc/K is a non-profit CH digitization organization that has produced a guide on how they do photogrammetry for CH,Footnote 63 listing some requirements and suggestions for technical specifications and imaging practices. But, as is the pattern with the reviewed projects, they do not mention post-processing or quality evaluation of the final model. Similarly, CIPA, the International Organization for Heritage Documentation, summarizes its photogrammetry guidelines in a single document called The 3x3 Rules.Footnote 64

Another guide to photogrammetry is made by the developers of the Unity game engine, who provide a more in-depth workflow from acquisition to implementation of the finalized 3D object.Footnote 65 Note that they specify that their guide is aimed at game development and the creation of a 3D asset meant to be visualized in real-time in a virtual environment. As such, it has limitations in its possible resolution in both geometry and texture. Such a distinction aids those responsible for the data acquisition in making decisions about their finalized object, as the application has certain restrictions. Arc/K provides no such distinctions, even within the field of CH. These are just two of the many tutorials, guidelines, and workflows suggested by the projects and institutions referenced in this review. This open-ended approach could be both a blessing and a curse, as unrestricted acquisition might yield high-quality results, but also leaves room for erroneous and disorganized realization. Recognizing that different applications of 3D data have different limitations could help guide standardization practices for different platforms, and distinguish between good- and bad-quality models in different categories. CH objects pose great and varied challenges where no single approach could be deemed superior, so a mixture of the mentioned restriction and freedom might be the best solution.
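One quantitative planning aid that photogrammetry guidelines commonly rely on is the ground sample distance (GSD): the real-world size covered by a single image pixel, computed with the standard pinhole-camera relation GSD = (object distance × physical pixel size) / focal length. The sketch below is a minimal illustration of that textbook formula; the example camera values are ours, not drawn from the Arc/K, CIPA, or Unity guides.

```python
# Ground sample distance for a pinhole-camera model:
# GSD = distance * (sensor_width / image_width) / focal_length.
# Useful at the planning stage to check whether a capture setup can
# resolve the level of detail a project requires.

def gsd_mm(distance_mm, focal_length_mm, sensor_width_mm, image_width_px):
    """Ground sample distance in mm per pixel."""
    pixel_size_mm = sensor_width_mm / image_width_px  # physical pixel pitch
    return distance_mm * pixel_size_mm / focal_length_mm

# Example (illustrative values): a full-frame sensor (36 mm wide,
# 6000 px across) with a 50 mm lens, photographing a facade from
# 10 m away, resolves 1.2 mm per pixel.
detail = gsd_mm(distance_mm=10_000, focal_length_mm=50,
                sensor_width_mm=36, image_width_px=6000)
```

A check like this makes explicit the kind of resolution decision that, as discussed above, guidelines often leave implicit: features smaller than the GSD simply cannot be captured from that distance with that camera.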

For CH objects with such varied complexities and characteristics, it is common to combine several 3D data acquisition methods to overcome the different challenges they pose [49]. Often, different segments of an object have different resolution requirements to be rendered accurately [50], or surface characteristics that require a specialized approach [51]. A single workflow might therefore not encompass solutions to all of these challenges. Additionally, small changes in a workflow might produce drastically different results. For example, the resolution selected during acquisition directly determines how much post-processing is needed later: very high-resolution captures require more post-processing to end up with a usable 3D object, while lower-resolution captures might yield inaccurate results with fewer opportunities for utilization. This resolution problem is intrinsic to the desired quality evaluation of 3D objects, but must also be weighed against factors like object size, surface characteristics, and data size. Psychovisual evaluation and image quality metrics have been tested to see whether they can be used for 3D model evaluation in different digital spaces, and to determine which factors render a 3D object visually realistic [52,53,54,55]. These experiments have yielded limited results, and would only contribute to subjective evaluation. Issues with some of these experiments might be the viewing conditions, as well as the lack of a specified purpose for the 3D objects. A 3D object might be judged to be of different quality at different viewing distances, and objects of low complexity might also be easier to visualize accurately with less geometry. Several automatic metrics also exist, but they target visual quality saliency [56,57,58,59,60,61,62].
As such, a simple increase in 3D resolution does not necessarily increase the quality of the 3D object, and might have little to no effect on either subjective or objective evaluations. Differences between subjective and objective evaluation of 3D objects are something that should be explored further.
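The gap between subjective and objective evaluation can be made concrete with a simple objective metric. The sketch below is our own illustration, not a method used by the reviewed projects: it computes a symmetric Hausdorff-style distance between two point sets, for example the vertices of a reference scan and of a processed model. It is brute force for clarity; real tools would use spatial indices such as k-d trees.

```python
import math

def nearest_dist(p, pts):
    """Distance from point p to its nearest neighbour in pts (brute force)."""
    return min(math.dist(p, q) for q in pts)

def symmetric_hausdorff(a, b):
    """Worst-case deviation between two point sets, in either direction.
    A common objective measure of geometric error between a reference
    scan and a processed 3D model."""
    return max(max(nearest_dist(p, b) for p in a),
               max(nearest_dist(q, a) for q in b))

# Toy example: the same four corner points, with a 0.1-unit
# displacement introduced on one vertex of the "processed" copy.
ref = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
scan = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0.1)]
print(round(symmetric_hausdorff(ref, scan), 6))  # → 0.1
```

A metric like this only quantifies geometric deviation; it says nothing about the perceptual quality discussed above, which is precisely why the two evaluation modes need to be studied together.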

The Virtual Museums of Małopolska, the British Museum, and National Museums Scotland have all published 3D objects of similar size from their collections on Sketchfab, but with significant differences in the visualized end result. Figure 3 shows wireframe renders of busts from each collection that have approximately the same real-life size, but whose 3D resolution is quite different. “The Bust of Róża Loewenfeld”Footnote 66 from the Virtual Museums of Małopolska features 60,000 triangles, “Antinous”Footnote 67 from the British Museum has 1.1 million, and the “Joseph Hume Marble Bust”Footnote 68 from National Museums Scotland has 3.8 million. Nonetheless, all three appear to be 3D objects of good visual quality when inspected on the 3D platform.

Fig. 3

Tessellation resolution difference for 3D objects of similar size

This perceptual variation in resolution highlights the significance of the 3D model's application, as different uses might have drastically different demands on the data. All of these busts might be suitable for the online viewing exemplified here, but they differ in their suitability for research on the objects' surfaces. In theory, the higher the resolution the better, but in practice high-resolution 3D objects are computationally heavy. Large quantities of data are challenging to both visualize and preserve, requiring substantial hardware infrastructure, especially for institutions not based in research or development. A new project called EUreka3DFootnote 69 seeks to alleviate this issue for smaller heritage institutions. There also exist methodological solutions for multi-resolution encoding [63] and progressive transmission of data [64], allowing users to view individual, segmented parts of the data at a time. Regardless, it is usually necessary to post-process the data to simplify and tailor it for a specific use. As such, the CH projects reviewed here rarely work with the raw data captured by a 3D acquisition process for long, and apply the previously mentioned alterations to create a more suitable result. They generally want to avoid cases where the necessary post-processing jeopardizes the high-accuracy data that was collected, which would render the high acquisition demands pointless. This issue can be tackled in a few different ways, and some proposed workflows recommend different approaches to retopology.
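How quickly resolution becomes a hardware problem can be shown with a back-of-the-envelope estimate, using the triangle counts of the three busts from Fig. 3. The per-vertex layout and vertex-to-triangle ratio below are common rules of thumb, not specifications taken from any of the reviewed platforms.

```python
def mesh_size_mb(triangles: int) -> float:
    """Rough GPU memory estimate for an indexed triangle mesh.
    Assumptions: ~1 vertex per 2 triangles (typical for closed meshes),
    8 x 32-bit floats per vertex (position, normal, UV), and
    3 x 32-bit indices per triangle. Texture maps come on top of this."""
    vertex_bytes = (triangles / 2) * 8 * 4
    index_bytes = triangles * 3 * 4
    return (vertex_bytes + index_bytes) / 1e6

# Triangle counts of the three busts discussed above (Fig. 3).
for name, tris in [("Bust of Róża Loewenfeld", 60_000),
                   ("Antinous", 1_100_000),
                   ("Joseph Hume Marble Bust", 3_800_000)]:
    print(f"{name}: ~{mesh_size_mb(tris):.1f} MB")
```

Under these assumptions the three busts require roughly 1.7, 31, and 106 MB for geometry alone, a spread of more than sixty-fold for objects of the same physical size, before any texture data is counted.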

Retopology encompasses changing a 3D object's polygonal structure, either to make it more suitable for a specific 3D environment or to reduce the number of polygons without removing too much of the object's captured shape. This is most commonly proposed to be done automatically by a mesh simplification algorithm, as that is fast and efficient, but manual retopology is also often mentioned as an approach [44, 65]. A review of mesh simplification algorithms by Cignoni, Montani, and Scopigno [66] highlights that different segments of 3D objects behave differently under different simplification algorithms: some approaches are better for rough surfaces, while others are better for hard angles. While both algorithmic simplification and manual retopology step away from the high-resolution data collected during acquisition, and as a result step further away from the ground truth, they have different benefits depending on the subsequent application of the 3D model. Mesh simplification simply decimates the object so it can be more easily rendered in 3D viewers, while manual retopology is a more time-consuming process that restructures the object in a specific way.
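A minimal illustration of the automatic route: the vertex-clustering sketch below snaps vertices to a coarse grid, merges those that land in the same cell, and drops faces that become degenerate. It is a crude stand-in for the decimation algorithms surveyed in [66], written for clarity rather than output quality.

```python
def cluster_simplify(vertices, faces, cell):
    """Simplify a triangle mesh by snapping vertices to a voxel grid of
    size `cell` and merging vertices that fall into the same cell."""
    cluster_of = {}   # grid cell -> new vertex index
    remap = []        # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cluster_of:
            cluster_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cluster_of[key])
    # Keep only faces whose three corners remain distinct after merging.
    new_faces = []
    for a, b, c in faces:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:
            new_faces.append((a, b, c))
    return new_vertices, new_faces

# Two near-coincident vertices collapse into one, removing one face.
verts = [(0, 0, 0), (0.01, 0, 0), (1, 0, 0), (0, 1, 0)]
faces = [(0, 1, 2), (0, 2, 3)]
v, f = cluster_simplify(verts, faces, cell=0.5)
print(len(v), len(f))  # → 3 1
```

Even this toy version shows the trade-off in the text above: the merged vertex is moved onto the grid, so every simplification step is also a step away from the measured ground truth.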

Additionally, texture acquisition and UV projection greatly affect a 3D model's perceived quality. If one only views a 3D object from a distance, one could get away with lower resolution in the geometry, texture, and UV projection. But this is rarely the case, as one of the advertised benefits of digitizing tangible objects is the possibility to look closely at the object's surface. UV seams and low texture resolutions become visible when zooming in, which undermines the benefits gained by zooming. The color and surface representation of 3D objects are both large fields of research, and we urge the reader to explore other reviews about color [67, 68], reflection models and BRDFs (Bidirectional Reflectance Distribution Functions) [69, 70], and UV projection [71, 72] in relation to the 3D topic covered here. Examples of the effects of these factors can be seen in Fig. 4.
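The relationship between texture resolution, UV layout, and close-up inspection can be quantified as texel density: the number of texture pixels available per unit of object surface. The sketch below is an illustrative calculation of our own, not a metric defined by the reviewed projects.

```python
import math

def tri_area_3d(p, q, r):
    """Area of a 3D triangle via the cross-product magnitude."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def tri_area_uv(p, q, r):
    """Absolute area of a 2D (UV-space) triangle."""
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def texel_density(tri3d, triuv, tex_size):
    """Texture pixels available per square unit of object surface."""
    texels = tri_area_uv(*triuv) * tex_size * tex_size
    return texels / tri_area_3d(*tri3d)

# A 1x1-unit surface triangle mapped onto half of a 4096 px texture:
tri3d = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
triuv = ((0, 0), (1, 0), (0, 1))
print(texel_density(tri3d, triuv, 4096))  # → 16777216.0, i.e. 4096² texels per unit²
```

Uneven texel density across the UV layout is one reason some regions of a model pixelate when zoomed while others stay sharp.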

Fig. 4

Left: No visible UV seams or pixelization. Middle: Visible UV seams and pixelization when zooming. Right: UV segmentation. Model: “Nile” by Rigsters

These are just a few of the issues that projects utilizing 3D have to make decisions about, and documentation of the selected approach and workflow is essential for evaluating the end result. But while documentation of these technical specifications and approaches is integral to a 3D project, in many cases it also drowns out the question of why the digitization was conducted in the first place. This observation was first reported by Pfarr-Harfst in 2016 [48], and while there have been improvements in contextualizing the data in contemporary research, many projects still suffer from the same issue.

In many projects the objective can be paraphrased as: “the 3D documentation of CH to provide open access for education, research, and audiences”. While noble, this is very open-ended and could be of limited use if aspects of the data are not of high enough quality for a certain application. The projects and workflows reviewed here may in some cases produce the data, but not the tools or platforms for which the data were created. Figure 5 highlights the importance of quality control for 3D models of CH, as some published data has apparent faults in its accuracy.

Fig. 5

Left: Image of a Lamassu from the British Museum. Right: Model of the same Lamassu by CyArk

In her review, Pfarr-Harfst proposed a documentation practice that covers the ‘prior’, ‘during’, and ‘subsequent’ situation of both the heritage object and the project data [48]. This might be an important step for the field to become more academically recognized, as it would allow a clearer move away from using CH as an object for computer graphics visualization and towards using computer science as a tool for CH preservation.

Current reviews of 3D implementation in CH highlight the necessity of quantifying the accuracy of 3D data in an objective, homogeneous, and semantic way, while the suggested workflows provide only subjective and indeterminate tools to do so. This divide is a cause for concern for the merit of 3D CH data, and future research should strive to close this gap. But the apparent and necessary variation in approaches, along with the great variation among the objects themselves, indicates that a single, standardized workflow for 3D in CH might be impractical.

Heterogeneous data and interoperability issues

Another possible reason for the data clutter in the 3D CH field is precisely this variation of objectives with limited specifying documentation. This ties back to the lack of long-term support and application of the collected data, and an overemphasis on acquisition methods relative to research questions. The result is heterogeneous data with limited interoperability and little contribution to areas other than computer graphics visualization, a problem not unique to the CH field. There is a significant semantic difference between using 3D for visualization and for research on the tangible objects, as one emphasizes observer perception and the other measures quantitative parameters. Various approaches will also weigh the parameters of an acquisition process differently, perhaps leading to specialized data that is not easily reused for other applications. While universal applicability may not be an objective for some research approaches, extensive restrictions on the workflow and post-processing might render the data or research results irreproducible in other environments. If that is the case, the legitimacy of the methodology might be jeopardized, as it would only be valid under very specific conditions.

Digital projects for CH that utilize 3D data will always be multimodal, weighing different data types according to the project objective. It is an intrinsic part of any 3D workflow that the data undergoes substantial change and travels through different software. Even projects that exclusively want to capture the geometry of objects, disregarding color and texture, still depend on intermediate data like 2D images or point clouds, meaning that there is no clear divide between 2D and 3D workflows. Maintaining the different modalities, even if they have no practical use in the current project, might lessen this heterogeneity issue. This ties back to Pfarr-Harfst's notion of documenting the prior, the current, and the subsequent in 3D digitization processes, and different data formats validating these stages could provide sufficient academic evidence of the results of a 3D processing stage for CH.

For example, while research has been conducted on reconstructing the missing shape of CH objects using Poisson reconstruction [28] and shape recognition [73], there is no way of validating how accurate such reconstructions are in reality. As long as that is the case, there might be little difference between using these methods and modeling by hand, apart from ethical or subjective considerations. A model that has been processed in such a way would also be unsuitable for suggested applications like change monitoring, where the introduced conjecture renders the 3D object ethically improper for ground-truth comparisons. Other application workflows, like 3D printing, use file formats that read the data in a way where a watertight 3D model is essential, in which case a hole-filling process is inevitable. ScanTheWorldFootnote 70 is a repository designed for sharing 3D data of cultural artifacts for the purpose of 3D printing, and therefore does not support objects or formats unsuitable for this task. Other applications might not have this demand, and while a 3D object with no holes might be more visually appealing, introducing algorithms to fill those holes makes conjecture unavoidable.
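The watertightness requirement of 3D-printing workflows can be checked mechanically: in a closed triangle mesh, every edge is shared by exactly two faces, while boundary edges of a hole appear only once. A minimal sketch of such a check on our own toy data:

```python
from collections import Counter

def is_watertight(faces):
    """True if every edge of the triangle mesh is shared by exactly two
    faces, i.e. the surface is closed with no holes."""
    edges = Counter()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges[tuple(sorted((u, v)))] += 1
    return all(count == 2 for count in edges.values())

# A tetrahedron is closed; removing one face opens a hole.
tetra = [(0, 1, 2), (0, 3, 1), (1, 3, 2), (2, 3, 0)]
print(is_watertight(tetra))       # → True
print(is_watertight(tetra[:3]))  # → False
```

A repository like ScanTheWorld could in principle run exactly this kind of test on upload; the point in the text stands, though: making a failing mesh pass requires hole filling, and hole filling introduces conjecture.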

The issue with the currently proposed standard parameters and workflows referenced in this review is that they are not quantifiable to the degree of being objective. As such, different researchers, producers, and curators will weigh them differently based on their needs, and a unified and reproducible metric for all forms of 3D data remains unachievable.

Standardized formats and their use

Specialized production also often results in data formats that support the primary objective of each specific data acquisition process, possibly limiting the interoperability or applied use of the approach for other means. A few papers and projects investigate the creation of evaluation datasets, but this approach has yet to see much development in the field. Using such tools, different approaches could be tested on the same object, which might give a better baseline evaluation of an applied methodology or processing technique.

Tools like the H3D dataset [74], released in 2021, exemplify how researchers and developers can test new processing workflows and quantitatively compare them with others who have used the same dataset. H3D is a UAV LiDAR dataset depicting the town of Hessigheim, Germany across several epochs, and data acquired by UAV LiDAR is often used for researching heritage sites and buildings [75,76,77]. The work has 32 citations at the time of writing, but note that it is not a collectively approved standard. Other examples of such datasets include CO3D [78] from Meta, HM3D [79], and LIBRE [80]. There is no comparable baseline for smaller objects: while the Stanford Bunny [81] has long been used for such applications, it has never been universally recognized as an evaluation tool, and its age is becoming apparent from a technological perspective, as modern acquisition methods can acquire 3D data at a higher rate and density. One issue with utilizing such baseline datasets for method evaluation is that they are unavoidably affected by the original acquisition method and tools, but for the sake of method testing this might be disregarded. The merit of 3D CH data is yet another discussion outside the scope of this review, but an interesting note is the difference between reality-based data, born-digital data, and processed reality-based data. Prior papers have investigated this [82,83,84,85,86], but there are still many ethical dilemmas with 3D CH to consider.

While the differences between the structures of file formats are also outside the scope of this review, the file format defines the readability of the data by different software and viewing platforms, and is therefore the first step in interoperability. The European Commission's latest report provides a comprehensive list of current 3D formats, rasters, and vectors, along with international standardization bodies.Footnote 71 We note the vast number of formats, and how many are listed as standards for different industries. Institutions currently working on the standardization of 3D CH data are the European Committee for Standardization,Footnote 72 the International Organization for Standardization,Footnote 73 and the Web3D Consortium.Footnote 74
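Why a format defines readability can be shown with the ubiquitous ASCII OBJ format, whose plain-text structure is one reason it remains a lowest common denominator for 3D exchange. The reader below is deliberately minimal, ignoring the normals, UVs, and material libraries that production loaders must handle; it only illustrates the parsing step that every interoperating tool must agree on.

```python
def read_obj(text):
    """Minimal reader for ASCII OBJ: extracts vertex positions ('v')
    and triangular faces ('f'), ignoring everything else."""
    vertices, faces = [], []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "v":
            vertices.append(tuple(float(x) for x in parts[1:4]))
        elif parts[0] == "f":
            # OBJ indices are 1-based and may carry /uv/normal suffixes.
            faces.append(tuple(int(p.split("/")[0]) - 1 for p in parts[1:4]))
    return vertices, faces

sample = """
# a single triangle
v 0 0 0
v 1 0 0
v 0 1 0
f 1/1/1 2/2/2 3/3/3
"""
v, f = read_obj(sample)
print(len(v), f[0])  # → 3 (0, 1, 2)
```

Every piece of information the format cannot carry, such as units, acquisition metadata, or color calibration, is silently lost at this boundary, which is exactly the interoperability problem discussed above.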

Digital platforms and APIs

Most of the data collectors and data repositories that feature web viewing of their content integrate third-party 3D viewers into their websites through a provided API, an intermediary software interface which allows different applications to “talk” to each other. The EU project 3D Icons [65] provides some considerations for choosing publishing platforms, building on results from the CARARE project [87]. Their evaluation mostly revolves around user friendliness, and the technical specifications for the data are very limited, much like the reviewed projects and workflows mentioned earlier. But there are several other ways to visualize 3D CH data digitally, ranging from JavaScript frameworks to game engines.

There are many digital services for hosting and visualizing 3D models, but not all of them should be counted as fit for visualizing CH research. Those often mentioned in previous reports as candidates for visualizing 3D CH data are Sketchfab,Footnote 75 Hexagon,Footnote 76 Configure One,Footnote 77 Atlatl,Footnote 78 Soft8Soft,Footnote 79 3D Cloud Marxent,Footnote 80 CanvasLogic,Footnote 81 Threekit,Footnote 82 ModelViewer,Footnote 83 p3d,Footnote 84 3DHOP [88], PoTree,Footnote 85 Exhibit,Footnote 86 Mozilla,Footnote 87 SayDuck,Footnote 88 Kompakkt,Footnote 89 GB3D,Footnote 90 Universal Viewer,Footnote 91 Smithsonian Voyager,Footnote 92 ADS 3D Viewer,Footnote 93 and the ATON framework [89]. Note that some of these were designed for a specific project and are therefore tailor-made for that project's requirements. Others are designed for specific formats of 3D data, like PoTree, which renders large point clouds, and Universal Viewer, which specifically supports 3D viewing of IIIF manifests. More importantly, many of these often-cited 3D viewers are designed for product visualization for commercial businesses, and are therefore neither designed for nor applicable to 3D CH data. Requirements like the ones mentioned earlier are either limited or non-existent, restricting viewing to extremely simplified 3D objects with restricted inspection. While Sketchfab leads in terms of users and uploads, and is indeed a popular upload platform for CH projects, it is also primarily designed for visualizing and interacting with simple 3D objects at a commercial/audience level. Like most of these viewers, it is not a platform servicing quantitative research approaches to the data it hosts. In Tables 3 and 4 we provide an overview of the features of the different 3D viewers. The covered features include general attributes like “Cost”, “PBR Rendering”, and “Object Statistics Inspection”, which reports object information like vertex and polygon counts.
Features more specifically relevant to the CH field, like “Measuring Tool” and “Peer Review”, are also covered. Authenticity also becomes an issue when visualizing creative and research-based data in the same viewer. This is apparent in the large collection of 3D objects classified as CH on the Sketchfab website, where many of the objects are custom-made replicas of a CH object.Footnote 94 This is an important categorical difference that the field needs to consider moving forward, to achieve the differentiation from computer graphics mentioned by Pfarr-Harfst. Standardization frameworks like IIIF might be a tool to enrich the functionality of viewing 3D objects on the web, and serve as an authentication factor for digital objects. But implementation of the framework is still limited for the moment, and many desired tools must still be implemented for it to be useful for research purposes. Nonetheless, it shows promise as a verifiable approach for visualizing research data in 3D viewers in the future.

Table 3 3D viewer features 1–12
Table 4 3D Viewer Features 13–24

However, some current viewers are specifically designed for more interaction and scientific approaches, like the ATON framework's multi-temporal visualization [89], Visual Media Service's RTI relighting features [90], and CHER-Ob's analysis tools [91]. 3D viewers like these should be adopted by the CH/computer science field to step away from commercial and creative environments like Sketchfab. Another research-based online 3D viewing platform is Virtual Interiors. A partner program led by the Huygens Institute for the History of the Netherlands, Virtual Interiors seeks to visualize interiors from the Dutch Golden Age based on historical data, which can then be used further in culture development and creative productions [92]. Developed with the BabylonJS JavaScript framework, the project seeks to create a platform for reading and visualizing Big Data on the web. Such frameworks provide blueprints for 3D implementation on the web, and are more customizable ways of integrating quantitative investigation of 3D CH data into a website than an API, albeit with the trade-off of longer development time and specialized implementation. Other JavaScript frameworks that could be used for similar purposes are Three.js, D3, A-Frame, Cannon.js, and PlayCanvas. These frameworks include some baseline 3D format loaders and have support for adding additional loaders as well, but note that not all 3D formats are supported.Footnote 95

Such frameworks are more flexible than online 3D viewers, as they allow institutions to develop custom tools for their own purposes while using universal formats. The Vasa Museum, for example, uses WebGL for its internal database Vasabas,Footnote 96 which includes built-in multi-temporal visualization. But such custom software may also make it harder to share content developed in-house, as the frameworks include many dependencies. Programming interfaces like OpenGL, HTML5, Mesa, and Vulkan provide similar tools. While these frameworks provide more flexibility than out-of-the-box 3D viewers, they still struggle with large-scale datasets and high-resolution 3D objects, of which there are many in 3D CH repositories. For these, more heavy-duty software might be required.

Game engines like Unreal Engine and Unity are tools for constructing larger-scale visualizations of 3D objects, and due to their industrial production purpose they feature some of the most effective and powerful tools for large-scale projects. In particular, one of the newer features of Unreal Engine, a geometry decimation system called Nanite,Footnote 97 shows great promise for visualizing high-resolution 3D objects in real time without too much performance loss. But utilizing such systems requires the software to be built within the game engine framework, which currently has limited out-of-the-box support for quantifiable 3D analysis of CH. It also limits connectivity to the web, restricting the software to files stored locally on the system. Still, both Unity and Unreal Engine have extended the possible applications of their frameworks beyond the video game industry. Implementations in the fields of architecture, automotive, virtual production,Footnote 98 and the metaverseFootnote 99 display the flexibility of these engines. Little then stands in the way of developing such tools towards 3D CH applications in the future, and foundations like the Linux FoundationFootnote 100 and the Open 3D FoundationFootnote 101 consist of large actors within the 3D field that work towards open-source developments.


Through this review, we have found that several of the originally theorized applications of 3D in CH have been explored in various ways, as different projects implement 3D for CH for education, dissemination, and simulation. Visualization still seems to be the primary result of many of these projects, either for presenting to an audience or for exploring different data acquisition methodologies. But in many ways the various projects remain fragmented and isolated from each other. Ad hoc solutions for implementation mirror the ad hoc acquisition workflows of data collectors, making it almost impossible to quantitatively compare two similar implementations by the same parameters. Evaluation of 3D project yields therefore remains very subjective in the CH field, as several root issues are yet to be tackled. In this review we have explored the most prominent and recurring issues with data acquisition, data storage, file formats, and standardization, along with 3D object quality assessment, workflow variation, data actualisation, and limited research focus within the field of 3D in cultural heritage. It is clear that while many great tools exist to aid and develop this process, several fundamental shortcomings remain. Numeric evaluation tools based on objective variables and statistics should be a priority for the field in the coming years, so that the variability of 3D can at least be quantified along its most recurring dimensions. The variables we deem most important for this are:

  1. Geometric accuracy and its alterations and reductions in a 3D process.

  2. 3D resolution levels utilized to digitize certain objects and surfaces, with distance measurements of what is captured within the resolution.

  3. Processing power and computer memory required to utilize 3D objects.

  4. Characterization of color and surface acquisition, and texture projection protocols.

Even though 3D can be used for a great variety of applications, and objects will as a result score significantly differently on such measurements, this would be a tool for categorizing and evaluating 3D objects according to the most important variables within each specific application. Subcategories would be created, narrowing down the idea of 3D objects and what they include depending on the utilization. This could potentially lead to better standardization practices within each subcategory, and avoid the pitfall of attempting to develop a one-tool-fits-all standard.
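As a thought experiment, the four variables above could travel with each published object as a small machine-readable record. The field names and units below are our own illustration, not part of any existing standard.

```python
from dataclasses import dataclass, asdict

@dataclass
class Quality3DRecord:
    """Illustrative record of the four quantitative variables proposed
    above; names and units are hypothetical, not a ratified schema."""
    geometric_accuracy_mm: float     # deviation from reference measurement
    resolution_points_per_mm: float  # sampling density on the surface
    gpu_memory_mb: float             # memory needed to render the object
    texture_resolution_px: int       # e.g. 4096 for a 4K texture map
    color_protocol: str              # e.g. "ICC-profiled, colour target"

record = Quality3DRecord(
    geometric_accuracy_mm=0.05,
    resolution_points_per_mm=10.0,
    gpu_memory_mb=106.0,
    texture_resolution_px=4096,
    color_protocol="ICC-profiled, colour target",
)
print(asdict(record)["texture_resolution_px"])  # → 4096
```

Serializing such a record alongside the 3D file would let repositories filter and compare objects by the quantitative dimensions a given application cares about, which is precisely the subcategorization argued for above.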

While institutions would perhaps be inclined to subscribe to agreed-upon standards, private enthusiasts of the general public might adhere to no such regulations. Public production is only set to increase if the current trend continues, so it is vital for the research field on 3D CH to separate 3D objects that artistically represent CH from research-based 3D that attempts to visualize, archive, and analyze the truthful presentation of tangible CH objects. Development of standards would have to tackle many different issues, as the heterogeneous and alterable nature of both CH and computer science will stretch and strain regulations in many different directions. First drafts would, and arguably should, therefore not encompass all variations, but attempt to establish some ground rules based on the most recurring characteristics. Standardization is nonetheless vital for research in the field, to be able to approach the data in a common, scientific, and quantifiable way. It would be the means to separate the generic visualization of computer graphics from the quantifiable research data of computer science, and elevate appearance acquisition and analysis for CH to a more concrete field. Variables of 3D implementations that are not part of an object's numerical evaluation, but which we suggest paying more attention to, are:

  1. Extent of human intervention in the digitization process, compared to purely algorithmic processing.

  2. Lifespan of the 3D object, both in terms of utilization and of quality compared to the current state of the art.

  3. Subsequent use of the 3D assets after initial acquisition and visualization.

  4. Semantic and objective descriptions of the 3D asset and its use, instead of generic and open-ended ones.

Even with these suggestions, the field still very much depends on many different actors from different backgrounds, whether for interdisciplinary research or not. Researchers in this field must be aware of the interdisciplinary, commercial, public, and non-profit developments being made, as public interaction with the research results is one of the primary objectives of 3D CH projects. As such, it would be beneficial to develop a workflow that does not narrow the 3D data acquisition to a specific research question or creative application, thereby making the data more universally relevant. Specialized training and education would allow for more nuanced and knowledgeable approaches, but the field is still too fragmented for such aspirations to emerge by themselves. A research gap concerning methods of validation and quality control of 3D objects remains prominent, especially in the CH sector, where the conservation of object appearance, both shape and surface, is the main aspiration. If this is not considered by future projects, the heterogeneity of the field is only set to increase.

Europeana’s controlled and organized approach to 3D data storage shows promise for a more officially recognized standard, and similar institutions provide valuable input that gradually yields more guidelines for 3D data collectors. But there is still a lack of mechanisms to verify that hosted objects are of high quality. The suggestion of peer review for uploaded 3D data is a very interesting notion that should be explored further, with the aim of developing a framework within which 3D models can be evaluated. Other quantitative approaches could also be taken, like relating the tangible object’s size or geometric variation to resolution requirements. We already have great quantities of 3D CH data available on the web, collected using various acquisition paradigms. Attempting to extract what research data we can from these pre-existing models would show us where they excel, where they fall short, and what characteristics are lacking for various research applications. Another promising project is the development of the European Collaborative Cloud for CH, which released its stakeholder survey in December 2022 [93]. This survey repeats much of what is noted in prior reports, and we hope that the development of this platform will take the shortcomings highlighted in this review into consideration.

One more segment seeing development is 3D viewing platforms. For a long time Sketchfab has reigned supreme as the 3D viewer of choice on the web, and while it offers some possibilities for 3D model inspection and provides a good API, we argue that it should not be deemed fit for hosting research-oriented 3D CH models beyond secondary-objective visualization. Some projects have opted for other 3D viewers that fit their format of 3D data, like PoTree, which come with their own limitations or restrictions. General shortcomings of 3D viewers seem to be universal tools and format support, as well as object quality validation and embedding of metadata. While simpler tools like annotation are often listed as high-priority features, and are indeed implemented in most 3D viewers, the field should move towards 3D viewers that allow more direct processing of the dataset. Open-source projects have been shown to provide good solutions for hosting 3D research data, and their transparency in development and code integration should be prioritized over proprietary, black-box viewers.

Challenges and opportunities are apparent in every stage of a 3D project, from project planning to data implementation, and CH provides various challenges that strain creative workflows. Substantial work is being done to improve many of these stages, but we have highlighted a few that are key to elevating the research field.


This review looks at different projects working with 3D CH data, including data collectors, data repositories, suggestions for standards and workflows, as well as viewing platforms and processing engines. We have highlighted some of the important developments in each segment, and proposed directions for where research should head in the future. There is no shortage of work being done, but we hope to see more in-depth application research in the future, as well as emphasis on research questions for 3D CH that extend beyond the acquisition itself.

The research field is interdisciplinary, and as such includes a great variety of competent institutions that could contribute to standardization agreements, research, and development activities. However, in many cases the approach one institution selects is incompatible with that of another, limiting the interoperability of the implementations and reducing the possibility of sharing data directly. Current work shows great promise in providing solutions to these issues, but much work remains to be done.

Availability of data and materials

The paper presents and details all data used for the review. Additional access to the data is available upon request from the authors.








































































































  1. Haleem Abid, Javaid Mohd. 3d scanning applications in medical field: a literature-based review. Clin Epidemio Global Health. 2019;7(2):199–210.

  2. Su YY, Hashash Youssef MA, Liu Liang Y. Integration of construction as-built data via laser scanning with geotechnical monitoring of urban excavation. J Const Eng Manage. 2006;132(12):1234–41.

  3. Daneshmand Morteza, Helmi Ahmed, Avots Egils, Noroozi Fatemeh, Alisinanoglu Fatih, Arslan Hasan Sait, Gorbova Jelena, Haamer Rain Eric, Ozcinar Cagri, Anbarjafari Gholamreza. 3d scanning: a comprehensive survey. arXiv. 2018.

  4. Remondino Fabio. Heritage recording and 3d modeling with photogrammetry and 3d scanning. Remote Sens. 2011;3(6):1104–38.

  5. Do Phuong Ngoc Binh, Nguyen Quoc Chi, A review of stereo-photogrammetry method for 3-d reconstruction in computer vision, in 2019 19th International Symposium on Communications and Information Technologies (ISCIT). IEEE. 2019:138–43.

  6. Styliani Sylaiou, Fotis Liarokapis, Kostas Kotsakis, Petros Patias. Virtual museums, a survey and some issues for consideration. J Cult Herit. 2009;10(4):520–8.

  7. Schweibenz Werner. The virtual museum: an overview of its origins, concepts, and terminology. Museum Rev. 2019;4(1):1–29.

  8. Keene Suzanne. Digital Collections: Museums and the Information Age. Butterworth-Heinemann; 1998.

  9. ICOM, “Standards,” Available from:, Accessed 30 May 2023.

  10. ICOMOS, “Charters Adopted by the General Assembly of ICOMOS,”, Accessed 30 May 2023.

  11. Robson Stuart, MacDonald Sally, Were Graeme, Hess Mona. 3d recording and museums. Digital Humanit Pract. 2012;1:91–115.

  12. Carvajal Daniel Alejandro Loaiza, Morita María Mercedes, Bilmes Gabriel Mario. Virtual museums. captured reality and 3d modeling. J Cult Herit. 2020;45:234–9.

  13. Ivan Apollonio Fabrizio, Fantini Filippo, Garagnani Simone, Gaiani Marco. A photogrammetry-based workflow for the accurate 3d construction and visualization of museums assets. Remote Sens. 2021;13(3):486.

  14. Verhoeven Geert J. Computer graphics meets image fusion: the power of texture baking to simultaneously visualise 3d surface features and colour. ISPRS Ann Photogram Remote Sens Spatial Inform Sci. 2017;4:295.

  15. Korytkowski Przemyslaw, Olejnik-Krugly Agnieszka. Precise capture of colors in cultural heritage digitization. Color Res Appl. 2017;42(3):333–6.

  16. Scopigno Roberto, Callieri Marco, Cignoni Paolo, Corsini Massimiliano, Dellepiane Matteo, Ponchio Federico, Ranzuglia Guido. 3d models for cultural heritage: beyond plain visualization. Computer. 2011;44(7):48–55.

  17. Wijnhoven Martijn A, Moskvin Aleksei, Moskvina Mariia. Testing archaeological mail armour in a virtual environment: 3rd century bc to 10th century ad. J Cult Herit. 2021;48:106–18.

  18. Perakis Panagiotis, Schellewald Christian, Kebremariam Kidane Fanta, Theoharis Theoharis, Simulating erosion on cultural heritage monuments, in Proceedings of the 20th international conference on cultural heritage and new technologies (CHNT20), 2015;.

  19. Nicoletta Di Blas, Caterina Poggi. 3d for cultural heritage and education: evaluating the impact, in Museums and the Web. Arch Museums Inform. 2006;2006:141–50.

  20. Liestøl Gunnar, Museums, artefacts and original cultural heritage sites. using augmented reality to bridge the gaps between indoors/outdoors and center/periphery in cultural heritage communication, MW20: MW, 2020;.

  21. Mortara Michela, Catalano Chiara. 3d virtual environments as effective learning contexts for cultural heritage. Italian J Educl Technol. 2018;26(2):5–21.

  22. Koller David, Frischer Bernard, Humphreys Greg. Research challenges for digital archives of 3d cultural heritage models. J Comput Cult Herit (JOCCH). 2010;2(3):1–17.

  23. Ioannides Marinos, Quak Ewald. 3d research challenges in cultural heritage. Lecture Notes Comp Sci. 2014;8355:151.

  24. Münster Sander, Pfarr-Harfst Mieke, Kuroczyński Piotr, Ioannides Marinos, 3D research challenges in cultural heritage II: How to manage data and knowledge related to interpretative digital 3D reconstructions of cultural heritage, pp. 32–46, Springer, 2016;.

  25. Tucci G, Bonora V, Conti A, Fiorini L. High-quality 3d models and their use in a cultural heritage conservation project. Int Arch Photogram Remote Sens Spat Inform Sci. 2017;42:687–93.

  26. Saha Sunita, Foryś Piotr, Martusewicz Jacek, Sitnik Robert, Approach to analysis the surface geometry change in cultural heritage objects, in International Conference on Image and Signal Processing. Springer, 2020;3–13.

  27. Papanikolaou Athanasia, Dzik-Kruszelnicka Dorota, Kujawinska Malgorzata. Spatio-temporal monitoring of humidity induced 3d displacements and strains in mounted and unmounted parchments. Herit Sci. 2022;10(1):1–25.

  28. Gregor Robert, Mavridis Pavlos, Wiltsche Albert, Schreck Tobias, A soft union based method for virtual restoration and 3d printing of cultural heritage objects, in Proceedings of the 14th Eurographics Workshop on Graphics and Cultural Heritage, 2016;43–52.

  29. Neumüller Moritz, Reichinger Andreas, Rist Florian, Kern Christian, 3d printing for cultural heritage: Preservation, accessibility, research and education, in 3D Research Challenges in Cultural Heritage, pp. 119–134. Springer, 2014.

  30. Martina Ballarin, Balletti C, Vernier P. Replicas in cultural heritage: 3d printing and the museum experience. Int Arch Photogram Remote Sens Spatial Inform Sci. 2018;42:55–62.

  31. Dore Conor, Murphy Maurice, Integration of historic building information modeling (hbim) and 3d gis for recording and managing cultural heritage sites, in 2012 18th International conference on virtual systems and multimedia. IEEE. 2012;369–76.

  32. Mandujano R, Maria G, Integration of historic building information modeling and valuation approaches for managing cultural heritage sites, in Proc. 27th Annual Conference of the International. Group for Lean Construction (IGLC), Pasquire C. and Hamzeh FR (ed.), Dublin, Ireland, 2019;1433–1444.

  33. Peinado-Santana Sara, Hernández-Lamas Patricia, Bernabéu-Larena Jorge, Cabau-Anchuelo Beatriz, Martín-Caro José Antonio. Public works heritage 3d model digitisation, optimisation and dissemination with free and open-source software and platforms and low-cost tools. Sustainability. 2021;13(23):13020.

  34. Aicardi Irene, Chiabrando Filiberto, Lingua Andrea Maria, Noardo Francesca. Recent trends in cultural heritage 3d survey: The photogrammetric computer vision approach. J Cult Herit. 2018;32:257–66.

  35. Bi ZM, Wang Lihui. Advances in 3d data acquisition and processing for industrial applications. Robot Comp Integr Manuf. 2010;26(5):403–13.

  36. Adamopoulos Efstathios, Rinaudo Fulvio, Ardissono Liliana. A critical comparison of 3d digitization techniques for heritage objects. ISPRS Int J Geo Inform. 2020;10(1):10.

  37. Magda Ramos M, Remondino Fabio. Data fusion in cultural heritage-a review. Int Arch Photogram Remote Sens Spat Inform Sci. 2015;40(5):359.

  38. Saiti Evdokia, Theoharis Theoharis. An application independent review of multimodal 3d registration methods. Comput Graph. 2020;91:153–78.

  39. Remondino Fabio, Rizzi Alessandro. Reality-based 3d documentation of natural and cultural heritage sites-techniques, problems, and examples. Appl Geom. 2010;2(3):85–100.

  40. Pavlidis George, Koutsoudis Anestis, Arnaoutoglou Fotis, Tsioukas Vassilios, Chamzas Christodoulos. Methods for 3d digitization of cultural heritage. J Cult Herit. 2007;8(1):93–8.

  41. Wachowiak Melvin J, Karas Basiliki Vicky. 3d scanning and replication for museum and cultural heritage applications. J Am Inst Conserv. 2009;48(2):141–58.

  42. Pillay Ruven, Picollo Marcello, Hardeberg Jon Yngve, George Sony. Evaluation of the data quality from a round-robin test of hyperspectral imaging systems. Sensors. 2020;20(14):3812.

  43. Budak Igor, Santosi Zeljko, Stojakovic Vesna, Korolija Crkvenjakov Daniela, Obradovic Ratko, Milosevic Mijodrag, Sokac Mario. Development of expert system for the selection of 3d digitization method in tangible cultural heritage. Tehnicki Vjesnik. 2019;26(3):837–44.

  44. Guillaume Henry-Louis, Schenkel Arnaud, Best practice checklists for 3d museum model publication, Proceedings of the International Conference on Cultural Heritage and New Technologies, 2021;159–170.

  45. Champion Erik, Rahaman Hafizur. Survey of 3d digital heritage repositories and platforms. Virtual Archaeol Rev. 2020;11(23):1–15.

  46. McCarthy D, Wallace A. Survey of GLAM open access policy and practice, Copyright Cortex. Retrieved January 18, 2020.

  47. Pure3D, “PURE3D Survey on 3D Web Infrastructures: Final Report,”, Accessed 21 Aug 2023.

  48. Pfarr-Harfst Mieke, Typical workflows, documentation approaches and principles of 3d digital reconstruction of cultural heritage, in 3D research challenges in cultural heritage II. Springer 2016;32–46.

  49. Alshawabkeh Yahya, Baik Ahmad, Miky Yehia. Integration of laser scanner and photogrammetry for heritage bim enhancement. ISPRS Int J Geo Inform. 2021;10(5):316.

  50. Alliez Pierre, Bergerot Laurent, Bernard Jean-François, Boust Clotilde, Bruseker George, Carboni Nicola, Chayani Mehdi, Dellepiane Matteo, Dell’Unto Nicolo, Dutailly Bruno, et al., Digital 3D Objects in Art and Humanities: challenges of creation, interoperability and preservation. White paper, Ph.D. thesis, European Commission; Horizon H2020 Projects, 2017;.

  51. Mathys Aurore, Brecko Jonathan, Van den Spiegel Didier, Semal Patrick. 3d and challenging materials, in 2015 Digital Heritage. IEEE. 2015;1:19–26.

  52. Rushmeier Holly E, Rogowitz Bernice E, Piatko Christine. Perceptual issues in substituting texture for geometry. Human Vision and Electronic Imaging V Spie. 2000;3959:372–83.

  53. Rogowitz Bernice E, Rushmeier Holly E. Are image quality metrics adequate to evaluate the quality of geometric objects? Human Vision Electronic Imaging VI SPIE. 2001;4299:340–8.

  54. Silva Samuel S, Ferreira Carlos, Madeira Joaquim, Santos Beatriz Sousa, Perceived quality of simplified polygonal meshes: Evaluation using observer studies., in SIACG, 2006;169–178.

  55. Thorn Jacob, Pizarro Rodrigo, Spanlang Bernhard, Bermell-Garcia Pablo, Gonzalez-Franco Mar, Assessing 3d scan quality through paired-comparisons psychophysics, in Proceedings of the 24th ACM international conference on Multimedia, 2016;147–151.

  56. Abouelaziz Ilyass, El Hassouni Mohammed, Cherifi Hocine, No-reference 3d mesh quality assessment based on dihedral angles model and support vector regression, in International Conference on Image and Signal Processing. Springer, 2016;369–377.

  57. Abouelaziz Ilyass, El Hassouni Mohammed, Cherifi Hocine, A curvature based method for blind mesh visual quality assessment using a general regression neural network, in 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS). IEEE, 2016;793–797.

  58. Karni Zachi, Gotsman Craig, Spectral compression of mesh geometry, in Proceedings of the 27th annual conference on Computer graphics and interactive techniques, 2000;279–286.

  59. Lavoué Guillaume, A multiscale metric for 3d mesh visual quality assessment, in Computer graphics forum. Wiley Online Library, 2011, number 5 in 30;1427–1437.

  60. Abouelaziz Ilyass, Chetouani Aladine, El Hassouni Mohammed, Latecki Longin Jan, Cherifi Hocine. 3d visual saliency and convolutional neural network for blind mesh quality assessment. Neural Comput Appl. 2020;32(21):16589–603.

  61. Wang Kai, Torkhani Fakhri, Montanvert Annick. A fast roughness-based approach to the assessment of 3d mesh visual quality. Comput Graph. 2012;36(7):808–18.

  62. Nouri Anass, Charrier Christophe, Lézoray Olivier, 3d blind mesh quality assessment index, in IS &T International Symposium on Electronic Imaging, 2017;.

  63. Gadelha Matheus, Wang Rui, Maji Subhransu, Multiresolution tree networks for 3d point cloud processing, in Proceedings of the European Conference on Computer Vision (ECCV), 2018;103–118.

  64. Royan Jérôme, Balter R, Bouville Christian, Hierarchical representation of virtual cities for progressive transmission over networks, in Third International Symposium on 3D Data Processing, Visualization, and Transmission (3DPVT’06). IEEE, 2006;432–439.

  65. D’Andrea Andrea, Niccolucci Franco, Bassett Sheena, Fernie Kate, 3d-icons: World heritage sites for europeana: Making complex 3d models available to everyone, in 2012 18th International Conference on Virtual Systems and Multimedia. IEEE, 2012;517–520.

  66. Cignoni Paolo, Montani Claudio, Scopigno Roberto. A comparison of mesh simplification algorithms. Comput Graph. 1998;22(1):37–54.

  67. Gaiani Marco, Apollonio Fabrizio Ivan, Ballabeni Andrea. Cultural and architectural heritage conservation and restoration: which colour? Colorat Technol. 2021;137(1):44–55.

  68. Molada-Tebar A, Marqués-Mateu Á, Lerma JL. Correct use of color for cultural heritage documentation. ISPRS Ann Photogram Remote Sens Spatial Inform Sciences. 2019;4:107–13.

  69. Guarnera Darya, Claudio Guarnera Giuseppe, Abhijeet Ghosh, Cornelia Denk, Mashhuda Glencross. Brdf representation and acquisition. Comp Graph Forum. 2016;35:625–50.

  70. Montes Rosana, Ureña Carlos, An overview of brdf models, University of Granada, Technical Report LSI-2012, 2012;1:19.

  71. Robleda Prieto G, Caroti Gabriella, Martínez-Espejo Zaragoza Isabel, Piemonte Andrea, et al. Computational vision in uv-mapping of textured meshes coming from photogrammetric recovery: unwrapping frescoed vaults. Int Arch Photogram Remote Sens Spat Inform Sci. 2016;41:391–8.

  72. Poranne Roi, Tarini Marco, Huber Sandro, Panozzo Daniele, Sorkine-Hornung Olga. Autocuts: simultaneous distortion and cut optimization for uv mapping. ACM Trans Graph (TOG). 2017;36(6):1–11.

  73. Pratikakis Ioannis, Savelonas Michalis A, Mavridis Pavlos, Papaioannou Georgios, Sfikas Konstantinos, Arnaoutoglou Fotis, Rieke-Zapp Dirk. Predictive digitisation of cultural heritage objects. Multimedia Tools Appl. 2018;77:12991–3021.

  74. Kölle Michael, Laupheimer Dominik, Schmohl Stefan, Haala Norbert, Rottensteiner Franz, Wegner Jan Dirk, Ledoux Hugo, The hessigheim 3d (h3d) benchmark on semantic segmentation of high-resolution 3d point clouds and textured meshes from uav lidar and multi-view-stereo, ISPRS Open Journal of Photogrammetry and Remote Sensing, 2021;1:11.

  75. Themistocleous Kyriacos, Ioannides Marinos, Agapiou Athos, Hadjimitsis Diofantos G, The methodology of documenting cultural heritage sites using photogrammetry, uav, and 3d printing techniques: the case study of asinou church in cyprus, in Third International Conference on Remote Sensing and Geoinformation of the Environment (RSCy2015). SPIE, 2015, 9535:312–318.

  76. Dominique Meyer, Elioth Fraijo, Eric Lo, Dominique Rissolo, Falko Kuester. Optimizing uav systems for rapid survey and reconstruction of large scale cultural heritage sites, in 2015 Digital Heritage. IEEE. 2015;1:151–4.

  77. Poirier Nicolas, Baleux François, Calastrenc Carine. The mapping of forested archaeological sites using uav lidar. a feedback from a south-west france experiment in settlement and landscape archaeology. Archeol Numeriques. 2020;4(2):1–24.

  78. Reizenstein Jeremy, Shapovalov Roman, Henzler Philipp, Sbordone Luca, Labatut Patrick, Novotny David, Common objects in 3d: Large-scale learning and evaluation of real-life 3d category reconstruction, in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021;10901–10911.

  79. Ramakrishnan Santhosh K, Aaron Gokaslan, Erik Wijmans, Oleksandr Maksymets, Alex Clegg, John Turner, Eric Undersander, Wojciech Galuba, Andrew Westbury, Chang Angel X, et al. Habitat-matterport 3d dataset (hm3d): 1000 large-scale 3d environments for embodied ai. arXiv. 2021.

  80. Carballo Alexander, Lambert Jacob, Monrroy Abraham, Wong David, Narksri Patiphon, Kitsukawa Yuki, Takeuchi Eijiro, Kato Shinpei, Takeda Kazuya, Libre: The multiple 3d lidar dataset, in 2020 IEEE Intelligent Vehicles Symposium (IV). IEEE. 2020;1094–101.

  81. Turk Greg, Levoy Marc, Zippered polygon meshes from range images, in Proceedings of the 21st annual conference on Computer graphics and interactive techniques, 1994;311–318.

  82. Manferdini Anna Maria, Remondino Fabio, Reality-based 3d modeling, segmentation and web-based visualization, in Euro-Mediterranean Conference. Springer, 2010;110–124.

  83. Khunti Roshni. The problem with printing palmyra: exploring the ethics of using 3d printing technology to reconstruct heritage. Stud Digital Herit. 2018;2(1):1–12.

  84. Wyeld Theodor G, Leavy Brett, Carroll Joti, Gibbons Craig, Ledwich Brendan, Hills James, The ethics of indigenous storytelling: using the torque game engine to support australian aboriginal cultural heritage, in DiGRA Conference, 2007;261–268.

  85. Hirst Cara S, White Suzanna, Smith Sian E. Standardisation in 3d geometric morphometrics: ethics, ownership, and methods. Archaeologies. 2018;14(2):272–98.

  86. Santana Quintero M, Fai S, Smith L, Duer A, Barazzetti L, et al. Ethical framework for heritage recording specialists applying digital workflows for conservation. Int Arch Photogram Remote Sens Spatial Inform Sci. 2019;42(2):1063–70.

  87. Hansen Henrik Jarl, Fernie Kate, Carare: Connecting archaeology and architecture in europeana, in Digital Heritage, Marinos Ioannides, Dieter Fellner, Andreas Georgopoulos, and Diofantos G. Hadjimitsis, Eds., Berlin, Heidelberg, 2010;450–462, Springer Berlin Heidelberg.

  88. Potenziani Marco, Callieri Marco, Dellepiane Matteo, Corsini Massimiliano, Ponchio Federico, Scopigno Roberto. 3dhop: 3d heritage online presenter. Comput Graph. 2015;52:129–41.

  89. Fanini Bruno, Ferdani Daniele, Demetrescu Emanuel, Berto Simone, d’Annibale Enzo. Aton: an open-source framework for creating immersive, collaborative and liquid web-apps for cultural heritage. Appl Sci. 2021;11(22):11062.

  90. Federico Ponchio, Marco Potenziani, Marco Callieri. Ariadne visual media service: easy web publishing of advanced visual media. CAA2015. 2016;1:433–42.

  91. Wang Zeyu, Shi Weiqi, Akoglu Kiraz, Kotoula Eleni, Yang Ying, Rushmeier Holly. Cher-ob: a tool for shared analysis and video dissemination. Comput Cult Herit (JOCCH). 2018;11(4):1–22.

  92. Huurdeman Hugo, Piccoli Chiara. 3d reconstructions as research hubs: geospatial interfaces for real-time data exploration of seventeenth-century amsterdam domestic interiors. Open Archaeol. 2021;7(1):314–36.

  93. European Commission, Directorate-General for Research, and Innovation, “Stakeholders’ survey on a european collaborative cloud for cultural heritage : report on the online survey results,”, 2022, Accessed 10 Aug 2023.


We would like to extend sincere thanks to all archaeologists, conservators, curators, and other museum employees we met during our trip with the NO-CHANGE project for their insightful input and valuable comments on this review. The NO-CHANGE project was funded by the Research Council of Norway.


Open access funding provided by Norwegian University of Science and Technology.

Author information

Authors and Affiliations



Writing, data collection, figure creation: Storeide, MSB. Review and supervision: G, S, S, A, H, JY. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Markus Sebastian Bakken Storeide.

Ethics declarations

Competing interests

The authors declare that they have no conflicts of interest in this work. We declare that we do not have any commercial or associative interest that represents a competing interest in connection with the work submitted.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Referenced 3D models

  • “Al Khazneh - The Treasury, Petra” ( by RaizNewMedia is licensed under Creative Commons Attribution (

  • “Al Khazneh (The Treasury), Petra, Jordan” ( by Zamani Project is licensed under Creative Commons Attribution (

  • “Sculpture “Bust of Róża Loewenfeld”” ( by Virtual Museums of Małopolska is licensed under CC Public Domain (

  • “Antinous” ( by The British Museum is licensed under CC Attribution-NonCommercial-ShareAlike (

  • “Joseph Hume Marble Bust” ( by National Museums Scotland is licensed under CC Attribution-NonCommercial-NoDerivs (

  • “Nile” ( by Rigsters is licensed under Creative Commons Attribution (

  • “Lamasu - British Museum (Assyrian Collection)” ( by CyArk is licensed under Creative Commons Attribution (

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit The Creative Commons Public Domain Dedication waiver ( applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Storeide, M.B., George, S., Sole, A. et al. Standardization of digitized heritage: a review of implementations of 3D in cultural heritage. Herit Sci 11, 249 (2023).
