Introduction
I am an archaeologist, and I have been studying the Ancient Maya civilization for 20 years. I do fieldwork in Mexico, Guatemala, and Honduras. Sometimes that means working with a collection of artifacts in a lab or a museum; sometimes it means living in a tent in the rainforest for a few weeks… or months. My archaeological specialty is epigraphy: I document, translate, and publish Ancient Maya hieroglyphic inscriptions. Maya archaeologists and epigraphers are at the forefront of efforts to preserve Ancient Maya heritage (which is also the cultural heritage of Mexico, Guatemala, Belize, El Salvador, and Honduras, and of the modern Maya nations) and to protect it from deterioration, vandalism, development, and looting.
From Reading/Drawing Glyphs to Scanning Maya Monuments
In 2008, I joined the Corpus of Maya Hieroglyphic Inscriptions (CMHI) at the Peabody Museum of Archaeology and Ethnology as a research associate and field director of the 3D documentation project. My first exposure to 3D documentation had come a year earlier, when I was still a graduate student and the CMHI was considering the purchase of 3D scanning equipment for a new phase of field documentation and even ran field trials in Mexico and Honduras.
The project ended up acquiring a smartSCAN Duo structured-light 3D scanner, and I spent the next six years of my life digitizing Ancient Maya buildings, monuments, and portable artifacts in the field and in museum collections in Central America and the US. The project's main focus back then was the largest Maya inscription, the Hieroglyphic Stairway at the site of Copán in Honduras. The bulk of this enormous monument was scanned over four field seasons from 2009 to 2012. The team had to come up with many practical solutions to operate on a steep stairway without endangering the steps' fragile carvings or the scanning team itself.
We still have a backlog in processing the high-resolution 3D models of 600+ stairway blocks and miscellaneous sculptures (about 30 million polygons per model). Three kinds of 3D models have already been created for each stairway sculpture, including a set of fully printable low-resolution (under 2 million polygons) models for the 1:10 scale replica that the project is using to test different stairway configurations (the current reconstruction is only partially correct, and most of the hieroglyphic blocks are out of order).

This model of a seated warrior is from a section of the stairway that was taken from Honduras and became part of the collection of the Peabody Museum of Archaeology and Ethnology. One of the goals of our project was to put together all of the known elements of the stairway regardless of their current physical location.
Since then, my efforts have shifted to other endangered Maya monuments and buildings at Copán and other archaeological sites in Central America. The project's 2013 report provides an overview of 3D scanning activities beyond Copán's Hieroglyphic Stairway. One of the most significant contributions was recording a well-preserved temple frieze at the archaeological site of Holmul in Guatemala.

In 2016, I became Assistant Professor of Archaeology at the University of Alabama and started a small Visual Documentation Lab while continuing my efforts to document Maya monuments in collaboration with the CMHI. In addition to my fieldwork, I teach courses and workshops on visual documentation and provide technical assistance to faculty members and students at the University of Alabama. I am currently working on a project creating a virtual 3D collection of Classic Maya pottery from the Holmul region. The archaeological sites where I worked most recently are La Sufricaya, Holmul, Xmakabatun, Witzna, and Dzibanche.
My Workflow: Structured-light Scanner
I still use the old smartSCAN Duo structured-light 3D scanner in my research. It is definitely a legacy machine, but the cost of a comparable new 3D scanner is prohibitively high ($100,000), and I have so far been unable to find the funds to replace it. Its main strength is high precision and accuracy, which means data processing takes very little time compared to high-resolution photogrammetry (see below). The scanner comes with three sets of lenses that trade field of view (FOV) against resolution: a greater FOV means lower resolution, and vice versa. The largest FOV has a diagonal of 600mm and an XY resolution of 0.36mm; the smallest FOV has a diagonal of just 90mm, but an XY resolution of 0.056mm. High precision and accuracy mean that I can combine hundreds of scans without any significant error or distortion building up. The choice of resolution largely depends on the size of the object and the amount of detail. I can also swap lenses during the same project and combine scans taken at different FOVs and XY resolutions.
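To make the trade-off concrete, here is a bit of back-of-the-envelope arithmetic (my own illustration, not a planning tool from the scanner software) estimating how many tripod positions it takes to tile a flat surface at a given FOV; the 4:3 aspect ratio and 30% overlap between neighboring scans are assumptions:

```python
import math

def scans_needed(surface_w_mm, surface_h_mm, fov_diag_mm,
                 aspect=(4, 3), overlap=0.3):
    """Rough count of scan positions needed to tile a flat surface.

    Assumes a rectangular FOV with the given aspect ratio and a fixed
    overlap fraction between neighboring scans -- purely illustrative.
    """
    ax, ay = aspect
    diag = math.hypot(ax, ay)
    fov_w = fov_diag_mm * ax / diag   # FOV width from its diagonal
    fov_h = fov_diag_mm * ay / diag   # FOV height
    step_w = fov_w * (1 - overlap)    # effective advance per scan
    step_h = fov_h * (1 - overlap)
    return math.ceil(surface_w_mm / step_w) * math.ceil(surface_h_mm / step_h)

# A 2 m x 1 m carved panel with the largest vs. the smallest lens set:
print(scans_needed(2000, 1000, 600))  # 24 positions
print(scans_needed(2000, 1000, 90))   # 1080 positions
```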

A typical workflow with the 3D scanner begins with taking the scans with a preferred set of lenses. The scanner is mounted on a tripod and needs to be moved between shots. The scanning/processing software, Optocat, comes with the scanner. Full-size polygon meshes are too much for my fieldwork laptop to handle, so I usually review the first couple of scans at full resolution to check for glitches and errors and then switch to a subsampled visualization for the rest. This method has a flaw: small surface errors caused by a shifting or shaking tripod may go unnoticed. For example, 10% of the nearly 600 scans of Stela 31 from Tikal (illustrated below) contained surface errors because the tripod shifted repeatedly on loose floor tiles. I had to inspect every scan manually and try to minimize the contribution of the damaged areas to the final 3D model. Fortunately, there was enough information from overlapping shots to remove nearly all areas with surface noise. I do not think you would be able to find them on this model unless you knew where to look:
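Optocat handles this inspection interactively, but the underlying check for a shifted tripod is easy to sketch: sample points on one scan and measure how far they sit from a neighboring scan's surface in the overlap zone. Here is a rough version using the open-source trimesh library (the file names and the 0.1mm threshold are made up for illustration; this is not part of my actual Optocat workflow):

```python
import numpy as np
import trimesh

TOLERANCE_MM = 0.1  # hypothetical: a few times the scanner's nominal accuracy

def overlap_deviation(scan_path, neighbor_path, samples=5000):
    """Median distance from one scan to a neighboring scan's surface.

    A large value in a region both scans cover suggests the tripod
    moved while the fringe patterns were being projected.
    """
    scan = trimesh.load(scan_path)
    neighbor = trimesh.load(neighbor_path)
    points = scan.sample(samples)                     # random surface points
    _, dist, _ = neighbor.nearest.on_surface(points)  # closest-point distances
    # Keep the nearest half of the samples as a crude proxy for the overlap zone.
    return float(np.median(np.sort(dist)[: samples // 2]))

dev = overlap_deviation("scan_041.ply", "scan_040.ply")
if dev > TOLERANCE_MM:
    print(f"scan_041.ply: overlap deviation {dev:.3f} mm -- inspect manually")
```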
Once scanning is complete and I am back in my lab, I process the scans into a single merged mesh, running the same Optocat software on a more powerful workstation that can handle point clouds of 600 million points and more. That involves generating full-resolution meshes from the scans, inspecting them, cleaning, refining the alignment, and then merging. The abovementioned Holmul temple frieze has been the largest merged mesh so far, with nearly 900 scans (a 1-million-point cloud each) and 320 million polygons in the final merged mesh.
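The merge itself happens inside Optocat, but the general shape of the step can be sketched with open tools: concatenate the aligned scans into one point cloud and reconstruct a single continuous surface from it. A toy version with Open3D, using screened Poisson reconstruction as a stand-in for Optocat's own (different) merging algorithm; the file names and depth setting are illustrative:

```python
import open3d as o3d

# Aligned scans exported as point clouds (hypothetical file names).
scan_files = ["scan_000.ply", "scan_001.ply", "scan_002.ply"]

merged = o3d.geometry.PointCloud()
for path in scan_files:
    merged += o3d.io.read_point_cloud(path)  # clouds are already aligned

# Poisson reconstruction needs oriented normals; structured-light scans
# usually ship with them, but estimate them if they are missing.
if not merged.has_normals():
    merged.estimate_normals()

# Higher depth = finer surface detail (and far more polygons and memory).
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    merged, depth=10)
o3d.io.write_triangle_mesh("merged_mesh.ply", mesh)
```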
Optocat does not have the tools for making meshes 3D printable by optimizing their topology, creating volume where necessary, and removing problematic surface features, self-intersections, and non-manifold edges. I rely on Geomagic Wrap for that stage of processing. I also create several downsampled versions of every 3D model so that I can share or visualize them on a less powerful device. The original scans contain only per-vertex color information, but I use Geomagic to generate a texture from the high-resolution mesh before downsampling. Recent upgrades to Agisoft Metashape Pro allow users to create normal and occlusion maps from high-resolution meshes and apply them to downsampled versions, and I have been experimenting with that feature. If I am working with an inscribed object and want to make a line drawing of it, the final stage in my 3D workflow is creating orthoimages and enhancing the visibility of surface topography (in my case, eroded Maya glyphs) with filters such as Radiance Scaling in MeshLab.
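The kinds of problems Geomagic Wrap fixes before printing can at least be detected automatically. A minimal sketch with trimesh (a rough stand-in for Geomagic, not my actual pipeline; the file name and target face count are hypothetical, and decimation requires trimesh's optional simplification dependency):

```python
import trimesh

mesh = trimesh.load("stairway_block.ply")  # hypothetical high-res model

# A printable mesh should be watertight (no holes) with consistent winding.
print("watertight:", mesh.is_watertight)
print("winding consistent:", mesh.is_winding_consistent)

# Lightweight repairs; serious topology problems still need manual work.
trimesh.repair.fill_holes(mesh)
trimesh.repair.fix_normals(mesh)

# A downsampled copy for sharing and visualization on weaker devices.
low = mesh.simplify_quadric_decimation(face_count=2_000_000)
low.export("stairway_block_lowres.ply")
```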
My Workflow: Photogrammetry
Since 2015, I have been relying increasingly on structure-from-motion photogrammetry. I am still not quite comfortable with the accuracy and precision of the data I collect this way, but sometimes these parameters matter less, and sometimes I can increase the resolution to the point where the surface noise generated by the inaccuracies becomes negligible. One obvious advantage of the method is its cost, although good digital cameras are not cheap. I still rely on an older, rugged Canon 5D II in the field; I use a newer Sony A7R III only in the lab or in a museum setting because I have doubts about its resistance to ambient moisture, mold, and dust (the rainforest is humid and excavations are dusty). In the constrained space of an archaeological trench or tunnel, it is much easier to take photographs with a portable camera than to operate a bulky tripod-mounted scanner connected to a laptop. For example, I was able to document sections of several palatial structures exposed through tunneling at the archaeological site of Naranjo. Here is a model of one building that used to be a shrine to a hummingbird patron deity.
In 2018 and 2019, I investigated dozens of old looting trenches at the archaeological site of La Sufricaya. Unfortunately, looting was rampant at the site in the 1980s and 1990s. The usual procedure is to clean the damaged area, document visible architectural features, sample exposed artifacts, and then backfill the space in order to stabilize the ancient building and prevent further damage. Compared to photographs and measurement-based drawings, photogrammetry offers a quicker and more comprehensive way of recording everything before backfilling. Here is an example of a looters' tunnel into the side of a temple structure at La Sufricaya that my project cleaned, investigated, and closed in 2018. Every stage of the process was documented with photogrammetry, and then a composite 3D model was produced (the annotations refer to archaeological contexts mentioned in the site report).
- Constrained space in an archaeological tunnel at Naranjo where the photographs for the 3D model illustrated above were taken (photograph by Alexandre Tokovinine)
- On a trail to the archaeological site of Xmakabatun, Guatemala (photograph by Alexandre Tokovinine)
If I need to hike a few miles to get to a site, and I am not sure how much time I will have or how favorable the weather conditions will be, photogrammetry is definitely preferable to 3D scanning. Data capture is much quicker if there is enough ambient or artificial light to work without mounting the camera on a tripod. For example, documenting all four monuments at the archaeological site of Witzna took me a few hours over two hiking trips; doing the same amount of work with a structured-light scanner would have taken two to three days. The results were good enough to discern an ancient place name and contribute to a study of Ancient Maya warfare. Here is a short video about my workflow, from a 3D model to determining the significance of the glyphs:
All of the Witzna monuments may be accessed in this Sketchfab collection:
Witzna by Alexandre Tokovinine on Sketchfab
Once the photographs are taken (usually in the "raw" Canon and Sony formats), I do some preliminary processing in Adobe Photoshop to make sure that things like white balance are consistent. Then I rely on Agisoft Metashape Pro to turn the photographs into a 3D model and to create several downsampled versions (a scripted sketch of this pipeline follows below). As with the structured-light scanner models, Geomagic helps with cleaning the meshes for 3D printing. I also use Geomagic to combine multiple 3D models into a single object, for example, when I need to refit several pieces of the same monument. Stela 46 from Naranjo illustrates this approach:
There is also a 3D model of the stela as it lay on the floor of the storage building (at the archaeological camp near Lake Yaxha, where the fragments were transferred because of safety concerns) when I photographed it:
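As promised above, here is what that Metashape stage looks like when scripted. Metashape Pro exposes the whole pipeline through its Python API; this is a minimal sketch with illustrative default parameters, not my production settings, and keyword names vary somewhat between Metashape versions:

```python
import Metashape  # bundled with Metashape Pro, not a pip package

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(["IMG_0001.CR2", "IMG_0002.CR2"])  # hypothetical photo list

# Sparse stage: feature matching and camera alignment.
chunk.matchPhotos(downscale=1)   # 1 = full-resolution matching
chunk.alignCameras()

# Dense stage: depth maps, mesh, and texture.
chunk.buildDepthMaps(downscale=2)
chunk.buildModel(source_data=Metashape.DepthMapsData)
chunk.buildUV()
chunk.buildTexture(texture_size=8192)

doc.save("stela46.psx")
```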
My photogrammetry workflow in a lab or museum setting differs from my approach in the field in that I am usually trying to overcome the problem of shallow depth of field by stopping down the aperture and shooting from a tripod. I use a turntable and a monochrome screen that I mask out in post-processing. I have been reluctant to use green screens because many artifacts have partially reflective surfaces, but masking out dark grey backgrounds has proven quite challenging because they overlap too much with the color of the artifacts. Compared to 3D scanning, the masking and all the subsequent aligning and point-cloud generation take much longer than generating point clouds from scanner data. However, photogrammetry-based textures look much more realistic than those my old smartSCAN Duo can produce, so I rely on 3D scanning for objects with little color and reserve photogrammetry for artifacts such as painted polychrome vessels.
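The reason for stopping down is easy to put in numbers: at tabletop distances, depth of field wide open is only a few millimeters. A quick calculation with the standard hyperfocal-distance approximation (the 100mm lens, 0.5m focus distance, and 0.03mm full-frame circle of confusion are assumed for illustration):

```python
def depth_of_field_mm(focal_mm, f_number, distance_mm, coc_mm=0.03):
    """Total depth of field via the standard hyperfocal approximation."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = distance_mm * (h - focal_mm) / (h + distance_mm - 2 * focal_mm)
    far = distance_mm * (h - focal_mm) / (h - distance_mm)
    return far - near

# A 100 mm macro lens focused at 0.5 m, wide open vs. stopped down:
print(f"f/2.8: {depth_of_field_mm(100, 2.8, 500):.1f} mm")  # ~3 mm
print(f"f/16:  {depth_of_field_mm(100, 16, 500):.1f} mm")   # ~19 mm
```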
For example, compare a structured-light based model of a painted cylinder vessel:
With a bowl in a similar style documented with photogrammetry:
In the case of the cylinder vessel above, an additional reason to use 3D scanning was that I had to scan multiple fragments and then fit them together digitally. It was much easier to accomplish that with a scan-based model: I could manually align pairs of fragments and use them as guides for successive sets of scans of each fragment. An alternative approach is to scan all of the fragments separately and then use a surface-matching tool such as shape-based alignment in Geomagic. This second solution is much slower in practice and also requires high accuracy for the matching surfaces. Here is an example of several fragments matched semi-automatically (I had to manually pre-select areas of possible matches) in Geomagic:
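The idea behind such surface matching can be illustrated with the classic iterative closest point (ICP) algorithm, which also shows why manual pre-selection helps: ICP only refines an existing rough alignment. A sketch with Open3D as a stand-in for Geomagic (the file names, identity initial pose, and 2mm correspondence distance are hypothetical):

```python
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("fragment_a.ply")
target = o3d.io.read_point_cloud("fragment_b.ply")

# ICP needs a rough starting pose; in practice this would come from
# a manual pre-alignment of the candidate matching areas.
init = np.identity(4)

result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=2.0,  # assuming the clouds are in mm
    init=init,
    estimation_method=(
        o3d.pipelines.registration.TransformationEstimationPointToPoint()))

print("fitness:", result.fitness)        # fraction of points that matched
source.transform(result.transformation)  # apply the refined pose
```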

How/Why I Rely On Sketchfab
Sketchfab, for me, is primarily a way of sharing data with colleagues and creating new classroom experiences for my students. My 3D models on Sketchfab are already part of research publications (when journals allow for additional materials and supplements) and news reports. As for teaching, I create take-home assignments in which students explore archaeological features and artifact collections on Sketchfab. I incorporate Sketchfab models into classroom presentations, although I am only beginning to use the new AR feature of the Sketchfab app. I believe this is where the most promising educational application is going to be, because students can explore while still interacting with each other and the instructor; the VR mode in its present form is too isolating for a classroom experience. My 3D models on Sketchfab have already been used to add an AR component to educational programs in a museum.
My Favorite Sketchfab Scans
I am a huge fan of the project undertaken by the Harvard Semitic Museum under the direction of Prof. Peter Der Manuelian. It includes not only virtual 3D reconstructions based on excavation photographs and reports, but also comprehensive 3D digitization of the museum's collections using photogrammetry and 3D scanning. This is my current favorite from Peter and his team:
I recommend visiting their Digital Giza webpage.