LUWA Dataset: Learning Lithic Use-Wear Analysis on Microscopic Images  

1 New York University   2 Stanford University
* Equal Contribution   Corresponding Author



LUWA poses a unique computer vision challenge: complex wear formation produces irregular wear patterns, and microscopic imaging introduces ambiguous sensing modalities and magnifications. Facing these challenges, the LUWA dataset encompasses both texture images and heightmaps at different magnifications, encouraging the exploration of image classification beyond common objects.

Abstract

Lithic Use-Wear Analysis (LUWA) using microscopic images is an underexplored vision-for-science research area. It seeks to distinguish the worked material, which is critical for understanding archaeological artifacts, material interactions, tool functionalities, and dental records. However, this challenging task goes beyond the well-studied image classification problem for common objects. It is affected by many confounders owing to the complex wear mechanism and microscopic imaging, which make it difficult even for human experts to identify the worked material successfully. In this paper, we investigate the following three questions on this unique vision task for the first time: (i) How well can state-of-the-art pre-trained models (like DINOv2) generalize to this rarely seen domain? (ii) How can few-shot learning be exploited for scarce microscopic images? (iii) How do ambiguous magnifications and sensing modalities influence classification accuracy? To study these, we collaborated with archaeologists and built the first open-source and largest LUWA dataset, containing 23,130 microscopic images with different magnifications and sensing modalities. Extensive experiments show that existing pre-trained models notably outperform human experts but still leave a large gap for improvement. Most importantly, the LUWA dataset provides an underexplored opportunity for the vision and learning communities and complements existing image classification problems on common objects.
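Question (ii) above concerns few-shot learning on scarce microscopic images, which is commonly approached by pairing a frozen pre-trained encoder with a simple metric-based head. As an illustrative sketch only (the paper's actual few-shot pipeline, encoder choice, and head are not specified here), a minimal nearest-centroid classifier over pre-extracted feature vectors might look like:

```python
import numpy as np

def nearest_centroid_fewshot(support_feats, support_labels, query_feats):
    """Classify query features by cosine similarity to per-class centroids.

    support_feats : (n_support, d) features from a frozen encoder (e.g. DINOv2)
    support_labels: (n_support,) integer class ids for the few labeled shots
    query_feats   : (n_query, d) features to classify
    Returns (n_query,) predicted class ids.
    """
    classes = np.unique(support_labels)
    # One "prototype" per class: the mean support feature, L2-normalized
    # so that the dot product below is a cosine similarity.
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0)
                          for c in classes])
    centroids /= np.linalg.norm(centroids, axis=1, keepdims=True)
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sims = q @ centroids.T  # (n_query, n_classes)
    return classes[np.argmax(sims, axis=1)]
```

Because the encoder stays frozen, only the class centroids depend on the few labeled shots, which keeps the method usable when each worked material has only a handful of microscopic images.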

Demonstration

Image diversity of the LUWA dataset and corresponding visual explanations for the human and model decision-making processes. (i) The LUWA dataset provides diverse microscopic images varying in spatial distribution (e.g., Regions 1 and 2), magnification (e.g., Regions 2 and 4), and sensing modality (texture in the first row and heightmap in the second row); (ii) We compare visual explanations from the human (third row) and model (fourth row) decision-making processes. Human experts marked the most important regions in red and less important regions in yellow when examining details of microscopic images to distinguish the worked material. Similarly, Grad-CAM heatmaps use red for the highest importance, yellow for lower importance, and blue for the lowest importance. Interestingly, similar areas (e.g., Regions 1, 4, and 6) are marked with higher importance by both humans and models.
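The Grad-CAM heatmaps in the last row follow the standard Grad-CAM recipe: weight each convolutional feature map by the spatially averaged gradient of the class score with respect to that map, sum the weighted maps, and apply a ReLU. A framework-agnostic sketch of that combination step (the actual model and target layer used for the figure are not specified here) is:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Combine conv activations and class-score gradients into a Grad-CAM map.

    activations: (K, H, W) feature maps A^k from the chosen conv layer
    gradients  : (K, H, W) d(class score)/dA^k from one backward pass
    Returns an (H, W) heatmap scaled to [0, 1].
    """
    # alpha_k: global-average-pooled gradients, one importance weight per channel.
    alphas = gradients.mean(axis=(1, 2))               # (K,)
    cam = np.einsum("k,khw->hw", alphas, activations)  # weighted sum of maps
    cam = np.maximum(cam, 0.0)                         # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize for display
    return cam
```

For display, the normalized map is typically resized to the input resolution and rendered with a red-yellow-blue colormap, which is how the importance colors in the figure are obtained.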

BibTeX

@misc{zhang2024luwa,
  title         = {LUWA Dataset: Learning Lithic Use-Wear Analysis on Microscopic Images},
  author        = {Jing Zhang and Irving Fang and Juexiao Zhang and Hao Wu and Akshat Kaushik and Alice Rodriguez and Hanwen Zhao and Zhuo Zheng and Radu Iovita and Chen Feng},
  year          = {2024},
  eprint        = {2403.13171},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}

Acknowledgements

This work is supported by NSF Grant 2152565 and by NYU IT High-Performance Computing resources, services, and staff expertise. We gratefully acknowledge Prof. Rakesh Behera for the tribometer hardware, and thank Sara Borsodi, Felix Devis Kisena, Kat Liu, Eugenia Ochoa, Vita Jackman Kuwabara, Alice Jiang, Meiyu Zhang, and Sriram Koushik for their valuable assistance in collecting the microscopic images.