PoLAr-MAE

Particle Trajectory Representation Learning with Masked Point Modeling

Sam Young, Yeon-jae Jwa, Kazuhiro Terao
Stanford University
SLAC National Accelerator Laboratory

PCA projection of the learned representation of the 3D point cloud.
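The PCA view above can be reproduced in a few lines: project each point's learned embedding onto its first three principal components and map those to RGB. The sketch below is a generic illustration; the embeddings array and its [N, D] shape are assumptions, not the released PoLAr-MAE interface.

# Minimal sketch: color each 3D point by the top-3 principal components
# of its learned embedding (assumed shape [N, D]); illustrative only.
import numpy as np

def pca_colors(embeddings: np.ndarray) -> np.ndarray:
    """Map per-point embeddings to RGB colors in [0, 1] via PCA."""
    x = embeddings - embeddings.mean(axis=0, keepdims=True)
    # Principal directions come from the SVD of the centered embeddings.
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    proj = x @ vt[:3].T                                # top-3 components
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return (proj - lo) / (hi - lo + 1e-8)              # normalize to [0, 1]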

Abstract

Effective self-supervised learning (SSL) techniques have been key to unlocking large datasets for representation learning. While many promising methods have been developed using online corpora and captioned photographs, their application to scientific domains, where data encodes highly specialized knowledge, remains a challenge.

We introduce the Point-based Liquid Argon Masked Autoencoder (PoLAr-MAE), which applies masked point modeling to unlabeled Liquid Argon Time Projection Chamber (LArTPC) images. We show that this SSL approach learns physically meaningful trajectory representations directly from the data.
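For readers unfamiliar with masked point modeling, the recipe mirrors masked image modeling: the point cloud is partitioned into local groups, most groups are hidden, and an encoder-decoder is trained to reconstruct the hidden points, typically with a Chamfer-style loss. The PyTorch sketch below illustrates one such training step; the tokenizer/encoder/decoder interfaces, 60% mask ratio, and loss are generic assumptions rather than the exact PoLAr-MAE configuration.

# Illustrative masked point modeling step (not the exact PoLAr-MAE code).
# groups: local point patches of shape [B, G, K, 3].
import torch

def chamfer(pred, target):
    """Symmetric Chamfer distance between point sets of shape [*, K, 3]."""
    d = torch.cdist(pred, target)                      # pairwise distances
    return d.min(dim=-1).values.mean() + d.min(dim=-2).values.mean()

def masked_point_modeling_step(tokenizer, encoder, decoder, groups, mask_ratio=0.6):
    B, G, K, _ = groups.shape
    n_mask = int(G * mask_ratio)
    perm = torch.rand(B, G).argsort(dim=1)             # random group order
    masked, visible = perm[:, :n_mask], perm[:, n_mask:]

    tokens = tokenizer(groups)                         # [B, G, D] group embeddings
    vis_tokens = torch.gather(
        tokens, 1, visible.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
    latent = encoder(vis_tokens)                       # encode visible groups only
    pred = decoder(latent, n_mask)                     # predict masked groups [B, n_mask, K, 3]

    target = torch.gather(
        groups, 1, masked[:, :, None, None].expand(-1, -1, K, 3))
    return chamfer(pred.reshape(-1, K, 3), target.reshape(-1, K, 3))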

This method yields remarkable data efficiency: fine-tuning on just 100 labeled events achieves track/shower semantic segmentation performance comparable to the state-of-the-art supervised baseline trained on 100,000 events. Furthermore, the encoder's internal attention maps exhibit emergent instance segmentation of particle trajectories. While challenges remain, particularly for fine-grained features, these results demonstrate SSL's potential for building a foundation model for LArTPC image analysis that can serve as a common base for all data reconstruction tasks.
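The fine-tuning step behind that comparison is a standard one: reuse the pretrained encoder, attach a small per-point classification head, and train on the handful of labeled events. A hedged sketch follows; the head architecture, two-class setup, and training loop are illustrative assumptions, not the paper's exact recipe.

# Illustrative fine-tuning for track/shower semantic segmentation;
# layer sizes and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn

class SegmentationHead(nn.Module):
    def __init__(self, dim: int, n_classes: int = 2):    # track vs. shower
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, n_classes))

    def forward(self, per_point_features):                # [B, N, dim]
        return self.mlp(per_point_features)               # [B, N, n_classes]

def finetune_step(encoder, head, optimizer, points, labels):
    """One supervised step on a small labeled batch of events."""
    features = encoder(points)                             # [B, N, dim] per-point features
    logits = head(features)
    loss = nn.functional.cross_entropy(logits.flatten(0, 1), labels.flatten())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()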

Truth vs. Predicted: semantic segmentation of the 3D point cloud after fine-tuning the PoLAr-MAE model on 10× less labeled data than the supervised baseline.

Attention Maps

Attention maps exhibit emergent instance segmentation of individual particle trajectories. Each panel shows the attention map for a different query point (marked in red), highlighting which parts of the event the model attends to for that query. Lighter colors indicate higher attention scores.
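Maps like these can be read directly out of a transformer encoder: for a chosen query token, take the softmax-normalized attention weights from a self-attention layer, average over heads, and paint the scores back onto the points. The snippet below computes such a map from cached query/key projections; the [heads, tokens, head_dim] layout is an assumption for illustration, not PoLAr-MAE's actual interface.

# Illustrative attention-map extraction for a single query token.
# q, k: cached query/key projections from one self-attention layer,
# assumed shape [heads, tokens, head_dim].
import torch

def attention_map(q: torch.Tensor, k: torch.Tensor, query_idx: int) -> torch.Tensor:
    scale = q.size(-1) ** -0.5
    # Scaled dot-product scores of the query token against every token.
    scores = torch.einsum('hd,htd->ht', q[:, query_idx], k) * scale  # [heads, tokens]
    weights = scores.softmax(dim=-1)
    return weights.mean(dim=0)        # average over heads -> one score per token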

Paper

BibTeX

@misc{young2025particletrajectoryrepresentationlearning,
      title={Particle Trajectory Representation Learning with Masked Point Modeling}, 
      author={Sam Young and Yeon-jae Jwa and Kazuhiro Terao},
      year={2025},
      eprint={2502.02558},
      archivePrefix={arXiv},
      primaryClass={hep-ex},
      url={https://arxiv.org/abs/2502.02558},
}