PoLArMAE Attention Viz

Visualize the learned attention maps from PoLArMAE

Explore the attention heads learned by PoLArMAE, a masked autoencoder-based pretraining strategy for particle trajectory data from liquid argon time projection chambers (LArTPCs). Click a token center to highlight the tokens it attends to most strongly.
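The "most attended to" lookup behind this interaction can be sketched as follows. This is an illustrative example, not PoLArMAE's actual code: it builds a toy row-stochastic attention matrix over N tokens and, for a clicked query token, returns the indices of the k tokens with the largest attention weights.

```python
import numpy as np

def top_attended_tokens(attn, query_idx, k=5):
    """Given an attention matrix attn[N, N] whose rows sum to 1,
    return the indices of the k tokens most attended to by query_idx."""
    weights = attn[query_idx]
    # Sort descending by attention weight and keep the top k.
    return np.argsort(weights)[::-1][:k]

# Toy attention map: softmax over random scores for N tokens.
rng = np.random.default_rng(0)
N = 8
scores = rng.normal(size=(N, N))
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)

top = top_attended_tokens(attn, query_idx=3, k=3)
```

In the visualizer, the same lookup would run per layer and per head, since each (layer, head) pair learns its own attention pattern.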

Hint: try checking out (Layer 1, Head 5) and (Layer 5, Head 6).

Note: Each event is 10-20 MB, so loading may be slow depending on your internet connection.

Point Cloud Attention Visualization

Event Selection
