
Attention Normalization Impacts Cardinality Generalization in Slot Attention

Object-centric scene decompositions are important representations for downstream tasks in fields such as computer vision and robotics. The recently proposed Slot Attention module, already leveraged by several derivative works for image segmentation and object tracking in videos, is a deep learning component that performs unsupervised object-centric scene decomposition on input images. It is based on an attention architecture in which latent slot vectors, which hold compressed information on objects, attend to localized perceptual features from the input image. In this paper, we demonstrate that design decisions on normalizing the aggregated values in the attention architecture have considerable impact on the ability of Slot Attention to generalize to a higher number of slots and objects than seen during training. We propose and investigate alternatives to the original normalization scheme that improve the generalization of Slot Attention to varying slot and object counts, resulting in performance gains on the task of unsupervised image segmentation. The newly proposed normalizations are minimal, easy-to-implement modifications of the standard Slot Attention module, changing the value aggregation mechanism from a weighted mean operation to a scaled weighted sum operation.
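
To make the aggregation change concrete, the following minimal PyTorch sketch contrasts the two value aggregation schemes. The function name, tensor shapes, and the 1/num_inputs default scaling are illustrative assumptions for this sketch; the paper's exact scaled weighted sum may differ.

import torch

def aggregate_values(attn, v, mode="weighted_mean", scale=None):
    # attn: (batch, num_slots, num_inputs), softmax-normalized over the
    #       slot axis, as in Slot Attention.
    # v:    (batch, num_inputs, slot_dim) value vectors.
    if mode == "weighted_mean":
        # Original Slot Attention: renormalize the weights over the input
        # axis, so each slot update is a weighted mean of the values.
        weights = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)
        return torch.einsum("bsn,bnd->bsd", weights, v)
    if mode == "scaled_weighted_sum":
        # Alternative direction: aggregate by a weighted sum, rescaled by
        # a constant factor. The 1 / num_inputs default below is a
        # placeholder assumption, not the paper's exact scaling.
        if scale is None:
            scale = 1.0 / attn.shape[-1]
        return scale * torch.einsum("bsn,bnd->bsd", attn, v)
    raise ValueError(f"unknown mode: {mode}")

# Example usage: attention normalized over the slot axis (dim=1).
attn = torch.softmax(torch.randn(2, 5, 64), dim=1)
v = torch.randn(2, 64, 32)
updates = aggregate_values(attn, v, mode="scaled_weighted_sum")  # (2, 5, 32)

Here, the choice of scaling governs how slot update magnitudes behave as the number of inputs and slots varies, which is the generalization axis the paper studies.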

Authors: Markus Krimmel, Jan Achterhold, and Joerg Stueckler
Journal: Transactions on Machine Learning Research (TMLR)
Year: 2024

Department(s): Embodied Vision
Bibtex Type: Article (article)
Paper Type: Journal

State: Published
URL: https://openreview.net/forum?id=llQXLfbGOq

Links: preprint, source code, video

BibTeX

@article{krimmel2024_sanormalization,
  title = {Attention Normalization Impacts Cardinality Generalization in Slot Attention},
  author = {Krimmel, Markus and Achterhold, Jan and Stueckler, Joerg},
  journal = {Transactions on Machine Learning Research (TMLR)},
  year = {2024},
  url = {https://openreview.net/forum?id=llQXLfbGOq}
}