
Efficient Joint Detection and Multiple Object Tracking with Spatially Aware Transformer

Date: 2023
Academic Conference: Transformer4Vision CVPR2023 Workshop
Authors:
Siddharth Sagar Nijhawan (Sony Group Corporation)
Leo Hoshikawa (Sony Group Corporation)
Atsushi Irie (Sony Group Corporation)
Masakazu Yoshimura (Sony Group Corporation)
Junji Otsuka (Sony Group Corporation)
Takeshi Ohashi (Sony Group Corporation)
Research Areas: Computer Vision & CG

Abstract

We propose a lightweight and highly efficient Joint Detection and Tracking pipeline for Multi-Object Tracking, built on a fully transformer-based architecture. It is a modified version of TransTrack that overcomes the computational bottleneck associated with its design while achieving a state-of-the-art MOTA score of 73.20%. The model design is driven by a transformer-based backbone instead of a CNN, which scales well with input resolution. We also propose a drop-in replacement for the feed-forward network of the transformer encoder layer, using the Butterfly Transform operation to perform channel fusion and a depth-wise convolution to learn spatial context within the feature maps, which is otherwise missing from the transformer's attention maps. As a result of our modifications, we reduce the overall model size of TransTrack by 58.73% and its complexity by 78.72%. We therefore expect our design to provide new perspectives for architecture optimization in future research on multi-object tracking.
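As a rough sketch of how such an FFN replacement could look, the PyTorch-style block below pairs a butterfly-structured channel-mixing stage with a depth-wise convolution over the reshaped token grid. The module names `ButterflyChannelFusion` and `SpatiallyAwareFFN` are hypothetical, and this is an illustration of the general idea under stated assumptions (power-of-two channel count, square token grid), not the authors' implementation.

```python
import math
import torch
import torch.nn as nn


class ButterflyChannelFusion(nn.Module):
    """Butterfly-structured channel mixing (a sketch of the Butterfly Transform idea).

    Uses log2(C) stages; stage s mixes channel pairs that are 2**s apart with
    learned 2x2 weights, giving O(C log C) parameters instead of O(C^2) for a
    dense projection. Assumes the channel count is a power of two.
    """

    def __init__(self, channels: int):
        super().__init__()
        assert channels & (channels - 1) == 0, "channels must be a power of two"
        self.channels = channels
        self.stages = int(math.log2(channels))
        # One learned 2x2 mixing matrix per channel pair per stage.
        self.weights = nn.Parameter(0.1 * torch.randn(self.stages, channels // 2, 2, 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        B, C, H, W = x.shape
        out = x
        for s in range(self.stages):
            stride = 1 << s
            blocks = C // (2 * stride)
            # Group channels into butterfly pairs: (B, blocks, 2, stride, H, W).
            out = out.reshape(B, blocks, 2, stride, H, W)
            a, b = out[:, :, 0], out[:, :, 1]
            w = self.weights[s].view(blocks, stride, 2, 2)
            w00 = w[:, :, 0, 0].view(1, blocks, stride, 1, 1)
            w01 = w[:, :, 0, 1].view(1, blocks, stride, 1, 1)
            w10 = w[:, :, 1, 0].view(1, blocks, stride, 1, 1)
            w11 = w[:, :, 1, 1].view(1, blocks, stride, 1, 1)
            new_a = w00 * a + w01 * b
            new_b = w10 * a + w11 * b
            out = torch.stack([new_a, new_b], dim=2).reshape(B, C, H, W)
        return out


class SpatiallyAwareFFN(nn.Module):
    """Hypothetical drop-in replacement for the encoder FFN: butterfly channel
    fusion followed by a depth-wise convolution over the token grid."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.fusion = ButterflyChannelFusion(channels)
        self.dwconv = nn.Conv2d(channels, channels, kernel_size,
                                padding=kernel_size // 2, groups=channels)
        self.act = nn.GELU()
        self.norm = nn.LayerNorm(channels)

    def forward(self, tokens: torch.Tensor, hw: tuple) -> torch.Tensor:
        # tokens: (B, N, C) output of the attention layer, with N == H * W.
        B, N, C = tokens.shape
        H, W = hw
        x = tokens.transpose(1, 2).reshape(B, C, H, W)
        x = self.act(self.fusion(x))   # cheap channel fusion
        x = self.dwconv(x)             # local spatial context the attention maps lack
        x = x.reshape(B, C, N).transpose(1, 2)
        return self.norm(tokens + x)   # residual connection, as in a standard FFN block
```

As a quick shape check under these assumptions, `SpatiallyAwareFFN(256)(torch.randn(1, 64 * 64, 256), (64, 64))` returns a tensor of shape (1, 4096, 256), matching the token input.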
