SEAFORMER: SQUEEZE-ENHANCED AXIAL TRANSFORMER FOR MOBILE SEMANTIC SEGMENTATION

Abstract

Since the introduction of Vision Transformers, the landscape of many computer vision tasks (e.g., semantic segmentation), long dominated by CNNs, has recently been significantly revolutionized. However, the computational cost and memory requirement render these methods unsuitable for mobile devices, especially for the high-resolution per-pixel semantic segmentation task. In this paper, we introduce a new method, squeeze-enhanced Axial Transformer (SeaFormer), for mobile semantic segmentation. Specifically, we design a generic attention block characterized by the formulation of squeeze Axial attention and detail enhancement. It can further be used to create a family of backbone architectures with superior cost-effectiveness. Coupled with a light segmentation head, we achieve the best trade-off between segmentation accuracy and latency on ARM-based mobile devices on the ADE20K and Cityscapes datasets. Critically, we beat both mobile-friendly rivals and Transformer-based counterparts with better performance and lower latency, without bells and whistles. Beyond semantic segmentation, we further apply the proposed SeaFormer architecture to the image classification problem, demonstrating its potential as a versatile mobile-friendly backbone. Our code and models are made publicly available.

1. INTRODUCTION

As a fundamental problem in computer vision, semantic segmentation aims to assign a semantic class label to each pixel in an image. Conventional methods rely on stacking local convolution kernels Long et al. (2015) to perceive the long-range structural information of the image. Since the introduction of Vision Transformers Dosovitskiy et al. (2021), the landscape of semantic segmentation has been significantly revolutionized. Transformer-based approaches Zheng et al. (2021); Xie et al. (2021) have remarkably demonstrated the capability of global context modeling. However, the computational cost and memory requirement of Transformers render these methods unsuitable for mobile devices, especially for high-resolution imagery inputs. Following the conventional wisdom of efficient operation design, local/window-based attention Luong et al. (2015); Liu et al. (2021); Huang et al. (2021a); Yuan et al. (2021), Axial attention Huang et al. (2019b); Ho et al. (2019); Wang et al. (2020a), dynamic graph message passing Zhang et al. (2020; 2022b) and several lightweight attention mechanisms Hou et al. (2020); Li et al. (2021b; c; 2020); Liu et al. (2018); Shen et al. (2021); Xu et al. (2021); Cao et al. (2019); Woo et al. (2018); Wang et al. (2020b); Choromanski et al. (2021); Chen et al. (2017); Mehta & Rastegari (2022a) have been introduced. However, these advances are still insufficient to satisfy the design requirements and constraints of mobile devices due to their high latency on high-resolution inputs (see Figure 1). Recently there has been a surge of interest in building Transformer-based semantic segmentation models. In order to reduce the computation cost at high resolution, TopFormer Zhang et al. (2022c) applies global attention only at a 1/64 scale of the original input, which inevitably harms the segmentation performance.
To solve the dilemma between high-resolution computation for the pixel-wise segmentation task and the low-latency requirement on mobile devices in a performance-harmless way, we propose SeaFormer, a family of mobile-friendly Transformer architectures. The core building block, squeeze-enhanced Axial attention (SEA attention), seeks to squeeze (pool) the input feature maps along the horizontal/vertical axis into a compact column/row and compute self-attention. We concatenate query, keys and values to compensate for the detail information sacrificed during the squeeze, and then feed them into a depth-wise convolution layer to enhance local details. Coupled with a light segmentation head, our design (see Figure 2) with the proposed SeaFormer layer on the small-scale feature is capable of conducting high-resolution image semantic segmentation with low latency on mobile devices. As shown in Figure 1, the proposed SeaFormer outperforms other efficient neural networks on the ADE20K dataset with lower latency. In particular, SeaFormer-Base is superior to the lightweight CNN counterpart MobileNetV3 (41.0 vs. 33.1 mIoU) with lower latency (106ms vs. 126ms) on an ARM-based mobile device. We make the following contributions: (i)

Context branch

The context branch is designed to capture context-rich information from the feature map x_s. As illustrated in the red branch of Figure 2, the context branch is divided into three stages. To obtain a larger receptive field, we stack SeaFormer layers after applying a MobileNet block for down-sampling and expanding the feature dimension. Compared with a standard convolution as the down-sampling module, the MobileNet block increases the representation capacity of the model while maintaining low computation and latency. For all variants except SeaFormer-Large, SeaFormer layers are applied in the last two stages for a superior trade-off between accuracy and efficiency. For SeaFormer-Large, we insert SeaFormer layers in each stage of the context branch. To achieve a good trade-off between segmentation accuracy and inference speed, we design a squeeze-enhanced Axial attention block (SEA attention), illustrated in the next subsection.
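To make the down-sampling trade-off concrete, here is a hypothetical parameter-count comparison between a standard 3×3 convolution and a MobileNetV2-style inverted-residual block; the channel sizes are assumed for illustration and are not figures from the paper.

```python
# Illustrative parameter counts (bias and BN parameters ignored):
# a standard 3x3 convolution vs a MobileNetV2-style inverted residual
# (1x1 expand -> 3x3 depth-wise -> 1x1 project), as used for down-sampling.

def conv3x3_params(c_in, c_out):
    """Parameters of a standard 3x3 convolution."""
    return 3 * 3 * c_in * c_out

def mbv2_block_params(c_in, c_out, expand=4):
    """Parameters of an inverted-residual block with the given expansion ratio."""
    c_mid = c_in * expand
    return c_in * c_mid + 3 * 3 * c_mid + c_mid * c_out

if __name__ == "__main__":
    print(conv3x3_params(64, 128))    # 73728
    print(mbv2_block_params(64, 128)) # 51456
```

Even with a 4x channel expansion inside the block, the depth-wise factorization keeps the parameter count below that of the plain convolution.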

Spatial branch

The spatial branch is designed to obtain spatial information at high resolution. Identical to the context branch, the spatial branch reuses the feature maps x_s. However, the features from the early convolution layers contain rich spatial details but lack high-level semantic information. Consequently, we design a fusion block to fuse the features of the context branch into the spatial branch, bringing high-level semantic information into the low-level spatial information. Fusion block As depicted in Figure 2, high-resolution feature maps in the spatial branch are followed by a 1 × 1 convolution and a batch normalization layer to produce a feature to fuse. Low-resolution feature maps in the context branch are fed into a 1 × 1 convolution layer, a batch normalization layer and a sigmoid layer to produce semantics weights, which are up-sampled to the high resolution and multiplied element-wise with the feature from the spatial branch. Light segmentation head The feature after the last fusion block is fed directly into the proposed segmentation head, as demonstrated in Figure 2. For fast inference, our light segmentation head consists of two convolution layers, each followed by a batch normalization layer; the feature from the first batch normalization layer is fed into an activation layer.
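A minimal numpy sketch of the fusion-block logic described above: the 1×1 convolutions and batch normalization are omitted, nearest-neighbour upsampling stands in for the actual interpolation, and the function names are our own.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(spatial_feat, context_feat):
    """Sketch of the fusion block: the high-resolution spatial feature
    (H, W, C) is modulated by sigmoid weights derived from the
    low-resolution context feature (h, w, C), upsampled to (H, W, C)."""
    H, W, _ = spatial_feat.shape
    h, w, _ = context_feat.shape
    weights = sigmoid(context_feat)
    # nearest-neighbour upsample of the semantics weights to the spatial size
    weights = weights.repeat(H // h, axis=0).repeat(W // w, axis=1)
    return spatial_feat * weights
```

With zero context features the sigmoid weights are 0.5 everywhere, so the spatial feature is uniformly damped; informative context features gate the spatial details location by location.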

3.2. SQUEEZE-ENHANCED AXIAL ATTENTION

The global attention can be expressed as

$$y_o = \sum_{p \in \mathcal{G}(o)} \mathrm{softmax}_p\left(q_o^\top k_p\right) v_p \qquad (1)$$

where $x \in \mathbb{R}^{H \times W \times C}$ and $q, k, v$ are linear projections of $x$, i.e., $q = W_q x$, $k = W_k x$, $v = W_v x$, with learnable weights $W_q, W_k \in \mathbb{R}^{C_{qk} \times C}$ and $W_v \in \mathbb{R}^{C_v \times C}$. $\mathcal{G}(o)$ denotes all positions on the feature map for location $o = (i, j)$. When the traditional attention module is applied to a feature map of size $H \times W \times C$, the time complexity is $O(H^2 W^2 (C_{qk} + C_v))$, leading to low efficiency and high latency. To remedy these drawbacks, we propose the mobile-friendly squeeze-enhanced Axial attention, with a succinct squeeze Axial attention for global semantics extraction and an efficient convolution-based detail enhancement kernel for local detail supplement.

Squeeze Axial attention To achieve more efficient computation while aggregating global information, we resort to a more radical strategy. As before, $q$, $k$, $v$ are first obtained from $x$ with $W^{(s)}_q, W^{(s)}_k \in \mathbb{R}^{C_{qk} \times C}$ and $W^{(s)}_v \in \mathbb{R}^{C_v \times C}$. The query is then squeezed horizontally and vertically:

$$q^{(h)} = \Big(\tfrac{1}{W}\, q^{\rightarrow(C_{qk}, H, W)}\, \mathbf{1}_W\Big)^{\rightarrow(H, C_{qk})}, \qquad q^{(v)} = \Big(\tfrac{1}{H}\, q^{\rightarrow(C_{qk}, W, H)}\, \mathbf{1}_H\Big)^{\rightarrow(W, C_{qk})} \qquad (4)$$

where $z^{\rightarrow(\cdot)}$ means permuting the dimensions of tensor $z$ as given, and $\mathbf{1}_m \in \mathbb{R}^m$ is a vector with all elements equal to 1. The left side of Equation 4 implements the horizontal squeeze by averaging the query feature map along the horizontal direction; the right side implements the vertical squeeze analogously. The squeeze operation on $q$ is repeated on $k$ and $v$, so we finally obtain $q^{(h)}, k^{(h)} \in \mathbb{R}^{H \times C_{qk}}$, $v^{(h)} \in \mathbb{R}^{H \times C_v}$ and $q^{(v)}, k^{(v)} \in \mathbb{R}^{W \times C_{qk}}$, $v^{(v)} \in \mathbb{R}^{W \times C_v}$. The squeeze operation compresses the global information of each axis into a single column or row, greatly alleviating the subsequent global semantic extraction given by Equation 5:

$$y_{(i,j)} = \sum_{p=1}^{H} \mathrm{softmax}_p\big(q_{(h)i}^\top k_{(h)p}\big)\, v_{(h)p} + \sum_{p=1}^{W} \mathrm{softmax}_p\big(q_{(v)j}^\top k_{(v)p}\big)\, v_{(v)p} \qquad (5)$$

Squeeze Axial position embedding Equation 5 is, however, not positional-aware, containing no positional information of the feature map. Hence, we introduce squeeze Axial position embedding to squeeze Axial attention. We render both $q^{(h)}$ and $k^{(h)}$ aware of their position in the squeezed axial feature by introducing positional embeddings $r^{q}_{(h)}, r^{k}_{(h)} \in \mathbb{R}^{H \times C_{qk}}$, which are linearly interpolated from learnable parameters $B^{q}_{(h)}, B^{k}_{(h)} \in \mathbb{R}^{L \times C_{qk}}$, where $L$ is a constant. In the same way, $r^{q}_{(v)}, r^{k}_{(v)} \in \mathbb{R}^{W \times C_{qk}}$ are applied to $q^{(v)}$ and $k^{(v)}$. Thus, the positional-aware squeeze Axial attention can be expressed as Equation 6:

$$y_{(i,j)} = \sum_{p=1}^{H} \mathrm{softmax}_p\big((q_{(h)i} + r^{q}_{(h)i})^\top (k_{(h)p} + r^{k}_{(h)p})\big)\, v_{(h)p} + \sum_{p=1}^{W} \mathrm{softmax}_p\big((q_{(v)j} + r^{q}_{(v)j})^\top (k_{(v)p} + r^{k}_{(v)p})\big)\, v_{(v)p} \qquad (6)$$

Detail enhancement kernel The squeeze operation, though extracting global semantic information efficiently, sacrifices local details. Hence an auxiliary convolution-based kernel is applied to enhance the spatial details. As shown in the upper path of Figure 3, $q$, $k$, $v$ are first obtained from $x$ with another set of weights $W^{(e)}_q, W^{(e)}_k \in \mathbb{R}^{C_{qk} \times C}$, $W^{(e)}_v \in \mathbb{R}^{C_v \times C}$, concatenated along the channel dimension, and then passed to a block made up of a 3×3 depth-wise convolution and batch normalization. The 3×3 convolution aggregates auxiliary local details from $q$, $k$ and $v$. A linear projection with an activation function and batch normalization is then used to squeeze the $(2C_{qk} + C_v)$ dimensions to $C$ and generate the detail enhancement weights. Finally, the enhancement feature is fused with the feature produced by squeeze Axial attention; different enhancement modes, including element-wise addition and multiplication, are compared in the experiment section. The time complexity of the 3×3 depth-wise convolution is $O(3^2 HW(2C_{qk} + C_v))$ and that of the 1×1 convolution is $O(HWC(2C_{qk} + C_v))$; the time for other operations such as activation can be omitted.
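The squeeze and attention steps of Equations 4 and 5 can be sketched in numpy as follows; the linear projections and the positional embeddings of Equation 6 are omitted, and the broadcast addition at the end restores the full H × W map from the two axial results.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def squeeze_axial_attention(q, k, v):
    """Sketch of Equations 4-5. q, k: (H, W, Cqk); v: (H, W, Cv).
    Returns y of shape (H, W, Cv)."""
    # Equation 4: horizontal squeeze (average over W) and vertical
    # squeeze (average over H), applied to q, k and v alike.
    qh, kh, vh = (t.mean(axis=1) for t in (q, k, v))  # (H, C)
    qv, kv, vv = (t.mean(axis=0) for t in (q, k, v))  # (W, C)
    # Equation 5: 1D attention along each squeezed axis.
    yh = softmax(qh @ kh.T, axis=-1) @ vh             # (H, Cv)
    yv = softmax(qv @ kv.T, axis=-1) @ vv             # (W, Cv)
    # Broadcast the two axial results back to the full map and add.
    return yh[:, None, :] + yv[None, :, :]
```

Each row of the output attends over one squeezed column and one squeezed row, which is what keeps the attention cost linear in the feature map size.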

Architecture and Variants

We introduce four variants, SeaFormer-Tiny, Small, Base and Large (T, S, B and L). More configuration details are listed in the supplementary material.

4. EXPERIMENTS

We evaluate our method on semantic segmentation and image classification tasks. First, we describe implementation details and compare results with state of the art. We then conduct a series of ablation studies to validate the design of SeaFormer. Each proposed component and important hyper-parameters are examined thoroughly. 

4.2. COMPARISON WITH STATE OF THE ART

ADE20K Table 1 shows the results of SeaFormer and previous efficient backbones on the ADE20K val set. The comparison covers Params, FLOPs, Latency and mIoU. As shown in Table 1, SeaFormer outperforms these approaches with comparable or fewer FLOPs and lower latency. Compared with the specially designed mobile backbone TopFormer, which uses global attention as its semantics extractor, SeaFormer achieves higher segmentation accuracy with lower latency. The performance of SeaFormer-B surpasses MobileNetV3 by a large margin of +7.9% mIoU with lower latency (-16%). The results demonstrate that our SeaFormer layers improve the representation capability significantly. Cityscapes From Table 2, it can be seen that SeaFormer-S achieves comparable or better results than TopFormer-B with less computation cost and latency, which proves that SeaFormer also achieves a good trade-off between performance and latency in the high-resolution scenario.

4.3. ABLATION STUDIES

In this section, we ablate different self-attention implementations and some important design elements in the proposed model, including our squeeze-enhanced Axial attention module (SEA attention) and fusion block on ADE20K dataset.

The influence of components in SEA attention

We conduct experiments with several configurations, including detail enhancement kernel only, squeeze Axial attention only, and the fusion of both; results are shown in Table 3. Enhancement input denotes the input of the detail enhancement kernel. conv(x) means x followed by a point-wise convolution from C_in to C_in channels; upconv(x) is the same as conv(x) except that it maps C_in to C_q + C_k + C_v channels. concat[qkv] indicates the concatenation of Q, K and V.

The influence of the width in fusion block

To study the influence of the width of the fusion block, we perform experiments with different embedding dimensions in the fusion blocks on SeaFormer-Base. M denotes the channels to which the spatial branch and context branch features are mapped in the two fusion blocks. Results are shown in Table 5.

4.4. IMAGE CLASSIFICATION

We conduct experiments on ImageNet-1K Deng et al. (2009), which contains 1.28M training images and 50K validation images from 1,000 classes. We employ an AdamW Kingma & Ba (2014) optimizer for 600 epochs using a cosine decay learning rate scheduler. A batch size of 1024, an initial learning rate of 0.064 and a weight decay of 2e-5 are used. The results are illustrated in Table 6. Compared with other efficient approaches, SeaFormer achieves a better trade-off between latency and accuracy. We also break down the latency of the proposed SeaFormer-Tiny. As shown in Figure 4, the shared STEM takes up about half of the latency of the whole network (49%). The latency of the context branch is about a third of the total latency (34%), whilst the actual latency of the spatial branch is relatively low (8%) due to sharing early convolution layers with the context branch. Our light segmentation head (8%) also contributes to the success of building a light model.
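The cosine decay schedule above can be sketched as follows; this is a minimal illustration with the stated initial learning rate of 0.064, and any warmup or other implementation-specific details of the actual training recipe are omitted.

```python
import math

def cosine_lr(step, total_steps, base_lr=0.064, min_lr=0.0):
    """Cosine-decayed learning rate: starts at base_lr and decays
    smoothly to min_lr over total_steps (warmup omitted)."""
    t = step / total_steps
    return min_lr + 0.5 * (base_lr - min_lr) * (1.0 + math.cos(math.pi * t))
```

At step 0 this returns the full 0.064, at the halfway point half of it, and at the final step it reaches the minimum.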

Results

From Table 10, we can observe that our SeaFormer achieves superior results on the detection task, which further demonstrates the strong generalization ability of our method.

F ADDITIONAL ABLATION STUDY

In addition to the ablation study in the submission paper, we investigate the effect of fusion method in fusion block in Figure 2 .

F.1 THE INFLUENCE OF FUSION BLOCK DESIGN

We compare four fusion methods: "Add directly", "Multiply directly", "Sigmoid add" and "Sigmoid multiply". "X directly" means the features from the context branch and the spatial branch are combined by X directly. "Sigmoid X" means the feature from the context branch first goes through a sigmoid layer and is then combined by X with the feature from the spatial branch.

G PERFORMANCE UNDER DIFFERENT PRECISION OF THE MODELS

Following TopFormer, we measure the latency in the submission paper on a single Qualcomm Snapdragon 865, and only an ARM CPU core is used for speed testing. No other means of acceleration, e.g., GPU or quantization, is used. We provide a more comprehensive comparison to demonstrate the necessity of our proposed method by testing the latency under different precisions of the models. From Table 13, it can be seen that, whether at full precision or half precision, the performance of SeaFormer is better than that of TopFormer.

H VISUALIZATION H.1 ATTENTION HEATMAP

To demonstrate the effectiveness of detail enhancement in our squeeze-enhanced Axial attention (SEA attention), we ablate our model by removing the detail enhancement. We visualize the attention heatmaps of the two models in Figure 5. Without detail enhancement, the attention heatmaps from squeeze Axial attention alone appear as axial strips, while our proposed SEA attention is able to activate the semantic local region accurately, which is particularly significant in the dense prediction task.

I LIMITATIONS AND SOCIETAL IMPACT

Mobile-friendly segmentation is deeply related to industrial applications on edge computation platforms, yet few academic attempts have been made to meet the requirements of the industry. We test our method on a Qualcomm Snapdragon 865 processor (Fig. 1 in the main paper) and show superior results to the alternatives. We believe our work can lead to expected and unexpected innovations in both academia and industry. However, our system is not perfect yet and hence not fully trustworthy in real-world deployment. Also, the current system is not exhaustively evaluated and tested due to limited resources. We focus on mobile semantic segmentation and image classification tasks. New mobile-friendly methods for more downstream tasks, as well as extensions to GPU systems, will be studied in the future.



CONCLUSION

In this paper, we have proposed the squeeze-enhanced Axial Transformer (SeaFormer) for mobile semantic segmentation, filling the vacancy of mobile-friendly efficient Transformers. Moreover, we create a family of SeaFormer backbone architectures with superior cost-effectiveness. The superior performance on ADE20K and Cityscapes and the lowest latency demonstrate its effectiveness on ARM-based mobile devices. Beyond semantic segmentation, we further apply the proposed SeaFormer architecture to the image classification problem, demonstrating its potential as a versatile mobile-friendly backbone.



Figure 1: Left: Latency comparison with Transformer Vaswani et al. (2017), MixFormer Chen et al. (2022a), ACmix Pan et al. (2022b), Axial attention Ho et al. (2019) and local attention Luong et al. (2015). It is measured with a single module of channel dimension 64 on a Qualcomm Snapdragon 865 processor. Right: mIoU versus latency on the ADE20K val set. MV2 means MobileNetV2 Sandler et al. (2018). MV3-L means MobileNetV3-Large Howard et al. (2019). MV3-Lr denotes MobileNetV3-Large-reduce Howard et al. (2019). The latency is measured on a single Qualcomm Snapdragon 865, and only an ARM CPU core is used for speed testing. No other means of acceleration, e.g., GPU or quantization, is used. For the right figure, the input size is 512×512. SeaFormer achieves a superior trade-off between mIoU and latency.

Figure 2: The overall architecture of SeaFormer. It contains the shared STEM, context branch (red), spatial branch (blue), fusion block and light segmentation head. MV2 block means MobileNetV2 block and MV2 ↓2 means MobileNetV2 block with downsampling. The SeaFormer layers and fusion block with dashed box only exist in SeaFormer-L. The symbol denotes element-wise multiplication.

To improve efficiency, some works Liu et al. (2021); Huang et al. (2019b); Ho et al. (2019) compute self-attention within a local region. We show the two most representative efficient Transformers in Equations 2 and 3:

$$y_o = \sum_{p \in N_{m \times m}(o)} \mathrm{softmax}_p\left(q_o^\top k_p\right) v_p \qquad (2)$$

$$y_o = \sum_{p \in N_{1 \times W}(o)} \mathrm{softmax}_p\left(q_o^\top k_p\right) v_p + \sum_{p \in N_{H \times 1}(o)} \mathrm{softmax}_p\left(q_o^\top k_p\right) v_p \qquad (3)$$

Equation 2 represents window-based attention Luong et al. (2015), which successfully reduces the time complexity to $O(m^2 HW(C_{qk} + C_v)) = O(HW)$, where $N_{m \times m}(o)$ means the neighbouring $m \times m$ positions of $o$, but loses global receptiveness. Equation 3 represents Axial attention Ho et al. (2019), which only reduces the time complexity to $O((H + W)HW(C_{qk} + C_v)) = O((HW)^{1.5})$, where $N_{H \times 1}(o)$ means all positions in the column of $o$ and $N_{1 \times W}(o)$ means all positions in the row of $o$.
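To make the gap between these complexity terms concrete, the multiply-accumulate counts they imply can be tallied directly; this is a rough sketch where the channel terms are treated as a single constant c, and the window size m = 7 is an assumed value for illustration.

```python
# Rough operation counts implied by the stated complexity terms
# (illustrative only; constants and lower-order terms are ignored).

def global_attention_ops(H, W, c):
    """Global attention (Equation 1): O(H^2 W^2 c)."""
    return (H * W) ** 2 * c

def window_attention_ops(H, W, c, m=7):
    """Window-based attention (Equation 2): O(m^2 H W c)."""
    return m * m * H * W * c

def axial_attention_ops(H, W, c):
    """Axial attention (Equation 3): O((H + W) H W c)."""
    return (H + W) * H * W * c

if __name__ == "__main__":
    H = W = 64
    c = 1  # channel factor treated as a constant
    print(global_attention_ops(H, W, c))  # 16777216
    print(axial_attention_ops(H, W, c))   # 524288
    print(window_attention_ops(H, W, c))  # 200704
```

Already at a 64×64 feature map, axial attention is about 32x cheaper than global attention, and window attention cheaper still, at the cost of global receptiveness.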

Figure 3: Right: the schematic illustration of the proposed squeeze-enhanced Axial Transformer layer, including squeeze-enhanced Axial attention and a Feed-Forward Network (FFN). Left: the squeeze-enhanced Axial attention, including the detail enhancement kernel and squeeze Axial attention. The symbol indicates an element-wise addition operation. Mul means multiplication.

Each position of the feature map propagates information only along the two squeezed axial features. Although this shows no distinct computation reduction compared to Equation 3, the repeat in Equation 5 can be simply implemented by the highly efficient broadcast operation. The details are shown in Figure 3. The time complexity of squeezing $q$, $k$, $v$ is $O((H + W)(2C_{qk} + C_v))$ and the attention operation takes $O((H^2 + W^2)(C_{qk} + C_v))$ time. Thus, our squeeze Axial attention successfully reduces the time complexity to $O(HW)$.

4.1.2 IMPLEMENTATION DETAILS

We use an ImageNet-1K Deng et al. (2009) pretrained network as the backbone; training details for ImageNet-1K are given in the last subsection. For semantic segmentation, the standard BatchNorm Ioffe & Szegedy (2015) layer is replaced by synchronized BatchNorm. Training Our implementation is based on the public codebase mmsegmentation Contributors (2020). We follow the batch size, training iteration schedule and data augmentation strategy of TopFormer Zhang et al. (2022c) for a fair comparison. The initial learning rate is 0.0005 and the weight decay is 0.01. A "poly" learning rate schedule with factor 1.0 is adopted. During inference, we use the same resize and crop rules as TopFormer to ensure fairness. The Cityscapes comparison covers full resolution and half resolution. For the full-resolution version, the training images are randomly scaled and then cropped to a fixed size of 1024 × 1024. For the half-resolution version, the training images are resized to 1024 × 512 and randomly scaled; the crop size is 1024 × 512.
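The "poly" schedule mentioned above can be written down directly; this is a minimal sketch using the stated initial learning rate of 0.0005 and factor (power) 1.0, with the total iteration count as an assumed placeholder.

```python
def poly_lr(iteration, max_iter, base_lr=0.0005, power=1.0):
    """"Poly" learning rate schedule: base_lr * (1 - iter/max_iter) ** power.
    With power = 1.0 this is simply a linear decay to zero."""
    return base_lr * (1.0 - iteration / max_iter) ** power
```

With power 1.0 the learning rate decays linearly from 0.0005 at iteration 0 to 0 at the final iteration.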

measured with 256×256 according to their original implementations. P, F and L denote Parameters, FLOPs and latency, respectively. * indicates re-parameterized variants Vasu et al. (2022). The latency is measured on a single Qualcomm Snapdragon 865, and only an ARM CPU core is used for speed testing. No other means of acceleration, e.g., GPU or quantization, is used.

Figure 5: The visualization of attention heatmaps from the model consisting of squeeze Axial attention without detail enhancement (first row) and SeaFormer (second row). Heatmaps are produced by averaging channels of the features from the last attention block, normalizing to [0, 255] and upsampling to the image size.
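The heatmap post-processing described in the caption can be sketched as follows; this is a numpy illustration in which the final upsampling to the image size is omitted.

```python
import numpy as np

def to_heatmap(feat):
    """Average the channels of a (H, W, C) feature map and rescale the
    result to [0, 255], as described for Figure 5 (upsampling omitted)."""
    m = feat.mean(axis=-1)        # (H, W), channel average
    m = m - m.min()               # shift minimum to 0
    if m.max() > 0:
        m = m / m.max()           # normalize to [0, 1]
    return (m * 255.0).astype(np.uint8)
```

The result is a single-channel uint8 map whose darkest and brightest pixels correspond to the least and most activated locations of the feature map.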

ViT, Mobile-Former, TopFormer and EfficientFormer are restricted by Transformer blocks and have to trade off between efficiency and performance in model design. LVT, MobileViTv2 and EdgeViTs keep the model size small at the cost of relatively high computation, which also means high latency.

Results of semantic segmentation on ADE20K val set, * indicates training batch size is 32. The latency is measured on a single Qualcomm Snapdragon 865 with input size 512×512, and only an ARM CPU core is used for speed testing.

Results on the Cityscapes val set. The results on the test set for some methods are not presented because they are not reported in the original papers. The ADE20K dataset covers 150 categories, containing 25K images split into 20K/2K/3K for train, val and test. Cityscapes is a driving dataset for semantic segmentation, consisting of 5000 finely annotated high-resolution images with 19 categories.

Ablation studies on components in SEA attention on the ImageNet-1K and ADE20K datasets.

As shown in Table 3, detail enhancement or squeeze Axial attention alone achieves relatively poor performance, while enhancing squeeze Axial attention with the detail enhancement kernel brings a performance boost with a gain of 2.3% mIoU on ADE20K. The results indicate that enhancing the global semantic features from squeeze Axial attention with local details from convolution improves the feature extraction capability of the Transformer block. For the enhancement input, there is an apparent performance gap between upconv(x) and conv(x), from which we conclude that increasing the channels boosts performance significantly. Comparing concat[qkv] and upconv(x), which correspond to sharing or not sharing convolution weights between the detail enhancement kernel and squeeze Axial attention, we find that sharing weights improves inference efficiency with minimal performance loss (35.8 vs. 35.9). As for the enhancement modes, multiplying the features from squeeze Axial attention and the detail enhancement kernel outperforms addition by +0.4% mIoU.

Results on the ADE20K val set based on the Swin Transformer architecture. (B) denotes backbone. OOM means CUDA out of memory. References: ISSA Huang et al. Since we may otherwise not be able to draw conclusions rigorously, we doubled the number of their Transformer blocks (including MLP). As ACmix has the same architecture configuration as Swin, we borrow the results from the original paper. From Table 4, it can be seen that SeaFormer outperforms other attention mechanisms with lower FLOPs and latency.

Ablation studies on embedding dimensions and position bias. M = [128, 160] is the optimal embedding dimension in the fusion blocks.

Image classification results on the ImageNet-1K val set. The FLOPs and latency are measured with input size 224×224, except for MobileViT and MobileViTv2 that are

Architectures for semantic segmentation. [Conv, 3, 16, 2] denotes a regular convolution layer with kernel of 3, output channel of 16 and stride of 2. [MB, 3, 4, 16, 2] means a MobileNetV2 Sandler et al. (2018) block with kernel of 3, expansion ratio of 4, output channel of 16 and stride of 2. [Sea, 2, 4] refers to SeaFormer layers with 2 layers and 4 heads.

Results on Pascal Context val set. F means FLOPs. We omit the latency as the input resolution is almost the same as that in table 1.

Results on COCO-Stuff test set. F means FLOPs. We omit the latency in this table as the input resolution is the same as that in table 1.

Results on COCO object detection. MF denotes Mobile-Former Chen et al. (2022b). MV3 denotes MobileNetV3 Howard et al. (2019).

Ablation study on fusion method on ADE20K val set.

From Table 11, we can see that replacing sigmoid multiply with other fusion methods hurts performance; sigmoid multiply is our optimal fusion block choice.

Performance comparison on the ADE20K val set under different precision.

We also compare SEA attention with other enhanced attention methods, including ACmix and MixFormer. The ablation experiments are organized in seven groups. Since the resolution at which attention is computed is relatively small, the window size in local attention, ACmix and MixFormer is set to 4. We adjust the channels when applying different attention modules to keep the FLOPs aligned and compare their performance and latency. The results are illustrated in Table 12. As demonstrated in the table, SEA attention outperforms the counterparts built on other efficient attentions. Compared with global attention, SEA attention outperforms it by +1.2% Top-1 accuracy on ImageNet-1K and +1.6 mIoU on ADE20K with fewer FLOPs and lower latency. Compared with the similar convolution-enhanced attention works ACmix and MixFormer, our SEA attention obtains better results on ImageNet-1K and ADE20K with similar FLOPs but lower latency. The results indicate the effectiveness and efficiency of the SEA attention module.

ACKNOWLEDGMENTS

This work was supported in part by National Natural Science Foundation of China (Grant No. 62106050), Lingang Laboratory (Grant No. LG-QS-202202-07), Natural Science Foundation of Shanghai (Grant No. 22ZR1407500) and CCF-Tencent Open Research Fund (No. CCF-Tencent RAGR20210111).

Availability

https://github.com/fudan-zvg

Appendix A ARCHITECTURE DETAILS AND VARIANTS

The SeaFormer backbone contains 6 stages, corresponding to the shared STEM and context branch in Figure 2 of the main paper. When conducting the image classification experiments, a pooling layer and a linear layer are added at the end of the context branch. Table 7 details the family of SeaFormer configurations with varying capacities. We construct SeaFormer-Tiny, SeaFormer-Small, SeaFormer-Base and SeaFormer-Large models of different scales by varying the number of SeaFormer layers and the feature dimensions. We use an input image size of 512 × 512 by default. For all variants except SeaFormer-Large, SeaFormer layers are applied in the last two stages for a superior trade-off between accuracy and efficiency. For SeaFormer-Large, we apply the proposed SeaFormer layers in each stage of the context branch.

B COMPLEXITY ANALYSIS

We analyze the complexity of our proposed SEA attention in subsection 3.2 to demonstrate its efficiency theoretically. In our application, we set C_qk = 0.5 C_v to further reduce the computation cost. The total time complexity of squeeze-enhanced Axial attention is O(HW) if we assume H = W and treat the channel dimension as a constant. SEA attention is thus theoretically linear in the feature map size. Moreover, SEA attention only includes mobile-friendly operations such as convolution, pooling and matrix multiplication.
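As a sanity check on the claimed linearity, one can tally the dominant multiply-accumulate terms and confirm that doubling the resolution roughly quadruples the cost; this is a rough sketch in which the channel-dependent 1×1 term is folded into the depth-wise constant and lower-order terms are ignored.

```python
def sea_attention_ops(H, W, cqk, cv):
    """Dominant multiply-accumulate terms of SEA attention:
    axial attention O((H^2 + W^2)(cqk + cv)) plus the 3x3 depth-wise
    detail enhancement kernel O(9 H W (2 cqk + cv))."""
    return (H * H + W * W) * (cqk + cv) + 9 * H * W * (2 * cqk + cv)

if __name__ == "__main__":
    # with cqk = 0.5 * cv, as in our setting
    small = sea_attention_ops(64, 64, 32, 64)
    large = sea_attention_ops(128, 128, 32, 64)
    print(large / small)  # 4.0: cost scales linearly with H * W
```

Since both terms scale with the number of pixels when H = W, the ratio is exactly 4 when the side length doubles, consistent with O(HW) overall.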

C PASCAL CONTEXT PERFORMANCE

We evaluate performance on the Pascal Context val set over 59 and 60 categories. The PASCAL Context dataset has 4998/5105 images for train and test, covering 59 semantic labels and 1 background label. Following TopFormer Zhang et al. (2022c), we train the models for 80,000 iterations on the PASCAL Context dataset. The same data augmentation strategy and batch size are adopted for a fair comparison. The initial learning rate is 0.0002 and the weight decay is 0.01. A poly learning rate schedule with factor 1.0 is used. Table 8 demonstrates that SeaFormer-S is +1.4% mIoU higher (45.08% vs. 43.68%) than TopFormer-S with lower latency.

D COCO-STUFF PERFORMANCE

We compare SeaFormer with previous approaches on the COCO-Stuff val set. The COCO-Stuff dataset augments the COCO dataset with pixel-level stuff annotations; 10K complex images are selected from COCO, and the train and test sets contain 9K/1K images. Following TopFormer Zhang et al. (2022c), we train the models for 80,000 iterations on the COCO-Stuff dataset. The same data augmentation strategy and batch size are adopted for a fair comparison. The initial learning rate is 0.0002 and the weight decay is 0.01. A poly learning rate schedule with factor 1.0 is used.

