RETHINKING SKIP CONNECTION MODEL AS A LEARNABLE MARKOV CHAIN

Abstract

In the years since the birth of ResNet, the skip connection has become the de facto standard in the design of modern architectures owing to its widespread adoption, ease of optimization, and proven performance. Prior work has explained the effectiveness of the skip connection mechanism from different perspectives. In this work, we dive deep into the behavior of models with skip connections, which can be formulated as a learnable Markov chain. An efficient Markov chain is preferable, as it maps the input data to the target domain more effectively. However, even when a model can be interpreted as a Markov chain, existing SGD-based optimizers, which are prone to getting trapped in local optima, do not guarantee that it is optimized toward an efficient Markov chain. To move toward a more efficient Markov chain, we propose a simple routine of penal connection that turns any residual-like model into a learnable Markov chain. In addition, the penal connection can be viewed as a particular form of model regularization and can be implemented with one line of code in the most popular deep learning frameworks. Encouraging experimental results in multi-modal translation and image recognition empirically confirm our conjecture of the learnable Markov chain view and demonstrate the superiority of the proposed penal connection.

1. INTRODUCTION

Over the last decade, deep learning has been dominant in many tasks, including image recognition (Voulodimos et al., 2018), machine translation (Singh et al., 2017), and speech recognition (Zhang et al., 2018). Many SGD-based methods and excellent network structures have come to the fore (Alom et al., 2019). Among them, the skip connection (He et al., 2016) is a widely used technique for improving the performance and convergence of deep neural networks. Aided by the skip connection, models with very deep layers can be easily optimized by SGD-based methods (Amari, 1993), e.g., vanilla SGD (Cherry et al., 1998), Momentum SGD (Sutskever et al., 2013), Adagrad (Lydia and Francis, 2019), and Adam (Kingma and Ba, 2014). Recently, several theoretical explanations of how it works have been proposed (Li and Yuan, 2017; Allen-Zhu et al., 2019). In this work, we continue to explore the behaviors of the model with skip connections and view it as a learnable Markov chain (short for Markov
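For concreteness, the skip connection of He et al. (2016) computes y = x + F(x): the block's input is added back to its learned residual transformation, so a block whose residual branch outputs zero reduces to the identity map, which is one intuition for why very deep residual stacks remain easy to optimize. The following minimal NumPy sketch illustrates this formulation; the names `residual_block` and `relu_layer` are illustrative and not from the original work:

```python
import numpy as np

def relu_layer(w, b):
    """Build a simple residual branch F(x) = relu(W x + b) (illustrative choice)."""
    def f(x):
        return np.maximum(0.0, w @ x + b)
    return f

def residual_block(x, f):
    """Skip connection: the block output is the input plus the residual F(x)."""
    return x + f(x)

x = np.array([1.0, -2.0, 3.0])

# With zero weights the residual branch outputs F(x) = 0,
# so the whole block collapses to the identity map: y == x.
f_zero = relu_layer(np.zeros((3, 3)), np.zeros(3))
y = residual_block(x, f_zero)
```

Stacking many such blocks composes y_{t+1} = y_t + F_t(y_t), the step-by-step state transition that the Markov-chain view of the model builds on.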


* Equal contribution. † Corresponding author. The work is supported in part by NSFC Grants (62072449).

