TRANSFORMERS CAN BE TRANSLATED TO FIRST-ORDER LOGIC WITH MAJORITY QUANTIFIERS

Abstract

Characterizing the implicit structure of the computation within neural networks is a foundational problem in the area of deep learning interpretability. Can their inner decision process be captured symbolically in some familiar logic? We show that any transformer neural network can be translated into an equivalent fixed-size first-order logic formula that may also use majority quantifiers. The idea is to simulate transformers with highly uniform threshold circuits and leverage known theoretical connections between circuits and logic. Our findings also reveal the surprising fact that the entire transformer computation can be reduced merely to the division of two (large) integers. While our results are most pertinent for transformers, they apply equally to a broader class of neural network architectures, namely those with a fixed-depth uniform computation graph made up of standard neural net components, which includes feedforward and convolutional networks.

1. INTRODUCTION

The incredible success of deep learning models, especially very large language and vision models with tens to hundreds of billions of parameters (Brown et al., 2020; Thoppilan et al., 2022), has come at the cost of an increasingly limited understanding of how these models actually work and when they might fail. This raises many concerns, for instance around their safe deployment, fairness, and accountability. Is the inner working of such networks fundamentally different from the classical algorithms and symbolic systems that we understand better? Or can their computation be described using a familiar symbolic formalism?

We derive what is, to the best of our knowledge, the first direct connection between a broad class of neural networks and the well-studied classical formalism of first-order logic. Specifically, we show that transformers, and other neural networks whose computation graph has constant depth and a "repetitive" or uniform structure, implement nothing but fixed-size first-order logic expressions, provided the logic is allowed to have majority quantifiers (M) in addition to the standard existential (∃) and universal (∀) quantifiers. A majority quantifier takes a sequence of Boolean values and returns true if more than half of them are true. The resulting logic is often referred to as FO(M).

Theorem 1 (Informal version of Cor. 5.1). For any neural network N with a constant-depth computation graph, there is a fixed-size FO(M) formula ϕ equivalent to N.

This result immediately provides a form of mechanistic interpretability: it demonstrates that, at least in principle, the inner decision process of any transformer model can be efficiently translated into a formula of fixed size (with respect to the input length) in a simple, well-defined logic. The output N(x) of the transformer on any input x is simply the value ϕ(x) of this formula.
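To make the semantics of the majority quantifier concrete, the following is an illustrative sketch (not from the paper): a tiny evaluator for an FO(M)-style formula over an input string. The function names `majority` and `phi` are hypothetical, chosen for this example; the quantifier semantics follow the definition above, where M binds a position variable and is true iff its body holds at more than half the positions.

```python
# Sketch of FO(M) quantifier semantics over positions of an input string.
# A majority quantifier M i. body(i) is true iff body holds at more than
# half of the positions i.

def majority(predicate, positions):
    """M i. predicate(i): true iff predicate holds at > half of positions."""
    positions = list(positions)
    votes = sum(1 for i in positions if predicate(i))
    return votes > len(positions) / 2

# Example formula over a binary string x:
#   phi(x) = M i. (x[i] == '1')   i.e., "a majority of symbols are 1"
def phi(x):
    return majority(lambda i: x[i] == '1', range(len(x)))

print(phi("11010"))  # True: three of five symbols are '1'
print(phi("10010"))  # False: only two of five symbols are '1'
```

Note that `phi` is a single formula of fixed size, yet it is evaluated over inputs of arbitrary length, which mirrors the sense in which the paper's translation is "fixed-size with respect to the input length."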
Like decision trees, FO(M) formulae have the property that each sub-expression corresponds to a logical constraint, i.e., a function mapping the input sequence to a truth value. In contrast, the internal modules of a transformer or of a complex circuit do not satisfy this property, as they map between uninterpretable latent spaces. We thus believe that converting transformers to FO(M) formulae could be leveraged for interpreting their behavior in future work, although a thorough exploration of this idea lies outside the scope of our theoretical contributions in this paper.

Thm. 1 also gives some insight into how to contrast the abilities of transformers and finite-state machines. Classically, the regular languages can be characterized as the languages definable in terms

