ITERATIVE CIRCUIT REPAIR AGAINST FORMAL SPECIFICATIONS

Abstract

We present a deep learning approach for repairing sequential circuits against formal specifications given in linear-time temporal logic (LTL). Given a defective circuit and its formal specification, we train Transformer models to output circuits that satisfy the corresponding specification. We propose a separated hierarchical Transformer for multimodal representation learning of the formal specification and the circuit. We introduce a data generation algorithm that enables generalization to more complex specifications and out-of-distribution datasets. In addition, our proposed repair mechanism significantly improves the automated synthesis of circuits from LTL specifications with Transformers. It improves the state-of-the-art by 6.8 percentage points on held-out instances and by 11.8 percentage points on an out-of-distribution dataset from the annual reactive synthesis competition.

1. INTRODUCTION

Sequential circuit repair (Katz & Manna, 1975) is the task of automatically computing, given a formal specification and a defective circuit implementation, an implementation that satisfies the specification. Circuit repair finds application especially in formal verification, for example in automated circuit debugging after model checking (Clarke, 1997) or in correcting faulty circuit implementations predicted by heuristics such as neural networks (Schmitt et al., 2021b). In this paper, we design and study a deep learning approach to circuit repair for linear-time temporal logic (LTL) specifications (Pnueli, 1977) that also improves the state-of-the-art in synthesizing sequential circuits with neural networks.

We consider sequential circuit implementations that continuously interact with their environments. For example, an arbiter that manages access to a shared resource interacts with processes by giving out mutually exclusive grants to the shared resource. Linear-time temporal logic (LTL) and its dialects (e.g., STL (Maler & Nickovic, 2004) or CTL (Clarke & Emerson, 1981)) are widely used in academia and industry to specify the behavior of sequential circuits (e.g., Godhal et al. (2013); IEEE (2005); Horak et al. (2021)). A typical example is the response property □(r → ◇g), stating that it always (□) holds that a request r is eventually (◇) answered by a grant g. We can specify an arbiter that manages access to a shared resource for four processes by combining response patterns for requests r₀, . . . , r₃ and grants g₀, . . . , g₃ with a mutual exclusion property as follows:

□(r₀ → ◇g₀) ∧ □(r₁ → ◇g₁) ∧ □(r₂ → ◇g₂) ∧ □(r₃ → ◇g₃)    (response properties)
□((¬g₀ ∧ ¬g₁ ∧ (¬g₂ ∨ ¬g₃)) ∨ ((¬g₀ ∨ ¬g₁) ∧ ¬g₂ ∧ ¬g₃))    (mutual exclusion property)

A possible implementation of this specification is a circuit that gives out grants based on a round-robin scheduler. However, running neural reactive synthesis (Schmitt et al., 2021b) on this specification results in a defective circuit, as shown in Figure 1a. After model checking the implementation, we observe that the circuit does not keep track of the count (an AND gate is missing) and that the mutual exclusion property is violated (the same variable controls grants g₀ and g₁).
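To make the semantics of these properties concrete, the following minimal Python sketch of ours evaluates the four response properties and the mutual exclusion property over a finite trace under bounded semantics. This is an illustration, not the paper's tooling: real model checking reasons about all infinite executions of the circuit, and the at-most-one-grant check below is a compact equivalent of the disjunctive mutual exclusion formula above.

```python
# Bounded-semantics sketch (ours): a trace is a finite list of steps,
# each step mapping signal names to booleans. Real LTL model checking
# considers infinite executions; this only checks a finite prefix.

def always(pred, trace):
    """Bounded 'always' (the box operator): pred holds at every step."""
    return all(pred(trace, i) for i in range(len(trace)))

def eventually(pred, trace, start):
    """Bounded 'eventually' (the diamond operator): pred holds at some step >= start."""
    return any(pred(trace, i) for i in range(start, len(trace)))

def response(r, g):
    """Box(r -> Diamond g): every request r is eventually answered by grant g."""
    def pred(trace, i):
        return (not trace[i][r]) or eventually(lambda t, j: t[j][g], trace, i)
    return pred

def mutual_exclusion(trace, i):
    """At most one grant at step i; for four grants this is equivalent
    to the disjunctive mutual exclusion formula in the specification."""
    return sum(trace[i][f"g{k}"] for k in range(4)) <= 1

trace = [
    {"r0": True,  "r1": False, "r2": False, "r3": False,
     "g0": False, "g1": False, "g2": False, "g3": False},
    {"r0": False, "r1": False, "r2": False, "r3": False,
     "g0": True,  "g1": False, "g2": False, "g3": False},
]

ok = (all(always(response(f"r{k}", f"g{k}"), trace) for k in range(4))
      and always(mutual_exclusion, trace))
print("specification satisfied on this finite trace:", ok)
```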
We present the first deep learning approach to repair such faulty circuits, inspired by the successful application of deep learning to LTL trace generation (Hahn et al., 2021) and to the reactive synthesis problem (Schmitt et al., 2021b). We introduce a new Transformer architecture, the separated hierarchical Transformer, that accounts for the different characteristics of the problem's inputs. It combines the advantages of the hierarchical Transformer (Li et al., 2021) with multimodal representation learning of an LTL specification and a faulty circuit. In particular, it exploits the fact that LTL specifications typically consist of reoccurring patterns. This architecture can successfully be trained on the circuit repair problem.

Our model, for example, produces a correct circuit implementation of the round-robin strategy by repairing the faulty circuit in Figure 1a in only two iterations, where each iteration predicts a circuit based on the specification and a faulty circuit as input (a sketch of this loop follows below). The result of the first iteration is shown in Figure 1b. The circuit remains faulty, with two of the four grants still controlled by the same variable. Progress was made, however, towards a functioning counter: latch l1 is now driven by a combination of AND gates and inverters that is expressive enough to represent a counter. The second iteration finally results in a correct implementation, as shown in Figure 1c.

To effectively train and enable further research on repair models, we provide open-source datasets and an open-source implementation for supervised training on the circuit repair problem.¹ We demonstrate that the trained separated hierarchical Transformer architecture generalizes to unseen specifications and faulty circuits. Further, we show that our approach can be combined with the existing neural method for synthesizing sequential circuits (Schmitt et al., 2021b) by repairing its mispredictions, which improves the overall accuracy substantially: by 6.8 percentage points, to a total of 84%, on held-out instances, and by an even larger 11.8 percentage points on out-of-distribution samples from the annual reactive synthesis competition SYNTCOMP (Jacobs et al., 2022a).
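The iterative repair loop referenced above can be summarized as follows. This is a hedged sketch under our own naming: model.predict_repair and model_check are hypothetical stand-ins for the Transformer's prediction step and an external model checker, not the paper's actual API.

```python
# Sketch (ours) of the iterative repair loop: keep asking the model for
# a repaired circuit until the model checker accepts it or the
# iteration budget runs out.

def iterative_repair(spec, circuit, model, model_check, max_iters=5):
    for _ in range(max_iters):
        if model_check(spec, circuit):            # True iff circuit |= spec
            return circuit                        # repaired (or already correct)
        # Condition the next prediction on both input modalities:
        # the specification and the current faulty circuit.
        circuit = model.predict_repair(spec, circuit)
    return None                                   # no repair found within budget
```

In the running example, this loop terminates after two iterations: the first prediction still violates mutual exclusion, and the second passes the model checker.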



¹https://github.com/reactive-systems/circuit-repair



Figure 1: Circuit representations of a 4-process arbiter implementation as DOT visualizations: triangles represent inputs and outputs, rectangles represent variables, diamond-shaped nodes represent latches (flip-flops), ovals represent AND gates, and black dots represent inverters (NOT gates). The output of our repair model is given as an AIGER circuit (bottom right).

Correct circuit: final prediction of the repair model in the second iteration (DOT visualization on the left, the model's output in AIGER on the right).
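For readers unfamiliar with the AIGER outputs shown in Figure 1, the following minimal Python sketch illustrates the ASCII (aag) encoding. The toggle circuit below is an example of our own choosing, not the arbiter from the paper.

```python
# Sketch (ours) of the ASCII AIGER (aag) encoding used for circuits.
# Header "aag M I L O A" = max variable index, #inputs, #latches,
# #outputs, #AND gates. A literal is 2*var, plus 1 when negated.
toggle = """aag 1 0 1 1 0
2 3
2"""
# Latch line "2 3": latch literal 2 with next-state literal 3 = NOT(latch),
# so the latch (initialized to 0 in AIGER) alternates 0, 1, 0, 1, ...
# Output line "2": the circuit outputs the latch value.

header = toggle.splitlines()[0].split()
max_var, n_in, n_latch, n_out, n_and = map(int, header[1:])
print(f"max var {max_var}: {n_in} inputs, {n_latch} latches, "
      f"{n_out} outputs, {n_and} AND gates")
```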

