CKTGNN: CIRCUIT GRAPH NEURAL NETWORK FOR ELECTRONIC DESIGN AUTOMATION

Abstract

The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications. Over the past decades, intensive research efforts have mostly been devoted to automating transistor sizing for a given circuit topology. Recognizing the graph nature of circuits, this paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates circuit topology generation and device sizing via encoder-dependent optimization subroutines. In particular, CktGNN encodes circuit graphs using a two-level GNN framework (a nested GNN) in which circuits are represented as combinations of subgraphs from a known subgraph basis. In this way, it significantly improves design efficiency by reducing the number of subgraphs over which message passing is performed. Nonetheless, another critical roadblock to advancing learning-assisted circuit design automation is the lack of public benchmarks for canonical assessment and reproducible research. To tackle this challenge, we introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains 10K distinct operational amplifiers with carefully extracted circuit specifications. OCB is also equipped with communicative circuit generation and evaluation capabilities, so it can help generalize CktGNN to the design of various analog circuits by producing corresponding datasets. Experiments on OCB show the substantial advantages of CktGNN, used within representation-based optimization frameworks, over other recent powerful GNN baselines and human experts' manual designs. Our work paves the way toward learning-based, open-sourced design automation for analog circuits.

1. INTRODUCTION

Graphs are ubiquitous for modeling relational data across disciplines (Gilmer et al., 2017; Duvenaud et al., 2015; Dong et al., 2021). Graph neural networks (GNNs) (Kipf & Welling, 2016; Xu et al., 2019; Velickovic et al., 2018; You et al., 2018; Scarselli et al., 2008) have become the de facto standard for representation learning over graph-structured data due to their superior expressiveness and flexibility. In contrast to heuristics using hand-crafted node features (Kriege et al., 2020) and non-parameterized graph kernels (Vishwanathan et al., 2010; Shervashidze et al., 2009; Borgwardt & Kriegel, 2005), GNNs incorporate both graph topologies and node features to produce node/graph-level embeddings by leveraging the inductive bias in graphs, and have been extensively used for node/graph classification (Hamilton et al., 2017; Zhang et al., 2018), graph decoding (Dong et al., 2022; Li et al., 2018), link prediction (Zhang & Chen, 2018), etc. Recent successes in GNNs have increased the demand for benchmarks to properly evaluate and compare the performance of different GNN architectures. Numerous efforts have been made to produce benchmarks of various graph-structured data. Open Graph Benchmark (OGB) (Hu et al., 2020) introduces a collection of realistic and diverse graph datasets for real-world applications including molecular networks, citation networks, source code networks, user-product networks, etc. NAS-Bench-101 (Ying et al., 2019) and NAS-Bench-301 (Zela et al., 2022) create directed acyclic graph datasets for surrogate neural architecture search (Elsken et al., 2019; Wen et al., 2020). These benchmarks efficiently facilitate substantial and reproducible research, thereby advancing the study of graph representation learning. Analog circuits, an important type of integrated circuit (IC), are another essential graph modality (directed acyclic graphs, i.e., DAGs).
However, since the advent of ICs, labor-intensive manual efforts have dominated the analog circuit design process, which is time-consuming and costly. This problem is further exacerbated by continuous technology scaling, where the feature size of transistor devices keeps shrinking and invalidates designs built with older technology. Automated analog circuit design frameworks are thus in high demand. Dominant representation-based approaches (Liu et al., 2021; Wang et al., 2020; Cao et al., 2022a; b; Zhang et al., 2019a) have recently been developed for analog circuit design automation. Specifically, they optimize device parameters to fulfill desired circuit specifications for a given circuit topology. Typically, GNNs are applied to encode nodes' embeddings from circuit device features based on the fixed topology, and black-box optimization techniques such as reinforcement learning (Zoph & Le, 2016) and Bayesian optimization (Kandasamy et al., 2018) are used to optimize parameterized networks for automated search of device parameters. While these methods outperform traditional heuristics (Liu et al., 2017) in node feature sizing (i.e., device sizing), they do not target circuit topology optimization/generation, which constitutes the most critical and challenging task in analog circuit design. In analogy to neural architecture search (NAS), we propose to encode analog circuits into a continuous vectorial space in order to optimize both the topology and node features. Because analog circuits are essentially DAGs, recent DAG encoders for computation graph optimization tasks are applicable to circuit encoding. However, GRU-based DAG encoders (D-VAE (Zhang et al., 2019b) and DAGNN (Thost & Chen, 2021)) use shallow layers to encode the computation defined by DAGs, which is insufficient to capture contextualized information in circuits. The Transformer-based DAG encoder (Dong et al., 2022), in turn, encodes DAG structures rather than the computations they define.
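To make the encode-then-optimize idea above concrete, the following is a minimal, hypothetical sketch of one black-box step in a learned latent space: circuits already simulated are kept as (latent vector, score) pairs, and the next candidate is chosen by a simple 1-nearest-neighbor surrogate plus a distance-based exploration bonus. The function name, the 1-NN surrogate, and the acquisition form are all illustrative assumptions, not the method used in the papers cited above.

```python
def propose_next(evaluated, candidates, beta=1.0):
    """Toy latent-space acquisition step (illustrative only).

    evaluated:  list of (latent_vector, simulated_score) pairs
    candidates: list of unevaluated latent vectors
    Returns the candidate maximizing a 1-NN surrogate prediction
    plus an exploration bonus proportional to the distance to the
    nearest evaluated point.
    """
    def dist(u, v):
        # Euclidean distance between two latent vectors.
        return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

    best, best_acq = None, float("-inf")
    for x in candidates:
        # Surrogate: predict the score of the nearest evaluated point.
        nearest = min(evaluated, key=lambda e: dist(e[0], x))
        acq = nearest[1] + beta * dist(nearest[0], x)
        if acq > best_acq:
            best, best_acq = x, acq
    return best
```

In a full loop, the returned candidate would be decoded back into a circuit, simulated, appended to `evaluated`, and the step repeated; a Gaussian-process surrogate with expected improvement would be the more standard choice than this 1-NN stand-in.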
Consequently, we introduce the Circuit Graph Neural Network (CktGNN) to address the above issues. In particular, CktGNN follows the nested GNN (NGNN) framework (Zhang & Li, 2021), which represents a graph with rooted subgraphs around nodes and implements message passing between nodes, with each node representation encoding the subgraph around it. The core difference is that CktGNN does not extract subgraphs around each node. Instead, a subgraph basis is formulated in advance, and each circuit is modeled as a DAG G in which each node represents a subgraph from the basis. CktGNN then uses two-level GNNs to encode a circuit: the inner GNNs independently learn the representation of each subgraph as a node embedding, and the outer GNN performs directed message passing over the learned node embeddings to produce a representation for the entire graph. The inner GNNs enable CktGNN to stack multiple message passing iterations to increase expressiveness and parallelizability, while the outer directed message passing operation empowers CktGNN to encode the computation of circuits (i.e., circuit performance). Nonetheless, another critical barrier to advancing automated circuit design is the lack of public benchmarks for sound empirical evaluation. Research in this area is hard to reproduce due to non-unique simulation processes on different circuit simulators and different search space designs. To ameliorate this issue, we introduce Open Circuit Benchmark (OCB), the first open graph dataset for optimizing both analog circuit topologies and device parameters, which complements the growing open-source research in the electronic design automation (EDA) community for ICs (Chai et al., 2022; Hakhamaneshi et al., 2022). OCB contains 10K distinct operational amplifiers (circuits) whose topologies are modeled as graphs and whose performance metrics are carefully extracted from circuit simulators.
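The two-level encoding described above can be sketched in a few lines. This is a minimal stand-in, not the authors' implementation: the inner GNN is reduced to mean-pooling each subgraph's device features into a node embedding, and the outer GNN to a directed pass over the DAG in topological order, so each node's state aggregates its predecessors' states before feeding its successors. The function names and the toy aggregation rule are assumptions for illustration.

```python
# Minimal sketch (not the paper's architecture) of two-level DAG
# message passing: an inner pooling step per subgraph, then an outer
# directed pass in topological order.
from graphlib import TopologicalSorter  # Python 3.9+ stdlib


def inner_embed(subgraph_feats):
    """Inner-GNN stand-in: mean-pool a subgraph's device feature
    vectors into a single node embedding."""
    dim = len(subgraph_feats[0])
    return [sum(f[i] for f in subgraph_feats) / len(subgraph_feats)
            for i in range(dim)]


def outer_encode(dag, node_feats):
    """Outer directed message passing: visit nodes in topological
    order (inputs before outputs), adding the mean of each node's
    predecessor states to its own embedding; read out at the last
    node, assumed to be the single output.

    dag:        dict mapping node -> set of predecessor nodes
    node_feats: dict mapping node -> embedding (list of floats)
    """
    order = list(TopologicalSorter(dag).static_order())
    state = {}
    for v in order:
        preds = dag.get(v, set())
        h = list(node_feats[v])
        if preds:
            for i in range(len(h)):
                h[i] += sum(state[p][i] for p in preds) / len(preds)
        state[v] = h
    return state[order[-1]]
```

A real encoder would replace the mean-pooling and additive updates with learned GNN layers and a GRU-style aggregator, but the information flow, inner embeddings first, then a single input-to-output directed pass, is the same.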
EDA research can thus be conducted by querying OCB, without the notoriously tedious circuit reconstruction and simulation processes on a simulator. In addition, we will open-source the code for the communicative circuit generation and evaluation processes to facilitate further research by producing datasets of arbitrary size for various analog circuits. The OCB dataset will also be uploaded to OGB to augment graph machine learning research. The key contributions of this paper are: 1) we propose a novel two-level GNN, CktGNN, to encode circuits with deep contextualized information, and show that our GNN framework with a pre-designed subgraph basis can effectively increase expressiveness and reduce the design space of a very challenging problem: circuit topology generation; 2) we introduce the first circuit benchmark dataset, OCB, with open-source code, which can serve as an indispensable tool to advance research in EDA; 3) experimental results on OCB show that CktGNN not only outperforms competitive GNN baselines but also produces highly competitive operational amplifiers compared to human experts' designs.

