CKTGNN: CIRCUIT GRAPH NEURAL NETWORK FOR ELECTRONIC DESIGN AUTOMATION

Abstract

The electronic design automation of analog circuits has been a longstanding challenge in the integrated circuit field due to the huge design space and complex design trade-offs among circuit specifications. In the past decades, most research efforts have focused on automating transistor sizing for a given circuit topology. By recognizing the graph nature of circuits, this paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates circuit topology generation and device sizing based on encoder-dependent optimization subroutines. In particular, CktGNN encodes circuit graphs using a two-level GNN framework (of nested GNN), where circuits are represented as combinations of subgraphs drawn from a known subgraph basis. In this way, it significantly improves design efficiency by reducing the number of subgraphs over which message passing is performed. However, another critical roadblock to advancing learning-assisted circuit design automation is the lack of public benchmarks for canonical assessment and reproducible research. To tackle this challenge, we introduce the Open Circuit Benchmark (OCB), an open-source dataset that contains 10K distinct operational amplifiers with carefully extracted circuit specifications. OCB is also equipped with communicative circuit generation and evaluation capabilities, so that it can help generalize CktGNN to the design of various analog circuits by producing corresponding datasets. Experiments on OCB demonstrate the clear advantages of CktGNN, used within representation-based optimization frameworks, over other recent powerful GNN baselines and human experts' manual designs. Our work paves the way toward learning-based, open-source design automation for analog circuits.

1. INTRODUCTION

Graphs are ubiquitous for modeling relational data across disciplines (Gilmer et al., 2017; Duvenaud et al., 2015; Dong et al., 2021). Graph neural networks (GNNs) (Kipf & Welling, 2016; Xu et al., 2019; Velickovic et al., 2018; You et al., 2018; Scarselli et al., 2008) have become the de facto standard for representation learning over graph-structured data due to their superior expressiveness and flexibility. In contrast to heuristics using hand-crafted node features (Kriege et al., 2020) and non-parameterized graph kernels (Vishwanathan et al., 2010; Shervashidze et al., 2009; Borgwardt & Kriegel, 2005), GNNs incorporate both graph topology and node features to produce node/graph-level embeddings by leveraging the inductive bias in graphs, and they have been extensively used for node/graph classification (Hamilton et al., 2017; Zhang et al., 2018), graph decoding (Dong et al., 2022; Li et al., 2018), link prediction (Zhang & Chen, 2018), and so on. Recent successes in GNNs have raised the demand for benchmarks to properly evaluate and compare the performance of different GNN architectures. Numerous efforts have been made to produce benchmarks of various graph-structured data. Open Graph Benchmark (OGB) (Hu et al., 2020) introduces a collection of realistic and diverse graph datasets for real-world applications including molecular networks, citation networks,
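To make the message-passing idea behind GNNs concrete, the following is a minimal sketch (not the paper's CktGNN architecture): a single mean-aggregation layer that combines each node's features with those of its neighbors, followed by mean pooling to obtain a graph-level embedding. The function name and the toy graph are illustrative assumptions.

```python
import numpy as np

def message_passing_layer(adj, feats, weight):
    """One round of mean-aggregation message passing.

    adj:    (n, n) binary adjacency matrix without self-loops
    feats:  (n, d) node feature matrix
    weight: (d, k) learnable linear projection
    Returns an (n, k) matrix of updated node embeddings.
    """
    # Add self-loops so each node retains its own features.
    adj_hat = adj + np.eye(adj.shape[0])
    # Row-normalize: average features over each node's neighborhood.
    deg = adj_hat.sum(axis=1, keepdims=True)
    agg = (adj_hat / deg) @ feats
    # Linear projection followed by a ReLU nonlinearity.
    return np.maximum(agg @ weight, 0.0)

# Toy 3-node path graph: 0 - 1 - 2, with one-hot node features.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3)
rng = np.random.default_rng(0)
weight = rng.normal(size=(3, 4))

node_emb = message_passing_layer(adj, feats, weight)
graph_emb = node_emb.mean(axis=0)  # pooling yields a graph-level embedding
```

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is the mechanism that lets a GNN capture both topology and node features at once.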

