FAIR GRAPH MESSAGE PASSING WITH TRANSPARENCY

Anonymous authors

Abstract

Recent works achieve fair representations and predictions through regularization, adversarial debiasing, and contrastive learning in graph neural networks (GNNs). These methods implicitly encode sensitive attribute information in the trained model weights via backward propagation. In practice, we pursue not only a fair machine learning model but also one whose fairness is perceptible to the public. For current fairness methods, how sensitive attribute information is used to achieve fair prediction remains a black box. In this work, we first propose the concept of transparency to describe whether a model can convey its fairness to the public. Motivated by the fact that current fairness models lack transparency, we aim to build a fair machine learning model with transparency by explicitly rendering sensitive attribute usage for fair prediction in forward propagation. Specifically, we develop an effective and transparent Fair Message Passing (FMP) scheme that adopts sensitive attribute information in forward propagation. In this way, FMP explicitly uncovers how sensitive attributes influence the final prediction. Additionally, the FMP scheme aggregates useful information from neighbors and mitigates bias in a unified framework, simultaneously achieving graph smoothness and fairness objectives. An acceleration approach is also adopted to improve the efficiency of FMP. Experiments on node classification tasks demonstrate that the proposed FMP outperforms state-of-the-art baselines in terms of fairness and accuracy on three real-world datasets.

1. INTRODUCTION

Graph neural networks (GNNs) (Kipf & Welling, 2017; Veličković et al., 2018; Wu et al., 2019; Han et al., 2022a;b) are widely adopted in various domains, such as social media mining (Hamilton et al., 2017), knowledge graphs (Hamaguchi et al., 2017), and recommender systems (Ying et al., 2018), due to their remarkable performance in learning representations. Graph learning, a topic of growing popularity, aims to learn node representations containing both topological and attribute information of a given graph. Despite their outstanding performance in various tasks, GNNs often inherit or even amplify societal bias from input graph data (Dai & Wang, 2021). Biased node representations largely limit the application of GNNs in many high-stakes tasks, such as job hunting (Mehrabi et al., 2021) and crime ratio prediction (Suresh & Guttag, 2019). Hence, bias mitigation that facilitates research on fair GNNs is urgently needed.

Many existing works achieve fair prediction on graphs through regularization (Jiang et al., 2022), adversarial debiasing (Dai & Wang, 2021), or contrastive learning (Zhu et al., 2020; 2021b; Agarwal et al., 2021; Kose & Shen, 2022). These methods adopt sensitive attribute information in training loss refinement. In this way, sensitive attribute information is implicitly encoded in the trained model weights through backward propagation. However, achieving a fair model alone is insufficient in practice, since the fairness should also be perceptible to the public (e.g., auditors or maintainers of machine learning systems). In other words, the influence of sensitive attributes should be easily probed by the public. We name this property of public probing transparency. Specifically, we provide the following formal statement on transparency in fairness:

Transparency in fairness: Onlookers can verify the released fair model with
• Transparent influence: how, and whether, sensitive attribute information influences the fair model's prediction.
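To make the distinction concrete, the following is a minimal, hypothetical sketch (not the paper's actual FMP scheme) of a propagation step that uses the sensitive attribute explicitly in forward propagation: one GCN-style neighbor-smoothing step followed by a transparent group-mean alignment. All function and variable names here are illustrative assumptions.

```python
import numpy as np

def fair_message_passing(X, A, s):
    """One hypothetical fairness-aware propagation step.

    X: (n, d) node features; A: (n, n) adjacency matrix; s: (n,)
    sensitive-attribute labels. Unlike loss-based debiasing, s enters
    the forward pass directly, so an onlooker can see exactly where
    it influences the output.
    """
    # GCN-style symmetrically normalized aggregation with self-loops
    # (graph smoothness objective).
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(axis=1)
    H = (A_hat / np.sqrt(np.outer(deg, deg))) @ X

    # Explicit, inspectable debiasing step (fairness objective):
    # shift each sensitive group's mean representation to the
    # global mean, removing first-order group differences.
    mu = H.mean(axis=0)
    for g in np.unique(s):
        H[s == g] += mu - H[s == g].mean(axis=0)
    return H
```

After this step, the mean representations of the sensitive groups coincide by construction, and the role of `s` is visible in the forward computation rather than hidden in trained weights.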

