TOWARDS EQUIVARIANT GRAPH CONTRASTIVE LEARNING VIA CROSS-GRAPH AUGMENTATION

Abstract

Leading graph contrastive learning (GCL) frameworks conform to the invariance mechanism by encouraging insensitivity to different augmented views of the same graph. Despite the promising performance, invariance worsens representations when augmentations cause aggressive semantic shifts. For example, dropping a super-node can dramatically change a social network's topology. In this case, encouraging invariance to the original graph can pull together dissimilar patterns and hurt the task of instance discrimination. To resolve this problem, we draw inspiration from equivariant self-supervised learning and propose Equivariant Graph Contrastive Learning (E-GCL) to encourage sensitivity to global semantic shifts. Viewing each graph as a transformation of others, we ground the equivariance principle as a cross-graph augmentation, graph interpolation, to simulate global semantic shifts. Without using annotations, we supervise the representations of cross-graph augmented views by linearly combining the representations of their original samples. This simple but effective equivariance principle endows E-GCL with the ability of cross-graph discrimination, yielding significant improvements over state-of-the-art GCL models in both unsupervised learning and transfer learning. Further experiments demonstrate E-GCL's generalization to various graph pre-training frameworks.
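As a concrete reading of the abstract's equivariance principle, the NumPy sketch below regresses the embedding of an interpolated graph onto the same linear combination of the original graphs' embeddings. Everything here is a simplifying assumption of ours, not E-GCL's actual construction: the toy `encode` stands in for the GNN encoder, the two graphs are restricted to the same size, and interpolation is done directly on adjacency and feature matrices.

```python
import numpy as np

rng = np.random.default_rng(1)

def encode(adj, feats):
    """Toy stand-in for the GNN encoder: one round of mean
    neighbor aggregation followed by mean pooling over nodes."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    h = (feats + adj @ feats) / deg
    return h.mean(axis=0)

def make_graph(n, d=4, p=0.3):
    """Random undirected graph with n nodes and d-dim node features."""
    adj = (rng.random((n, n)) < p).astype(float)
    adj = np.triu(adj, 1)
    adj = adj + adj.T
    return adj, rng.random((n, d))

# Two same-size graphs (a simplification: a real cross-graph
# interpolation must also handle graphs of different sizes).
g1, g2 = make_graph(8), make_graph(8)
lam = rng.beta(2.0, 2.0)  # mixing coefficient in (0, 1)

# Cross-graph augmentation: interpolate adjacency and features.
adj_mix = lam * g1[0] + (1 - lam) * g2[0]
x_mix = lam * g1[1] + (1 - lam) * g2[1]

# Equivariance target: the same linear combination in embedding space.
z1, z2 = encode(*g1), encode(*g2)
z_mix = encode(adj_mix, x_mix)
target = lam * z1 + (1 - lam) * z2

# Minimizing this loss pushes the encoder towards equivariance:
# mixing graphs should mix their representations accordingly.
equiv_loss = float(np.sum((z_mix - target) ** 2))
```

Note that the supervision signal requires no labels: the target is built purely from the representations of the two original graphs, exactly as the abstract describes.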

1. INTRODUCTION

Graph contrastive learning (GCL) (You et al., 2020; Suresh et al., 2021; Xu et al., 2021) is a prevailing paradigm for self-supervised learning (Chen et al., 2020; Zbontar et al., 2021) on graph-structured data. It typically pre-trains a graph neural network (GNN) (Dwivedi et al., 2020) without labeled data, in an effort to learn generalizable representations and boost fine-tuning on downstream tasks. The common theme across recent GCL studies is instance discrimination (Dosovitskiy et al., 2014; Purushwalkam & Gupta, 2020): viewing each graph as a class of its own and distinguishing it from other graphs. This galvanizes representation learning to capture the discriminative characteristics of graphs. Towards this end, leading GCL works usually employ two key modules: graph augmentation and contrastive learning. Specifically, graph augmentation adopts an "intra-graph" strategy to create multiple augmented views of each graph, such as randomly dropping nodes (You et al., 2020) or adversarially perturbing edges (Suresh et al., 2021). The views stemming from the same graph constitute the positive samples of its class, while the views of other graphs are treated as negatives. Contrastive learning then encourages agreement between positive samples and discrepancy between negatives. This procedure essentially imposes "invariance" (Purushwalkam & Gupta, 2020; Dangovski et al., 2022) upon representations: the anchor graph's representation is made invariant to its intra-graph augmentations (Figure 1a). Formally, let g be the anchor graph, P be the group of intra-graph augmentations, and ϕ(·) be the GNN encoder. The "invariance to intra-graph augmentations" mechanism states ϕ(g) = ϕ(T_p(g)), ∀p ∈ P: the representation ϕ(g) is insensitive to the changes induced by augmentation p, where T_p(g) is the action of augmentation p on graph g. We refer to works adopting this mechanism as Invariant Graph Contrastive Learning (I-GCL).
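To make the invariance mechanism above concrete, the following NumPy sketch implements I-GCL's two modules under toy assumptions: `encode` stands in for the GNN encoder ϕ (one round of mean neighbor aggregation plus mean pooling), `drop_node` is an intra-graph augmentation T_p in the spirit of node dropping (You et al., 2020), and `info_nce` is a standard contrastive objective. None of these is the exact implementation used by the cited works.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(adj, feats):
    """Toy stand-in for the GNN encoder phi: mean neighbor
    aggregation, mean pooling, and unit normalization."""
    deg = adj.sum(axis=1, keepdims=True) + 1.0
    h = (feats + adj @ feats) / deg           # aggregate self + neighbors
    z = h.mean(axis=0)                        # graph-level readout
    return z / (np.linalg.norm(z) + 1e-12)

def drop_node(adj, feats, keep=0.8):
    """Intra-graph augmentation T_p: keep a random subset of nodes."""
    n = adj.shape[0]
    idx = rng.choice(n, size=max(1, int(keep * n)), replace=False)
    return adj[np.ix_(idx, idx)], feats[idx]

def info_nce(z1, z2, tau=0.5):
    """Contrastive loss: view i of z1 should match view i of z2,
    while all other rows of z2 act as negatives."""
    sims = (z1 @ z2.T) / tau                  # cosine sims (rows are unit-norm)
    sims -= sims.max(axis=1, keepdims=True)   # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# A batch of random undirected graphs; two augmented views per graph.
graphs = []
for _ in range(8):
    n = int(rng.integers(6, 12))
    adj = (rng.random((n, n)) < 0.3).astype(float)
    adj = np.triu(adj, 1)
    adj = adj + adj.T
    graphs.append((adj, rng.random((n, 4))))

z1 = np.stack([encode(*drop_node(a, x)) for a, x in graphs])
z2 = np.stack([encode(*drop_node(a, x)) for a, x in graphs])
loss = info_nce(z1, z2)
```

Minimizing this loss drives ϕ(T_p(g)) towards ϕ(g') for views of the same graph, which is precisely the invariance that the paper argues becomes harmful under aggressive, semantics-changing augmentations.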
However, we argue that invariance to intra-graph augmentations alone is insufficient to improve the semantic quality of graph representations and boost the downstream performance:

[Figure 1: (a) Invariance. Intra-graph augmented views T_p1(g) and T_p2(g) of an anchor (molecular) graph g are mapped to the same representation ϕ(g) in the representation space.]
t o 8 V 5 U x Q V z P N a T u R F E c + p y 1 / e D H x W 7 d U K h a L G z 1 K q B f h U L C A E a y N d B 3 2 n F 6 x b F f t K d A y c e a k X C 9 V 7 r 7 f v j 4 b v e J 7 t x + T N K J C E 4 6 V 6 j h 2 o r 0 M S 8 0 I p + N C N 1 U 0 w W S I Q 9 o x V O C I K i + b n j p G F a P 0 U R B L U 0 K j q f p 7 I s O R U q P I N 5 0 R 1 g O 1 6 E 3 E / 7 x O q o O a l z G R p J o K M l s U p B z p G E 3 + R n 0 m K d F 8 Z A g m k p l b E R l g i Y k 2 6 R R M C M 7 i y 8 u k e V J 1 z q q n V y a N G s y Q h x I c w T E 4 c A 5 1 u I Q G u E A g h H t 4 h C e L W w / W s / U y a 8 1 Z 8 5 l D + A P r 9 Q f 3 E J I I < / l a t e x i t > g 1 < l a t e x i t s h a 1 _ b a s e 6 4 = " N 0 o q i 0 e H B r x y X d j R h T S l z H w u 9 W c = " > A A A B 6 n i c b V C 7 S g N B F L 0 b X z G + o p Z p B k P A K u w G i S k D N p Y R z Q O S J c x O Z j d D Z m e X m V k h L O l s b S w U s f V b / A A 7 / Q C / w A 9 w 8 i g 0 8 c C F w z n 3 c u 8 9 X s y Z 0 r b 9 Y W X W 1 j c 2 t 7 L b u Z 3 d v f 2 D / O F R S 0 W J J L R J I h 7 J j o c V 5 U z Q p m a a 0 0 4 s K Q 4 9 T t v e 6 G L q t 2 + p V C w S N 3 o c U z f E g W A + I 1 g b 6 T r o V / r 5 o l 2 2 Z 0 C r x F m Q Y r 1 Q u v t + + / p s 9 P P v v U F E k p A K T T h W q u v Y s X Z T L D U j n E 5 y v U T R G J M R D m j X U I F D q t x 0 d u o E l Y w y Q H 4 k T Q m N Z u r v i R S H S o 1 D z 3 S G W A / V s j c V / / O 6 i f Z r b s p E n G g q y H y R n 3 C k I z T 9 G w 2 Y p E T z s S G Y S G Z u R W S I J S b a p J M z I T j L L 6 + S V q X s V M t n V y a N G s y R h Q K c w C k 4 c A 5 1 u I Q G N I F A A P f w C E 8 W t x 6 s Z + t l 3 p q x F j P H 8 A f W 6 w / 4 l J I J < / l a t e x i t > g 2 < l a t e x i t s h a 1 _ b a s e 6 4 = " N j 7 K U R J s e F l r u 7 t U 2 j T M s d 8 d K t U = " > A A A B 8 H i c b V B L S g N B E K 2 J v x h / U Z f Z N I a A q z A j o l k G 3 L i M Y D 6 S h N D T 0 5 M 0 6 e 4 Z u n u E M G T n D d y 4 U M S t B / E A 7 v Q A n s A D 2 P k s N P F B w e O 9 K q 
r q + T F n 2 r j u h 5 N Z W V v s H D R 0 l i t A 6 i X i k W j 7 W l D N J 6 4 Y Z T l u x o l j 4 n D b 9 4 c X E b 9 5 S p V k k r 8 0 o p l 2 B + 5 K F j G B j p Z u O Y T y g a X / c y x f d s j s F W i b e n B S r h d L d 9 9 v X Z 6 2 X f + 8 E E U k E l Y Z w r H X b c 2 P T T b E y j H A 6 z n U S T W N M h r h P 2 5 Z K L K j u p t O D x 6 h k l Q C F k b I l D Z q q v y d S L L Q e C d 9 2 C m w G e t G b i P 9 5 7 c S E l W 7 K Z J w Y K s l s U Z h w Z C I 0 + R 4 F T F F i + M g S T B S z t y I y w A o T Y z P K 2 R C 8 x Z e X S e O k 7 J 2 V T 6 9 s G h W Y I Q s F O I J j 8 O A c q n A J N a g D A Q H 3 8 A h P j n I e n G f n Z d a a c e Y z h / A H z u s P L t 2 V G g = = < / l a t e x i t > g < l a t e x i t P g j Z r B b q 6 M s 5 b E p J 1 L E = " > A A A B + H i c b V D L S s N A F L 2 p V W t 9 N O r S z W g R 6 q Y k I t q N U H D j s o J 9 S B P K Z D p p h 0 4 m Y W Y i 1 N A v c e N C E b d + g d / g z r 9 x + l h o 9 c C F w z n 3 c u 8 9 Q c K Z 0 o 7 z Z e V W 8 q t r 6 4 W N 4 u b W 9 k 7 J 3 t 1 r q T i V h D Z J z G P Z C b C i n A n a 1 E x z 2 k k k x V H A a T s Y X U 3 9 9 j 2 V i s X i V o 8 T 6 k d 4 I F j I C N Z G 6 t k l L x m y i q c Z 7 9 N s M D n p 2 W W n 6 s y A / h J 3 Q c p 1 d C n y H 3 e H j Z 7 9 6 f V j k k Z U a M K x U l 3 X S b S f Y a k Z 4 X R S 9 F J F E 0 x G e E C 7 h g o c U e V n s 8 M n 6 N g o f R T G 0 p T Q a K b + n M h w p N Q 4 C k x n h P V Q L X t T 8 T + v m + q w 5 m d M J K m m g s w X h S l H O k b T F F C f S U o 0 H x u C i W T m V k S G W G K i T V Z F E 4 K 7 / P J f 0 j q t u u f V s x u T R g 3 m K M A B H E E F X L i A O l x D A 5 p A I I V H e I Y X 6 8 F 6 s l 6 t t 3 l r z l r M 7 M M v W O / f l u O V Q A = = < / l a t e x i t > (g) < l a t e x i t s h a 1 _ b a s e 6 4 = " r n r t w e 4 F q V + Z d W V 9 b 9 / p I d n F n 3 w = " > A A A B 8 H i c b V D L S g N B E O z V q D G + o h 6 9 j A Y h X s J u E M 1 F C H j x G M G 8 S J Y w O 5 l N h s z M L j O z Q l j y F V 4 8 K 
O L V r / A b v P k 3 T h 4 H T S x o K K q 6 6 e 4 K Y s 6 0 c d 1 v Z 2 0 9 s 7 G 5 l d 3 O 7 e z u 7 R / k D 4 8 a O k o U o X U S 8 U i 1 A q w p Z 5 L W D T O c t m J F s Q g 4 b Q a j 2 6 n f f K R K s 0 g + m H F M f Y E H k o W M Y G O l d j c e s u K g V 7 7 o 5 Q t u y Z 0 B r R J v Q Q p V d C M z n + 3 T W i / / 1 e 1 H J B F U G s K x 1 h 3 P j Y 2 f Y m U Y 4 X S S 6 y a a x p i M 8 I B 2 L J V Y U O 2 n s 4 M n 6 N w q f R R G y p Y 0 a K b + n k i x 0 H o s A t s p s B n q Z W 8 q / u d 1 E h N W / J T J O D F U k v m i M O H I R G j 6 P e o z R Y n h Y 0 s w U c z e i s g Q K 0 y M z S h n Q / C W X 1 4 l j X L J u y p d 3 t s 0 K j B H F k 7 g D I r g w T V U 4 Q 5 q U A c C A p 7 g B V 4 d 5 T w 7 b 8 7 7 v H X N W c w c w x 8 4 H z / b K p H + < / l a t e x i t > (g 2 ) < l a t e x i t s h a 1 _ b a s e 6 4 = " e 9 J H + q u w P h 7 j X s c Q C T Y j 8 W l e 8 M U = " > A A A B 8 H i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g x j I j R b s s u H F Z w d Z K O 5 R M J t O G 5 j E k G a E M / Q o 3 L h R x 6 + e 4 8 2 9 M H w t t P R A 4 n H M u u f d E K W f G + v 6 3 V 1 h b 3 9 j c K m 6 X d n b 3 9 g / K h 0 d t o z J N a I s o r n Q n w o Z y J m n L M s t p J 9 U U i 4 j T h 2 h 0 M / U f n q g 2 T M l 7 O 0 5 p K P B A s o Q R b J 3 0 G F z 0 u A v H u F + u + F V / B r R K g g W p N G C O Z r / 8 1 Y s V y Q S V l n B s T D f w U x v m W F t G O J 2 U e p m h K S Y j P K B d R y U W 1 I T 5 b O E J O n N K j B K l 3 Z M W z d T f E z k W x o x F 5 J I C 2 6 F Z 9 q b i f 1 4 3 s 0 k 9 z J l M M 0 s l m X + U Z B x Z h a b X o 5 h p S i w f O 4 K J Z m 5 X R I Z Y Y 2 J d R y V X Q r B 8 8 i p p X 1 a D q 2 r t r l Z p 1 B d 1 F O E E T u E c A r i G B t x C E 1 p A Q M A z v M K b p 7 0 X 7 9 3 7 m E c L 3 m L m G P 7 A + / w B b t 2 Q L g = = < / l a t e x i t > 1 < l a t e x i t s h a 1 _ b a s e 6 4 = " e 9 J H + q u w P h 7 j  X s c Q C T Y j 8 W l e 8 M U = " > A A A B 8 H i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g x j 
I j R b s s u H F Z w d Z K O 5 R M J t O G 5 j E k G a E M / Q o 3 L h R x 6 + e 4 8 2 9 M H w t t P R A 4 n H M u u f d E K W f G + v 6 3 V 1 h b 3 9 j c K m 6 X d n b 3 9 g / K h 0 d t o z J N a I s o r n Q n w o Z y J m n L M s t p J 9 U U i 4 j T h 2 h 0 M / U f n q g 2 T M l 7 O 0 5 p K P B A s o Q R b J 3 0 G F z 0 u A v H u F + u + F V / B r R K g g W p N G C O Z r / 8 1 Y s V y Q S V l n B s T D f w U x v m W F t G O J 2 U e p m h K S Y j P K B d R y U W 1 I T 5 b O E J O n N K j B K l 3 Z M W z d T f E z k W x o x F 5 J I C 2 6 F Z 9 q b i f 1 4 3 s 0 k 9 z J l M M 0 s l m X + U Z B x Z h a b X o 5 h p S i w f O 4 K J Z m 5 X R I Z Y Y 2 J d R y V X Q r B 8 8 i p p X 1 a D q 2 r t r l Z p 1 B d 1 F O E E T u E c A r i G B t x C E 1 p A Q M A z v M K b p 7 0 X 7 9 3 7 m E c L 3 m L m G P 7 A + / w B b t 2 Q L g = = < / l a t e x i t > 1 < l a t e x i t s h a 1 _ b a s e 6 4 = " b t j G N 3 7 o d a f v N 8 8 L S f Y F F + P v y V Q = " > A A A B 7 n i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s x I q V 0 W 3 L i s Y B / Q D i W T y b S h m c y Q 3 B H K 0 I 9 w 4 0 I R t 3 6 P O / / G 9 L H Q 1 g O B w z n n k n t P k E p h 0 H W / n c L W 9 s 7 u X n G / d H B 4 d H x S P j 3 r m C T T j L d Z I h P d C 6 j h U i j e R o G S 9 1 L N a R x I 3 g 0 m d 3 O / + 8 S 1 E Y l 6 x G n K / Z i O l I g E o 2 i l 7 k D a a E i H 5 Y p b d R c g m 8 R b k U o T l m g N y 1 + D M G F Z z B U y S Y 3 p e 2 6 K f k 4 1 C i b 5 r D T I D E 8 p m 9 A R 7 1 u q a M y N n y / W n Z E r q 4 Q k S r R 9 C s l C / T 2 R 0 9 i Y a R z Y Z E x x b N a 9 u f i f 1 8 8 w a v i 5 U G m G X L H l R 1 E m C S Z k f j s J h e Y M + P v y V Q = " > A A A B 7 n i c b V D L S g M x F L 1 T X 7 W + q i 7 d B I v g q s x I q V 0 W 3 L i s Y B / Q D i W T y b S h m c y Q 3 B H K 0 I 9 w 4 0 I R t 3 6 P O / / G 9 L H Q 1 g O B w z n n k n t P k E p h 0 H W / n c L W 9 s 7 u X n G / d H B 4 d H x S P j 3 r m C T T j L d Z I h P d C 6 j h U i j e R o G S 9 1 L N a R x I 3 g 0 m d 3 O 
/ + 8 S 1 E Y l 6 x G n K / Z i O l I g E o 2 i l 7 k D a a E i H 5 Y p b d R c g m 8 R b k U o T l m g N y 1 + D M G F Z z B U y S Y 3 p e 2 6 K f k 4 1 C i b 5 r D T I D E 8 p m 9 A R 7 1 u q a M y N n y / W n Z E r q 4 Q k S r R 9 C s l C / T 2 R 0 9 i Y a R z Y Z E x x b N a 9 u f i f 1 8 8 w a v i 5 U G m G X L H l R 1 E m C S Z k f j s J h e Y M G 0 X C y d t h V v Z S M 9 a D f 4 r V D B q n 8 A Q = " > A A A B 6 3 i c b V C 7 S g N B F L 0 b X z G + o p Z p B k P A K u y K a M q A j W U E 8 4 A k y O x k N j t k Z n a Z m R X C k s 7 a x k I R W 3 / F D 7 D T D / A L / A B n k x S a e O D C 4 Z x 7 u f c e P + Z M G 9 f 9 c H I r q 2 v r G / n N w t b 2 z u 5 e c f + g p a N E E d o k E Y 9 U x 8 e a c i Z p 0 z D D a S d W F A u f 0 7 Y / u s j 8 9 i 1 V m k X y 2 o x j 2 h d 4 K F n A C D a Z 1 I t D d l M s u 1 V 3 C r R M v D k p 1 0 u V u + + 3 r 8 / G T f G 9 N 4 h I I q g 0 h G O t u 5 4 b m 3 6 K l W G E 0 0 m h l 2 g a Y z L C Q 9 q 1 V G J B d T + d 3 j p B F a s M U B A p W 9 K g q f p 7 I s V C 6 7 H w b a f A J t S L X i b + 5 3 U T E 9 T 6 K Z N x Y q g k s 0 V B w p G J U P Y 4 G j B F i e F j S z B R z N 6 K S I g V J s b G U 7 A h e I s v L 5 P W S d U 7 q 5 5 e 2 T R q M E M e S n A E x + D B O d T h E h r Q B A I h 3 M M j P D n C e X C e n Z d Z a 8 6 Z z x z C H z i v P x j d k r g = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " G 0 X C y d t h V v Z S M 9 a D f 4 r V D B q n 8 A Q = " > A A A B 6 3 i c b V C 7 S g N B F L 0 b X z G + o p Z p B k P A K u y K a M q A j W U E 8 4 A k y O x k N j t k Z n a Z m R X C k s 7 a x k I R W 3 / F D 7 D T D / A L / A B n k x S a e O D C 4 Z x 7 u f c e P + Z M G 9 f 9 c H I r q 2 v r G / n N w t b 2 z u 5 e c f + g p a N E E d o k E Y 9 U x 8 e a c i Z p 0 z D D a S d W F A u f 0 7 Y / u s j 8 9 i 1 V m k X y 2 o x j 2 h d 4 K F n A C D a Z 1 I t D d l M s u 1 V 3 C r R M v D k p 1 0 u V u + + 3 r 8 / G T f G 9 N 4 h I I q g 0 h G O t u 5 4 b m 3 6 K l W G E 0 0 m h l 2 g a Y z L C 
Q 9 q 1 V G J B d T + d 3 j p B F a s M U B A p W 9 K g q f p 7 I s V C 6 7 H w b a f A J t S L X i b + 5 3 U T E 9 T 6 K Z N x Y q g k s 0 V B w p G J U P Y 4 G j B F i e F j S z B R z N 6 K S I g V J s b G U 7 A h e I s v L 5 P W S d U 7 q 5 5 e 2 T R q M E M e S n A E x + D B O d T h E h r Q B A I h 3 M M j P D n C e X C e n Z d Z a 8 6 Z z x z C H z i v P x j d k r g = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = " G 0 X C y d t h V v Z S M 9 a D f 4 r V D B q n 8 A Q = " > A A A B 6 3 i c b V C 7 S g N B F L 0 b X z G + o p Z p B k P A K u y K a M q A j W U E 8 4 A k y O x k N j t k Z n a Z m R X C k s 7 a x k I R W 3 / F D 7 D T D / A L / A B n k x S a e O D C 4 Z x 7 u f c e P + Z M G 9 f 9 c H I r q 2 v r G / n N w t b 2 z u 5 e c f + g p a N E E d o k E Y 9 U x 8 e a c i Z p 0 z D D a S d W F A u f 0 7 Y / u s j 8 9 i 1 V m k X y 2 o x j 2 h d 4 K F n A C D a Z 1 I t D d l M s u 1 V 3 C r R M v D k p 1 0 u V u + + 3 r 8 / G T f G 9 N 4 h I I q g 0 h G O t u 5 4 b m 3 6 K l W G E 0 0 m h l 2 g a Y z L C Q 9 q 1 V G J B d T + d 3 j p B F a s M U B A p W 9 K g q f p 7 I s V C 6 7 H w b a f A J t S L X i b + 5 3 U T E 9 T 6 K Z N x Y q g k s 0 V B w p G J U P Y 4 G j B F i e F j S z B R z N 6 K S I g V J s b G U 7 A h e I s v L 5 P W S d U 7 q 5 5 e 2 T R q M E M e S n A E x + D B O d T h E h r Q B A I h 3 M M j P D n C e X C e n Z d Z a 8 6 Z z x z C H z i v P x j d k r g = < / l a t e x i t > view1 view2 < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R 
S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 r 4 p e 9 V y p V k p 1 S q r O P J w A Z d w D R 7 c Q g 3 u o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B + u d j Q I = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 r 4 p e 9 V y p V k p 1 S q r O P J w A Z d w D R 7 c Q g 3 u o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B + u d j Q I = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 
0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 r 4 p e 9 V y p V k p 1 S q r O P J w A Z d w D R 7 c Q g 3 u o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B + u d j Q I = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K  I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z 
w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0

C

< l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 r 4 p e 9 V y p V k p 1 S q r O P J w A Z d w D R 7 c Q g 3 u o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B + u d j Q I = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K  I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o L J / d z v P K H S P J Y P Z p q g H 9 G R 5 C F n 1 F i p W R 8 U S 2 7 Z X Y B s E m 9 F 
S j V Y o j E o f v W H M U s j l I Y J q n X P c x P j Z 1 Q Z z g T O C v 1 U Y 0 L Z h I 6 w Z 6 m k E W o / W x w 6 I 1 d W G Z I w V r a k I Q v 1 9 0 R G I 6 2 n U W A 7 I 2 r G e t 2 b i / 9 5 v d S E d 3 7 G Z Z I a l G y 5 K E w F M T G Z f 0 2 G X C E z Y m o J Z Y r b W w k b U 0 W Z s d k U b A j e + s u b p H 1 T 9 q r l S r N S q l V W c e T h A i 7 h G j y 4 h R r U o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B / M x j Q c = < / l a t e x i t > H < l a t e x i t s h a 1 _ b a s e 6 4 = " 0 1 L 3 k m Z f y Z L N j U n T c L U S 8 o U 0 M C M = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D X n J M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o L J / d z v P K H S P J Y P Z p q g H 9 G R 5 C F n 1 F i p W R 8 U S 2 7 Z X Y B s E m 9 F S j V Y o j E o f v W H M U s j l I Y J q n X P c x P j Z 1 Q Z z g T O C v 1 U Y 0 L Z h I 6 w Z 6 m k E W o / W x w 6 I 1 d W G Z I w V r a k I Q v 1 9 0 R G I 6 2 n U W A 7 I 2 r G e t 2 b i / 9 5 v d S E d 3 7 G Z Z I a l G y 5 K E w F M T G Z f 0 2 G X C E z Y m o J Z Y r b W w k b U 0 W Z s d k U b A j e + s u b p H 1 T 9 q r l S r N S q l V W c e T h A i 7 h G j y 4 h R r U o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B / M x j Q c = < / l a t e x i t > H < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 /  P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + 
K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 r 4 p e 9 V y p V k p 1 S q r O P J w A Z d w D R 7 c Q g 3 u o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B + u d j Q I = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 / P J A g X c = " > A A A B 6 H i c b V D L S g N B E O y N r x h f U Y 9 e B o P g K e x K i B 4 D u X h M w D w g W c L s p D c Z M z u 7 z M w K Y c k X e P G g i F c / y Z t / 4 + R x 0 M S C h q K q m + 6 u I B F c G 9 f 9 d n J b 2 z u 7 e / n 9 w s H h 0 f F J 8 f S s r e N U M W y x W M S q G 1 C N g k t s G W 4 E d h O F N A o E d o J J f e 5 3 n l B p H s s H M 0 3 Q j + h I 8 p A z a q z U r A + K J b f s L k A 2 i b c i p R o s 0 R g U v / r D m K U R S s M E 1 b r n u Y n x M 6 o M Z w J n h X 6 q M a F s Q k f Y s 1 T S C L W f L Q 6 d k 
S u r D E k Y K 1 v S k I X 6 e y K j k d b T K L C d E T V j v e 7 N x f + 8 X m r C O z / j M k k N S r Z c F K a C m J j M v y Z D r p A Z M b W E M s X t r Y S N q a L M 2 G w K N g R v / e V N 0 r 4 p e 9 V y p V k p 1 S q r O P J w A Z d w D R 7 c Q g 3 u o Q E t Y I D w D K / w 5 j w 6 L 8 6 7 8 7 F s z T m r m X P 4 A + f z B + u d j Q I = < / l a t e x i t > C < l a t e x i t s h a 1 _ b a s e 6 4 = " m q q g z K I D 3 H s h 2 y D A V t A 3 / After randomly dropping some nodes, one view could hold a cyano group (-C≡N) that determines the property of molecule hypertoxic, while another could corrupt this functional group. Thus, intra-graph augmentations are inadequate for presenting a holistic view of the anchor graph. • Worse still, aggressive augmentations easily make two positive views far from each other, but the invariance mechanism blindly forces their representations to be invariant. Considering the molecule graph's views again (cf. Figure 1a ), invariance-guided contrastive learning simply maximizes their representation agreement, regardless of the changes in the hypertoxic property. Therefore, it might amplify the negative impact of aggressive intra-graph augmentations and restrain representations from reflecting the instance semantics faithfully. 
To mitigate these negative influences, we draw inspiration from recent work on equivariant self-supervised learning (E-SSL) (Dangovski et al., 2022). It splits the augmentations into two groups, to which representations should be insensitive and sensitive respectively, and establishes the invariance and equivariance mechanisms accordingly. The idea of "equivariance" is our focus: it makes representations aware of semantic changes caused by certain augmentations H. Here we formulate it as ϕ(T_h(g)) = T'_h(ϕ(g)), ∀h ∈ H, where T_h(g) and T'_h(ϕ(g)) are the actions of augmentation h on the graph g and on the representation ϕ(g), respectively. Jointly learning equivariance to sensitive augmentations H and invariance to insensitive augmentations P promises to shield representations from the harms of aggressive augmentations. Nonetheless, without domain knowledge (Dangovski et al., 2022; Chuang et al., 2022) or extensive testing (Dangovski et al., 2022), it is hard to tell sensitive and insensitive augmentations apart. To embody equivariance in GCL, we propose a simple but effective approach, Equivariant Graph Contrastive Learning (E-GCL), an instantiation of E-SSL for graphs.
Unlike previous E-SSL works, E-GCL leaves existing intra-graph augmentations untouched and creates new augmentations through the "cross-graph" strategy. Concretely, inspired by mixup (Guo & Mao, 2021; Zhang et al., 2018), the cross-graph augmentation interpolates the raw features of two graphs (i.e., T_h), while applying the same interpolation to the graph labels, which are portrayed by graph representations (i.e., T'_h). The augmentations across graphs not only maintain the holistic information for self-discrimination, but are also orthogonal to the intra-graph augmentations. On top of the intra- and cross-graph augmentations, E-GCL separately builds the invariance and equivariance principles to guide representation learning. The equivariance to cross-graph augmentations diminishes the harmful invariance to aggressive augmentations that change global semantics. Integrating the two principles enables representations to be sensitive to global semantic shifts across different graphs and insensitive to local substructure perturbations within single graphs. Experiments show that E-GCL achieves promising performance, surpassing current state-of-the-art GCL models across diverse settings. We also demonstrate E-GCL's generalization to various SSL frameworks, including BarlowTwins (Zbontar et al., 2021), GraphCL (You et al., 2020), and SimSiam (Chen & He, 2021).
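The interpolation view of equivariance above can be sketched numerically. The snippet below is an illustration under our own simplifications (graph-level feature vectors instead of full graphs; function names are ours): T_h interpolates the raw features of two graphs, T'_h applies the same interpolation in representation space, and for a linear encoder the relation ϕ(T_h(g)) = T'_h(ϕ(g)) holds exactly. For a GNN encoder, E-GCL instead enforces this relation as a training signal.

```python
import numpy as np

def mix_graphs(x1, x2, lam):
    """Cross-graph augmentation T_h: interpolate the (pooled, graph-level)
    raw features of two graphs with coefficient lam. Aligning the node
    sets of two graphs is method-specific and glossed over here."""
    return lam * x1 + (1.0 - lam) * x2

def equivariance_target(z1, z2, lam):
    """T'_h: the same interpolation applied to the representations of the
    original graphs, serving as annotation-free supervision for the
    representation of the mixed view."""
    return lam * z1 + (1.0 - lam) * z2

# For a linear encoder phi(x) = W @ x, equivariance holds exactly:
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
x1, x2 = rng.normal(size=6), rng.normal(size=6)
assert np.allclose(W @ mix_graphs(x1, x2, 0.3),
                   equivariance_target(W @ x1, W @ x2, 0.3))
```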

2. PRELIMINARIES: INVARIANT GRAPH CONTRASTIVE LEARNING

We begin by presenting the instance discrimination task and the invariance mechanism of I-GCL, and then introduce two key ingredients: graph augmentations and contrastive learning.

Instance Discrimination. Let G = {g_n}_{n=1}^N be the set of unlabeled graph instances. We denote a graph instance g ∈ G by (V, E), involving the node set V and the edge set E. The graph structure can be represented as an adjacency matrix A ∈ {0, 1}^{|V|×|V|}, where A_uv = 1 if the edge (u, v) ∈ E from node u to node v exists, and A_uv = 0 otherwise. Moreover, each node v ∈ V could have d_1-dimensional features x_v ∈ R^{d_1}, while each edge (u, v) ∈ E might have d_2-dimensional features e_uv ∈ R^{d_2}. On the graph data G without annotations, contrastive self-supervised learning (SSL) aims to pre-train a graph encoder ϕ : G → R^d that projects the graph instances into a d-dimensional space, so as to enhance the encoder's representation ability and facilitate its fine-tuning on downstream tasks. Towards this end, a prevailing pre-training task is instance discrimination (Dosovitskiy et al., 2014; Purushwalkam & Gupta, 2020; Li et al., 2021): treating each graph instance as a single class and distinguishing it from the other graph instances.

Invariance. A leading solution to instance discrimination is to maximize the representation agreement between augmented views of the same graph, while minimizing the representation agreement between views of two different graphs. It essentially encourages each instance's representation to be invariant to the augmentations (Dangovski et al., 2022; Grill et al., 2020; Zbontar et al., 2021). Mathematically, invariance can be described by groups (Dangovski et al., 2022; Kondor & Trivedi, 2018; Maron et al., 2019a). Let P be a group of augmentations (aka. transformations).
Invariance makes the encoder ϕ insensitive to the actions T : P × G → G of the group P on the graphs G, formally:

ϕ(g) = ϕ(T_p(g)), ∀p ∈ P, ∀g ∈ G,    (1)

where T_p(g) := T(p, g) is the action of applying the augmentation p to the instance g. Under invariance, the encoder outputs the same representation for the original and augmented graphs. Probing into Equation (1), we find two key ingredients, intra-graph augmentation and contrastive learning, and next present their common practices in prior studies.

Intra-graph Augmentation. Typically, the augmentation group P is pre-determined to encode prior knowledge of graph data. Early studies (Hu et al., 2020; Qiu et al., 2020; You et al., 2020; Zhu et al., 2020) instantiate augmentations as randomly corrupting the topological structure, node features, or edge features of individual graph instances. For example, AttrMasking (Hu et al., 2020) masks node and edge attributes and applies an objective to reconstruct them. GCC (Qiu et al., 2020) explores random walks over the anchor graph to create different subgraph views. GraphCL (You et al., 2020) systematically investigates the combined effect of various random augmentations. Despite their success, random corruptions are too aggressive to maintain the semantic consistency (Guo & Mao, 2021) between the anchor graph and its augmented views. The invariance principle blindly ignores the semantic shift, thus easily pushing dissimilar patterns together and harming representation learning. Some follow-on studies (Zhu et al., 2021; Subramonian, 2021; Suresh et al., 2021; Xu et al., 2021) instead learn augmentations that underscore salient substructures, so as to mitigate the semantic shift. For instance, GCA (Zhu et al., 2021) applies node centralities to discover important substructures in social networks. MICRO-Graph (Subramonian, 2021) learns chemically meaningful motifs to guide informative subgraph sampling.
More recently, AD-GCL (Suresh et al., 2021) adopts the idea of the information bottleneck to adversarially learn salient subgraphs.

Contrastive Learning. Upon the augmented views, the contrastive learning objective is to classify whether they come from identical instances. Specifically, it pulls the augmented views derived from the same instance (i.e., positive samples) together and pushes the views of different instances (i.e., negative samples) apart (Chen et al., 2020; He et al., 2020). The common instantiations of this objective are InfoNCE (van den Oord et al., 2018), NCE (Misra & van der Maaten, 2020), and NT-Xent (Chen et al., 2020). Here we consider the NT-Xent adopted by GraphCL. Given a minibatch of graph instances {g_i}_{i=1}^{N}, it first generates two different augmented views, denoted as {g_i^1 | g_i^1 = T_{p_1}(g_i), p_1 ∼ P}_{i=1}^{N} and {g_i^2 | g_i^2 = T_{p_2}(g_i), p_2 ∼ P}_{i=1}^{N}, and then feeds them into the encoder to yield the representations {z_i^1 | z_i^1 = ρ(ϕ(g_i^1))}_{i=1}^{N} and {z_i^2 | z_i^2 = ρ(ϕ(g_i^2))}_{i=1}^{N}, where ρ(·) is an MLP projection head. Formally, the loss of NT-Xent is:

ℓ({z_i^1}_{i=1}^{N}, {z_i^2}_{i=1}^{N}) = -(1/N) Σ_{i=1}^{N} log [ exp(s(z_i^1, z_i^2)/τ) / Σ_{j=1, j≠i}^{N} exp(s(z_i^1, z_j^2)/τ) ],   (2)

where s(·, ·) is the cosine similarity function, and τ is a temperature hyperparameter. In a nutshell, the interplay between intra-graph augmentation and contrastive learning is tailor-made for invariance, making the encoder insensitive to differences between the anchor and its augmented views. In this work, we explore equivariance on cross-graph augmentations to make the encoder sensitive to the changes in self-discriminative information.
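As a concrete reference, Equation (2) can be computed as below - a minimal NumPy sketch, where the function name `nt_xent` and the toy inputs are ours, not from the paper:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss of Eq. (2): positives are the paired views (z1_i, z2_i);
    for each anchor z1_i, the negatives are the other views z2_j with j != i."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # cosine similarity
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)  # via normalization
    sim = z1 @ z2.T / tau                                # s(z1_i, z2_j) / tau
    n = sim.shape[0]
    pos = np.diag(sim)                                   # s(z1_i, z2_i) / tau
    mask = ~np.eye(n, dtype=bool)                        # exclude j == i
    neg = np.log(np.exp(sim[mask]).reshape(n, n - 1).sum(axis=1))
    return float(np.mean(neg - pos))                     # -log(pos / sum(negs))
```

Aligned pairs yield a lower loss than mismatched ones, which is exactly what the instance discrimination objective rewards.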

3. METHODOLOGY: EQUIVARIANT GRAPH CONTRASTIVE LEARNING

Here we present the E-GCL framework, which imposes two principles -invariance to intra-graph augmentations (Figure 2 left) and equivariance to cross-graph augmentations (Figure 2 right) -on the representation learning, aiming to mitigate the potential limitations of I-GCL. Next, we start with the concepts of equivariance and cross-graph augmentations.

3.1. EQUIVARIANCE

Inspired by the recent E-SSL studies (Dangovski et al., 2022; Chuang et al., 2022), we aim to patch invariance's potential limitations with equivariance. Mathematically, with a group of augmentations H, the encoder ϕ(·) is said to be H-equivariant w.r.t. the actions T : H × G → G and T′ : H × R^d → R^d of the group H applied on the graph space G and the representation space R^d, if

ϕ(T_h(g)) = T′_h(ϕ(g)), ∀h ∈ H, ∀g ∈ G,   (3)

where T_h(g) is the action of applying the transformation h to the graph instance g, while T′_h(ϕ(g)) is the action of h on the representation ϕ(g). H-equivariance requires that, for any transformation h ∈ H, h's influence on the graph be faithfully reflected by the change of the graph's representation. Taking Figure 1b as an example, given that the graph's global semantics are perturbed by graph interpolation, the representation yielded by the equivariant encoder should transform in a definite way. Jointly analyzing Equations (1) and (3), it can be shown that invariance is a special case of equivariance obtained by setting T′_h as the identity mapping. However, generalizing GCL to equivariance remains unexplored, and is thus a focus of our work. Furthermore, as suggested in the recent E-SSL studies (Dangovski et al., 2022; Chuang et al., 2022), jointly imposing invariance to some transformations P and equivariance to other transformations H promises better representations than relying solely on one of them. Here we term P and H the insensitive and sensitive transformations, respectively. For example, in computer vision, E-SSL (Dangovski et al., 2022) sets grayscaling of images as P, while treating rotations as H; in natural language processing, DiffCSE (Chuang et al., 2022) treats model dropout as P, while using word replacement as H. Clearly, it is of crucial importance to partition augmentations into P and H.
Nonetheless, these studies either conduct extensive tests on the impact of different partitions (Dangovski et al., 2022), which is time-consuming, or exploit domain knowledge to heuristically partition (Chuang et al., 2022), which might generalize poorly to other domains. Hence, it is infeasible to apply these strategies to graph augmentations. Worse still, different graph augmentations stem mostly from the perturbation of graph structures, and are thus highly likely to corrupt the same attributes of graphs. Taking the graph g in Figure 1a as an example, masking the nitrogen N atom or dropping the C≡N bond will both corrupt the cyano group and break the corresponding molecular properties. In a nutshell, owing to (1) the common paradigm of structure corruption and (2) the risk of categorizing them all as insensitive augmentations, we conservatively argue that it is hard to partition graph augmentations into sensitive parts H and insensitive parts P. In this work, leaving partitioning untouched, we retain intra-graph augmentations as the insensitive transformations P and propose new augmentations across graphs as the sensitive transformations H.

3.2. CROSS-GRAPH AUGMENTATION

We first introduce graph interpolation (Guo & Mao, 2021) to create cross-graph augmentations as H. Different from previous work, we propose an extension of graph interpolation for SSL (Section 3.3). We also connect it to group theory and address its limitation of sensitivity to the relative permutation.

Interpolating Graphs as Cross-graph Augmentations. Given two graph instances g ∈ G and g′ ∈ G, we employ mixup (Zhang et al., 2018), a simple yet effective linear interpolation approach, on the input features and class labels, respectively:

g̃ = λg + (1 - λ)g′,  ỹ = λy + (1 - λ)y′,   (4)

where y and y′ separately denote the one-hot encodings that indicate the instance identities of g and g′ in the instance discrimination task; λ ∈ [0, 1] is the interpolation ratio sampled as λ ∼ Beta(α, α), in which α is a hyperparameter. This mixup strategy was initially proposed for supervised learning, aiming to place the interpolated samples in-between different classes and make the decision boundary robust to slightly corrupted samples (Verma et al., 2019; Zhang et al., 2018; 2021). Despite its success on images and texts, it is challenging to interpolate graphs due to the structural differences between graph instances (e.g., varying topologies and sizes). To this end, we draw inspiration from the recent work (Guo & Mao, 2021) to perform linear interpolation between graphs. Specifically, with g = (V, E) and g′ = (V′, E′), we mitigate their structural differences by padding virtual nodes and edges, which are associated with zero features 0. Assuming |V| ≤ |V′|, g is updated into a new graph with |V′| nodes, where the original node set V remains unchanged but |V′| - |V| dummy virtual nodes are added, and the original nodes connect to the virtual nodes via dummy virtual edges. Having padded the two graphs to the same size, we can now directly add them up.
Before the interpolation, we first merge the two node and edge sets as the new ones: Ṽ = V ∪ V′, Ẽ = E ∪ E′. Then, g̃ = λg + (1 - λ)g′ in Equation (4) is achieved by exerting linear interpolation on the adjacency matrices, node features, and edge features:

Ã = λA + (1 - λ)A′,  x̃_v = λx_v + (1 - λ)x′_v,  ẽ_{uv} = λe_{uv} + (1 - λ)e′_{uv},   (5)

where A and A′ are the adjacency matrices of g and g′ after padding; x_v and x′_v are the features of node v ∈ Ṽ, coming from g and g′, respectively; similarly, e_{uv} and e′_{uv} denote the features of edge (u, v) ∈ Ẽ from g and g′, respectively. Consequently, we generate a cross-graph augmentation.

Connecting Cross-graph Augmentations to Groups. In the language of groups, we can describe the cross-graph augmentation in Equation (4) as a group of transformations. Given the input (λ, g, g′), we can systemize the graph interpolation as two steps: (1) feature rescaling: ĝ = λg, which rescales the node and edge features of g with the ratio λ; (2) instance composition: g̃ = C(ĝ, ĝ′) = ĝ + ĝ′, which adds the other rescaled graph ĝ′ = (1 - λ)g′. To construct a closed space for graph interpolation, we first define Ĝ = {ĝ | λ ∈ [-1, 1], g ∈ G} by performing feature rescaling on G to enable direct sampling of rescaled graphs. We allow λ < 0 to include the inverse elements of graphs. Then, we generate I = ⟨Ĝ⟩ by combining the graphs in Ĝ via instance composition. We show that (I, C) forms a group in Appendix A.1. It is worth noting that each instance can be viewed as a transformation of others, i.e., C(g, ·) := C_g(·), such that semantic shifts can be described via algebraic operators.

Group Averaging for Insensitivity to Relative Permutation. Note that Equation (5) can output different interpolations when the node orders of one graph or the padding positions of dummy nodes change. We ascribe operations on "node orders" and "padding positions" to the "relative permutation" between two input graphs.
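To make the padding-and-interpolation scheme of Equation (5) concrete, here is a minimal NumPy sketch for adjacency matrices and node features; edge features are handled analogously, and the function names are ours:

```python
import numpy as np

def pad_graph(A, X, n):
    """Pad a graph (A, X) to n nodes with dummy virtual nodes carrying zero
    features; the dummy virtual edges implicitly carry zero weight as well."""
    pA = np.zeros((n, n)); pA[:A.shape[0], :A.shape[0]] = A
    pX = np.zeros((n, X.shape[1])); pX[:X.shape[0]] = X
    return pA, pX

def interpolate_graphs(A, X, A2, X2, lam):
    """Eq. (5): linear interpolation of the padded adjacency matrices and node
    features. In the paper, lam is sampled from Beta(alpha, alpha)."""
    n = max(A.shape[0], A2.shape[0])
    A, X = pad_graph(A, X, n)
    A2, X2 = pad_graph(A2, X2, n)
    return lam * A + (1 - lam) * A2, lam * X + (1 - lam) * X2
```

Because dummy entries are all-zero, the smaller graph contributes nothing in the padded region, which is why the padding is semantically neutral (cf. Appendix D.1).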
Unlike images and texts, graphs come with no canonical node permutations (orderings); the default node permutations encode nothing useful about graph semantics (Xu et al., 2019; Hamilton et al., 2017). Thus, we enforce insensitivity to relative permutations by randomly permuting the nodes of the bigger graph before graph interpolation. Next, we justify and develop this design with group averaging. Group averaging can make known architectures invariant to new symmetries (Puny et al., 2022; Yarotsky, 2022), and it can be used to achieve strict invariance to relative permutations. Let P ∼ S_n be a random permutation, and T_P • g be the result of permuting graph g by P. Assuming |g| ≥ |g′|, we obtain the graph interpolation as λT_P • g + (1 - λ)g′ and its representation as ϕ(λT_P • g + (1 - λ)g′). To achieve strict invariance to relative permutations, we apply group averaging over the permutation operators:

Φ(λ, g, g′) = (1/|S_n|) Σ_{P ∈ S_n} ϕ(λT_{P^{-1}} • g + (1 - λ)g′),   (6)

where Φ is the group-averaged function. Φ is invariant to relative permutations between g and g′, in the sense that Φ(λ, g, g′) = Φ(λ, T_P • g, g′) = Φ(λ, g, T_{P′} • g′) for all P, P′ ∈ S_n. Intuitively, this is achieved by averaging over all relative permutations. See Appendix A.2 for the proof. However, the intractability of averaging over S_n naturally arises as a problem. Following Murphy et al. (2019), our random permutation strategy ϕ(λT_P • g + (1 - λ)g′) is an unbiased estimator of Φ. Further, this strategy optimizes ρ ∘ ϕ toward an optimum that is insensitive to the relative permutation. By Proposition 1, we conclude that using random permutation is a tractable surrogate for optimizing a network invariant to relative permutation. Appendix D.2 shows the influence of the number of sampled permutations. For simplicity, we still write λg + (1 - λ)g′ for graph interpolation in the rest of the paper.

Proposition 1. The contrastive loss with ϕ(λT_P • g + (1 - λ)g′) upper bounds the loss of a network invariant to relative permutation, (1/|S_n|) Σ_{P ∈ S_n} ρ(ϕ(λ_i T_P • g_i + (1 - λ_i)g′_i)).
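For tiny graphs, the group-averaged function Φ in Equation (6) can be written out exactly, which also illustrates why Φ is invariant to relative permutations while a single encoder call is not. A NumPy sketch, with a toy order-sensitive encoder `phi_pos` of our own standing in for a non-invariant ϕ:

```python
import itertools
import numpy as np

def permute(A, X, P):
    """Apply the node permutation P (an index array) to a graph (A, X)."""
    return A[np.ix_(P, P)], X[P]

def group_average(phi, lam, A, X, A2, X2):
    """Eq. (6): average phi over all n! permutations of the first graph.
    Tractable only for tiny n; sampling one random P estimates it unbiasedly."""
    n = A.shape[0]
    reps = []
    for P in itertools.permutations(range(n)):
        Ap, Xp = permute(A, X, np.array(P))
        reps.append(phi(lam * Ap + (1 - lam) * A2, lam * Xp + (1 - lam) * X2))
    return np.mean(reps, axis=0)

# An order-sensitive toy encoder: it weights node features by node position,
# so it is NOT permutation invariant on its own.
phi_pos = lambda A, X: (np.arange(X.shape[0])[:, None] * X).sum(0)
```

Pre-permuting the first graph leaves the group average unchanged, because the sum simply re-enumerates the same |S_n| permutations.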

3.3. EQUIVARIANCE TO CROSS-GRAPH AUGMENTATIONS

Revisiting Equation (4), we find that the two terms of the interpolation strategy align with the equivariance mechanism in Equation (3): the semantic change caused by the feature interpolation is equivalently reflected by the label interpolation. Hence, we can instantiate the equivariance mechanism based on the feature and label interpolations. In the SSL setting, we interpolate graph representations as an alternative to label interpolation. Graph representations are derived from the encoder ϕ(·) with a global readout layer that summarizes the graphs' global semantics. Hence, as shown on the right side of Figure 2, we can parameterize equivariance approximately as:

ϕ(λg + (1 - λ)g′) ≈ λϕ(g) + (1 - λ)ϕ(g′).   (7)

Minimizing the distance between the two sides of Equation (7) allows the encoder to improve its sensitivity to the global semantic shifts caused by cross-graph augmentations. Although strict equivariance is hardly guaranteed, experiments show that approaching Equation (7) can boost the performance on downstream tasks (cf. Section 4.2). Furthermore, if the encoder is powerful enough to distinguish the interpolated graphs, a fixed transformation C(g, ·) in the graph space G has a deterministic reflection in the representation space ϕ(G) (Non-trivial Equivariance (Dangovski et al., 2022)). See Appendix A.1 for the proof.

Proposition 2. Assuming the encoder can detect the isomorphism of interpolated graphs, there exists a GNN encoder ϕ that is non-trivially equivariant to the graph interpolation transformation.
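For intuition, Equation (7) holds exactly when the encoder is linear, e.g., a plain sum-pooling readout over node features; a real (nonlinear) GNN satisfies it only approximately, and that gap is precisely what the equivariance loss minimizes. A toy check, with names of our own:

```python
import numpy as np

def phi_sum(X):
    """A linear 'encoder': a sum-pooling readout over node features."""
    return X.sum(axis=0)

rng = np.random.default_rng(0)
X, Xp = rng.random((4, 3)), rng.random((4, 3))    # two graphs, already padded
lam = 0.3
lhs = phi_sum(lam * X + (1 - lam) * Xp)           # phi(lam*g + (1-lam)*g')
rhs = lam * phi_sum(X) + (1 - lam) * phi_sum(Xp)  # lam*phi(g) + (1-lam)*phi(g')
```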

3.4. IMPLEMENTING E-GCL

This section details our implementation of E-GCL (Figure 2). Specifically, given a minibatch of graph instances {g_i}_{i=1}^{N}, we impose (1) invariance to intra-graph augmentation and (2) equivariance to cross-graph augmentation simultaneously on the shared encoder.

Invariance. For the invariance principle, we follow the I-GCL paradigm and resort to the standard intra-graph augmentations P (e.g., randomly dropping nodes in GraphCL), creating two augmented views of each graph: {g_i^1 | g_i^1 = T_{p_1}(g_i), p_1 ∼ P}_{i=1}^{N} and {g_i^2 | g_i^2 = T_{p_2}(g_i), p_2 ∼ P}_{i=1}^{N}. Consequently, the encoder ϕ(·) brings forth two representation lists: {z_i^1 | z_i^1 = ρ(ϕ(g_i^1))}_{i=1}^{N} and {z_i^2 | z_i^2 = ρ(ϕ(g_i^2))}_{i=1}^{N}, in which ρ(·) is an MLP projector.

Equivariance. For the equivariance principle, we first randomly shuffle the graphs in {g_i^2}_{i=1}^{N}, obtaining {g_{π(i)}^2}_{i=1}^{N}, where π : [N] → [N] is the random shuffling function. Following the left side of Equation (7) and applying random permutations P_i ∼ S_n for all i ∈ [N], we create the feature interpolations and generate their representations as {z_i^3 | z_i^3 = ρ(ϕ(λT_{P_i} • g_i^1 + (1 - λ)g_{π(i)}^2))}_{i=1}^{N}. Meanwhile, according to the right side of Equation (7), we arrive at the representation interpolations {z_i^4 | z_i^4 = ρ(λϕ(g_i^1) + (1 - λ)ϕ(g_{π(i)}^2))}_{i=1}^{N}.

Cooperative Game between Invariance and Equivariance. Based on the representations, we optimize the invariance and equivariance losses together with a weighting hyperparameter ω ∈ [0, 1]:

L_{E-GCL} = (1 - ω) · ℓ({z_i^1}_{i=1}^{N}, {z_i^2}_{i=1}^{N}) [invariance loss] + ω · ℓ({z_i^3}_{i=1}^{N}, {z_i^4}_{i=1}^{N}) [equivariance loss],   (8)

where ℓ(·, ·) is the loss encouraging insensitivity between augmented views of the same graph, determined by the SSL backbone, such as NT-Xent (cf. Equation (2)) adopted by GraphCL.
Beyond contrastive learning, E-GCL is also applicable to various other SSL backbones, including BarlowTwins and SimSiam. In a nutshell, the invariance loss underscores the insensitivity to intra-graph augmentations, while the equivariance loss induces the sensitivity to cross-graph augmentations. The cooperative game between these two losses helps resolve the potential limitations of the conventional I-GCL paradigm, thus improving the expressive power of the encoder.
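The cooperative objective above reduces to a weighted sum of two backbone losses; a minimal sketch, with a stand-in mean-squared agreement loss `mse` of our own in place of the backbone-specific ℓ such as NT-Xent:

```python
import numpy as np

def egcl_loss(z1, z2, z3, z4, omega, agreement):
    """E-GCL objective: (1 - omega) * invariance loss on the intra-graph views
    (z1, z2), plus omega * equivariance loss on the feature-interpolated views
    z3 vs. the representation interpolations z4."""
    return (1 - omega) * agreement(z1, z2) + omega * agreement(z3, z4)

# Stand-in agreement loss between paired views (backbone-specific in practice).
mse = lambda a, b: float(((a - b) ** 2).mean())
```

Setting omega = 0 recovers the underlying I-GCL backbone, while omega = 1 drops the invariance loss entirely (the setting that performs worst in Section 4.3).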

4. EXPERIMENT

In this section, we conduct experiments to answer the following research questions: RQ1: How effective is the proposed E-GCL in graph representation learning, and how does it generalize to existing SSL frameworks? RQ2: What are the properties of E-GCL and the effects of its components? In Appendix D.1, we present more ablation studies about 1) using G-mixup (Han et al., 2022) , 2) interpolating representations at different positions, and 3) interpolating large and small graphs.

4.1. EXPERIMENTAL SETUP

Here we briefly introduce the baselines, datasets, and evaluations; details are in Appendix E. For a fair comparison, E-GCL uses the same intra-graph augmentation as GraphCL. Unless otherwise noted, E-GCL employs a BarlowTwins backbone; we study E-GCL's generalization to other SSL frameworks later.

Baselines. We compare E-GCL with the following state-of-the-art graph pre-training methods: Infomax (Veličković et al., 2019), InfoGraph (Sun et al., 2020), ContextPred (Hu et al., 2020), GraphCL (You et al., 2020), JOAO (You et al., 2021), AD-GCL (Suresh et al., 2021), GraphLOG (Xu et al., 2021), GraphMAE (Hou et al., 2022), and RGCL (Li et al., 2022).

Unsupervised Learning evaluates the pre-trained GNNs for prediction on the same dataset. Following You et al. (2020), we evaluate E-GCL on eight TU datasets, covering biochemical graphs and social networks. Specifically, we pre-train a three-layer GIN (Xu et al., 2019) and feed the generated graph representations into SVMs for evaluation. We report the averages and standard deviations of accuracies (%) over five runs, each corresponding to a 10-fold evaluation. Following You et al. (2021), we report the test accuracy of the epoch selected on the validation set.

Transfer Learning tests the pre-trained GNN's transferability to downstream tasks. Following Hu et al. (2020), we use two million molecule samples from the ZINC15 dataset (Sterling & Irwin, 2015) for pre-training and eight multi-label classification datasets derived from MoleculeNet (Wu et al., 2018) for fine-tuning. The fine-tuning datasets are divided by scaffold split to create distribution shifts among the train/valid/test sets, providing more realistic estimates of molecule property prediction performance. Following Hu et al. (2020), we implement E-GCL with a five-layer GINE. Further, we push the limits of E-GCL with the GraphTrans (Wu et al., 2021) backbone.
GraphTrans stacks a four-layer Transformer (Vaswani et al., 2017) on top of the GINE to learn long-range interactions. For evaluation, we pre-train a model and repeatedly fine-tune it on the downstream datasets ten times. We report the averages and standard deviations of ROC-AUC (%) scores. Following You et al. (2020), we report the test-set performance of the last epoch.
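The unsupervised evaluation protocol - freezing the pre-trained encoder and classifying its graph representations under k-fold splits - can be sketched as below. We use a nearest-centroid classifier as a dependency-free stand-in for the SVM actually used in the protocol; names and toy data are ours:

```python
import numpy as np

def kfold_accuracy(Z, y, k=10, seed=0):
    """k-fold evaluation of frozen representations Z with labels y, returning
    the mean and standard deviation of per-fold accuracies."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(Z)), k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        classes = np.unique(y[train])
        # per-class centroids of the training representations
        C = np.stack([Z[train][y[train] == c].mean(0) for c in classes])
        d = ((Z[test][:, None] - C[None]) ** 2).sum(-1)  # squared distances
        pred = classes[np.argmin(d, axis=1)]             # nearest centroid
        accs.append(float((pred == y[test]).mean()))
    return float(np.mean(accs)), float(np.std(accs))
```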

4.2. MAIN RESULTS (RQ1)

Unsupervised Learning Results. Table 1a presents the unsupervised learning performance on the TU datasets. The last column denotes the improvement over a randomly initialized GNN. E-GCL achieves the best performance on six of the eight datasets and a top-three performance on the other two. It also achieves the best average improvement, 7.43%, over a randomly initialized GNN. We attribute E-GCL's strong performance to the equivariance principle of cross-graph augmentation: the other methods apply only the invariance principle of intra-graph augmentation, and thus fail to generate representations as discriminative as E-GCL's.

Transfer Learning Results. Table 1b presents the fine-tuning performance in transfer learning. The E-GCL variants achieve the best performance among all methods. Specifically, E-GCL with GraphTrans achieves the best performance on four of the eight datasets and the best average improvement, 5.5%, over a randomly initialized GNN. With the GINE backbone, E-GCL still outperforms all baseline models in average performance. This shows that the equivariant pre-training with cross-graph augmentation gives E-GCL a better starting point for fine-tuning, whereas previous models apply only intra-graph augmentation, which can bring together dissimilar patterns. Note that the improvements of E-GCL and other models over previous works are not consistent across fine-tuning datasets. We attribute this inconsistency to the out-of-distribution (OOD) evaluation setting in MoleculeNet: under the scaffold split, the validation set's distribution does not overlap the test set's, which makes preventing overfitting and underfitting troublesome. We follow the evaluation protocol of previous works (You et al., 2020; Xu et al., 2021), which, however, does not guarantee the convergence of performance. We leave a more stable fine-tuning procedure for the OOD setting as future work. In summary, E-GCL establishes a new state of the art in unsupervised learning and transfer learning.
Generalization to Different SSL Frameworks. To highlight E-GCL's improvement and generalization ability, we apply the equivariance principle to three representative SSL frameworks of different flavors: GraphCL (GCL), SimSiam (Chen & He, 2021) (asymmetric Siamese networks), and BarlowTwins (decorrelating feature dimensions); see Table 2. We observe that equivariance consistently improves the SSL backbones' average performance by about 1% in both unsupervised learning and transfer learning. The results demonstrate the effectiveness of equivariance and its ability to generalize to diverse SSL frameworks and different experimental settings.

4.3. ANALYZING THE PROPERTIES OF E-GCL (RQ2)

Hyper-parameter Sensitivity. Figure 3 presents E-GCL's sensitivity with respect to the shape parameter α of the Beta(α, α) distribution and the trade-off factor ω between invariance and equivariance. As shown in Figure 3a, α = 0.1 gives the best average performance for both the BarlowTwins and GraphCL backbones in unsupervised learning. We also observe that the optimal ω differs across SSL backbones (Figure 3b): the best ω for BarlowTwins and GraphCL is 0.4 and 0.3, respectively. When ω = 1, the invariance loss vanishes and the performance drops to its lowest. This demonstrates that the invariance and equivariance mechanisms are complementary, and their cooperation makes for better graph representation learning.

Training Dynamics of Alignment and Uniformity. To understand how E-GCL improves over I-GCL, we study their behaviors through the lens of the alignment and uniformity losses (Wang & Isola, 2020), which constitute the asymptotic objective of contrastive learning. On a unit hypersphere, alignment measures the closeness of positive pairs, and uniformity measures the evenness of the sample distribution. We apply the contrastive backbone GraphCL with dropNode as the intra-graph augmentation and graph interpolation as the cross-graph augmentation. Figure 4 shows the losses on each type of augmented sample and on their concatenation (i.e., intra+cross aug). Compared to E-GCL with cross-graph augmentation alone, E-GCL with intra+cross augmentation has a better uniformity loss but a worse alignment loss. E-GCL achieves much better alignment and uniformity on cross-graph augmentations than I-GCL, with a slight sacrifice of alignment on intra-graph augmentations. This is expected, as E-GCL applies equivariance to explicitly optimize on the cross-graph augmentations and trades off the optimization of intra-graph augmentations. Combining intra- and cross-graph augmentations, E-GCL achieves better alignment and uniformity than I-GCL, which explains its better performance.
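The two diagnostics can be computed directly from the formulas of Wang & Isola (2020); a NumPy sketch with function names of our own:

```python
import numpy as np

def alignment(z1, z2, alpha=2):
    """Alignment: expected distance between normalized positive pairs
    (lower is better)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    return float((np.linalg.norm(z1 - z2, axis=1) ** alpha).mean())

def uniformity(z, t=2):
    """Uniformity: log of the mean Gaussian potential over distinct pairs
    (lower means the samples spread more evenly on the hypersphere)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    d2 = ((z[:, None] - z[None]) ** 2).sum(-1)   # pairwise squared distances
    off = d2[~np.eye(len(z), dtype=bool)]        # drop the diagonal
    return float(np.log(np.exp(-t * off).mean()))
```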

5. CONCLUSION AND FUTURE WORKS

In this paper, we propose Equivariant Graph Contrastive Learning (E-GCL), which combines equivariance and invariance to learn better graph representations. E-GCL encourages sensitivity to global semantic shifts by grounding the equivariance principle as a cross-graph augmentation of graph interpolation. This equivariance principle protects GNNs from aggressive intra-graph augmentations that can harmfully align dissimilar patterns, and enables GNNs to discriminate cross-graph augmented samples. Extensive experiments in unsupervised learning and transfer learning demonstrate E-GCL's significant improvements over state-of-the-art methods and its ability to generalize to different SSL frameworks. In the future, we will explore more groundings of the equivariance principle in graphs.

GIN is as expressive as the 1-WL test (Leman & Weisfeiler, 1968; Azizian & Lelarge, 2021), and GraphTrans improves its ability to model long-range interactions. Our experiments show that GIN and GraphTrans are sufficient to demonstrate the improvement of E-GCL over previous methods. Therefore, we leave the adoption of universal GNNs (Puny et al., 2022) as future work. The Intrusion-Freeness Theorem (Guo & Mao, 2021) relies on the assumption in either Lemma 2 or Lemma 3 of their paper. The assumption of their Lemma 2 states: "The node feature vectors for all graphs take values from a finite set and the values in the set are linearly independent". This assumption holds strictly for chemical molecules (Hu et al., 2020), in which the node features are atom types that are finite and linearly independent (i.e., one cannot combine two atoms to form another). It is also automatically satisfied by anonymous social networks without any node information, i.e., COLLAB, RDT-B, RDT-M5K, and IMDB-B in our experiments. For other graphs with continuous node features, such as word vectors, Lemma 3 provides a much weaker condition.
Lemma 3 requires the linear independence of the entire node feature matrix for each graph in the dataset, which is more likely to hold in practice.

A.2 PROOFS ON GROUP AVERAGING

Proof that Φ(λ, g, g′) is invariant to the relative permutation between g and g′, in the sense that Φ(λ, g, g′) = Φ(λ, T_P • g, g′) = Φ(λ, g, T_{P′} • g′) for all P, P′ ∈ S_n.

Proof. For all P′ ∈ S_n, we have

Φ(λ, T_{P′} • g, g′) = (1/|S_n|) Σ_{P ∈ S_n} ϕ(λT_{P^{-1}} • T_{P′} • g + (1 - λ)g′)   (9)
= (1/|S_n|) Σ_{P ∈ S_n} ϕ(λT_{P^{-1}P′} • g + (1 - λ)g′).   (10)

Let P″^{-1} = P^{-1}P′, so that P = P′P″. Then

Φ(λ, T_{P′} • g, g′) = (1/|S_n|) Σ_{P′P″ ∈ S_n} ϕ(λT_{P″^{-1}} • g + (1 - λ)g′)   (11)
= (1/|S_n|) Σ_{P″ ∈ P′^{-1}S_n} ϕ(λT_{P″^{-1}} • g + (1 - λ)g′)   (12)
= (1/|S_n|) Σ_{P″ ∈ S_n} ϕ(λT_{P″^{-1}} • g + (1 - λ)g′)   (13)
= Φ(λ, g, g′).   (14)

We can similarly prove Φ(λ, g, g′) = Φ(λ, g, T_{P′} • g′).

Proposition 1. The contrastive loss with ϕ(λT_P • g + (1 - λ)g′) upper bounds the loss of a network invariant to relative permutation, (1/|S_n|) Σ_{P ∈ S_n} ρ(ϕ(λ_i T_P • g_i + (1 - λ_i)g′_i)).

Proof. The loss of NT-Xent is:

ℓ({z_i^3}_{i=1}^{N}, {z_i^4}_{i=1}^{N}) = -(1/N) Σ_{i=1}^{N} log [ exp(s(z_i^3, z_i^4)/τ) / Σ_{j=1, j≠i}^{N} exp(s(z_i^3, z_j^4)/τ) ]   (15)
= -(1/N) Σ_{i=1}^{N} s(z_i^3, z_i^4)/τ + (1/N) Σ_{i=1}^{N} log Σ_{j=1, j≠i}^{N} exp(s(z_i^3, z_j^4)/τ),   (16)

where the first term, denoted ℓ_pos, aims to minimize the distance between positive views. Let {(λ_1, g_1, g′_1), (λ_2, g_2, g′_2), ..., (λ_N, g_N, g′_N)} be the batch of original graph pairs and mixup coefficients. If we use MSE to minimize the distance between the feature-interpolation view and the representation-interpolation view, and omit the temperature hyperparameter τ, ℓ_pos can be written as:

ℓ_pos = (1/N) Σ_{i=1}^{N} ||ρ(ψ(λ_i, T_{P_i} • g_i, g′_i)) - ρ(λ_iϕ(g_i) + (1 - λ_i)ϕ(g′_i))||^2,   (17)

where P_i ∼ S_n for all i ∈ [N].
We have

E_{P ∼ S_n}[ℓ_pos] = E_{P ∼ S_n} [ (1/N) Σ_{i=1}^{N} ||ρ(ψ(λ_i, T_P • g_i, g′_i)) - ρ(λ_iϕ(g_i) + (1 - λ_i)ϕ(g′_i))||^2 ]   (18)
= (1/N) Σ_{i=1}^{N} (1/|S_n|) Σ_{P_j ∈ S_n} ||ρ(ψ(λ_i, T_{P_j} • g_i, g′_i)) - ρ(λ_iϕ(g_i) + (1 - λ_i)ϕ(g′_i))||^2   (19)
≥ (1/N) Σ_{i=1}^{N} || (1/|S_n|) Σ_{P_j ∈ S_n} ρ(ψ(λ_i, T_{P_j} • g_i, g′_i)) - ρ(λ_iϕ(g_i) + (1 - λ_i)ϕ(g′_i)) ||^2,   (20)

where the last step is by Jensen's inequality. Notice that the contrastive loss thus upper bounds the distance between ρ(λ_iϕ(g_i) + (1 - λ_i)ϕ(g′_i)) and (1/|S_n|) Σ_{P_j ∈ S_n} ρ(ψ(λ_i, T_{P_j} • g_i, g′_i)), which is the group averaging of ρ ∘ ψ. By the property of group averaging (Puny et al., 2022; Yarotsky, 2022; Murphy et al., 2019), it follows that (1/|S_n|) Σ_{P_j ∈ S_n} ρ(ψ(λ_i, T_{P_j} • g_i, g′_i)) is invariant to the relative permutation as well.

B RELATED WORKS

We have briefly introduced GCL methods in Section 2. In this section, we first discuss E-GCL's connections to and differences from E-SSL and IfMixup. Then, we present E-GCL's relations to other graph mixup methods and to geometric deep learning.

E-SSL. E-GCL is inspired by the pioneering E-SSL works (Dangovski et al., 2022; Chuang et al., 2022) in CV and NLP. Dangovski et al. (2022) find that, when the usual insensitive objective fails on certain augmentations, applying a sensitive objective to the same augmentations can improve performance. Specifically, they apply a sensitive objective to the four-fold rotations of images to improve existing SSL methods. In NLP, Chuang et al. (2022) implement the sensitive objective as discriminating word replacements to improve sentence-level embeddings. Adapting E-SSL to graphs is challenging because existing intra-graph augmentations share the common paradigm of structure corruption, making it hard to categorize them into sensitive and insensitive augmentations. This work differs from previous E-SSL works in that we introduce cross-graph augmentation to create global semantic shifts. By encouraging sensitivity to cross-graph augmentation, we protect representations from harmful, aggressive intra-graph augmentations.

IfMixup. In this work, we extend IfMixup (Guo & Mao, 2021) to cross-graph augmentation in SSL. IfMixup mitigates the structural differences by padding virtual nodes for graph interpolation. It was originally developed for supervised learning. For SSL, we propose to supervise mixed graphs by the interpolation of the original graphs' representations. We also connect graph mixup to group theory and address its limitation of sensitivity to the relative permutation.

Graph Mixup. Graph mixup has been a challenging task due to graphs' irregular structures. GraphMix (Verma et al., 2021) sidesteps the structural differences by mixing only node features. Wang et al.
(2021) mix up graph representations for graph classification. G-Mixup (Han et al., 2022) samples adjacency matrices from the mixed graphons of two classes as graph mixup. We opt not to use G-Mixup in our method due to the following limitations:

• G-Mixup does not support node-feature mixup. Their paper has no experiments on attributed graphs. Moreover, their instruction for sampling mixed node features is vague: it describes neither the sampling strategy nor the distribution used.

• G-Mixup does not scale to large graphs due to its O(N³) complexity (N is the number of nodes). The high complexity is due to the SVD (Chatterjee, 2015) in graphon estimation. In comparison, IfMixup scales to large graphs with complexity linear in the numbers of edges and nodes, O(E + N).

• G-Mixup requires class labels to estimate the graphon of each class. In SSL, class labels are unavailable. If we were to treat each graph as its own class, the obtained graphon would be suboptimal due to the limited samples. In this case, it is outperformed by IfMixup (Table 3).

Geometric Deep Learning. Invariance and equivariance have been heavily studied under the scope of geometric deep learning (Bronstein et al., 2021). The goal is to exploit geometric symmetries in neural architecture designs for effective weight sharing that reduces sample complexity (Cohen & Welling, 2016). For example, generalizing the convolution operation from the Z² grid to the p4 group makes convolution equivariant to four-fold rotations (Cohen & Welling, 2016); the message passing operation in GNNs keeps the node-level output equivariant to the permutation group S_n (Battaglia et al., 2018). Exploiting geometric symmetries like rotation and permutation has greatly improved performance in various applications, including galaxy morphology (Dieleman et al., 2015), point clouds (Chen et al., 2021; Zaheer et al., 2017), and spherical images (Cohen et al., 2018).
Our work differs from geometric deep learning in that we do not study neural architectures: we study transformations that change the underlying semantics of graphs rather than their "poses" in geometric space.

C LIMITATIONS

SSL is limited in that it has little knowledge of the downstream tasks. Each type of intra-graph augmentation encodes a human prior that performs differently on different downstream datasets (Purushwalkam & Gupta, 2020). Our work grounds the equivariance mechanism as a domain-agnostic cross-graph augmentation to endow representations with sensitivity to global semantic shifts. Previous E-SSL works (Dangovski et al., 2022; Chuang et al., 2022) in CV and NLP divide existing data augmentations into two sets of sensitive and insensitive augmentations. A limitation of our work is that we leave the existing intra-graph augmentations untouched as the insensitive augmentations, although insensitivity to some aggressive intra-graph augmentations might diminish the sensitivity to cross-graph augmentations. However, disentangling the aggressive augmentations from the others requires extensive tests or domain knowledge; we leave it as future work. Further, our equivariance branch is a patch on the limitations of the invariance branch. In the future, we will explore GCL that focuses entirely on equivariance, without the invariance branch. Equivariance is a high-level mathematical concept unifying sensitivity and insensitivity, and it has promising potential in graph representation learning. Our work is limited in that we explore equivariance only to a simple cross-graph augmentation, graph mixup; we believe other equivariance principles are worth exploring. The limitations and assumptions of the theoretical results are discussed in Appendix A.

Comparison with G-mixup. Our E-GCL framework is agnostic to the graph mixup strategy used for cross-graph augmentation.
We also exploit G-Mixup (Han et al., 2022) for cross-graph augmentation. Unlike the linear interpolation strategy of IfMixup (Guo & Mao, 2021), G-Mixup yields discrete adjacency matrices by sampling from the mixed graphons of two classes. To adapt G-Mixup to the SSL setting, we use its source code and treat each graph as a class. A grid search is conducted to tune the hyperparameters α and ω. Table 3 shows the performance comparison between the different graph mixup strategies. We do not compare performance on attributed graph datasets because G-Mixup does not support mixing node features. The limitations of G-Mixup are discussed in the related work section (Appendix B).
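For concreteness, the IfMixup-style linear graph interpolation can be sketched as follows. This is an illustrative dense-matrix version with a hypothetical function name; an edge-list implementation achieves the O(E + N) cost stated above.

```python
import numpy as np

def interpolate_graphs(x1, a1, x2, a2, lam):
    """IfMixup-style linear graph interpolation (sketch).

    x: node-feature matrix [N, F]; a: dense weighted adjacency [N, N].
    The smaller graph is padded with dummy nodes (zero features,
    zero-weight edges), so padding adds no signal to the mixture.
    """
    n = max(x1.shape[0], x2.shape[0])

    def pad(x, a):
        px = np.zeros((n, x.shape[1])); px[:x.shape[0]] = x
        pa = np.zeros((n, n)); pa[:a.shape[0], :a.shape[0]] = a
        return px, pa

    x1, a1 = pad(x1, a1)
    x2, a2 = pad(x2, a2)
    # Convex combination of node features and (weighted) adjacency.
    # With dense matrices this is O(N*F + N^2); on sparse edge lists
    # the same operation is linear in the numbers of edges and nodes.
    return lam * x1 + (1 - lam) * x2, lam * a1 + (1 - lam) * a2
```

The mixed graph carries fractional edge weights, in contrast to G-Mixup's discrete sampled adjacency matrices.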

D EXPERIMENT

D.1 ABLATION STUDIES FOR GRAPH INTERPOLATION

Our linear interpolation strategy substantially outperforms G-Mixup on three of the four TU datasets, and IfMixup achieves 1.15% higher average accuracy than G-Mixup. While G-Mixup improves over the BarlowTwins baseline on the last three datasets, it performs worse on COLLAB.
Interpolation Position. The cross-graph augmentation is supervised by the interpolation of the original graphs' representations. We interpolate the representations before the projector so that the gradient mainly optimizes the equivariance of the GNN encoder ϕ. With this design, the encoder is trained to approach Equation (7), i.e., equivariance to the cross-graph augmentation. However, strict equivariance is hardly achievable by GNNs; we therefore let the projector learn to ignore the subtle difference between the two sides of Equation (7), reducing the task's difficulty. Table 4 compares the performance without equivariance against E-GCL with representation interpolation applied at different positions: before the projector, after the projector, and on the cosine similarity scores. We verify that 1) before the projector is the optimal choice for both GraphCL and BarlowTwins, and 2) applying equivariance at any position consistently outperforms No Equivariance. Beyond better performance, interpolating before the projector also allows our equivariance principle to work with a broader family of SSL frameworks (Zbontar et al., 2021; Bardes et al., 2022).
Influence of Mixing Dummy Nodes. We pad the original graphs with dummy nodes to a common size before mixup. Since dummy nodes have zero features, adding them to the original graphs introduces no noise into the interpolations. We demonstrate their neutral effect by comparing the performance of mixing graphs with very different sizes against the original E-GCL in Table 5; the performance difference is only 0.04%. In the different-size experiment, we deliberately create size differences between mixed graphs.
Specifically, we sort the in-batch graphs by size, {g1, g2, ..., gN} with |g1| ≤ |g2| ≤ ... ≤ |gN|, and mix the i-th graph with the (N − i + 1)-th graph. This creates size differences in every batch, such that the smallest graph g1 is mixed with the largest graph gN. We use the BarlowTwins SSL framework for these experiments.
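The size-sorted pairing of the different-size experiment can be sketched as follows (an illustrative helper written by us, not the released code):

```python
def size_sorted_pairs(sizes):
    """Pair in-batch graphs so the smallest is mixed with the largest.

    sizes: node counts of the in-batch graphs, in batch order.
    Sort indices by size ascending; pair the i-th smallest graph with
    the (N - i + 1)-th, i.e. index i with index N - 1 - i (0-based).
    Returns (position_of_smaller, position_of_larger) pairs over the
    original batch positions.
    """
    order = sorted(range(len(sizes)), key=lambda i: sizes[i])
    n = len(order)
    return [(order[i], order[n - 1 - i]) for i in range(n)]
```

The first pair produced always mixes the smallest graph with the largest one, maximizing the size gap.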

D.2 INFLUENCE OF SAMPLED PERMUTATION NUMBERS

Experimental Settings. In the existing experiments (cf. Section 4), we randomly permute one of the input graphs before graph interpolation. This strategy is a 1-sample estimator of the group averaging and shows improvements over the state-of-the-art baselines. Here we conduct ablation experiments to show that: 1) using random permutation improves performance; 2) using more samples to approximate group averaging further improves performance; and 3) the improvement from more samples is only marginal and comes at the cost of increased complexity. We conduct unsupervised learning experiments on the TU datasets. We employ E-GCL with the BarlowTwins framework, set α = 0.1, and perform 5 experiments with different values ω ∈ {0.1, 0.2, 0.3, 0.4, 0.5}. We report the average and max values of the mean accuracies (%) over these 5 experiments. The forward time is measured across 100 batches of size 128 on the COLLAB dataset. Experimental Results. Table 6 shows that using random permutation consistently outperforms not using it (No rand. perm.). Specifically, the 1-sample estimator performs better while adding no computational cost over No rand. perm. Using more samples to estimate the group averaging further improves performance, and the 10-sample estimator performs best. However, its improvement over the 1-sample estimator is only marginal (0.05% ∼ 0.11%), while the time cost almost triples. We therefore recommend the 1-sample estimator in implementation.
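A minimal sketch of the k-sample group-averaging estimator over node permutations, assuming both graphs are already padded to a common size (the function name and the dense representation are our assumptions):

```python
import numpy as np

def permuted_mixes(x1, a1, x2, a2, lam, k, rng):
    """k-sample estimator of group averaging over node permutations.

    Each sample draws an independent random permutation, applies it to
    the second graph, and then linearly interpolates; the k mixed
    graphs are all encoded and their losses averaged (k = 1 recovers
    the default 1-sample setting used in the main experiments).
    x: node features [N, F]; a: weighted adjacency [N, N].
    """
    mixes = []
    for _ in range(k):
        perm = rng.permutation(x2.shape[0])
        x2p = x2[perm]
        a2p = a2[perm][:, perm]  # permute rows and columns consistently
        mixes.append((lam * x1 + (1 - lam) * x2p,
                      lam * a1 + (1 - lam) * a2p))
    return mixes
```

Since each extra sample requires one more forward pass through the encoder, the cost grows linearly in k, matching the roughly threefold time increase reported for the 10-sample estimator.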

D.3 INFLUENCE OF EQUIVARIANCE ON AGGRESSIVE AUGMENTATIONS.

Intra-graph augmentations are problematic in that they sometimes harmfully enforce insensitivity to semantically shifted graphs (i.e., aggressive augmentations). To patch this problem, cross-graph augmentations always enforce sensitivity to semantically shifted graphs generated by graph interpolation. Consequently, equivariance to cross-graph augmentations diminishes the harmful invariance to aggressive intra-graph augmentations that change global semantics, leading to better performance. To verify that equivariance can mitigate the negative effect of aggressive augmentations, we conduct experiments w.r.t. the Average Confusion Ratio (ACR) (Wang et al., 2022). ACR measures the ratio at which an anchor graph's nearest neighbors are views of different graphs rather than the other views of the same anchor. Higher ACR indicates that the graph representations are less able to distinguish different graphs, reflecting a stronger negative influence of aggressive augmentations. We use the GraphCL checkpoints from Table 2b and report the ACR scores on the ZINC15 dataset. As shown in Table 7, applying equivariance to cross-graph augmentation improves the ACR for GraphCL. This demonstrates that the equivariance principle mitigates the negative influence of aggressive augmentations, leading to better graph discrimination performance.
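A possible implementation of the ACR computation on two batches of view embeddings is sketched below. It assumes row i of both matrices comes from the same anchor graph and that embeddings are L2-normalized; this is our reading of the metric, not the exact code of Wang et al. (2022).

```python
import numpy as np

def average_confusion_ratio(z1, z2):
    """Sketch of the Average Confusion Ratio (ACR) metric.

    z1, z2: L2-normalized embeddings [N, D] of two augmented views of
    the same N graphs. For each anchor in view 1, find its nearest
    neighbor among all view-2 embeddings by cosine similarity; a
    'confusion' occurs when that neighbor is a view of a different
    graph. Lower is better.
    """
    sims = z1 @ z2.T                  # cosine similarities [N, N]
    nearest = sims.argmax(axis=1)     # nearest view-2 index per anchor
    n = z1.shape[0]
    return float((nearest != np.arange(n)).mean())
```

Perfectly discriminative representations score 0; a score near 1 means almost every anchor is closer to a view of some other graph than to its own second view.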

D.4 RESULTS IN THE FIRST SUBMISSION.

We include the results from our first submission for reference (Table 8). We replace Table 8 with Table 1 to report baseline performances under a consistent evaluation protocol.

E IMPLEMENTATION DETAILS

E.1 E-GCL PSEUDO CODE

Algorithm 1 presents the pseudo code of E-GCL.

Table 8: Main experiment performances. † denotes results borrowed from (Li et al., 2022). * denotes reproduced results using the released codes. Other baseline results are borrowed from (You et al., 2021). Bold indicates the best performance and underline indicates the second best performance. (a) Unsupervised learning accuracies (%) on the TU datasets.

E.2 IMPLEMENTATION

We implement GNNs with the PyTorch Geometric library (Fey & Lenssen, 2019), which is open-source under the MIT license. We conduct experiments on an NVIDIA V100 GPU (32 GB memory) in a server with a 40-core Intel CPU.
Baselines. For comparison, we report the performances of the following baseline methods:
• graph2vec (Narayanan et al., 2017) treats each graph as a document and employs a document-embedding approach to learn graph embeddings.
• Infomax (Veličković et al., 2019) maximizes the mutual information between patch representations, which summarize the subgraphs centered around nodes, and the global readout of graphs.
• ContextPred (Hu et al., 2020) predicts the surrounding graph structures from the embeddings of local subgraphs. The prediction problem is formulated as binary prediction with negative samples to sidestep the intractability of predicting graph structures directly.
• InfoGraph (Sun et al., 2020) learns representations by maximizing the mutual information between graph-level representations and graph substructures of different scales, e.g., nodes and edges.
• GraphCL (You et al., 2020) explores combinations of intra-graph augmentations, such as node dropping, edge dropping, and subgraph sampling, for contrastive graph representation learning.
• AD-GCL (Suresh et al., 2021) trains an augmenter that adversarially drops edges to remove redundant information.
• GraphLOG (Xu et al., 2021) uses clustering to contrast local instances against the corresponding hierarchical prototypes at every clustering layer.
• JOAO (You et al., 2021) automates the selection of graph augmentations by solving a bi-level optimization problem.
• GraphMAE (Hou et al., 2022) explores GNN pre-training by reconstructing node features with a masked autoencoder.
• RGCL (Li et al., 2022) learns a rationale generator that protects salient features during data augmentation.
Baseline Hyperparameters.
We have re-evaluated some baselines to present a consistent experimental comparison with E-GCL. In the re-evaluation, we report the test performance of the epoch selected on the validation set for unsupervised learning, and the last-epoch test performance for transfer learning. When reproducing baselines, we change only the evaluation setting and keep the other hyperparameters unchanged as much as possible. Table 12 summarizes the hyperparameter details. We now describe how the baseline hyperparameters were selected. For unsupervised learning, when the original paper provides a range of hyperparameters, we use the validation set to choose among them, and we use the same set of hyperparameters for all datasets. For transfer learning, we fine-tune the same pre-training checkpoint on all downstream datasets for a fair comparison.

E.4 DATASET STATISTICS

Table 13 and Table 14 present the statistics of the datasets used.

F COMPLEXITY ANALYSIS

The equivariance principle is computationally affordable. In this section, we analyze the complexity of GCL methods in two parts: 1) neural computation, and 2) data augmentation. Symbols. Formally, we define the symbols as follows: B ∈ Z+ is the batch size; N ∈ Z+ is the number of nodes in a graph; E ∈ Z+ is the number of edges in a graph; L ∈ Z+ is the number of GNN layers; k ∈ (0, 1) is the ratio of the cut subgraph for the subgraph augmentation; D ∈ Z+ is the maximum node degree in a graph; F ∈ Z+ is the feature dimension of nodes and edges. For graph interpolation of two graphs, let N, E, and D refer to the values of the larger graph. Complexity of Neural Computation. We consider the complexity of GNN encoding and of the SSL loss function, analyzed here for the GIN architecture; E-GCL uses the BarlowTwins framework. Let O(X) = O(2BL(NF² + EF)) be the complexity of encoding two batches of intra-graph augmented graphs. As shown in Table 15, E-GCL's complexity is comparable to RGCL's and at most twice that of BarlowTwins. The additional O(X) complexity of GNN encoding comes from encoding the cross-graph augmentation batch, whose size is at most the sum of the two intra-graph augmentation batches. Similarly, E-GCL's loss-function complexity is twice that of BarlowTwins. Complexity of Graph Augmentation. Table 16 shows the complexity of some popular graph augmentations (You et al., 2020; Han et al., 2022). Note that the complexity of our graph interpolation is linear in the graph size times the feature dimension; it is lower than G-Mixup's and scalable to large graphs with thousands of nodes. Although the complexity of graph interpolation is higher than that of Drop Node and Drop Edge, it remains scalable to large datasets. In practice, a PyTorch dataloader with 4 multiprocessing workers can process graph interpolation for batches of 2048 chemical molecules without leaving the GPU waiting.
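Using the symbols defined above, the at-most-twice bound on the encoding cost can be written out explicitly:

```latex
\underbrace{O\big(2BL(NF^2 + EF)\big)}_{\text{two intra-graph batches }=\,O(X)}
\;+\;
\underbrace{O\big(2BL(NF^2 + EF)\big)}_{\text{cross-graph batch (upper bound)}}
\;\le\; 2\,O(X),
```

where the second term is an upper bound because the mixed batch contains at most B graphs whose node and edge counts are bounded by those of the larger input graph in each pair.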



Figure 1: (a) invariance to intra-graph augmentations; (b) equivariance to cross-graph augmentations.
• Limiting the augmentations to local substructures of an individual graph is aggressive (Purushwalkam & Gupta, 2020; Wang et al., 2022) insofar as the augmented views fragmentarily or even wrongly describe the characteristics of the anchor graph. Take a molecule graph as an example. After randomly dropping some nodes, one view could retain a cyano group (-C≡N) that makes the molecule hypertoxic, while another could corrupt this functional group. Thus, intra-graph augmentations are inadequate for presenting a holistic view of the anchor graph.
• Worse still, aggressive augmentations easily push two positive views far apart, yet the invariance mechanism blindly forces their representations to match. Considering the molecule graph's views again (cf. Figure 1a), invariance-guided contrastive learning simply maximizes their representation agreement, regardless of the change in the hypertoxic property. It might therefore amplify the negative impact of aggressive intra-graph augmentations and keep representations from faithfully reflecting the instance semantics.

Figure 2: The framework of E-GCL. The GNN encoder learns invariance for intra-graph augmentation of dropNode and equivariance for cross-graph augmentation of graph interpolation.

Figure 3: Hyper-parameter sensitivity study in unsupervised learning.

Figure 4: The alignment and uniformity losses when pre-training with chemical molecules. Losses are evaluated every 100 pre-train steps and lower numbers are better. Arrows denote the losses' changing directions.

Main experiment performances. * denotes our reproduced results using the released codes. Other baseline results are borrowed from the original papers. Bold indicates the best performance and underline indicates the second best performance.(a) Unsupervised learning accuracies (%) on the TU datasets.

Generalization to diverse SSL frameworks. Red denotes equivariance improves performance.

Unsupervised learning accuracies (%) on the TU datasets.

Average fine-tuning performances (ROC-AUC (%)) in transfer learning downstream datasets.

Unsupervised learning accuracies (%) of E-GCL on the TU datasets.

Unsupervised learning performance in TU datasets. We ablate E-GCL using different numbers of samples to approximate group averaging. We report the average and max of mean accuracies for E-GCL with different ω = {0.1, 0.2, 0.3, 0.4, 0.5}. We set α = 0.1.

ACR with GraphCL backbone. Lower is better.

Transfer learning ROC-AUC (%) scores on the MoleculeNet. GTS denotes GraphTrans.

…7±0.7      73.9±0.7  62.4±0.6  60.5±0.9  76.0±2.6  69.8±2.7  78.5±1.2  75.4±1.4   70.8  3.8
JOAO        71.4±0.9  74.3±0.6  63.2±0.5  60.5±0.7  81.0±1.6  73.7±1.0  77.5±1.2  75.5±1.3   72.1  5.1
ADGCL†      68.3±1.0  73.6±0.7  63.1±0.7  59.2±0.9  77.6±4.2  74.9±2.5  75.4±1.3  75.0±1.9   70.9  3.9
GraphLOG†   71.0±1.9  74.6±0.6  62.3±0.5  57.9±1.4  78.7±2.6  75.0±2.0  75.2±2.0  82.6±1.2   72.2  5.2
GraphMAE*   72.2±0.9  75.1±0.4  63.0±0.3  58.5±0.7  80.5±2.0  75.7±1.2  76.4±0.8  81.3±1.0   72.8  5.8
RGCL†       71.4±0.7  75.2±0.3  61.4±0.6  61.4±0.6  83.4±0.9  76.7±1.0  77.9±0.8  76.0±0.8   73.2  6.2
E-GCL       72.3±0.6  74.9±0.7  64.0±0.3  62.8±0.5  83.1±2.5  78.8±0.8  76.3±0.6  78.1±1.1   73.8  6.8
E-GCL, GTS  72.3±0.8  77.9±0.6  66.0±0.6  62.4±1.0  80.7±3.0  79.4±2.1  77.8±1.1  79.7±2.4   74.5  7.5

Complexity of E-GCL and representative GCL baselines. O(X) = O(2BL(NF² + EF)).

Complexity of graph augmentations.

Availability: //anonymous.4open.science/

A PROOF

A.1 PROOFS ON GRAPH INTERPOLATION

Proof of that (I, C) forms a group. We first recall the definition of (I, C). When viewing g as the anchor being augmented, we can systemize the mixup as a combination of two steps: (1) feature rescaling: ĝ = λg, which rescales the node and edge features of g by the ratio λ; (2) instance composition: C(ĝ, ĝ′) = ĝ + ĝ′, which adds another rescaled graph ĝ′. Let Ĝ = {ĝ | λ ∈ [0, 1], g ∈ G} be the set of graphs enlarged via feature rescaling, and let I = <Ĝ> be the group generated by combining the graphs in Ĝ via instance composition. We show that (I, C) forms a group.
Proof. I is a set of graphs and C(•, •) is a binary instance-composition operation on I. It suffices to show that (I, C) satisfies the following conditions:
• Associativity. C(g1, C(g2, g3)) = g1 + (g2 + g3) = (g1 + g2) + g3 = C(C(g1, g2), g3) for all g1, g2, g3 ∈ I. Notice that the resulting node-feature matrices and adjacency matrices of both sides are equal.
• Existence of identity. Let e be the empty graph with no nodes and no edges; then C(g, e) = g = C(e, g) for all g ∈ I, so the empty graph e is the identity.
• Existence of inverse. For any graph g ∈ I, there exists a graph g⁻¹ ∈ I that has the same structure as g but with negated node features and edge weights, i.e., C(g, g⁻¹) = e.
Proposition 2. Assuming the encoder can detect the isomorphism of interpolated graphs, there exists a GNN encoder ϕ that is non-trivially equivariant to the graph interpolation transformation.
Proof. It suffices to show that the graph interpolation transformation satisfies the assumption of E-SSL's (Dangovski et al., 2022) Non-trivial Equivariance Proposition; we can then apply that proposition to conclude the proof. The assumption states that, "given P as the group of our interest, if ϕ(T_p(g)) = ϕ(T_p′(g′)), then g = g′ and p = p′ for all p, p′ ∈ P and g, g′ ∈ G."
We rewrite the assumption in the graph interpolation formulation of equivariance: if ϕ(λg1 + (1 − λ)g2) = ϕ(λ′g1′ + (1 − λ′)g2′), then the source graphs and interpolation coefficients coincide (possibly with the roles of the two source graphs exchanged). We now prove this assumption. We consider graphs without edge features. We have assumed that the GNN encoder can detect graph isomorphism of interpolated graphs; therefore, we can infer the equivalence of graphs from the equivalence of graph embeddings. By the Intrusion-Freeness Theorem of (Guo & Mao, 2021), the graph interpolation transformation is invertible, and the resulting pair of original graphs and the mixup coefficient are unique, i.e., equivalence of the interpolated graphs implies equivalence of the original graphs and of the interpolation coefficient λ. Thus, we have either 1) g1 = g1′, g2 = g2′, and λ = λ′, or 2) g1 = g2′, g2 = g1′, and λ = 1 − λ′. In either case, the assumption holds.
Discussion. Proposition 2 relies on a powerful GNN encoder that can detect the isomorphism of interpolated graphs. Such an encoder exists both in theory and in practice: 1) in theory, the universality of GNNs has been proved, i.e., they can approximate any function on graphs (Azizian & Lelarge, 2021); 2) in practice, Puny et al. (2022) have proposed a family of universal GNNs. The possible limitation lies in the complexity of these methods (Puny et al., 2022; Maron et al., 2019b). We have tested E-GCL with the GIN and GraphTrans architectures; GIN is at most as powerful as the 1-WL (Weisfeiler-Lehman) test.
We do not compare with DGCL (Li et al., 2021) because their experiments follow a different protocol: DGCL selects GNN layers, dimension sizes, and batch sizes based on test-set performance on each dataset, whereas the other baselines and our method use the same GNN configuration for all datasets. Re-implementation is also difficult because their source code has not been released.
Augmentations. Our intra-graph augmentation follows GraphCL (You et al., 2020). We use dropNode for the unsupervised learning experiments, and both dropNode and subgraph for the transfer learning experiments.
For the cross-graph augmentation of graph mixup, we include the self-loops of virtual nodes in the adjacency matrix, which yields better empirical performance. Before graph mixup, we randomly shuffle the node order of one of the input graphs so that the input graphs have random relative permutations, which also leads to slightly better empirical performance.
Implementation in Different SSL Frameworks. For the BarlowTwins and SimSiam backbones, which use no negative samples, E-GCL follows their original loss functions strictly. For the contrastive backbone GraphCL, E-GCL uses both intra-graph and cross-graph augmentations as negative samples to facilitate better cross-graph discrimination. Specifically, we have two types of embeddings for cross-graph augmentations: the feature-interpolation embeddings and the representation-interpolation embeddings. To avoid overfitting to one type of cross-graph augmentation, we use half of each type as the anchor graphs and the other half as the negatives; in the loss, 1(•) is a binary indicator that returns 1 when the condition holds and 0 otherwise. {m^3_j}^N_{j=1} contains half of the feature-interpolation embeddings and half of the representation-interpolation embeddings; {m^4_j}^N_{j=1} contains the other half. The loss for the GraphCL backbone combines an invariance term ℓ over the intra-graph views {z^1_j}, {z^2_j} with an analogous equivariance term over {m^3_j} and {m^4_j}. Notice that we treat sample pairs sharing partial graph identities (e.g., (m^3_j, z^1_j) and (m^3_j, m^4_π(j))) as semi-positives and exclude them from the negative samples.
Alignment and Uniformity Loss. We use dropNode as the intra-graph augmentation and graph interpolation as the cross-graph augmentation. We pre-train GraphCL for 1000 steps before evaluation; the GraphCL setup follows that in Table 10. We split the original pre-training dataset into two subsets: the 80% subset is used for pre-training, and 51200 samples from the remaining 20% subset are used for loss evaluation.
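The semi-positive exclusion described above can be sketched as a negative-sample mask. This is an illustrative helper of ours; the `sources` bookkeeping is an assumption, not the released implementation.

```python
import numpy as np

def semi_positive_mask(sources):
    """Sketch: exclude semi-positive pairs from the negative set.

    sources[i] is the set of original-graph indices behind embedding i
    (one index for an intra-graph view, two for a mixed graph). Pair
    (i, j) is a valid negative only if the two source sets are
    disjoint; pairs sharing any graph identity are semi-positives and
    are excluded. Returns an [M, M] boolean mask; True = usable
    negative.
    """
    m = len(sources)
    mask = np.zeros((m, m), dtype=bool)
    for i in range(m):
        for j in range(m):
            if i != j and not (sources[i] & sources[j]):
                mask[i, j] = True
    return mask
```

For example, a mixture of graphs j and π(j) is masked out against the view z^1_j and against any other mixture involving j or π(j), exactly the pairs called semi-positives above.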

E.3 HYPER-PARAMETERS

Unsupervised Learning. The hyper-parameters are shown in Table 9. We use the same three-layer GIN as (You et al., 2020). Following Zbontar et al. (2021), the BarlowTwins backbone uses a three-layer MLP projector whose hidden dimensions are four times the input dimension. Following You et al. (2020), we use a learning rate of 0.01, a batch size of 128, and no weight decay. For E-GCL, we conduct a grid search for α and ω over α ∈ {0.1, 0.4, 1, 2, 4} and ω ∈ {0.1, 0.2, 0.3, 0.4, 0.5}. We train the GNN for 60 epochs and evaluate the generated embeddings with non-linear SVMs every 10 epochs. Following You et al. (2020), we search for the SVM regularization parameter in {0.001, 0.01, 0.1, 1, 10, 100, 1000}. Following (You et al., 2021), we report the test accuracy of the epoch selected on the validation set.
Transfer Learning. The hyper-parameters are shown in Table 10. We use a large batch size of 2048 for BarlowTwins and SimSiam to speed up pre-training; the large batch size adds no complexity because BarlowTwins and SimSiam use no negative samples. Following Zbontar et al. (2021), BarlowTwins uses a three-layer MLP projector whose hidden dimensions are four times the input dimension. SimSiam (Chen & He, 2021) uses a two-layer MLP projector and a two-layer MLP predictor. Following Tian et al. (2021), we let the predictor network use a learning rate ten times that of the GNN encoder and the projector. The SimSiam backbone does not perform well with the default learning rate of 1e-3 and weight decay of 0, so we search its learning rate in {1e-3, 5e-4} and its weight decay in {1e-4, 1e-5, 5e-5}. We then fix the learning rate and weight decay and add the equivariance mechanism. For E-GCL, we search for α and ω over a random subset of α ∈ {0.1, 0.4, 1, 2, 4} and ω ∈ {0.1, 0.2, 0.3, 0.4, 0.5}.
We do not conduct a full grid search because the dataset is large. For fine-tuning, the pre-trained model is re-trained for 100 epochs on the transferred dataset; the fine-tuning learning rate is 1e-3 for GINE and 1e-4 for GraphTrans. Following (You et al., 2020), we report the test performance of the last epoch. Table 11 details the configurations of the GNNs we use: GINE is from (Hu et al., 2020), and GraphTrans follows the Molpcba experiment of (Wu et al., 2021).

