Department of Computer Science and Technology

Technical reports

Probing the foundations of neural algorithmic reasoning

Euan Ong

December 2023, 75 pages

This technical report is based on a dissertation submitted in May 2023 by the author for the degree of Bachelor of Arts (Computer Science Tripos) to the University of Cambridge, Trinity College.

DOI: https://doi.org/10.48456/tr-990

Abstract

While the field of neural algorithmic reasoning (NAR) — training neural networks to imitate algorithms and using them as algorithmic inductive biases in real-world problems — has risen in popularity, there has been no investigation confirming that its fundamental claims hold in general. Indeed, we argue that such an investigation has so far been infeasible: the lack of a general, extensible library creates a very high barrier to entry for reproductions and systematic studies.

As such, we develop an extensible laboratory for NAR, by introducing a novel framework for multi-domain, type-driven, declarative ML, and using its components to derive flexible NAR pipelines from first principles through the paradigm of representations-as-types. We use this laboratory to perform systematic analyses, reproductions and comparisons of prior work in NAR, matching (and often beating) state-of-the-art performance across various domains by identifying and alleviating bottlenecks across popular NAR frameworks and architectures.
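To make the representations-as-types paradigm concrete, the following is a minimal Python sketch of how typed pipeline stages might compose; the class and function names here (Stage, Image, Latent, Output) are our own illustration, not the report's actual API.

# A hypothetical sketch of "representations-as-types": each representation
# is a distinct type, and every pipeline stage declares the representations
# it maps between, so composition can be checked before any data flows.
from dataclasses import dataclass
from typing import Callable, Generic, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

@dataclass
class Image:    # raw input representation
    pixels: list

@dataclass
class Latent:   # abstract algorithmic representation
    features: list

@dataclass
class Output:   # task output representation
    values: list

@dataclass
class Stage(Generic[A, B]):
    """A pipeline stage transforming representation A into representation B."""
    fn: Callable[[A], B]

    def __call__(self, x: A) -> B:
        return self.fn(x)

    def then(self, nxt: "Stage[B, C]") -> "Stage[A, C]":
        # Composition is only well-typed when the representations line up,
        # so a static type checker rejects ill-formed pipelines.
        return Stage(lambda x: nxt(self(x)))

encoder = Stage[Image, Latent](lambda img: Latent([sum(img.pixels)]))
processor = Stage[Latent, Latent](lambda z: Latent([2 * f for f in z.features]))
decoder = Stage[Latent, Output](lambda z: Output(z.features))

pipeline = encoder.then(processor).then(decoder)
print(pipeline(Image(pixels=[1, 2, 3])))  # Output(values=[12])

Because each stage declares the representation it consumes and the one it produces, a pipeline is derived by composition rather than hand-wired, which is the declarative flavour the abstract describes.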

We then conduct a systematic investigation into the fundamental claims of NAR, in the context of a new synthetic dataset inspired by recent work in neural algorithmics. Through a series of statistically robust ablation tests, we confirm the established result that algorithmic modules beat non-algorithmic baselines, but we also find evidence refuting one of the central claims of NAR: neural algorithmic processors (NAPs) do not overcome the ‘scalar bottleneck’ of differentiable algorithmic black-boxes (sDABs).
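For context, an ablation of this kind might compare matched runs of the two module types across random seeds with a paired nonparametric test; the sketch below is a generic illustration with placeholder scores, not the report's actual protocol or data.

# A generic sketch of a statistically robust ablation: compare an algorithmic
# module against a non-algorithmic baseline over matched random seeds using a
# paired nonparametric test, rather than a single-run comparison.
# The per-seed scores below are placeholders, not results from the report.
from statistics import mean
from scipy.stats import wilcoxon

baseline_scores    = [0.61, 0.58, 0.63, 0.60, 0.59]  # baseline, seeds 0..4
algorithmic_scores = [0.71, 0.69, 0.74, 0.70, 0.72]  # algorithmic module, seeds 0..4

stat, p = wilcoxon(algorithmic_scores, baseline_scores)
print(f"mean uplift: {mean(algorithmic_scores) - mean(baseline_scores):+.3f}, p = {p:.4f}")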

Based on our observations, we develop a new hypothesis to replace this claim: that sDABs instead suffer from an ‘ensembling bottleneck’, an inability to execute multiple instances of the same algorithm in parallel, which is alleviated not by NAPs but simply by an unfrozen, structurally-aligned neural network. Through exploring the effects of parallelising sDABs, we not only find strong evidence in support of this hypothesis, but also achieve a long-standing goal of neural algorithmics: a way to deterministically distill an algorithm into a robust, high-dimensional processor network that preserves both the efficiency and correctness guarantees of sDABs while avoiding their performance bottleneck.
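A minimal sketch of what parallelising an sDAB might look like, assuming the black-box wraps a single frozen algorithm (prefix sums here as a stand-in); all names and the perturb-and-average scheme are our own illustration, not the report's method.

import numpy as np

def algorithm(x: np.ndarray) -> np.ndarray:
    """The frozen algorithm inside the black-box (prefix sums here,
    standing in for any fixed algorithmic routine)."""
    return np.cumsum(x)

def scalar_dab(z: np.ndarray) -> np.ndarray:
    """A scalar DAB: a single instance of the algorithm runs per forward
    pass, on one low-dimensional trajectory through the latent."""
    return algorithm(z)

def parallel_dab(z: np.ndarray, copies: int = 8, noise: float = 0.1,
                 seed: int = 0) -> np.ndarray:
    """A parallelised DAB: execute `copies` independent instances of the
    same algorithm on perturbed views of the latent, then aggregate,
    approximating the implicit ensemble an unfrozen, structurally-aligned
    processor network can represent."""
    rng = np.random.default_rng(seed)
    views = z + noise * rng.standard_normal((copies, z.size))
    return np.stack([algorithm(v) for v in views]).mean(axis=0)

z = np.array([1.0, 2.0, 3.0])
print(scalar_dab(z))    # one instance: [1. 3. 6.]
print(parallel_dab(z))  # average over 8 noisy instances

The point of the contrast: the scalar DAB commits to one trajectory through the algorithm, whereas the parallelised variant behaves like an ensemble of instances, which is exactly the capacity the hypothesised ‘ensembling bottleneck’ denies to a single sDAB.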

Full text

PDF (5.0 MB)

BibTeX record

@TechReport{UCAM-CL-TR-990,
  author      = {Ong, Euan},
  title       = {{Probing the foundations of neural algorithmic reasoning}},
  year        = 2023,
  month       = dec,
  url         = {https://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-990.pdf},
  institution = {University of Cambridge, Computer Laboratory},
  doi         = {10.48456/tr-990},
  number      = {UCAM-CL-TR-990}
}