PROMPTING GPT-3 TO BE RELIABLE

Abstract

Large language models (LLMs) show impressive abilities via few-shot prompting. Commercialized APIs such as OpenAI GPT-3 further increase their use in real-world language applications. However, the crucial problem of how to improve the reliability of GPT-3 is still under-explored. While reliability is a broad and vaguely defined term, we decompose it into four main facets that correspond to the existing framework of ML safety and are well recognized as important: generalizability, social biases, calibration, and factuality. Our core contribution is to establish simple and effective prompts that improve GPT-3's reliability as it: 1) generalizes out-of-distribution, 2) balances demographic distributions and uses natural language instructions to reduce social biases, 3) calibrates output probabilities, and 4) updates the LLM's factual knowledge and reasoning chains. With appropriate prompts, GPT-3 is more reliable than smaller-scale supervised models on all these facets. We release all processed datasets, evaluation scripts, and model predictions. Our systematic empirical study not only offers new insights into the reliability of prompting LLMs, but, more importantly, our prompting strategies can help practitioners use LLMs like GPT-3 more reliably.

1. INTRODUCTION

NLP is dominated by large language models (LLMs), pretrained on large, unlabeled text data and then applied to downstream tasks (Devlin et al., 2019a; Brown et al., 2020). Scaling the model and data size often brings gains on downstream tasks (Kaplan et al., 2020; BIG-Bench, 2022), enabling what some call emergent abilities (Wei et al., 2022a). These emergent behaviors are elicited through prompting: crafted natural language text that shapes predictions or supplies relevant information without expensive supervised data. Among existing LLMs, GPT-3 (Brown et al., 2020) is particularly popular due to its flexibility and ease of use through the OpenAI API.

Existing empirical studies investigate GPT-3 on specific tasks such as mathematical reasoning (Hendrycks et al., 2021a), multi-hop reasoning (Wei et al., 2022b; Kojima et al., 2022), and code generation (Chen et al., 2021a). However, rising numbers on these evaluations do not ensure LLM reliability. For example, LLMs (including GPT-3) produce biased generations (Lucy & Bamman, 2021), false statements (Lin et al., 2022b), and outdated information (Chen et al., 2021b; Kasai et al., 2022). Deploying such models in the real world could result in catastrophic harm.

In the context of prompting LLMs, several previous works have explored reliability. For example, the release reports of GPT-3 (Brown et al., 2020), OPT (Zhang et al., 2022), Gopher (Rae et al., 2021), and PaLM (Chowdhery et al., 2022) include dedicated experiments evaluating these LLMs' representational bias and toxicity. Another line of work has evaluated the calibration (Lin et al., 2022a; Kadavath et al., 2022) of prompted LLMs on math or multiple-choice questions. We differ from these prior works in two key aspects: (i) We perform a more comprehensive study of four core facets of reliability, serving as a meta-analysis.
(ii) We focus particularly on finding prompting strategies that are effective under these reliability facets, rather than only evaluating intrinsic model characteristics (Figure 1).

Our reliability testing framework takes inspiration from the survey of unsolved problems in ML safety (Hendrycks et al., 2021b): withstanding hazards (generalizability), identifying hazards (calibration), and steering ML systems and reducing deployment hazards (reducing social biases and improving factuality). These facets also address the risks of ML systems identified in existing conceptual frameworks (Tan et al., 2022; 2021). We provide a more extensive discussion of related work in Appendix Section A.

As summarized in Figure 1, our simple prompting strategies beat smaller-scale supervised models on all the reliability metrics we consider: 1) prompting with randomly sampled examples from the source domain allows GPT-3 to generalize robustly to unseen domains and challenge examples; 2) sampling examples from a balanced demographic distribution and natural language intervention reduce social biases; 3) language model probabilities are calibrated to reflect accuracy; and 4) appending up-to-date knowledge can supplant GPT-3's memorized knowledge or reasoning chains.
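As a concrete sketch of the first two strategies above, the demonstration set can be sampled uniformly at random from the source domain while being balanced across demographic groups, with a natural-language intervention prepended to the prompt. The snippet below is a minimal illustration only; the task format, field names, group attributes, and intervention wording are our own assumptions, not the paper's exact prompts.

```python
import random

# Hypothetical labeled pool from the source domain; each demonstration carries
# a (made-up) demographic attribute so the prompt can be balanced across groups.
pool = [
    {"text": "She delivered a brilliant talk.", "label": "positive", "group": "female"},
    {"text": "She gave a tedious talk.",        "label": "negative", "group": "female"},
    {"text": "He delivered a brilliant talk.",  "label": "positive", "group": "male"},
    {"text": "He gave a tedious talk.",         "label": "negative", "group": "male"},
]

# Natural language intervention (illustrative wording, not the paper's exact text).
INTERVENTION = "We should treat people from different groups equally."

def build_prompt(pool, query, k_per_group=1, seed=0):
    """Strategies 1+2: randomly sample demos, balanced over demographic groups."""
    rng = random.Random(seed)
    groups = sorted({d["group"] for d in pool})
    demos = []
    for g in groups:
        demos += rng.sample([d for d in pool if d["group"] == g], k_per_group)
    rng.shuffle(demos)  # avoid a fixed group ordering in the prompt
    lines = [INTERVENTION, ""]
    for d in demos:
        lines += [f"Input: {d['text']}", f"Label: {d['label']}", ""]
    lines += [f"Input: {query}", "Label:"]
    return "\n".join(lines)
```

`build_prompt(pool, "They delivered a brilliant talk.")` returns the full prompt string to send to a completion API. Per the paper's findings, plain random sampling from the source domain already suffices for robust out-of-distribution generalization; the balanced sampling and the intervention sentence target the social-bias facet.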

2. FACET 1: GENERALIZABILITY

LLMs are often criticized for missing the forest for the trees: they overfit training data from a particular domain (domain shift), are not robust to minor changes in a text (perturbations), or use shortcuts to make predictions (spurious correlations). These pathologies make models unreliable because such distribution shifts happen all the time in real-world data and can incur significant performance drops. In this section, we study whether GPT-3 stays robust when the test data come from a different distribution than the demonstration examples in the prompt, and how its generalization compares to that of supervised models.
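To make the spurious-correlation failure mode concrete: a model can score well on NLI training data by learning a lexical-overlap shortcut, which a minimal word reordering breaks. The pairs below are hypothetical examples written in the style of HANS (McCoy et al., 2019), not items from the dataset.

```python
# Hypothetical premise/hypothesis pairs in the style of HANS.
# Format: (premise, hypothesis, gold label).
pairs = [
    ("The doctor paid the actor.", "The doctor paid the actor.", "entailment"),
    ("The doctor paid the actor.", "The actor paid the doctor.", "non-entailment"),
]

def overlap_heuristic(premise: str, hypothesis: str) -> str:
    """Shortcut: predict entailment iff every hypothesis word occurs in the premise."""
    p = set(premise.lower().strip(".").split())
    h = set(hypothesis.lower().strip(".").split())
    return "entailment" if h <= p else "non-entailment"

for premise, hypothesis, gold in pairs:
    pred = overlap_heuristic(premise, hypothesis)
    print(f"{hypothesis!r}: gold={gold}, shortcut predicts {pred}")
```

The heuristic labels both pairs "entailment" because the two sentences share exactly the same words, yet the second pair is a non-entailment: a model relying on this shortcut fails on precisely the reordered cases that challenge sets like HANS and PAWS are built from.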

Experiment Setup

All processed datasets, evaluation scripts, and model predictions are available at https://github.com/NoviScl/GPT3-Reliability. By default, we use the CODE-DAVINCI-002 model (also known as Codex or GPT-3.5) in our experiments unless otherwise specified, because our preliminary results show that it is the most accurate model on most of the NLP datasets we tried.

Figure 1: Four main reliability facets we examined and the core findings.

We study all three types of distribution shifts mentioned above. For each of them, researchers have created datasets that target modern language models' weaknesses, which we adopt for evaluation. For domain shift, MRQA (Fisch et al., 2019) trains on six machine reading datasets from the source domain and tests on six different target domains; for perturbations, AdvGLUE (Wang et al., 2021) crafts adversarial versions of GLUE (Wang et al., 2018) based on automatic adversarial perturbations and human filtering, and Contrast Sets (Gardner et al., 2020) are expert-authored minimal edits that change the label; for spurious correlations, HANS (McCoy et al., 2019) and PAWS (Zhang et al., 2019) are challenge sets designed for models trained on MNLI and

