Prompt and Chaining

After the first paper that has demonstrated that language models are great few-shot learners.

Language Models are Few-Shot Learners

Prompting refers to an idea of using a customized function to help big language models to become effective on down-stream tasks.

Consider a pre-trained language model now is working on a prediction task that produces a class-wise prediction likelihood of $p(y|x_i)$, we can effective have a prompting function $f_{prompt}$ so that now the prediction task becomes $p(y|f_{prompt}(x_i))$.

There are a few great papers in 2020 and 2021, describing in details how prompting would work to help boost the performance of large language models such as GPT3:

Differentiable Prompt Makes Pre-trained Language Models Better...

GitHub - ucinlp/autoprompt: AutoPrompt: Automatic Prompt Construction for Masked Language Models.

AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts | Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems

Vulnerability of the model

After the release of ChatGPT3, prompting becomes almost the default method for human to interact with large language models.

There are obvious ways to use prompts to ignore your inputs, this is known as prompt injection: