Task Contamination
Task contamination is a form of data leakage that inflates the perceived capabilities of large language models (LLMs). An LLM that performs well on an N-shot learning task may owe that performance to examples of the task appearing in its training data; if so, the model is not genuinely acting as an N-shot learner.
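A common heuristic for detecting this kind of leakage is to search the training corpus for long n-gram overlaps with benchmark items: a shared long n-gram suggests the benchmark example, or a near-verbatim copy of it, was seen during training. The following is a minimal sketch of such a check; the 13-word window, the function names, and the assumption of plain-text access to both the benchmark and the training corpus are illustrative choices, not a fixed standard.

```python
from typing import Iterable, Set, Tuple


def ngrams(text: str, n: int = 13) -> Set[Tuple[str, ...]]:
    """Return the set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def is_contaminated(benchmark_example: str,
                    training_corpus: Iterable[str],
                    n: int = 13) -> bool:
    """Flag a benchmark example whose n-grams also appear in training data.

    Even a single shared long n-gram suggests the example (or a close
    copy of it) was seen during training, in which case N-shot results
    on that example may be inflated.
    """
    target = ngrams(benchmark_example, n)
    if not target:
        return False
    return any(target & ngrams(document, n) for document in training_corpus)
```

Note that an overlap check like this only catches verbatim or near-verbatim leakage; a paraphrased or translated copy of a benchmark task would evade it, so the absence of n-gram overlap does not by itself rule out task contamination.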