Knowledge Distillation

Knowledge distillation refers to a range of techniques used to "distill" the knowledge of a large ML model into a smaller one. The larger model is often referred to as the "teacher" model, and the smaller one as the "student" (or distilled, or compressed) model. In the classic setup, the student is trained to match the teacher's output distribution ("soft targets"), usually alongside the standard loss on the ground-truth labels.
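
As a rough illustration, the sketch below implements the classic soft-target distillation loss in PyTorch: a KL-divergence term between temperature-softened teacher and student outputs, blended with ordinary cross-entropy on the labels. The function name and the `temperature`/`alpha` values are illustrative choices, not from the text above; real teacher and student models are stood in for by random tensors.

```python
# Minimal sketch of soft-target knowledge distillation (hypothetical helper, assuming PyTorch).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend cross-entropy on hard labels with a KL term that pushes the
    student's softened output distribution toward the teacher's."""
    # Soften both distributions with the temperature.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL divergence from teacher to student; the T^2 factor keeps gradient
    # magnitudes comparable across temperatures.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Standard supervised loss on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Usage with random tensors standing in for real teacher/student outputs.
if __name__ == "__main__":
    batch, num_classes = 8, 10
    student_logits = torch.randn(batch, num_classes, requires_grad=True)
    teacher_logits = torch.randn(batch, num_classes)  # teacher is frozen (no grad)
    labels = torch.randint(0, num_classes, (batch,))
    loss = distillation_loss(student_logits, teacher_logits, labels)
    loss.backward()
    print(loss.item())
```

In practice the teacher runs in eval mode under `torch.no_grad()`, and `alpha` and the temperature are tuned per task; higher temperatures expose more of the teacher's "dark knowledge" about relative class similarities.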
Related concepts:
Shrink and Fine-Tune