- Speaker: Zhengling Qi (Assistant Professor, George Washington University)
- Time: June 13, 2025, 11:00 (Beijing time)
- Venue: Zoom online meeting
Zoom link (Meeting ID: 94339838776; Password: 886452):
https://zoom.us/j/94339838776?pwd=Fc6x9hRpPimR6bBC1BT7bhYt5Uwtbo.1
Abstract: In-Context Learning (ICL) allows Large Language Models (LLMs) to adapt to new tasks with just a few examples, but their predictions often suffer from systematic biases, leading to unstable performance in classification. While calibration techniques have been proposed to mitigate these biases, we show that, in the logit space, many of these methods are equivalent to merely shifting the LLM's decision boundary, without the ability to alter its orientation. This proves inadequate when biases cause the LLM to be severely misdirected. To address these limitations and provide a unifying framework, we propose Supervised Calibration (SC), a loss-minimization-based framework that learns an optimal, per-class affine transformation of the LLM's predictive probabilities in the logit space. By using a more expressive functional class, SC not only subsumes many existing calibration methods in ICL as special cases but also makes it possible to alter, and even completely reverse, the orientation of the LLM's decision boundary. Furthermore, SC's loss-based nature facilitates the seamless integration of two purpose-built regularization techniques: a context-invariance regularizer and a directional trust-region regularizer. The former is designed to tackle the instability issue in ICL, while the latter controls the degree of calibration. Finally, SC delivers state-of-the-art performance over calibration baselines in the 4-shot, 8-shot, and 16-shot settings across all nine datasets for Mistral-7B-Instruct-v0.3, Llama-2-7B-chat, and Qwen2-7B-Instruct.
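To make the core idea concrete, here is a minimal sketch of learning a per-class affine transformation of logits by minimizing cross-entropy, in the spirit of the calibration approach described in the abstract. The function names, hyperparameters, and toy data are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def calibrate(logits, labels, lr=0.1, steps=500):
    """Learn a per-class scale w and shift b by gradient descent on
    mean cross-entropy; calibrated logits are z' = w * z + b."""
    n, k = logits.shape
    w = np.ones(k)
    b = np.zeros(k)
    onehot = np.eye(k)[labels]
    for _ in range(steps):
        p = softmax(w * logits + b)          # calibrated probabilities
        g = (p - onehot) / n                 # dLoss/dz' per sample
        w -= lr * (g * logits).sum(axis=0)   # chain rule through w * z
        b -= lr * g.sum(axis=0)
    return w, b

# Toy example: a biased "LLM" that systematically inflates class 0,
# so the uncalibrated decision boundary is badly misplaced.
rng = np.random.default_rng(0)
true = rng.integers(0, 2, size=200)
raw = np.where(true[:, None] == np.arange(2), 1.0, -1.0)
raw += rng.normal(0, 0.5, (200, 2))
raw[:, 0] += 3.0                             # systematic bias toward class 0
w, b = calibrate(raw, true)
acc_before = (raw.argmax(1) == true).mean()
acc_after = ((w * raw + b).argmax(1) == true).mean()
```

This toy version only shifts and scales each class's logit; the talk's SC framework additionally adds the two regularizers (context-invariance and directional trust-region) on top of such a loss-based formulation.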
Bio: Zhengling Qi is an Assistant Professor of Decision Sciences at George Washington University. He received his PhD from the Department of Statistics and Operations Research at the University of North Carolina at Chapel Hill. His research focuses on statistical machine learning and related non-convex optimization. He currently works mainly on reinforcement learning and causal inference problems.