Thursday, December 18, 2025

Efficient Calibration for Decision Making


A decision-theoretic characterization of perfect calibration is that an agent seeking to minimize a proper loss in expectation cannot improve their outcome by post-processing a perfectly calibrated predictor. Hu and Wu (FOCS'24) use this to define an approximate calibration measure called calibration decision loss (CDL), which measures the maximal improvement achievable by any post-processing, over any proper loss. Unfortunately, CDL turns out to be intractable to even weakly approximate in the offline setting, given black-box access to the predictions and labels. We propose circumventing this by restricting attention to structured families of post-processing functions K. We define the calibration decision loss relative to K, denoted CDL_K, where we consider all proper losses but restrict post-processings to a structured family K. We develop a comprehensive theory of when CDL_K is information-theoretically and computationally tractable, and use it to prove both upper and lower bounds for natural classes K. In addition to introducing new definitions and algorithmic techniques to the theory of calibration for decision making, our results give rigorous guarantees for some widely used recalibration procedures in machine learning.
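To make the restricted quantity concrete, here is a minimal sketch, not taken from the paper, of how one could lower-bound CDL_K from data: fix a single proper loss (squared error) and a hypothetical family K of "bucket re-mapping" post-processings, fit the best member of K empirically, and measure how much it improves the loss over the raw predictions. The function names (bucket_recalibration, cdl_lower_bound) and the choice of family are illustrative assumptions, and fitting and evaluating on the same sample gives only an in-sample estimate.

```python
import numpy as np

def squared_loss(p, y):
    # Squared error is a proper loss for binary outcomes.
    return (p - y) ** 2

def bucket_recalibration(preds, labels, n_buckets=10):
    """Fit a post-processing kappa from an assumed family K of bucket
    re-mappings: each prediction is replaced by the empirical label mean
    of its bucket (a simple, widely used recalibration step)."""
    edges = np.linspace(0.0, 1.0, n_buckets + 1)
    bucket = np.clip(np.digitize(preds, edges) - 1, 0, n_buckets - 1)
    remap = np.full(n_buckets, np.nan)
    for b in range(n_buckets):
        mask = bucket == b
        if mask.any():
            remap[b] = labels[mask].mean()
    # Empty buckets keep (roughly) the identity map via bucket midpoints.
    mids = (edges[:-1] + edges[1:]) / 2
    remap = np.where(np.isnan(remap), mids, remap)
    return remap[bucket]

def cdl_lower_bound(preds, labels, loss=squared_loss, n_buckets=10):
    """Improvement of the fitted kappa over the raw predictor under one
    proper loss -- a lower bound on CDL_K for this restricted family."""
    post = bucket_recalibration(preds, labels, n_buckets)
    return loss(preds, labels).mean() - loss(post, labels).mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A miscalibrated predictor: it reports p while the true label
    # probability is p**2, so post-processing can help.
    preds = rng.uniform(size=50_000)
    labels = (rng.uniform(size=preds.size) < preds ** 2).astype(float)
    print(f"estimated CDL_K lower bound: {cdl_lower_bound(preds, labels):.4f}")
```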
