Learn then test: Calibrating predictive algorithms to achieve risk control
Anastasios N. Angelopoulos, Stephen Bates, Emmanuel J. Candès, Michael I. Jordan, Lihua Lei
Author Affiliations
Anastasios N. Angelopoulos (1), Stephen Bates (2), Emmanuel J. Candès (3), Michael I. Jordan (1), Lihua Lei (4)
(1) Department of Electrical Engineering and Computer Science, University of California, Berkeley
(2) Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
(3) Department of Statistics, Stanford University
(4) Stanford Graduate School of Business
Ann. Appl. Stat. 19(2): 1641-1662 (June 2025). DOI: 10.1214/24-AOAS1998

Abstract

We introduce a framework for calibrating machine learning models to satisfy finite-sample statistical guarantees. Our calibration algorithms work with any model and (unknown) data-generating distribution and do not require retraining. The algorithms address, among other examples, false discovery rate control in multilabel classification, intersection-over-union control in instance segmentation, and simultaneous control of the type-1 outlier error and confidence set coverage in classification or regression. Our main insight is to reframe risk control as multiple hypothesis testing, enabling different mathematical arguments. We demonstrate our algorithms with detailed worked examples in computer vision and tabular medical data. The computer vision experiments demonstrate the utility of our approach in calibrating state-of-the-art predictive architectures that have been deployed widely, such as the detectron2 object detection system.
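To make the central idea concrete, the sketch below illustrates one simple instantiation of calibrating a model by multiple hypothesis testing: for each candidate threshold, a Hoeffding-style p-value is computed for the null hypothesis that the risk at that threshold exceeds the target level, and a Bonferroni correction selects the thresholds whose nulls are rejected. This is an illustrative toy, not the authors' implementation; the function name ltt_calibrate, the synthetic data, and the specific Hoeffding-plus-Bonferroni combination are assumptions chosen for brevity, and the paper develops more powerful p-values and testing procedures.

```python
import numpy as np

def ltt_calibrate(losses, alpha=0.1, delta=0.1):
    """Illustrative Learn-then-Test-style calibration (not the authors' code).

    losses : array of shape (n, m), entries in [0, 1];
             losses[i, j] is the loss of calibration example i at candidate threshold j.
    alpha  : target risk level to control.
    delta  : tolerated probability that a returned threshold violates the risk bound.

    Returns indices of thresholds whose null hypothesis
    "risk(lambda_j) > alpha" is rejected, i.e. thresholds certified to have
    risk <= alpha simultaneously with probability >= 1 - delta.
    """
    n, m = losses.shape
    risk_hat = losses.mean(axis=0)  # empirical risk at each candidate threshold

    # One-sided Hoeffding p-value for H0_j: risk(lambda_j) > alpha,
    # valid because the losses are bounded in [0, 1].
    p_values = np.exp(-2.0 * n * np.clip(alpha - risk_hat, 0.0, None) ** 2)

    # Bonferroni correction controls the family-wise error rate at level delta,
    # so every returned threshold satisfies the risk bound jointly.
    return np.where(p_values <= delta / m)[0]

# Tiny synthetic demo (assumed setup): column j has Bernoulli(lambda_j) losses,
# so the true risk at threshold lambda_j is exactly lambda_j.
rng = np.random.default_rng(0)
lambdas = np.linspace(0.0, 1.0, 50)
losses = (rng.random((2000, 50)) < lambdas).astype(float)
valid = ltt_calibrate(losses, alpha=0.2, delta=0.1)
print("certified thresholds:", lambdas[valid])
```

In this toy run, only thresholds whose empirical risk sits comfortably below 0.2 survive the correction, which is the intended behavior: the selected set errs on the side of conservatism so that the risk guarantee holds without retraining the underlying model.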

Funding Statement

This work was supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-21-1-2840.

Acknowledgments

The authors would like to thank the anonymous referees, an Associate Editor, and the Editor for their constructive comments that improved the quality of this paper. Lihua Lei is grateful for the support of National Science Foundation Grant DMS-2338464.

Citation

Download Citation

Anastasios N. Angelopoulos. Stephen Bates. Emmanuel J. Candès. Michael I. Jordan. Lihua Lei. "Learn then test: Calibrating predictive algorithms to achieve risk control." Ann. Appl. Stat. 19 (2) 1641 - 1662, June 2025. https://doi.org/10.1214/24-AOAS1998

Information

Received: 1 July 2023; Revised: 1 November 2024; Published: June 2025
First available in Project Euclid: 28 May 2025

Digital Object Identifier: 10.1214/24-AOAS1998

Keywords: Computer vision, conformal prediction, deep learning, machine learning

Rights: Copyright © 2025 Institute of Mathematical Statistics
