Fairness Indicators

Fairness Indicators is a library that makes it easy to compute commonly identified fairness metrics for binary and multiclass classifiers. With the Fairness Indicators tool suite, you can:

  • Compute commonly-identified fairness metrics for classification models
  • Compare model performance across subgroups to a baseline, or to other models
  • Use confidence intervals to surface statistically significant disparities
  • Perform evaluation over multiple thresholds
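To make the capabilities above concrete, here is an illustrative from-scratch sketch of one such metric: false positive rate computed per subgroup at several decision thresholds. This is not the library's implementation — Fairness Indicators computes this (and many other metrics) for you — and the function names and data here are hypothetical.

```python
def false_positive_rate(labels, scores, threshold):
    """FPR = FP / (FP + TN), i.e. the share of negative-labeled
    examples that score at or above the decision threshold."""
    negative_scores = [s for label, s in zip(labels, scores) if label == 0]
    if not negative_scores:
        return 0.0
    false_positives = sum(1 for s in negative_scores if s >= threshold)
    return false_positives / len(negative_scores)

def fpr_by_group(labels, scores, groups, thresholds=(0.25, 0.5, 0.75)):
    """Map each (group, threshold) pair to the FPR on that slice,
    so per-subgroup disparities can be compared at each threshold."""
    results = {}
    for group in sorted(set(groups)):
        idx = [i for i, g in enumerate(groups) if g == group]
        group_labels = [labels[i] for i in idx]
        group_scores = [scores[i] for i in idx]
        for t in thresholds:
            results[(group, t)] = false_positive_rate(group_labels, group_scores, t)
    return results

# Tiny synthetic example: two subgroups, "a" and "b".
labels = [0, 0, 1, 0, 1, 0]
scores = [0.9, 0.2, 0.8, 0.6, 0.4, 0.1]
groups = ["a", "a", "a", "b", "b", "b"]
print(fpr_by_group(labels, scores, groups))
```

Comparing the resulting slice metrics against a baseline slice (the empty `slicing_specs {}` below denotes the overall dataset) is what surfaces subgroup disparities.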

Use Fairness Indicators by adding the FairnessIndicators metric to a TensorFlow Model Analysis eval config, for example:

eval_config_pbtxt = """
model_specs {
 label_key: "%s"
}
metrics_specs {
 metrics {
 class_name: "FairnessIndicators"
 config: '{ "thresholds": [0.25, 0.5, 0.75] }'
 }
 metrics {
 class_name: "ExampleCount"
 }
}
slicing_specs {}
slicing_specs {
 feature_keys: "%s"
}
options {
 compute_confidence_intervals { value: False }
 disabled_outputs{values: "analysis"}
}
""" % (LABEL_KEY, GROUP_KEY)

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2020-11-16 UTC.