iml: Interpretable Machine Learning

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018) <doi:10.48550/arXiv.1801.01489>, accumulated local effects plots described by Apley (2018) <doi:10.48550/arXiv.1612.08468>, partial dependence plots described by Friedman (2001) <www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013) <doi:10.1080/10618600.2014.907095>, local models (a variant of 'lime') described by Ribeiro et al. (2016) <doi:10.48550/arXiv.1602.04938>, the Shapley value described by Strumbelj et al. (2014) <doi:10.1007/s10115-013-0679-x>, feature interactions described by Friedman et al. (2008) <doi:10.1214/07-AOAS148>, and tree surrogate models.
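
The workflow centers on wrapping a fitted model in a Predictor object and then applying the individual interpretation methods to it. Below is a minimal sketch, assuming the R6-style classes documented in the reference manual (Predictor, FeatureImp, FeatureEffect, Shapley) and using a random forest on the Boston housing data purely for illustration; adapt the model and data to your own setting.

    library(iml)
    library(randomForest)

    # Fit any supervised model; iml is model-agnostic
    data("Boston", package = "MASS")
    rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

    # Wrap model and data so all interpretation methods share one interface
    X <- Boston[, setdiff(names(Boston), "medv")]
    predictor <- Predictor$new(rf, data = X, y = Boston$medv)

    # Permutation feature importance (Fisher et al., 2018)
    imp <- FeatureImp$new(predictor, loss = "mae")
    plot(imp)

    # Accumulated local effects for a single feature (Apley, 2018)
    ale <- FeatureEffect$new(predictor, feature = "lstat", method = "ale")
    plot(ale)

    # Shapley values explaining one prediction (Strumbelj et al., 2014)
    shap <- Shapley$new(predictor, x.interest = X[1, ])
    plot(shap)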

Version: 0.11.4
Published: 2025-02-24
Author: Giuseppe Casalicchio [aut, cre], Christoph Molnar [aut], Patrick Schratz [aut]
Maintainer: Giuseppe Casalicchio <giuseppe.casalicchio at lmu.de>
License: MIT + file LICENSE
NeedsCompilation: no
Citation: iml citation info
Materials: NEWS
In views: MachineLearning
CRAN checks: iml results

Documentation:

Reference manual: iml.html, iml.pdf

Downloads:

Package source: iml_0.11.4.tar.gz
Windows binaries: r-devel: iml_0.11.4.zip, r-release: iml_0.11.4.zip, r-oldrel: iml_0.11.4.zip
macOS binaries: r-release (arm64): iml_0.11.4.tgz, r-oldrel (arm64): iml_0.11.4.tgz, r-release (x86_64): iml_0.11.4.tgz, r-oldrel (x86_64): iml_0.11.4.tgz
Old sources: iml archive

Reverse dependencies:

Reverse imports: counterfactuals, fastml, moreparty, PEAXAI

Linking:

Please use the canonical form https://CRAN.R-project.org/package=iml to link to this page.