
Introducing the set_output API

This example demonstrates the set_output API, which configures transformers to output pandas DataFrames. set_output can be configured per estimator by calling the set_output method, or globally by setting set_config(transform_output="pandas"). For details, see SLEP018.

First, we load the iris dataset as a DataFrame to demonstrate the set_output API.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(as_frame=True, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
X_train.head()
     sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
60                 5.0               2.0               3.5               1.0
1                  4.9               3.0               1.4               0.2
8                  4.4               2.9               1.4               0.2
93                 5.0               2.3               3.3               1.0
106                4.9               2.5               4.5               1.7


To configure an estimator such as preprocessing.StandardScaler to return DataFrames, call set_output. This feature requires pandas to be installed.

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().set_output(transform="pandas")
scaler.fit(X_train)
X_test_scaled = scaler.transform(X_test)
X_test_scaled.head()
    sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)
39          -0.894264          0.798301         -1.271411         -1.327605
12          -1.244466         -0.086944         -1.327407         -1.459074
48          -0.660797          1.462234         -1.271411         -1.327605
23          -0.894264          0.576989         -1.159419         -0.933197
81          -0.427329         -1.414810         -0.039497         -0.275851


set_output can be called after fit to configure transform after the fact.

scaler2 = StandardScaler()
scaler2.fit(X_train)
X_test_np = scaler2.transform(X_test)
print(f"Default output type: {type(X_test_np).__name__}")
scaler2.set_output(transform="pandas")
X_test_df = scaler2.transform(X_test)
print(f"Configured pandas output type: {type(X_test_df).__name__}")
Default output type: ndarray
Configured pandas output type: DataFrame

In a pipeline.Pipeline, set_output configures all steps to output DataFrames.

from sklearn.feature_selection import SelectPercentile
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    StandardScaler(), SelectPercentile(percentile=75), LogisticRegression()
)
clf.set_output(transform="pandas")
clf.fit(X_train, y_train)
Pipeline(steps=[('standardscaler', StandardScaler()),
 ('selectpercentile', SelectPercentile(percentile=75)),
 ('logisticregression', LogisticRegression())])


Each transformer in the pipeline is configured to return DataFrames. This means that the final logistic regression step contains the feature names of the input.

clf[-1].feature_names_in_
array(['sepal length (cm)', 'petal length (cm)', 'petal width (cm)'],
 dtype=object)

Note

If the set_params method is used, the transformer is replaced by a new one with the default output format.

clf.set_params(standardscaler=StandardScaler())
clf.fit(X_train, y_train)
clf[-1].feature_names_in_
array(['x0', 'x2', 'x3'], dtype=object)

To keep the intended behavior, use set_output on the new transformer beforehand:

scaler = StandardScaler().set_output(transform="pandas")
clf.set_params(standardscaler=scaler)
clf.fit(X_train, y_train)
clf[-1].feature_names_in_
array(['sepal length (cm)', 'petal length (cm)', 'petal width (cm)'],
 dtype=object)

Next we load the titanic dataset to demonstrate set_output with compose.ColumnTransformer and heterogeneous data.

from sklearn.datasets import fetch_openml

X, y = fetch_openml("titanic", version=1, as_frame=True, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y)

The set_output API can be configured globally by using set_config and setting transform_output to "pandas".

from sklearn import set_config
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

set_config(transform_output="pandas")

num_pipe = make_pipeline(SimpleImputer(), StandardScaler())
num_cols = ["age", "fare"]
ct = ColumnTransformer(
    (
        ("numerical", num_pipe, num_cols),
        (
            "categorical",
            OneHotEncoder(
                sparse_output=False, drop="if_binary", handle_unknown="ignore"
            ),
            ["embarked", "sex", "pclass"],
        ),
    ),
    verbose_feature_names_out=False,
)
clf = make_pipeline(ct, SelectPercentile(percentile=50), LogisticRegression())
clf.fit(X_train, y_train)
clf.score(X_test, y_test)
0.801829268292683

With the global configuration, all transformers output DataFrames. This allows us to easily plot the logistic regression coefficients with the corresponding feature names.

import pandas as pd

log_reg = clf[-1]
coef = pd.Series(log_reg.coef_.ravel(), index=log_reg.feature_names_in_)
_ = coef.sort_values().plot.barh()
[Figure: horizontal bar plot of logistic regression coefficients labeled by feature name]

In order to demonstrate the config_context functionality below, let us first reset transform_output to its default value.

set_config(transform_output="default")

When configuring the output type with config_context, the configuration active at the time transform or fit_transform is called is what counts; setting it only when you construct or fit the transformer has no effect.

from sklearn import config_context

scaler = StandardScaler()
scaler.fit(X_train[num_cols])
StandardScaler()


with config_context(transform_output="pandas"):
 # the output of transform will be a Pandas DataFrame
 X_test_scaled = scaler.transform(X_test[num_cols])
X_test_scaled.head()
          age      fare
629  0.628306 -0.063210
688 -0.057984 -0.515704
439  1.314596  0.566624
664 -0.675645 -0.512279
669 -0.744274 -0.496950


Outside of the context manager, the output is a NumPy array:

X_test_scaled = scaler.transform(X_test[num_cols])
X_test_scaled[:5]
array([[ 0.62830616, -0.06320955],
 [-0.05798371, -0.51570367],
 [ 1.31459603, 0.56662405],
 [-0.6756446 , -0.51227857],
 [-0.74427358, -0.49694966]])

Total running time of the script: (0 minutes 0.147 seconds)

Related examples

Release Highlights for scikit-learn 1.2

Displaying Pipelines

Column Transformer with Mixed Types

Release Highlights for scikit-learn 1.4

Gallery generated by Sphinx-Gallery