Comparing Target Encoder with Other Encoders#
The TargetEncoder uses the value of the target to encode each categorical
feature. In this example, we will compare four different approaches for
handling categorical features: TargetEncoder, OrdinalEncoder, OneHotEncoder,
and dropping the category.
Note

fit(X, y).transform(X) does not equal fit_transform(X, y) because a
cross fitting scheme is used in fit_transform for encoding. See the
User Guide for details.
# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause
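To make the note above concrete, here is a minimal sketch on synthetic data
(not part of the original example) showing that fit(X, y).transform(X) and
fit_transform(X, y) produce different encodings, because the latter uses
cross fitting:

import numpy as np
from sklearn.preprocessing import TargetEncoder

rng = np.random.default_rng(0)
X_toy = rng.choice(["a", "b", "c"], size=(1000, 1))
y_toy = rng.normal(size=1000)

enc = TargetEncoder(target_type="continuous", random_state=0)
# Encodes using statistics computed on the full data:
in_sample = enc.fit(X_toy, y_toy).transform(X_toy)
# Encodes each fold using statistics computed on the other folds:
cross_fitted = enc.fit_transform(X_toy, y_toy)
print(np.allclose(in_sample, cross_fitted))  # expected to print False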
Loading Data from OpenML#
First, we load the wine reviews dataset, where the target is the points given by a reviewer:
from sklearn.datasets import fetch_openml

wine_reviews = fetch_openml(data_id=42074, as_frame=True)

df = wine_reviews.frame
df.head()
|   | country | description | designation | points | price | province | region_1 | region_2 | variety | winery |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | US | This tremendous 100% varietal wine hails from ... | Martha's Vineyard | 96 | 235.0 | California | Napa Valley | Napa | Cabernet Sauvignon | Heitz |
| 1 | Spain | Ripe aromas of fig, blackberry and cassis are ... | Carodorum Selección Especial Reserva | 96 | 110.0 | Northern Spain | Toro | NaN | Tinta de Toro | Bodega Carmen Rodríguez |
| 2 | US | Mac Watson honors the memory of a wine once ma... | Special Selected Late Harvest | 96 | 90.0 | California | Knights Valley | Sonoma | Sauvignon Blanc | Macauley |
| 3 | US | This spent 20 months in 30% new French oak, an... | Reserve | 96 | 65.0 | Oregon | Willamette Valley | Willamette Valley | Pinot Noir | Ponzi |
| 4 | France | This is the top wine from La Bégude, named aft... | La Brûlade | 95 | 66.0 | Provence | Bandol | NaN | Provence red blend | Domaine de la Bégude |
For this example, we use the following subset of numerical and categorical features in the data. The target is a continuous value ranging from 80 to 100:
numerical_features = ["price"]
categorical_features = [
    "country",
    "province",
    "region_1",
    "region_2",
    "variety",
    "winery",
]
target_name = "points"

X = df[numerical_features + categorical_features]
y = df[target_name]

_ = y.hist()
Training and Evaluating Pipelines with Different Encoders#
In this section, we will evaluate pipelines built around
HistGradientBoostingRegressor using different encoding strategies. First, we
list the encoders we will use to preprocess the categorical features:
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, TargetEncoder

categorical_preprocessors = [
    ("drop", "drop"),
    ("ordinal", OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)),
    (
        "one_hot",
        OneHotEncoder(handle_unknown="ignore", max_categories=20, sparse_output=False),
    ),
    ("target", TargetEncoder(target_type="continuous")),
]
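As an aside, max_categories=20 caps the number of one-hot columns per feature
by grouping the remaining, infrequent categories together. A small sketch on
made-up data (not part of the example) illustrates the behaviour:

import numpy as np
from sklearn.preprocessing import OneHotEncoder

X_toy = np.array([["a"], ["a"], ["b"], ["b"], ["c"], ["d"]])
ohe = OneHotEncoder(handle_unknown="ignore", max_categories=3, sparse_output=False)
ohe.fit(X_toy)
# "c" and "d" are grouped into a single infrequent-category column:
print(ohe.get_feature_names_out())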
Next, we evaluate the models using cross validation and record the results:
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline

n_cv_folds = 3
max_iter = 20
results = []


def evaluate_model_and_store(name, pipe):
    result = cross_validate(
        pipe,
        X,
        y,
        scoring="neg_root_mean_squared_error",
        cv=n_cv_folds,
        return_train_score=True,
    )
    rmse_test_score = -result["test_score"]
    rmse_train_score = -result["train_score"]
    results.append(
        {
            "preprocessor": name,
            "rmse_test_mean": rmse_test_score.mean(),
            "rmse_test_std": rmse_test_score.std(),
            "rmse_train_mean": rmse_train_score.mean(),
            "rmse_train_std": rmse_train_score.std(),
        }
    )


for name, categorical_preprocessor in categorical_preprocessors:
    preprocessor = ColumnTransformer(
        [
            ("numerical", "passthrough", numerical_features),
            ("categorical", categorical_preprocessor, categorical_features),
        ]
    )
    pipe = make_pipeline(
        preprocessor, HistGradientBoostingRegressor(random_state=0, max_iter=max_iter)
    )
    evaluate_model_and_store(name, pipe)
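The recorded results are plain dictionaries; one convenient way to inspect
them (not shown in the original example) is to load them into a DataFrame:

import pandas as pd

pd.DataFrame(results).sort_values("rmse_test_mean")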
Native Categorical Feature Support#
In this section, we build and evaluate a pipeline that uses native categorical
feature support in HistGradientBoostingRegressor, which only supports up to
255 unique categories. In our dataset, most of the categorical features have
more than 255 unique categories:
n_unique_categories = df[categorical_features].nunique().sort_values(ascending=False)
n_unique_categories
winery      14810
region_1     1236
variety       632
province      455
country        48
region_2       18
dtype: int64
To work around the limitation above, we group the categorical features into low cardinality and high cardinality features. The high cardinality features will be target encoded and the low cardinality features will use the native categorical feature support in gradient boosting.
high_cardinality_features = n_unique_categories[n_unique_categories > 255].index
low_cardinality_features = n_unique_categories[n_unique_categories <= 255].index
mixed_encoded_preprocessor = ColumnTransformer(
    [
        ("numerical", "passthrough", numerical_features),
        (
            "high_cardinality",
            TargetEncoder(target_type="continuous"),
            high_cardinality_features,
        ),
        (
            "low_cardinality",
            OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1),
            low_cardinality_features,
        ),
    ],
    verbose_feature_names_out=False,
)

# The output of the preprocessor must be set to pandas so the
# gradient boosting model can detect the low cardinality features.
mixed_encoded_preprocessor.set_output(transform="pandas")
mixed_pipe = make_pipeline(
    mixed_encoded_preprocessor,
    HistGradientBoostingRegressor(
        random_state=0, max_iter=max_iter, categorical_features=low_cardinality_features
    ),
)
mixed_pipe
Pipeline(steps=[('columntransformer',
                 ColumnTransformer(transformers=[('numerical', 'passthrough',
                                                  ['price']),
                                                 ('high_cardinality',
                                                  TargetEncoder(target_type='continuous'),
                                                  Index(['winery', 'region_1', 'variety', 'province'], dtype='object')),
                                                 ('low_cardinality',
                                                  OrdinalEncoder(handle_unknown='use_encoded_value',
                                                                 unknown_value=-1),
                                                  Index(['country', 'region_2'], dtype='object'))],
                                   verbose_feature_names_out=False)),
                ('histgradientboostingregressor',
                 HistGradientBoostingRegressor(categorical_features=Index(['country', 'region_2'], dtype='object'),
                                               max_iter=20, random_state=0))])
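The mixed pipeline can then be scored with the same helper defined above, for
example:

evaluate_model_and_store("mixed_target", mixed_pipe)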