Commit 13bac8a
Author: Algorithmica
Add files via upload
1 parent 4532a1d commit 13bac8a

1 file changed: 43 additions, 0 deletions
import pandas as pd
import os
from sklearn import tree, ensemble, model_selection, preprocessing, decomposition, manifold, feature_selection, svm
import seaborn as sns
import numpy as np

import sys
sys.path.append("E:/New Folder/utils")

# local helper modules (not included in this commit)
import classification_utils as cutils
import common_utils as utils
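# NOTE: classification_utils and common_utils live under "E:/New Folder/utils"
# and are not part of this commit. The two definitions below are only a sketch
# of what the helpers used further down are assumed to do (a GridSearchCV
# wrapper and a feature-importance bar plot); the script itself still calls
# the imported cutils/utils versions.
def grid_search_best_model(estimator, grid, X, y, cv=5):
    # assumed behaviour: cross-validated grid search that reports and returns the refit best estimator
    search = model_selection.GridSearchCV(estimator, grid, cv=cv)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)
    return search.best_estimator_

def plot_feature_importances(estimator, X, cutoff=50):
    # assumed behaviour: bar plot of the `cutoff` largest feature importances, labelled by column name
    importances = pd.Series(estimator.feature_importances_, index=X.columns)
    top = importances.nlargest(cutoff)
    sns.barplot(x=top.values, y=top.index)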

data_dir = 'C:/Users/Algorithmica/Downloads/dont-overfit-ii'
train = pd.read_csv(os.path.join(data_dir, 'train.csv'))
train.info()
print(train.columns)

# drop the non-feature columns: id (unique per row) and target (the label)
train1 = train.iloc[:, 2:]
y = train['target'].astype(int)

# each selector below is fitted independently on train1; train2 holds the reduced feature matrix

# filter zero-variance (constant) features
variance = feature_selection.VarianceThreshold()
train2 = variance.fit_transform(train1)

# embedded feature selection: tune a random forest, then keep the features
# whose importance is above the mean importance
rf_estimator = ensemble.RandomForestClassifier()
rf_grid = {'max_depth': list(range(1, 9)), 'n_estimators': list(range(1, 300, 100))}
rf_final_estimator = cutils.grid_search_best_model(rf_estimator, rf_grid, train1, y)
embedded_selector = feature_selection.SelectFromModel(rf_final_estimator, prefit=True, threshold='mean')
utils.plot_feature_importances(rf_final_estimator, train1, cutoff=50)
train2 = embedded_selector.transform(train1)

# statistical (filter) feature selection: keep the 20 features with the highest ANOVA F-scores
statistical_selector = feature_selection.SelectKBest(feature_selection.f_classif, k=20)
train2 = statistical_selector.fit_transform(train1, y)
print(statistical_selector.scores_)

# recursive feature elimination (RFE): drop the 5 weakest features per iteration until 10 remain
rf_estimator = ensemble.RandomForestClassifier()
rfe_selector = feature_selection.RFE(rf_estimator, n_features_to_select=10, step=5)
train2 = rfe_selector.fit_transform(train1, y)
print(rfe_selector.ranking_)
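All four selectors above implement get_support(), so the retained columns can be mapped back to names in the original DataFrame; a minimal sketch using the RFE selector fitted above:

selected_columns = train1.columns[rfe_selector.get_support()]
print(selected_columns)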
