Commit 63cb3c3
fix typos and minor changes in changelog
1 parent 55727d4

4 files changed: +62 −32 lines changed


‎CHANGELOG

Lines changed: 32 additions & 3 deletions

@@ -5,7 +5,36 @@ File for tracking changes in SysIdentPy
 Changes in SysIdentPy
 =====================
 
-v0.3.0
+v0.3.4
+------
+
+CONTRIBUTORS
+~~~~~~~~~~~~
+
+- wilsonrljr
+- dj-gauthier
+
+CHANGES
+~~~~~~~
+
+- The update **v0.3.4** has been released with additional features, API changes and fixes.
+
+- MAJOR: Ridge Regression Parameter Estimation:
+- Now you can use AILS to estimate parameters of NARMAX models (and variants) using a multiobjective approach.
+- AILS can be accessed using `from sysidentpy.multiobjective_parameter_estimation import AILS`
+- See the docs for a more in depth explanation of how to use AILS.
+- This feature is related to Issue #101. This work is the result of an undergraduate research conducted by Gabriel Bueno Leandro under the supervision of Samir Milani Martins and Wilson Rocha Lacerda Junior.
+
+- API Change: plotting.py code was improved. Added type hints and added new options for plotting results.
+
+- DATASET: Added buck_id.csv and buck_valid.csv dataset to SysIdentPy repository.
+
+- DOC: Add a Multiobjective Parameter Optimization Notebook showing how to use the new AILS method
+
+- DOC: Minor additions and grammar fixes.
+
+
+v0.3.3
 ------
 
 CONTRIBUTORS
@@ -26,11 +55,11 @@ CHANGES
 - See the docs for a more in depth explanation of how to use AILS.
 - This feature is related to Issue #101. This work is the result of an undergraduate research conducted by Gabriel Bueno Leandro under the supervision of Samir Milani Martins and Wilson Rocha Lacerda Junior.
 
-- API Change: `regressor_code` variable was renamed as `enconding` to avoid using the same name as the method in `narmax_tool` `regressor_code` method.
+- API Change: `regressor_code` variable was renamed as `encoding` to avoid using the same name as the method in `narmax_tool` `regressor_code` method.
 
 - DATASET: Added buck_id.csv and buck_valid.csv dataset to SysIdentPy repository.
 
-- DOC: Add a Multiobjetive Parameter Optimization Notebook showing how to use the new AILS method
+- DOC: Add a Multiobjective Parameter Optimization Notebook showing how to use the new AILS method
 
 - DOC: Minor additions and grammar fixes.
 
‎pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -138,7 +138,7 @@ exclude_lines = [
 
 [tool.black]
 line-length = 88
-target_version = ['py37', 'py38', 'py39', 'py310']
+target_version = ['py37', 'py38', 'py39', 'py310', 'py311']
 preview = true
 exclude = '''
 /(

‎sysidentpy/model_structure_selection/forward_regression_orthogonal_least_squares.py

Lines changed: 16 additions & 17 deletions

@@ -89,7 +89,7 @@ class FROLS(Estimators, BaseMSS):
     eps : float
         Normalization factor of the normalized filters.
     ridge_param : float
-        Regularization prameter used in ridge regression
+        Regularization parameter used in ridge regression
     gama : float, default=0.2
         The leakage factor of the Leaky LMS method.
     weight : float, default=0.02
@@ -166,7 +166,7 @@ def __init__(
         offset_covariance: float = 0.2,
         mu: float = 0.01,
         eps: np.float64 = np.finfo(np.float64).eps,
-        ridge_param: np.float64 = np.finfo(np.float64).eps, # default is machine eps
+        ridge_param: np.float64 = np.finfo(np.float64).eps,  # default is machine eps
         gama: float = 0.2,
         weight: float = 0.02,
         basis_function: Union[Polynomial, Fourier] = Polynomial(),
@@ -194,7 +194,7 @@ def __init__(
            offset_covariance=offset_covariance,
            mu=mu,
            eps=eps,
-           ridge_param=ridge_param, # ridge regression parameter
+           ridge_param=ridge_param,  # ridge regression parameter
            gama=gama,
            weight=weight,
            basis_function=basis_function,
@@ -298,15 +298,16 @@ def error_reduction_ratio(self, psi, y, process_term_number):
            for j in np.arange(i, dimension):
                # Add `eps` in the denominator to omit division by zero if
                # denominator is zero
-               # To implement regularized regression (ridge regression), add
-               # ridgePparam to psi.T @ psi. See S. Chen, Local regularization assisted
+               # To implement regularized regression (ridge regression), add
+               # ridgeParam to psi.T @ psi. See S. Chen, Local regularization assisted
                # orthogonal least squares regression, Neurocomputing 69 (2006) 559–585.
-               # The version implemeted below uses the same regularization for every feature,
+               # The version implemented below uses the same regularization for every feature,
                # What Chen refers to Uniform regularized orthogonal least squares (UROLS)
                # Set to tiny (self.eps) when you are not regularizing. ridge_param = eps is
                # the default.
                tmp_err[j] = (np.dot(tmp_psi[i:, j].T, tmp_y[i:]) ** 2) / (
-                   (np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) + self.ridge_param) * squared_y
+                   (np.dot(tmp_psi[i:, j].T, tmp_psi[i:, j]) + self.ridge_param)
+                   * squared_y
                ) + self.eps
 
            if i == process_term_number:
@@ -329,7 +330,7 @@ def error_reduction_ratio(self, psi, y, process_term_number):
        psi_orthogonal = psi[:, tmp_piv]
        return err, piv, psi_orthogonal
 
-    def information_criterion(self, X_base, y):
+    def information_criterion(self, X, y):
        """Determine the model order.
 
        This function uses a information criterion to determine the model size.
@@ -343,7 +344,7 @@ def information_criterion(self, X_base, y):
        ----------
        y : array-like of shape = n_samples
            Target values of the system.
-       X_base : array-like of shape = n_samples
+       X : array-like of shape = n_samples
            Input system values measured by the user.
 
        Returns
@@ -354,14 +355,12 @@ def information_criterion(self, X_base, y):
            vector position + 1).
 
        """
-       if self.n_info_values is not None and self.n_info_values > X_base.shape[1]:
-           self.n_info_values = X_base.shape[1]
+       if self.n_info_values is not None and self.n_info_values > X.shape[1]:
+           self.n_info_values = X.shape[1]
            warnings.warn(
-               (
-                   "n_info_values is greater than the maximum number of all"
-                   " regressors space considering the chosen y_lag, u_lag, and"
-                   f" non_degree. We set as {X_base.shape[1]}"
-               ),
+               "n_info_values is greater than the maximum number of all"
+               " regressors space considering the chosen y_lag, u_lag, and"
+               f" non_degree. We set as {X.shape[1]}",
                stacklevel=2,
            )
 
@@ -372,7 +371,7 @@ def information_criterion(self, X_base, y):
 
        for i in range(0, self.n_info_values):
            n_theta = i + 1
-           regressor_matrix = self.error_reduction_ratio(X_base, y, n_theta)[2]
+           regressor_matrix = self.error_reduction_ratio(X, y, n_theta)[2]
 
            tmp_theta = getattr(self, self.estimator)(regressor_matrix, y)
 
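The regularized ERR expression in the hunk above can be exercised on its own. Below is a minimal NumPy sketch; the function name `regularized_err_term` and the toy data are illustrative and not part of SysIdentPy, while the formula and the `ridge_param`/`eps` defaults follow the diff:

```python
import numpy as np

def regularized_err_term(psi_col, y, ridge_param=np.finfo(np.float64).eps,
                         eps=np.finfo(np.float64).eps):
    """Error reduction ratio of one candidate regressor column.

    Mirrors the tmp_err[j] expression in error_reduction_ratio: adding
    ridge_param to psi_col.T @ psi_col gives Chen's uniform regularized
    OLS (UROLS); ridge_param = eps (machine epsilon) recovers plain ERR.
    """
    squared_y = np.dot(y.T, y).item()
    return (np.dot(psi_col.T, y) ** 2).item() / (
        (np.dot(psi_col.T, psi_col) + ridge_param) * squared_y
    ) + eps

rng = np.random.default_rng(0)
y = rng.normal(size=(50, 1))
relevant = y[:, 0] + 0.01 * rng.normal(size=50)  # nearly collinear with y
noise = rng.normal(size=50)                      # unrelated candidate

# The column aligned with y explains almost all of its energy (ERR near 1);
# the unrelated column explains very little.
e_rel = regularized_err_term(relevant, y)
e_noise = regularized_err_term(noise, y)
```

This is why FROLS ranks candidate regressors by ERR: terms that explain more of the output energy are selected first.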
‎sysidentpy/parameter_estimation/estimators.py

Lines changed: 13 additions & 11 deletions

@@ -23,13 +23,13 @@ def __init__(
        offset_covariance=0.2,
        mu=0.01,
        eps=np.finfo(np.float64).eps,
-       ridge_param=np.finfo(np.float64).eps, # for regularized ridge regression
+       ridge_param=np.finfo(np.float64).eps,  # for regularized ridge regression
        gama=0.2,
        weight=0.02,
        basis_function=None,
    ):
        self.eps = eps
-       self.ridge_param = ridge_param # for regularized ridge regression
+       self.ridge_param = ridge_param  # for regularized ridge regression
        self.mu = mu
        self.offset_covariance = offset_covariance
        self.max_lag = max_lag
@@ -51,7 +51,7 @@ def _validate_params(self):
            "offset_covariance": self.offset_covariance,
            "mu": self.mu,
            "eps": self.eps,
-           "ridge_param": self.ridge_param, # for regularized ridge regression
+           "ridge_param": self.ridge_param,  # for regularized ridge regression
            "gama": self.gama,
            "weight": self.weight,
        }
@@ -76,10 +76,8 @@ def _validate_params(self):
    def _check_linear_dependence_rows(self, psi):
        if np.linalg.matrix_rank(psi) != psi.shape[1]:
            warnings.warn(
-               (
-                   "Psi matrix might have linearly dependent rows."
-                   "Be careful and check your data"
-               ),
+               "Psi matrix might have linearly dependent rows."
+               "Be careful and check your data",
                stacklevel=2,
            )
 
@@ -119,7 +117,7 @@ def least_squares(self, psi, y):
        y = y[self.max_lag :, 0].reshape(-1, 1)
        theta = np.linalg.lstsq(psi, y, rcond=None)[0]
        return theta
-
+
    def ridge_regression(self, psi, y):
        """Estimate the model parameters using the regularized least squares method
        known as ridge regression. Based on the least_squares module and uses
@@ -146,19 +144,23 @@
        ----------
        - Wikipedia entry on ridge regression
          https://en.wikipedia.org/wiki/Ridge_regression
-
+
        ridge_parm multiplied by the identity matrix (np.eye) favors models (theta) that
        have small size using an L2 norm. This prevents over fitting of the model.
        For applications where preventing overfitting is important, see, for example,
        D. J. Gauthier, E. Bollt, A. Griffith, W. A. S. Barbosa, ‘Next generation
        reservoir computing,’ Nat. Commun. 12, 5564 (2021).
        https://www.nature.com/articles/s41467-021-25801-2
-
+
        """
        self._check_linear_dependence_rows(psi)
 
        y = y[self.max_lag :, 0].reshape(-1, 1)
-       theta = (np.linalg.pinv(psi.T @ psi + self.ridge_param * np.eye(psi.shape[1])) @ psi.T @ y)
+       theta = (
+           np.linalg.pinv(psi.T @ psi + self.ridge_param * np.eye(psi.shape[1]))
+           @ psi.T
+           @ y
+       )
        return theta
 
    def _unbiased_estimator(self, psi, y, theta, elag, max_lag, estimator):
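The reflowed `ridge_regression` expression is the textbook closed form θ = (ΨᵀΨ + λI)⁺Ψᵀy. A standalone sketch follows; the helper `ridge_theta` and the synthetic data are illustrative, and unlike the class method the sketch skips the `max_lag` trimming of `y`:

```python
import numpy as np

def ridge_theta(psi, y, ridge_param=np.finfo(np.float64).eps):
    """Closed-form ridge estimate, mirroring Estimators.ridge_regression:
    theta = pinv(psi.T @ psi + ridge_param * I) @ psi.T @ y.
    ridge_param at machine epsilon is effectively ordinary least squares;
    larger values shrink theta via the L2 penalty."""
    return (
        np.linalg.pinv(psi.T @ psi + ridge_param * np.eye(psi.shape[1]))
        @ psi.T
        @ y
    )

rng = np.random.default_rng(1)
psi = rng.normal(size=(100, 3))
y = psi @ np.array([[1.0], [-2.0], [0.5]]) + 0.01 * rng.normal(size=(100, 1))

theta_eps = ridge_theta(psi, y)                    # ~ ordinary least squares
theta_reg = ridge_theta(psi, y, ridge_param=10.0)  # shrunk toward zero
```

Using `pinv` rather than `inv` keeps the estimate defined even when ΨᵀΨ is rank-deficient, which is consistent with the `_check_linear_dependence_rows` warning issued just before the computation.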

0 commit comments