* **modules** : BayesLinear, BayesConv2d, and BayesBatchNorm2d are added.
* **utils** : convert_model (nonbayes_to_bayes, bayes_to_nonbayes) is added.
* **functional.py** : bayesian_kl_loss is added.
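As a rough illustration of what a KL-based loss over Bayesian layers computes, here is a minimal pure-Python sketch of the closed-form KL divergence between a Gaussian weight posterior and a Gaussian prior. The function name and default values are hypothetical for illustration, not the library's actual API.

```python
import math

def gaussian_kl(mu_post, sigma_post, mu_prior=0.0, sigma_prior=0.1):
    """Closed-form KL(N(mu_post, sigma_post^2) || N(mu_prior, sigma_prior^2)).

    A bayesian_kl_loss-style function conceptually sums a term like this
    over every weight of every Bayesian layer in the model.
    """
    return (math.log(sigma_prior / sigma_post)
            + (sigma_post ** 2 + (mu_post - mu_prior) ** 2)
              / (2 * sigma_prior ** 2)
            - 0.5)
```

When the posterior equals the prior the KL term is zero, and it grows as the posterior drifts away from the prior, which is what regularizes a Bayesian network during training.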
### Version 0.2
* **prior_sigma** is used instead of **prior_log_sigma** when initializing modules and functions.
  * **modules** are re-defined with prior_sigma instead of prior_log_sigma.
  * **utils/convert_model.py** is also changed to use prior_sigma instead of prior_log_sigma.
* **modules** : The base initialization method is changed from the original torch method to the method of Adv-BNN.
* **functional.py** : **bayesian_kl_loss** is changed to be similar to the losses in **torch.functional**.
* **modules/loss.py** : **BKLLoss** is added based on bayesian_kl_loss, similar to the losses in **torch.loss**.
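The argument change above can be sketched as follows; the class and attribute names here are illustrative, not the library's real ones. The point is that callers now pass the scale `prior_sigma` directly, and any log-scale value a module needs internally is derived rather than taken as an argument.

```python
import math

class BayesModuleConfig:
    """Hypothetical sketch of the v0.2 interface: prior_sigma is the
    constructor argument; the pre-v0.2 style prior_log_sigma, if needed
    internally, is computed from it instead of being passed in."""
    def __init__(self, prior_mu=0.0, prior_sigma=0.1):
        self.prior_mu = prior_mu
        self.prior_sigma = prior_sigma
        # derived log-scale value (was itself the argument before v0.2)
        self.prior_log_sigma = math.log(prior_sigma)

cfg = BayesModuleConfig(prior_sigma=0.1)
```

Taking the plain sigma is the friendlier interface, since a standard deviation is easier to reason about than its logarithm.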
### Version 0.3
* **functional.py** :
  * **bayesian_kl_loss returns torch.Tensor([0]) as default** : In the previous version, bayesian_kl_loss returned 0 of int type if there were no Bayesian layers. However, considering that every torch loss returns a tensor and .item() is used to convert it to a scalar, it is changed to return torch.Tensor([0]) if there are no Bayesian layers.
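The type contract described above can be sketched without torch by using a minimal tensor stand-in. `FakeTensor` and `bayesian_kl_loss_sketch` are invented for illustration; the real function returns an actual torch tensor.

```python
class FakeTensor:
    """Minimal stand-in for a torch tensor, just enough to support .item()."""
    def __init__(self, value):
        self.value = value
    def item(self):
        return self.value

def bayesian_kl_loss_sketch(layer_kls):
    # v0.3 contract: return a tensor-like zero even when the model has no
    # Bayesian layers, so callers can always call .item() on the result
    # instead of branching on int vs tensor.
    total = 0.0
    for kl in layer_kls:
        total += kl
    return FakeTensor(total)
```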
### Version 0.4
* **functional.py** :
  * **bayesian_kl_loss is modified** : In some cases, a device mismatch (cuda/cpu) occurred. Thus, losses are now initialized with torch.Tensor([0]) on the device on which the model resides.
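The fix can be illustrated with a dict standing in for a tensor: the zero accumulator is created on the model's device up front, so adding per-layer terms never mixes cuda and cpu values. All names here are illustrative only, under the assumption that the real code does the equivalent of `torch.zeros(1).to(device)`.

```python
def bayesian_kl_total(model_device, layer_kls):
    """Sketch of the v0.4 fix: the accumulator carries the model's device
    from the start, rather than defaulting to cpu and clashing later."""
    total = {"device": model_device, "value": 0.0}  # ~ a zero tensor on device
    for kl in layer_kls:
        total["value"] += kl  # accumulation stays on model_device
    return total
```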
### Version 0.5
* **utils/convert_model.py** :
  * **nonbayes_to_bayes, bayes_to_nonbayes are modified** : Before this version, they replaced the original model in place. From now on, this can be controlled with the 'inplace' argument: set 'inplace=True' to replace the input model, or 'inplace=False' to get a new model. 'inplace=True' is recommended because it saves memory and avoids potential problems with deepcopy.
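The 'inplace' contract described above can be sketched as follows; this is an illustrative toy, not the library's implementation, and a dict stands in for a real model.

```python
import copy

def convert_model_sketch(model, inplace=True):
    """Sketch of the v0.5 contract: mutate and return the input model when
    inplace=True; deep-copy first and return the fresh copy when
    inplace=False, leaving the original untouched."""
    target = model if inplace else copy.deepcopy(model)
    target["bayesian"] = True  # stand-in for swapping layers in/out
    return target
```

With `inplace=False` the original model survives unchanged, at the cost of a full deepcopy; with `inplace=True` the input and the return value are the same object.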
### Version 0.6
* **utils/freeze_model.py** :
  * **freeze, unfreeze methods are added** : Bayesian modules always return different outputs even if the inputs are the same, because of their randomized forward propagation. Sometimes, however, we need to freeze this randomized process to analyze the model in depth. In that case, the freeze method can be used to turn the Bayesian model into a non-Bayesian model with the same parameters.
* **modules** : To support the **freeze** method, freeze, weight_eps, and bias_eps are added to each module. If freeze is False (default), weight_eps and bias_eps are re-initialized with normal noise at every forward pass. If freeze is True, weight_eps and bias_eps are not changed.
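The freeze mechanism can be sketched for a single weight as below. The class and attribute names are illustrative (the real modules hold whole weight tensors, not scalars): when frozen, the stored noise `weight_eps` is reused, so the sampled weight, and hence the forward pass, becomes deterministic.

```python
import random

class BayesWeightSketch:
    """Conceptual sketch of the v0.6 freeze mechanism for one weight."""
    def __init__(self, mu, sigma):
        self.mu, self.sigma = mu, sigma
        self.freeze_flag = False
        self.weight_eps = random.gauss(0.0, 1.0)

    def freeze(self):
        self.freeze_flag = True

    def unfreeze(self):
        self.freeze_flag = False

    def sample(self):
        # default behavior: draw fresh noise on every forward pass;
        # frozen behavior: keep reusing the last stored noise
        if not self.freeze_flag:
            self.weight_eps = random.gauss(0.0, 1.0)
        return self.mu + self.sigma * self.weight_eps
```

After `freeze()`, repeated calls to `sample()` return the same value, which mimics the deterministic behavior of a non-Bayesian layer with the same parameters.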