864 questions
1 vote · 0 answers · 29 views
How to handle unstable best_iteration in LightGBM when using Optuna for hyperparameter optimization?
I'm using Optuna to optimize LightGBM hyperparameters, and I'm running into an issue with the variability of best_iteration across different random seeds.
Current Setup
I train multiple models with ...
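The asker's code is truncated, but the usual stabilization trick for a seed-dependent `best_iteration` is to train with several seeds and aggregate the early-stopping results. A minimal stdlib sketch, where the function name and the sample values are invented for illustration:

```python
from statistics import median

def stable_best_iteration(best_iters):
    """Aggregate per-seed best_iteration values into one robust estimate.

    A common heuristic when early stopping yields a different
    best_iteration for every random seed: take the median, which is
    less sensitive to one outlier seed than the mean.
    """
    return int(median(best_iters))

# e.g. best_iteration observed under five different seeds
print(stable_best_iteration([180, 210, 195, 420, 205]))  # -> 205
```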
0 votes · 0 answers · 26 views
FutureWarning in Optuna during TabM hyperparameter tuning causes notebook failure after trials complete on Kaggle GPU
I’m running Optuna to tune hyperparameters for a TabM regression model (10 trials) on Kaggle (GPU: Tesla P100) to minimize RMSE.
The optimization runs fine — all trials complete — but right after ...
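The error itself is cut off, but a common way to keep a deprecation notice from aborting a run with a strict warnings configuration is Python's warnings filter. A generic sketch, not the asker's actual fix; `noisy` is an invented stand-in for the library call:

```python
import warnings

def noisy():
    # Stand-in for a library call that emits a deprecation notice.
    warnings.warn("this API will change", FutureWarning)
    return 42

# Downgrade FutureWarning to "ignore" for the rest of the process, so a
# warnings-as-errors setting cannot abort the notebook after the trials.
warnings.filterwarnings("ignore", category=FutureWarning)
print(noisy())  # -> 42, warning suppressed
```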
0 votes · 0 answers · 96 views
Optuna: Selection of parameters during k-fold CV
I am using Optuna for hyperparameter tuning. I get messages as shown below:
Trial 15 finished with value: 6.226334123011727 and parameters: {'iterations': 1100, 'learning_rate': 0.04262148853587423, '...
2 votes · 1 answer · 144 views
How to pass pre-computed folds to successive halving in sklearn
I want to undersample 3 cross-validation folds from a dataset, using say, RandomUnderSampler from imblearn, and then, optimize the hyperparameters of various gbms using those undersampled folds as ...
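scikit-learn's `cv` parameter accepts any iterable of pre-computed `(train, test)` index pairs, which is the usual route for feeding fixed (e.g. undersampled) folds to successive halving. A sketch with synthetic data and hand-built index folds standing in for imblearn's `RandomUnderSampler` output:

```python
import numpy as np
from sklearn.experimental import enable_halving_search_cv  # noqa: F401
from sklearn.model_selection import HalvingGridSearchCV
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)

# Pre-computed (train, test) index pairs -- a stand-in for folds
# produced by an undersampler.
folds = []
for k in range(3):
    test = np.arange(k * 100, (k + 1) * 100)
    train = np.setdiff1d(np.arange(300), test)
    folds.append((train, test))

# `cv` accepts an iterable of (train, test) index arrays, so the fixed
# folds can be passed straight to successive halving.
search = HalvingGridSearchCV(
    DecisionTreeClassifier(random_state=0),
    {"max_depth": [1, 2, 3]},
    cv=folds,
    factor=2,
)
search.fit(X, y)
print(search.best_params_)
```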
1 vote · 0 answers · 56 views
My RandomSampler() is always generating the same parameters
I used TPESampler while optimizing with Optuna, set up as follows: sampler=optuna.samplers.TPESampler(multivariate=True, n_startup_trials=10, seed=None). But during the 10 startup trials, it ...
0 votes · 0 answers · 25 views
Why are Optuna trials running sequentially to completion instead of interleaved with pruning?
My impression is that every trial is run for one step. Then some trials are pruned and the remaining continue for another step and so on.
However, the logs show:
Trial 0 completed
Trial 1 completed
...
0 votes · 1 answer · 54 views
Hyperparameter tuning using Wandb or Keras Tuner - 10 fold Cross Validation
If I am using stratified 10-folds for classification/regression tasks, where do I need to define the logic for hyperparameter tuning using Scikit or Wandb?
Should it be inside the loop or outside?
I ...
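The standard answer to this kind of question is nested CV: the tuner runs inside each outer fold, so the outer test fold never influences hyperparameter selection. A skeleton sketch (the commented-out tuner calls are placeholders, not real Keras Tuner or W&B API):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.default_rng(0).normal(size=(100, 4))
y = np.arange(100) % 2  # balanced binary labels for the demo

# Nested-CV skeleton: the tuner (Keras Tuner search, W&B sweep, ...)
# runs INSIDE each outer fold, on the outer training split only.
outer = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = []
for train_idx, test_idx in outer.split(X, y):
    # inner tuning on X[train_idx] only, e.g.:
    # best_hp = tune(X[train_idx], y[train_idx])
    # model = build_and_fit(X[train_idx], y[train_idx], best_hp)
    scores.append(0.0)  # placeholder for score(model, X[test_idx], y[test_idx])

print(len(scores))  # one held-out score per outer fold -> 10
```

If you only want one final set of hyperparameters (rather than an unbiased performance estimate), tuning once outside the loop, with the CV score as the tuner's objective, is the cheaper alternative.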
0 votes · 1 answer · 58 views
Tuning starting and final learning rate
If you use cosine decay, for example, and you have a starting learning rate and a final learning rate, can you tune those hyperparameters so that the final learning rate is some ratio of the starting learning ...
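One common parameterization does exactly this: tune the starting rate and a ratio, and derive the final rate from them, so the two values stay consistent by construction. A stdlib sketch of a ratio-parameterized cosine schedule (the function name is invented):

```python
import math

def cosine_decay(step, total_steps, lr_start, ratio):
    """Cosine decay where the final LR is a tuned *ratio* of the start.

    Tuning (lr_start, ratio) instead of (lr_start, lr_final) guarantees
    lr_final = lr_start * ratio for every sampled configuration.
    """
    lr_final = lr_start * ratio
    cos = 0.5 * (1 + math.cos(math.pi * step / total_steps))
    return lr_final + (lr_start - lr_final) * cos

print(cosine_decay(0, 100, 0.1, 0.01))    # start of training: 0.1
print(cosine_decay(100, 100, 0.1, 0.01))  # end of training:   0.001
```

A tuner would then sample, e.g., `lr_start` log-uniformly and `ratio` from something like [0.001, 0.1].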
2 votes · 1 answer · 68 views
Cannot see all `Dense` layer info from `search_space_summary()` when using `RandomSearch Tuner` in Keras-Tuner?
I am trying to use keras-tuner to tune hyperparameters, like
!pip install keras-tuner --upgrade
import keras_tuner as kt
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers ...
0 votes · 1 answer · 73 views
Hyperparameter Optimisation
I'm trying to forecast a time series using the Prophet model in Python, for which I would like to find the optimal tuning parameters (like changepoint_range, changepoint_prior_scale, ...
0 votes · 0 answers · 27 views
Hyperopt: attribute 'quniformint' not recognized
I use MLflow and Hyperopt to tune a model, and I am trying to figure out Hyperopt's sampling methods.
I have directly used line of codes from the documentation, as such:
my code:
space = {"...
-2 votes · 1 answer · 110 views
Serialization error using ray-tuner for hyperparameter tuning [closed]
I am trying to tune some hyperparameters for my neural network for an image segmentation problem. I set up the tuner as simply as possible, but when I run my code I get the following error:
2025-02-...
0 votes · 0 answers · 123 views
Why ray.train.get_checkpoint() from Ray Tune is returning None even after saving the checkpoint?
I am trying to tune my model with ray tune for pytorch. I would really like to be able to save the tuning progress, stop the execution and resume the execution from where I left. Unfortunately, I am ...
0 votes · 1 answer · 77 views
How do you usually perform hyperparameter tuning on a large dataset?
I'm working on training a model that predicts which cache way to evict based on cache features, access information, etc.
However, I have millions and millions of data samples. Thus, I cannot ...
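A common pattern behind answers to this kind of question: tune on a reproducible random subsample, then refit on the full data with the chosen parameters. A stdlib sketch (the function name and sizes are invented):

```python
import random

def tuning_subsample(n_total, n_subsample, seed=0):
    """Pick a reproducible random subset of row indices for tuning.

    Typical recipe for huge datasets: run the hyperparameter search on
    the subsample, then train the final model once on all rows using
    the winning configuration.
    """
    rng = random.Random(seed)
    return sorted(rng.sample(range(n_total), n_subsample))

idx = tuning_subsample(5_000_000, 100_000)
print(len(idx))  # -> 100000
```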
1 vote · 0 answers · 91 views
Calculate correlation on dict-type variables
I have a dataframe named hyperparam_df which looks like the following:
repo_name file_name \
0 DeepCoMP deepcomp/util/simulation.py ...
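The frame in the question is truncated, but the standard route for dict-typed columns is to flatten them into numeric columns before calling `.corr()`. A sketch on an invented miniature stand-in for `hyperparam_df`:

```python
import pandas as pd

# Hypothetical frame with a dict-typed hyperparameter column (the real
# hyperparam_df is not shown in full in the question).
df = pd.DataFrame({
    "score": [0.71, 0.64, 0.80],
    "hyperparams": [
        {"lr": 0.10, "depth": 3},
        {"lr": 0.01, "depth": 6},
        {"lr": 0.05, "depth": 4},
    ],
})

# Dicts cannot be correlated directly; expand them into one numeric
# column per key, then correlate the flat frame.
flat = pd.concat(
    [df.drop(columns="hyperparams"),
     pd.DataFrame(df["hyperparams"].tolist())],
    axis=1,
)
print(flat.corr())  # 3x3 matrix over score, lr, depth
```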