1 vote · 1 answer · 75 views

I'm using Optuna to optimize LightGBM hyperparameters, and I'm running into an issue with the variability of best_iteration across different random seeds. Current setup: I train multiple models with ...
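A minimal sketch of the kind of measurement this question describes: training LightGBM with early stopping under several seeds and recording how best_iteration varies. The dataset and parameter values below are placeholders, not the asker's actual configuration.

```python
import lightgbm as lgb
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=2000, n_features=20, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

best_iters = []
for seed in range(5):
    model = lgb.LGBMRegressor(n_estimators=5000, learning_rate=0.05, random_state=seed)
    model.fit(
        X_tr, y_tr,
        eval_set=[(X_va, y_va)],
        callbacks=[lgb.early_stopping(stopping_rounds=100)],
    )
    best_iters.append(model.best_iteration_)  # iteration chosen by early stopping

print("best_iteration per seed:", best_iters)
print("mean / std:", np.mean(best_iters), np.std(best_iters))
```

Some spread in best_iteration across seeds is expected, since both LightGBM's row/feature subsampling and the validation split interact with the seed.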
0 votes · 0 answers · 44 views

I’m running Optuna to tune hyperparameters for a TabM regression model (10 trials) on Kaggle (GPU: Tesla P100) to minimize RMSE. The optimization runs fine — all trials complete — but right after ...
0 votes · 0 answers · 104 views

I have built an XGBoost model and used Optuna to find the best parameters. I saved the trained model and the features used. My question is: when I go to load the model and use it when making ...
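A sketch of the save/load pattern the question is reaching for, assuming a fitted XGBRegressor and a known training column list; persisting the feature order next to the model lets prediction-time data be aligned before calling predict(). The file names here are arbitrary.

```python
import json
import pandas as pd
import xgboost as xgb

def save_artifacts(model: xgb.XGBRegressor, feature_names: list[str]) -> None:
    model.save_model("model.json")          # native XGBoost format
    with open("features.json", "w") as f:   # keep the training column order
        json.dump(feature_names, f)

def load_and_predict(new_data: pd.DataFrame) -> pd.Series:
    model = xgb.XGBRegressor()
    model.load_model("model.json")
    with open("features.json") as f:
        feature_names = json.load(f)
    # Subset and reorder the incoming frame to match the training layout.
    aligned = new_data[feature_names]
    return pd.Series(model.predict(aligned), index=new_data.index)
```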
0 votes · 1 answer · 62 views

I'm running into a segmentation fault when training a Transformer model with PyTorch 2.6.0 and Optuna on CUDA (12.4). The exact same code used to work fine; the issue appeared only after introducing Optuna. ...
0 votes · 0 answers · 108 views

I am using Optuna for hyperparameter tuning. I get messages as shown below: Trial 15 finished with value: 6.226334123011727 and parameters: {'iterations': 1100, 'learning_rate': 0.04262148853587423, '...
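Those "Trial N finished with value ..." lines are Optuna's standard per-trial INFO logging. If the goal is to control them, a minimal sketch using Optuna's logging API (the toy objective is a placeholder):

```python
import optuna

# Raise the log level so per-trial INFO messages are suppressed.
optuna.logging.set_verbosity(optuna.logging.WARNING)

def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

study = optuna.create_study()
study.optimize(objective, n_trials=20)  # runs quietly; only warnings are shown
print(study.best_params)
```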
0 votes · 0 answers · 217 views

I'm trying to use optuna to find good hyperparameters for a fine-tuning task I'm doing with some different language models. My actual code is more complex, but here's an MWE: import torch import optuna ...
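The question's own MWE is truncated above; for orientation, here is a hedged sketch of the general shape such an MWE usually takes, an Optuna objective wrapping a small PyTorch training loop. The tiny model and random data stand in for the asker's language-model fine-tuning setup.

```python
import optuna
import torch
import torch.nn as nn

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    hidden = trial.suggest_int("hidden", 16, 128)

    # Toy stand-in for a language model being fine-tuned.
    model = nn.Sequential(nn.Linear(10, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()

    X = torch.randn(256, 10)
    y = torch.randn(256, 1)
    for _ in range(50):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
```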
1 vote · 0 answers · 74 views

I used TPESampler and set it up as follows while optimizing with optuna: sampler=optuna.samplers.TPESampler(multivariate=True, n_startup_trials=10, seed=None). But during the 10 startup trials, it ...
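A sketch of the sampler configuration the question quotes, with a toy objective added for context. One relevant behavior: for the first n_startup_trials trials, TPESampler falls back to independent random sampling, so those trials will not look "guided" regardless of multivariate=True.

```python
import optuna

sampler = optuna.samplers.TPESampler(multivariate=True, n_startup_trials=10, seed=None)
study = optuna.create_study(sampler=sampler, direction="minimize")

def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return x ** 2 + y ** 2

study.optimize(objective, n_trials=30)  # trials 0-9 are random; TPE starts at 10
```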
0 votes · 0 answers · 52 views

I created a 5-GPU cluster locally across three nodes/machines using tf.distribute.MultiWorkerMirroredStrategy. One machine has the Apple M1 Pro Metal GPU, and the other two nodes have NVIDIA ...
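A hedged sketch of a basic MultiWorkerMirroredStrategy setup; the host addresses and model are placeholders, and each worker would run this script with its own task index in TF_CONFIG. Note that mixing a Metal backend with CUDA nodes in one cluster is an unusual configuration, which may be part of the asker's trouble.

```python
import json
import os
import tensorflow as tf

# TF_CONFIG must be set before the strategy is created; index is 0, 1, or 2
# depending on which node this process runs on.
os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host-a:12345", "host-b:12345", "host-c:12345"]},
    "task": {"type": "worker", "index": 0},
})

strategy = tf.distribute.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
```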
0 votes · 0 answers · 33 views

My impression is that every trial is run for one step. Then some trials are pruned and the remaining continue for another step, and so on. However, the logs show: Trial 0 completed, Trial 1 completed, ...
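The logs are consistent with how Optuna's pruning actually works: in a serial study each trial runs start to finish (or until it is pruned) before the next trial begins; trials are not advanced one step at a time in lockstep. A minimal sketch of the reporting flow, with a dummy objective standing in for training:

```python
import optuna

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)
    value = 1.0
    for step in range(100):
        value *= (1 - lr)          # stand-in for one training step
        trial.report(value, step)  # intermediate value the pruner compares against
        if trial.should_prune():
            raise optuna.TrialPruned()
    return value

study = optuna.create_study(pruner=optuna.pruners.MedianPruner(n_warmup_steps=10))
study.optimize(objective, n_trials=20)
```

The pruner compares a running trial's intermediate values against those of already-finished trials at the same step, which is why earlier trials complete fully.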
0 votes · 0 answers · 113 views

I have some code to train an RL agent in JAX. The code runs fine. To tune the hyperparameters, I would like to use the Optuna plugin for Hydra, since my project is based on the latter. To this end, I ...
0 votes · 0 answers · 47 views

I am using Optuna to optimize the parameters of a non-ML task. Now, each trial consists of processing several files in sequence, each of which gets a score. The scores are summed cumulatively in order ...
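A sketch of this pattern under stated assumptions: each trial walks a list of files, accumulates a score, and reports the running sum so Optuna can prune a clearly bad configuration before all files are processed. score_file and the file list are hypothetical stand-ins for the asker's per-file processing.

```python
import optuna

FILES = ["a.dat", "b.dat", "c.dat", "d.dat"]  # placeholder file list

def score_file(path: str, param: float) -> float:
    return abs(hash(path) % 100 - 50 * param)  # dummy per-file score

def objective(trial: optuna.Trial) -> float:
    param = trial.suggest_float("param", 0.0, 2.0)
    total = 0.0
    for step, path in enumerate(FILES):
        total += score_file(path, param)
        trial.report(total, step)       # cumulative score so far
        if trial.should_prune():
            raise optuna.TrialPruned()
    return total

study = optuna.create_study(direction="minimize",
                            pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=30)
```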
0 votes · 0 answers · 62 views

I'm trying to use FLAML for hyperparameter tuning of my model, and I would like to see how each hyperparameter contributes to the objective value. Similar to Optuna's get_param_importances or ...
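For reference, this is the Optuna call the question mentions: fANOVA-based parameter importances computed on a finished study. Whether FLAML exposes an equivalent is the open question; one workaround would be to replay FLAML's evaluated configurations into an Optuna study and run this on it. The toy objective below is a placeholder.

```python
import optuna

def objective(trial: optuna.Trial) -> float:
    x = trial.suggest_float("x", -5, 5)
    y = trial.suggest_float("y", -5, 5)
    return x ** 2 + 0.1 * y   # x should dominate the importance ranking

study = optuna.create_study()
study.optimize(objective, n_trials=50)
print(optuna.importance.get_param_importances(study))
```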
0 votes · 0 answers · 55 views

I'm trying to set up Optuna for hyperparameter optimization. I have two main doubts/issues, and I don't know if they are related. When launching the script with 20 or 100 trials (it doesn't matter which), it runs, but in some ...
0 votes · 0 answers · 125 views

This problem has been bothering me for a long time. I am using optuna for automatic parameter tuning of deep learning models, and the objective function returns the average AUC of five folds. Unable ...
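A sketch of an objective that returns the mean AUC over five folds, as the question describes. The classifier and synthetic data below are placeholders for the asker's deep-learning model.

```python
import numpy as np
import optuna
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=1000, random_state=0)

def objective(trial: optuna.Trial) -> float:
    C = trial.suggest_float("C", 1e-3, 1e2, log=True)
    aucs = []
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for tr_idx, va_idx in cv.split(X, y):
        clf = LogisticRegression(C=C, max_iter=1000).fit(X[tr_idx], y[tr_idx])
        aucs.append(roc_auc_score(y[va_idx], clf.predict_proba(X[va_idx])[:, 1]))
    return float(np.mean(aucs))  # average AUC across the five folds

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
```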
0 votes · 0 answers · 27 views

This comment is from this issue: https://github.com/optuna/optuna/issues/5397. I found out what the issue was: if you load a study using "optuna.study.load_study", the settings for the study ...
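A sketch of the pitfall being described: sampler and pruner settings are not persisted in the study's storage, so reloading with load_study gets the trial history back but falls back to default sampler/pruner unless they are passed again. The storage URL and study name are placeholders.

```python
import optuna

storage = "sqlite:///example.db"
optuna.create_study(
    study_name="my_study", storage=storage, load_if_exists=True,
    sampler=optuna.samplers.TPESampler(multivariate=True),
    pruner=optuna.pruners.MedianPruner(),
)

# Later, or in another process: trials are restored from storage, but the
# sampler and pruner are NOT -- they must be specified again here.
study = optuna.load_study(
    study_name="my_study", storage=storage,
    sampler=optuna.samplers.TPESampler(multivariate=True),
    pruner=optuna.pruners.MedianPruner(),
)
```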
