106 questions
1 vote · 1 answer · 90 views
Ray: Resource request cannot be scheduled when using offline data with the RL Module enabled
I tried training a BC algorithm on offline data, with the RL Module enabled in the algorithm configuration. I ran the code on Google Colab, which only provides 2 CPUs, and encountered the following ...
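A minimal config sketch of the usual remedy (the env id, data path, and exact settings are placeholders, not from the question): shrink the CPU request so Ray can schedule the job on Colab's 2 CPUs.

    from ray.rllib.algorithms.bc import BCConfig

    config = (
        BCConfig()
        .environment("CartPole-v1")                # placeholder env
        .offline_data(input_="/tmp/offline_data")  # hypothetical data path
        .env_runners(num_env_runners=0)            # no extra sampling workers
        .learners(num_learners=0)                  # learn in the driver process
    )
    algo = config.build()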
3 votes · 0 answers · 69 views
KeyError: 'advantages' in PPO MARL using Ray RLlib
I am using Ray 2.50.1 to implement a MARL model with PPO.
However, I run into the following error:
    'advantages'
    KeyError: 'advantages'
    During handling of the above exception, another exception occurred:
...
1 vote · 1 answer · 138 views
Error Raised with SAC for Centralized Training, Decentralized Execution in Ray RLlib
I'm using a slight variant of the RockPaperScissors multi-agent environment from the Ray RLlib documentation as a test environment to verify that a custom RLModule for Centralized Training, ...
0 votes · 1 answer · 97 views
SUMO RL: 'module' object is not callable
When I run the following code, I get the error 'module' object is not callable:
    if __name__ == "__main__":
        env_name = "4x4grid"
        register_env(
            env_name,
            lambda _: ...
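A common cause of 'module' object is not callable is calling the imported sumo_rl package itself instead of a constructor it exports. A sketch under that assumption (the file paths are hypothetical):

    import sumo_rl
    from ray.tune.registry import register_env

    if __name__ == "__main__":
        env_name = "4x4grid"
        register_env(
            env_name,
            # Call a constructor from the package, not the module itself.
            lambda _: sumo_rl.parallel_env(
                net_file="nets/4x4.net.xml",    # hypothetical path
                route_file="nets/4x4.rou.xml",  # hypothetical path
                use_gui=False,
            ),
        )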
2 votes · 1 answer · 194 views
I keep getting this error even though CUDA is available: 'RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu'
I'm training a transformer model using RLlib's PPO algorithm, but I encounter a device mismatch error:
    RuntimeError: Expected all tensors to be on the same device, but found at least two devices, ...
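A generic sketch of the usual fix (not the asker's model): inside a custom torch module, move incoming tensors to whatever device the parameters live on instead of assuming one.

    import torch
    import torch.nn as nn

    class DeviceSafeHead(nn.Module):
        def __init__(self, in_dim: int, out_dim: int):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            # Follow the parameters' device (cuda:0 or cpu) at call time.
            device = next(self.parameters()).device
            return self.linear(obs.to(device))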
1 vote · 2 answers · 318 views
Ray rllib episode_reward_mean not showing
Can anyone explain to me why the episode_reward_mean is NOT part of the results dictionary?
Is it replaced by a different key in the latest API?
I see env_runners/episode_return_mean and env_runners/...
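Judging from the keys the asker already sees, the metric appears to have moved under the env_runners sub-dict in the new API stack; a short sketch of reading it (the old key is shown for contrast):

    results = algo.train()
    # Old API stack:
    # mean_return = results["episode_reward_mean"]
    # New API stack:
    mean_return = results["env_runners"]["episode_return_mean"]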
1 vote · 1 answer · 80 views
The action space of a reinforcement learning model is 1-dimensional, but at test time the model outputs a 2-dimensional action
I trained a PPO model with RLlib using the action space self.action_space = gym.spaces.Box(-1, 1, (1,), data_type).
But when I use the trained model to manually call forward_inference, the inference ...
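A sketch of one likely explanation (an assumption based on the new RLModule API, not confirmed by the excerpt): forward_inference returns action-distribution inputs, i.e. the mean and log-std of a diagonal Gaussian, so a Box(-1, 1, (1,)) action yields 2 numbers per step. Recovering a 1-D deterministic action would then look roughly like this:

    import torch

    out = rl_module.forward_inference({"obs": obs_batch})  # hypothetical call site
    dist_inputs = out["action_dist_inputs"]                # shape [B, 2] for a 1-D Box
    mean, log_std = torch.chunk(dist_inputs, 2, dim=-1)
    action = mean                                          # deterministic action, shape [B, 1]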
1 vote · 1 answer · 69 views
Correct way of using foreach_worker and foreach_env
I am quite new to reinforcement learning and can't work this out. I am unable to update configurations for the batch data using PPO.
I am using my custom-defined Gym environment and want to train it ...
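A minimal sketch of the pattern (assuming the WorkerSet helpers; the attribute being set is hypothetical): foreach_worker runs a function on every rollout worker, and foreach_env runs one on every environment a worker holds.

    def set_batch_flag(env):
        env.use_new_batch = True  # hypothetical per-env attribute

    # algo.workers on older RLlib versions, algo.env_runner_group on newer ones.
    algo.workers.foreach_worker(
        lambda worker: worker.foreach_env(set_batch_flag)
    )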
2 votes · 0 answers · 111 views
Ray custom environment render
I'm creating my own gym environment to test the freeze-tag problem. I'm trying to use Ray to do MAPPO. I have two problems:
1: My simulation is not rendering
2: It's creating multiple PyGame windows
I'...
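A sketch of one common remedy for the second problem (an assumption, not from the question): each remote env runner constructs its own copy of the environment, so each one opens a PyGame window; sampling only in the local process avoids that while debugging rendering.

    from ray.rllib.algorithms.ppo import PPOConfig

    config = (
        PPOConfig()
        .environment("freeze_tag_env")    # hypothetical registered env id
        .env_runners(num_env_runners=0)   # sample in the driver process only
    )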
0 votes · 1 answer · 132 views
Custom MLPPolicy issues in Ray RLlib
I'm trying to create a custom MLP-based policy in Ray RLlib using the code below:
Python: 3.10
Ray RLlib version: 2.23
    class CustomMLPModel(TorchModelV2, nn.Module):
        def __init__(self, obs_space, ...
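For reference, a minimal, generic TorchModelV2 skeleton (not the asker's code) in the old model API that RLlib 2.23 supports; the layer sizes are arbitrary:

    import torch.nn as nn
    from ray.rllib.models.torch.torch_modelv2 import TorchModelV2

    class CustomMLPModel(TorchModelV2, nn.Module):
        def __init__(self, obs_space, action_space, num_outputs, model_config, name):
            TorchModelV2.__init__(self, obs_space, action_space, num_outputs,
                                  model_config, name)
            nn.Module.__init__(self)
            hidden = 128
            self.body = nn.Sequential(nn.Linear(int(obs_space.shape[0]), hidden),
                                      nn.ReLU())
            self.policy_head = nn.Linear(hidden, num_outputs)
            self.value_head = nn.Linear(hidden, 1)
            self._features = None

        def forward(self, input_dict, state, seq_lens):
            self._features = self.body(input_dict["obs"].float())
            return self.policy_head(self._features), state

        def value_function(self):
            return self.value_head(self._features).squeeze(-1)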
1 vote · 0 answers · 62 views
AttributeError: Can't get attribute 'CustomActionMaskedEnvironment.observation_space' in RLlib with PettingZoo environment
I have this very basic custom parallel multi-agent environment written in PettingZoo.
    import functools
    import random
    from copy import copy
    import numpy as np
    from gymnasium.spaces import Discrete, ...
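A sketch of one frequently reported cause (an assumption, not confirmed by the excerpt): the PettingZoo tutorial wraps the space methods in functools.lru_cache, which can break pickling when Ray ships the env to workers; plain methods avoid that.

    from gymnasium.spaces import Discrete
    from pettingzoo import ParallelEnv

    class CustomActionMaskedEnvironment(ParallelEnv):
        def observation_space(self, agent):
            return Discrete(4)  # plain method, no @functools.lru_cache

        def action_space(self, agent):
            return Discrete(3)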
0 votes · 0 answers · 30 views
Recurrent NN layer initialization problem
I am having problems initializing the LSTM layers for a PPO+LSTM in RLlib.
The expected inputs differ from what I provide, and I do not understand why. Here is my code:
    class CustomTorchModel(...
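A generic shape sketch (an assumption, since the asker's code is truncated): RLlib's recurrent model hooks hand the LSTM a batch of shape [B, T, obs_dim], and the hidden/cell states must have shape [num_layers, B, hidden].

    import torch
    import torch.nn as nn

    obs_dim, hidden, B, T = 8, 64, 32, 20
    lstm = nn.LSTM(obs_dim, hidden, batch_first=True)

    x = torch.zeros(B, T, obs_dim)   # [batch, time, features]
    h0 = torch.zeros(1, B, hidden)   # [num_layers, batch, hidden]
    c0 = torch.zeros(1, B, hidden)
    out, (h, c) = lstm(x, (h0, c0))  # out: [B, T, hidden]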
1 vote · 0 answers · 49 views
Does the GTrXL model from RLlib support dict/tuple observations?
I'm trying to use the AttentionNet GTrXL from RLlib with a dictionary/tuple Gym input. I found this example of complex inputs: Complex input nets. Now I'm not sure how to combine the two properly.
I would ...
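A sketch of one possible workaround (an assumption, not RLlib's documented answer): flatten the Dict/Tuple observation at the environment boundary so the attention net only ever sees a single Box.

    import gymnasium as gym
    from gymnasium.wrappers import FlattenObservation

    def make_env(_):
        env = gym.make("MyDictObsEnv-v0")  # hypothetical env id
        return FlattenObservation(env)     # Dict/Tuple obs -> flat Box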
0 votes · 0 answers · 326 views
AttributeError: 'NoneType' object has no attribute 'cuda'
I'm encountering an AttributeError when trying to run a PPO trainer on an OR-GYM environment for inventory management using Ray RLlib and PyTorch in a CPU-only setup. Despite explicitly setting ...
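A minimal sketch of a fully CPU-pinned config (the env id is a placeholder and the settings are assumptions, since the question's own config is elided):

    from ray.rllib.algorithms.ppo import PPOConfig

    config = (
        PPOConfig()
        .environment("orgym_inventory_env")  # hypothetical registered env id
        .framework("torch")
        .resources(num_gpus=0)               # keep the trainer off the GPU
    )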
1 vote · 0 answers · 51 views
Loading a pickle file on Mac after writing it on Linux causes issues
I am using Ray RLlib to save and load checkpoints, with the same version on both Mac and Linux. I want to train on Linux and run inference on my Mac, but I am getting the following error:
...