reinforcement-learning
Here are 7,788 public repositories matching this topic...
Bidirectional RNN
Is there currently a way to train a bidirectional RNN (such as an LSTM or GRU) in trax?
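Whether trax ships a bidirectional combinator depends on the version, but the bidirectional pattern itself is library-independent: run one recurrent pass forward, one over the reversed sequence, re-align the backward outputs in time, and concatenate. A NumPy sketch of that idea (all names here are illustrative, not trax API):

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, b):
    """Run a simple tanh RNN over the sequence; return the hidden state at each step."""
    h = np.zeros(Wh.shape[0])
    hs = []
    for x in xs:
        h = np.tanh(x @ Wx + h @ Wh + b)
        hs.append(h)
    return np.stack(hs)

def bidirectional(xs, fwd_params, bwd_params):
    """Concatenate a forward pass with a time-reversed backward pass."""
    hs_fwd = rnn_pass(xs, *fwd_params)
    # Backward pass: reverse the input, run the RNN, then reverse the
    # outputs again so step t of both passes refers to the same time step.
    hs_bwd = rnn_pass(xs[::-1], *bwd_params)[::-1]
    return np.concatenate([hs_fwd, hs_bwd], axis=-1)

# Example: sequence length 5, input dim 3, hidden size 4 per direction
rng = np.random.default_rng(0)
xs = rng.normal(size=(5, 3))
fwd = (rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), np.zeros(4))
bwd = (rng.normal(size=(3, 4)), rng.normal(size=(4, 4)), np.zeros(4))
out = bidirectional(xs, fwd, bwd)  # shape (5, 8): forward and backward features
```

In a framework, the same effect is usually achieved by branching the input into two directional layers and concatenating their outputs.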
The following applies to DDPG and TD3, and possibly to other models. These libraries were installed in a virtual environment:
numpy==1.16.4
stable-baselines==2.10.0
gym==0.14.0
tensorflow==1.14.0
Episode rewards do not seem to be updated in model.learn() before callback.on_step() is called. Depending on which callback.locals variable is used, this means that:
- episode rewards may n
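The reported ordering can be illustrated with a toy training loop. Nothing below is the stable-baselines implementation; Callback and train are hypothetical names that mimic the described behavior, where the callback fires before the current step's reward is accumulated and therefore sees a stale episode total:

```python
class Callback:
    """Records the episode reward visible in locals at each step."""
    def __init__(self):
        self.seen = []

    def on_step(self, locals_):
        self.seen.append(locals_["episode_reward"])

def train(rewards, callback):
    """Toy loop mirroring the reported order: callback first, accumulation second."""
    episode_reward = 0.0
    for r in rewards:
        # The callback is invoked BEFORE the new reward is added,
        # so it observes the running total from the previous step.
        callback.on_step({"episode_reward": episode_reward})
        episode_reward += r
    return episode_reward

cb = Callback()
total = train([1.0, 2.0, 3.0], cb)
# cb.seen lags the true running total by one step: [0.0, 1.0, 3.0]
```

If the accumulation happened before the callback, cb.seen would instead read [1.0, 3.0, 6.0], which is the behavior one would naively expect from callback.locals.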
Ray Component: Ray Clusters
What happened + What you expected to happen
I was trying to launch a Ray cluster on GCP from my macOS machine. When I disabled the docker field and used the setup_commands field to set up the new node, everything went well. However, when
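For reference, a minimal sketch of the kind of cluster config being described, with the docker section disabled and setup_commands used instead. Field names follow Ray's autoscaler YAML; all values are placeholders, not taken from the report:

```yaml
# Sketch only: docker section commented out, node setup via setup_commands.
cluster_name: my-gcp-cluster   # placeholder
provider:
  type: gcp
  region: us-west1             # placeholder
  project_id: my-project       # placeholder
# docker:                      # disabled, as described above
#   image: rayproject/ray:latest
#   container_name: ray_container
setup_commands:
  - pip install -U ray
```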