JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.
🤖 Elegant implementations of offline safe RL algorithms in PyTorch
🚀 A fast safe reinforcement learning library in PyTorch
🔥 Datasets and env wrappers for offline safe reinforcement learning
PyTorch implementation of constrained reinforcement learning for the Soft Actor-Critic algorithm
A Survey Analyzing Generalization in Deep Reinforcement Learning
A Multiplicative Value Function for Safe and Efficient Reinforcement Learning. IROS 2023.
Official code repository for the POLICEd-RL paper: https://arxiv.org/abs/2403.13297
Correct-by-synthesis reinforcement learning with temporal logic constraints (CoRL)
Poster about Curriculum Induction for Safe Reinforcement Learning
Blog Post about Curriculum Induction for Safe Reinforcement Learning
Authors' implementation of the DSUP(q) algorithms from the NeurIPS 2024 paper "Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning"