Offline reinforcement learning (RL) provides a compelling paradigm for training autonomous systems without the risks of online exploration, particularly in safety-critical domains. However, jointly achieving strong safety and performance from fixed datasets remains challenging. Existing safe offline RL methods often rely on soft constraints that allow violations, introduce excessive conservatism, or struggle to balance safety, reward optimization, and adherence to the data distribution. To address this, we propose Epigraph-Guided Flow Matching (EpiFlow), a framework that formulates safe offline RL as a state-constrained optimal control problem to co-optimize safety and performance. We learn a feasibility value function derived from an epigraph reformulation of the optimal control problem, thereby avoiding the decoupled objectives or post-hoc filtering common in prior work. Policies are synthesized by reweighting the behavior distribution based on this epigraph value function and fitting a generative policy via flow matching, enabling efficient, distribution-consistent sampling. Across various safety-critical tasks, including Safety-Gymnasium benchmarks, EpiFlow achieves competitive returns with near-zero empirical safety violations, demonstrating the effectiveness of epigraph-guided policy synthesis.
EpiFlow Framework. We solve the state-constrained offline RL problem by learning an auxiliary value function, $\hat{V}(x,z)$, from an offline dataset. The problem is recast in an epigraph form, so that keeping $\hat{V}$ positive certifies that the system stays safe while maximizing the objective. Finally, we learn a policy via flow matching that maximizes $\hat{V}$ at every time step, recovering the most performant safe path that the offline data supports.
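To make the epigraph idea concrete, a standard epigraph reformulation of a state-constrained problem can be written as follows; the notation ($h$ for the constraint function with $h \le 0$ meaning safe, $J^{\pi}$ for the return, $z$ for the auxiliary variable) is assumed for illustration and is not taken verbatim from the paper:

```latex
% Original problem: maximize return subject to staying safe at all times
%   max_pi  J^pi(x)   s.t.   h(x_t) <= 0  for all t
% Epigraph form: z lower-bounds the achievable safe return
\max_{z} \; z
\quad \text{s.t.} \quad \hat{V}(x, z) \ge 0,
\qquad
\hat{V}(x, z) \;=\; \max_{\pi} \, \min\!\Big( \min_{t} \, -h(x_t), \;\; J^{\pi}(x) - z \Big)
```

Under this sign convention, $\hat{V}(x,z) \ge 0$ holds exactly when some policy both keeps the state safe ($-h(x_t) \ge 0$ throughout) and attains return at least $z$, which matches the statement above that keeping $\hat{V}$ positive certifies safety while maximizing the objective.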
EpiFlow addresses the challenge of co-optimizing safety and performance in offline RL through three key components: (i) a feasibility value function $\hat{V}(x,z)$ learned from the offline dataset, (ii) an epigraph reformulation of the state-constrained optimal control problem that couples safety and return in a single objective, and (iii) a generative policy fit via flow matching on the behavior distribution reweighted by $\hat{V}$.
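The reweighting-plus-flow-matching step can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the value function `V_hat`, the temperature `tau`, and the synthetic dataset are all assumptions, and a real system would train a neural velocity-field model against the weighted loss shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Offline dataset: states and the actions the behavior policy took.
states = rng.normal(size=(256, 3))
actions = rng.normal(size=(256, 2))

def V_hat(s):
    """Stand-in for the learned epigraph value function (assumption)."""
    return -np.linalg.norm(s, axis=-1)  # e.g., higher value near the origin

# 1) Reweight the behavior distribution by the epigraph value.
tau = 1.0
w = np.exp(V_hat(states) / tau)
w /= w.sum()

# 2) Build conditional flow-matching regression targets: interpolate
#    from base noise x0 to the dataset action; the target velocity
#    along the linear path is (action - x0).
t = rng.uniform(size=(256, 1))
x0 = rng.normal(size=actions.shape)   # base noise sample
x_t = (1 - t) * x0 + t * actions      # point on the interpolation path
target_velocity = actions - x0        # conditional velocity target at x_t

def weighted_fm_loss(pred_velocity):
    """Per-sample squared error on the velocity field, weighted by w."""
    per_sample = ((pred_velocity - target_velocity) ** 2).sum(axis=-1)
    return (w * per_sample).sum()

# A model that predicts the exact target velocity attains zero loss.
print(weighted_fm_loss(target_velocity))  # 0.0
```

Because high-value transitions carry larger weights `w`, the fitted velocity field concentrates probability mass on safe, high-return actions while still sampling within the support of the data.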
Evaluation Results. Points towards the left (←) are safer, and those towards the top (↑) achieve higher rewards. Results are averaged over 500 episodes and 5 seeds. EpiFlow consistently attains high safety rates while maintaining strong performance across all domains, from the low-dimensional Boat Navigation task to the high-dimensional MuJoCo Safe Velocity and Safe Car Navigation tasks.
Boat Environment Analysis. (Top): Trajectories for all methods rolled out from two distinct initial states, where the circles mark obstacles the agent must avoid while reaching the goal. (Bottom): Learned value functions and policy comparisons showing EpiFlow's safe and performant navigation.
@article{tayal2026epiflow,
title={Epigraph-Guided Flow Matching for Safe and Performant Offline Reinforcement Learning},
author={Manan Tayal and Mumuksh Tayal},
journal={arXiv preprint arXiv:2602.08054},
year={2026}
}