Epigraph-Guided Flow Matching for Safe and Performant Offline Reinforcement Learning

Tau Intelligence

Abstract

Offline reinforcement learning (RL) provides a compelling paradigm for training autonomous systems without the risks of online exploration, particularly in safety-critical domains. However, jointly achieving strong safety and performance from fixed datasets remains challenging. Existing safe offline RL methods often rely on soft constraints that allow violations, introduce excessive conservatism, or struggle to balance safety, reward optimization, and adherence to the data distribution. To address this, we propose Epigraph-Guided Flow Matching (EpiFlow), a framework that formulates safe offline RL as a state-constrained optimal control problem to co-optimize safety and performance. We learn a feasibility value function derived from an epigraph reformulation of the optimal control problem, thereby avoiding the decoupled objectives or post-hoc filtering common in prior work. Policies are synthesized by reweighting the behavior distribution based on this epigraph value function and fitting a generative policy via flow matching, enabling efficient, distribution-consistent sampling. Across various safety-critical tasks, including Safety-Gymnasium benchmarks, EpiFlow achieves competitive returns with near-zero empirical safety violations, demonstrating the effectiveness of epigraph-guided policy synthesis.

Framework

EpiFlow Framework

EpiFlow Framework. We propose a framework for the state-constrained offline RL problem that learns an auxiliary value function, $\hat{V}(x,z)$, from an offline dataset. We derive an epigraph reformulation of the problem whose solution is $\hat{V}$; keeping $\hat{V}$ non-negative guarantees that the system stays safe while maximizing the objective. Finally, we learn a policy via flow matching that maximizes $\hat{V}$ at every time step, recovering the most performant path the offline data supports.

Method Overview

EpiFlow addresses the challenge of co-optimizing safety and performance in offline RL through three key components:

  1. Epigraph Reformulation: We reformulate the state-constrained value function as a two-stage optimization. The original value function $V(x)$ is characterized as the largest performance threshold feasible under safety: $$V(x) = \sup\{z \in \mathbb{R} \mid \hat{V}(x,z) \geq 0\}$$ where the auxiliary epigraph value function $\hat{V}(x,z)$ evaluates whether a given performance threshold $z$ is jointly feasible with safety.
  2. Data-Driven Bellman Recursion: The auxiliary value function satisfies a Bellman-style recursion: $$\hat{V}(x_t, z_t) = \min\{\ell(x_t),\; \gamma\, \hat{V}(x_{t+1}, z_{t+1})\}$$ which we learn from offline transitions using expectile regression to approximate the implicit policy maximization without querying out-of-distribution actions.
  3. Epigraph-Guided Flow Matching: Policies are synthesized via weighted Flow Matching, where the epigraph-guided weights $w(x,a) = \exp(\alpha \hat{A}(x,a; z^\star(x)))$ reweight the behavior distribution, and a learned vector field generates safe, high-performing actions via a single ODE integration at inference.
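The epigraph reformulation in step 1 can be illustrated numerically: since $\hat{V}(x,z)$ is nonincreasing in the threshold $z$, the value $V(x) = \sup\{z \mid \hat{V}(x,z) \geq 0\}$ can be recovered by bisection. The sketch below is our own illustration, not the paper's implementation; the function `v_hat`, the bracket bounds, and the iteration count are assumptions:

```python
def recover_z_star(v_hat, x, z_lo, z_hi, iters=50):
    """Bisection for sup{z : v_hat(x, z) >= 0}.

    Assumes v_hat is nonincreasing in z and that the bracket
    [z_lo, z_hi] satisfies v_hat(x, z_lo) >= 0 > v_hat(x, z_hi).
    """
    for _ in range(iters):
        mid = 0.5 * (z_lo + z_hi)
        if v_hat(x, mid) >= 0:
            z_lo = mid  # mid is still feasible; move the lower bound up
        else:
            z_hi = mid  # mid is infeasible; move the upper bound down
    return z_lo
```

For example, a toy auxiliary value `v_hat = lambda x, z: 1.0 - z` yields $z^\star(x) \approx 1.0$ regardless of $x$.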
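The Bellman backup in step 2 and its expectile-regression surrogate can be sketched as follows. This is a minimal illustration of the two ingredients, not the paper's training code; `l_x` denotes $\ell(x_t)$ evaluated on a batch, and the expectile parameter value is an assumption:

```python
import numpy as np

def epigraph_target(l_x, v_next, gamma=0.99):
    # Bellman-style backup: V_hat(x_t, z_t) = min( l(x_t), gamma * V_hat(x_{t+1}, z_{t+1}) )
    return np.minimum(l_x, gamma * v_next)

def expectile_loss(diff, tau=0.9):
    # Asymmetric L2 loss: positive residuals weighted by tau, negative by 1 - tau.
    # As tau -> 1 this approximates the implicit maximization over
    # in-distribution actions without querying out-of-distribution ones.
    weight = np.where(diff > 0, tau, 1.0 - tau)
    return float(np.mean(weight * diff ** 2))
```

For instance, with $\ell(x_t) = 1$ and $\hat{V}(x_{t+1}, z_{t+1}) = 2$ the target is $\min(1, 0.99 \cdot 2) = 1$, i.e., the safety margin binds.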
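The weighted flow matching objective in step 3 can be sketched with linear (rectified-flow style) probability paths. Everything below is an illustrative sketch under our own assumptions: the weight clipping, the default $\alpha$, the linear path choice, and reading "a single ODE integration" as one Euler step are not specified by the source:

```python
import numpy as np

rng = np.random.default_rng(0)

def epigraph_weights(adv, alpha=3.0, w_max=100.0):
    # w(x, a) = exp(alpha * A_hat(x, a; z*(x))); the clip to w_max is our
    # addition for numerical stability, not part of the stated objective.
    return np.minimum(np.exp(alpha * np.asarray(adv)), w_max)

def weighted_fm_loss(actions, weights, v_field):
    # Conditional flow matching with linear paths a_t = (1 - t) * eps + t * a,
    # target velocity u = a - eps, reweighted by the epigraph-guided weights.
    eps = rng.standard_normal(actions.shape)
    t = rng.uniform(size=(actions.shape[0], 1))
    a_t = (1.0 - t) * eps + t * actions
    u = actions - eps
    err = v_field(a_t, t) - u
    return float(np.mean(weights * np.sum(err ** 2, axis=-1)))

def sample_action(v_field, dim):
    # One Euler step of the learned field from noise at t = 0 to t = 1.
    a = rng.standard_normal(dim)
    return a + v_field(a[None, :], np.zeros((1, 1)))[0]
```

Transitions with higher epigraph-guided advantage receive exponentially larger weight, so the learned vector field is pulled toward the safe, high-performing portion of the behavior distribution.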

Results

Evaluation Results

Evaluation Results. Points toward the left (←) are safer, and points toward the top (↑) achieve higher rewards. Results are averaged over 500 episodes and 5 seeds. EpiFlow consistently attains high safety rates while maintaining strong performance across all domains, from the low-dimensional Boat Navigation task to the high-dimensional MuJoCo Safe Velocity and Safe Car Navigation tasks.

Boat Environment Analysis

Boat Environment Analysis

Boat Environment Analysis. (Top): Trajectories for all methods rolled out from 2 distinct initial states, where the circles are obstacles the agent must avoid while reaching the goal. (Bottom): Learned value functions and policy comparisons showing EpiFlow's safe and performant navigation.

Key Contributions

  • We propose Epigraph-guided Flow Matching (EpiFlow), a framework which casts safe offline RL as a state-constrained optimal control problem, providing a principled path to co-optimize performance and hard safety constraints.
  • We derive a Bellman-style recursion for an auxiliary epigraph value function, enabling the characterization of safe performance envelopes directly from offline data without solving intractable HJB-PDEs.
  • We propose a weighted Flow Matching objective that guides generative policy learning toward safe, high-performing regions of the data distribution, ensuring efficient and distribution-consistent sampling in continuous action spaces.
  • We demonstrate the effectiveness of EpiFlow across a wide range of safety-critical benchmarks, including high-dimensional Safety-Gymnasium tasks, achieving near-zero safety violations while matching the task returns of state-of-the-art safe offline RL methods.

BibTeX


  @article{tayal2026epiflow,
    title={Epigraph-Guided Flow Matching for Safe and Performant Offline Reinforcement Learning},
    author={Manan Tayal and Mumuksh Tayal},
    journal={arXiv preprint arXiv:2602.08054},
    year={2026}
  }