This event is part of Theoretical Advances in Reinforcement Learning and Control

Reinforcement Learning from Offline Data and Human Feedback

April 20–24, 2026

Description


Reinforcement Learning (RL) has seen remarkable progress in recent years, yet many of its most impressive achievements rely on extensive online interaction, curated environments, or simulated data—conditions rarely available in real-world settings. In contrast, real-world decision-making often depends on learning from limited, imperfect, or passively collected data, alongside guidance from human preferences, demonstrations, or corrections.

This workshop brings together researchers and practitioners exploring the frontiers of Offline Reinforcement Learning (Offline RL) and Reinforcement Learning from Human Feedback (RLHF)—two rapidly growing areas that aim to make RL more robust, safe, and deployable in practice.

Organizers

Cong Ma, University of Chicago
Yuxin Chen, University of Pennsylvania (The Wharton School)

Registration


IMSI is committed to making all of our programs and events inclusive and accessible. Contact [email protected] to request disability-related accommodations.

To register for this workshop, you must have an IMSI account and be logged in. Please log in or create an account to complete registration.