Wallis Lab
2. Interactions between model-based and model-free reinforcement learning systems

Lead Investigator: Celia Ford

Potential treatments: Depression, addiction, schizophrenia, Parkinson's disease

Reinforcement learning (RL) is a computational framework that models how agents learn to select actions that maximize reward over time. RL is typically thought of as comprising two distinct systems: model-free RL (MF-RL) and model-based RL (MB-RL). MF-RL relies on trial-and-error to slowly, exhaustively, and inflexibly learn value information over time. MB-RL, by contrast, builds and stores a model of the environment, allowing for goal-directed value prediction. These systems trade flexibility against efficiency: MB-RL is flexible and fast to adapt (but capacity-limited and computationally expensive), while MF-RL is computationally cheap and robust (but inflexible and slow to update).
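
This distinction can be made concrete with a toy sketch. The snippet below, a minimal illustration rather than the lab's actual model, contrasts a model-free Q-learning update (caching values directly from trial and error) with model-based planning (value iteration over a learned world model). The task dimensions, learning rate, and discount factor are hypothetical.

```python
# Minimal sketch contrasting MF-RL and MB-RL on a small discrete task.
# All parameters (state/action counts, alpha, gamma) are illustrative.
import numpy as np

n_states, n_actions = 4, 2
alpha, gamma = 0.1, 0.95          # learning rate, discount factor

# --- Model-free (Q-learning): cache values directly from experience ---
Q_mf = np.zeros((n_states, n_actions))

def mf_update(s, a, r, s_next):
    """Trial-and-error update: nudge the cached value toward the reward
    prediction, with no knowledge of the task's transition structure."""
    td_error = r + gamma * Q_mf[s_next].max() - Q_mf[s, a]
    Q_mf[s, a] += alpha * td_error

# --- Model-based: learn a world model, then plan over it ---
T = np.ones((n_states, n_actions, n_states)) / n_states  # transition estimates
R = np.zeros((n_states, n_actions))                      # reward estimates

def mb_values(n_sweeps=50):
    """Goal-directed evaluation: derive values by iterating over the
    learned model (value iteration), so predictions change as soon as
    the model does, at the cost of extra computation per decision."""
    V = np.zeros(n_states)
    for _ in range(n_sweeps):
        V = (R + gamma * T @ V).max(axis=1)
    return R + gamma * T @ V  # state-action values under the model
```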

These systems interact during learning to provide the flexibility that characterizes human behavior. However, the neural mechanisms driving this interaction are poorly understood. Here, we investigate how the brain implements both learning processes simultaneously, and what determines the relative contribution of each system during decision-making. We use behavioral, electrophysiological, and computational methods to isolate the relative contributions of MB-RL and MF-RL, and we aim to delineate the neural systems driving each process.
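
One common way to quantify the relative contribution of each system is a weighting parameter that mixes model-based and model-free values before a softmax choice rule, as in standard hybrid-RL analyses of behavior. The sketch below illustrates this idea; the weight w, inverse temperature beta, and example values are purely illustrative, not fitted parameters from this project.

```python
# Hedged sketch of arbitration between systems: a w-weighted blend of
# model-based and model-free action values passed through a softmax.
import numpy as np

def choice_probs(q_mb, q_mf, w=0.6, beta=3.0):
    """P(action) from mixed values. Fitting w to a subject's choices
    indexes how model-based (w -> 1) or model-free (w -> 0) they are."""
    q_net = w * q_mb + (1.0 - w) * q_mf
    z = beta * (q_net - q_net.max())       # subtract max for stability
    p = np.exp(z)
    return p / p.sum()

# Example: two actions valued differently by the two systems.
print(choice_probs(np.array([1.0, 0.2]), np.array([0.3, 0.8])))
```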