Simple Sequence Priors for Reinforcement Learning

Everything else being equal, simpler models should be preferred over more complex ones. In reinforcement learning (RL), simplicity is typically quantified on an action-by-action basis -- but this timescale ignores temporal regularities, like repetitions, often present in sequential strategies. We therefore propose an RL algorithm that learns to solve tasks with sequences of actions that are compressible. We explore two possible sources of simple action sequences: sequences that can be learned by autoregressive models, and sequences that are compressible with off-the-shelf data compression algorithms. Distilling these preferences into sequence priors, we derive a novel information-theoretic objective that incentivizes agents to learn policies that maximize rewards while conforming to these priors. We show that the resulting RL algorithm leads to faster learning and attains higher returns than state-of-the-art model-free approaches in a series of continuous control tasks from the DeepMind Control Suite. These priors also produce a powerful information-regularized agent that is robust to noisy observations and can perform open-loop control.
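As a concrete illustration of the second source of simple sequences, the sketch below scores an action sequence by how well it compresses with a general-purpose compressor. This is not the paper's actual objective (which regularizes the policy toward a sequence prior within a soft actor-critic update); it is a minimal, hypothetical example that uses Python's standard-library zlib as a stand-in for LZ4, and the function names, bin count, and coefficient beta are all illustrative assumptions. Repetitive action sequences yield shorter codes and therefore receive a larger simplicity bonus.

import zlib
import numpy as np

def compressed_length(actions: np.ndarray, n_bins: int = 16) -> int:
    """Rough compressibility measure of an action sequence.

    Continuous actions (assumed to lie in [-1, 1]) are discretized into
    n_bins symbols per dimension, serialized to bytes, and compressed with
    zlib (used here only as a stand-in for an off-the-shelf compressor such
    as LZ4). A shorter output means the sequence is more regular.
    """
    bins = np.linspace(-1.0, 1.0, n_bins + 1)
    symbols = np.digitize(actions, bins).astype(np.uint8)
    return len(zlib.compress(symbols.tobytes(), level=9))

def simplicity_bonus(actions: np.ndarray, beta: float = 0.1) -> float:
    """Hypothetical per-sequence bonus rewarding short code lengths.

    The bonus is the negative compressed length in bits, normalized by the
    sequence length and scaled by an illustrative coefficient beta.
    """
    bits = 8 * compressed_length(actions)
    return -beta * bits / len(actions)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, action_dim = 100, 6
    random_seq = rng.uniform(-1, 1, size=(T, action_dim))
    repetitive_seq = np.tile(rng.uniform(-1, 1, size=(4, action_dim)), (T // 4, 1))
    # The repetitive sequence compresses better and earns a larger bonus.
    print(simplicity_bonus(random_seq), simplicity_bonus(repetitive_seq))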

[Figure: Learning curves on the Cheetah run, Walker run, and Quadruped walk tasks, comparing SAC (Soft Actor Critic) against our agents LZ-SAC (LZ Soft Actor Critic, the LZ4 agent) and SPAC (Soft Predictable Actor Critic, the Transformer agent).]