Deep Q-Networks: From Tables to Neural Function Approximators
Why Deep Q-Networks?
The posts so far have been honest about a hidden assumption: the state space is small enough to fit in a table. On a 4×4 grid with 16 integer states, Q-tables are the right tool. But the real problems that made reinforcement learning famous — Atari games, robotic locomotion, navigation in continuous space — have state spaces that are either continuous or so vast that enumeration is physically impossible. A Q-table for a 210×160 pixel Atari screen (the native ALE resolution) would need a row for every distinct pixel configuration: on the order of 256^(210×160) ≈ 10^80,000 rows, assuming 256 values per pixel. Even the 84×84 grayscale preprocessed frames that DQN actually operates on admit roughly 256^(84×84) ≈ 10^17,000 distinct entries. The table approach is not merely inefficient; it is categorically ruled out.
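The back-of-the-envelope count above can be checked in a few lines. This sketch assumes 256 intensity values per pixel (an illustrative upper bound) and reports the base-10 logarithm of the number of distinct frames, since the raw numbers are far too large to represent directly:

```python
import math

def log10_num_states(height: int, width: int, intensities: int = 256) -> float:
    """Base-10 logarithm of the number of distinct frames of the given size,
    assuming `intensities` possible values per pixel."""
    return height * width * math.log10(intensities)

# Native ALE resolution: ~10^80,917 distinct frames.
print(f"210x160 frame: ~10^{log10_num_states(210, 160):,.0f} states")

# DQN's preprocessed 84x84 grayscale input: ~10^16,993 distinct frames.
print(f"84x84 frame:   ~10^{log10_num_states(84, 84):,.0f} states")
```

Even the smaller exponent dwarfs the estimated number of atoms in the observable universe (~10^80), which is why any enumeration-based approach is ruled out before efficiency is even discussed.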