The notation I use comes from two different David Silver lectures and is also informed by these slides.
The expected Bellman equation is

$$
v_{\pi}(s) = \sum_{a} \pi(a \mid s) \left( R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, v_{\pi}(s') \right).
$$

If we define the policy-averaged reward and transition functions,

$$
R^{\pi}(s) = \sum_{a} \pi(a \mid s)\, R(s, a),
\qquad
P^{\pi}(s' \mid s) = \sum_{a} \pi(a \mid s)\, P(s' \mid s, a),
$$

then we can rewrite it as

$$
v_{\pi}(s) = R^{\pi}(s) + \gamma \sum_{s'} P^{\pi}(s' \mid s)\, v_{\pi}(s').
$$

This can be written in matrix form,

$$
\begin{bmatrix} v_{\pi}(1) \\ \vdots \\ v_{\pi}(n) \end{bmatrix}
=
\begin{bmatrix} R^{\pi}(1) \\ \vdots \\ R^{\pi}(n) \end{bmatrix}
+ \gamma
\begin{bmatrix}
P^{\pi}(1 \mid 1) & \cdots & P^{\pi}(n \mid 1) \\
\vdots & \ddots & \vdots \\
P^{\pi}(1 \mid n) & \cdots & P^{\pi}(n \mid n)
\end{bmatrix}
\begin{bmatrix} v_{\pi}(1) \\ \vdots \\ v_{\pi}(n) \end{bmatrix}.
$$

Or, more compactly,

$$
v_{\pi} = R^{\pi} + \gamma P^{\pi} v_{\pi}.
$$
Notice that both sides of this equation are $n$-dimensional vectors, where $n = |\mathcal{S}|$ is the size of the state space. We can then define an operator $T^{\pi}: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ as

$$
T^{\pi}(v) = R^{\pi} + \gamma P^{\pi} v
$$

for any $v \in \mathbb{R}^{n}$. This is the expected Bellman operator.
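As a quick sanity check, here is a small NumPy sketch of the expected Bellman operator on a made-up two-state MDP (the rewards, transition probabilities, and discount factor are all illustrative numbers, not from the text):

```python
import numpy as np

# A made-up two-state MDP evaluated under some fixed policy pi.
# R_pi[s]     : expected one-step reward in state s under pi.
# P_pi[s, s'] : probability of moving s -> s' under pi.
R_pi = np.array([1.0, 0.5])
P_pi = np.array([[0.7, 0.3],
                 [0.4, 0.6]])
gamma = 0.9

def T_pi(v):
    """Expected Bellman operator: T^pi(v) = R^pi + gamma * P^pi v."""
    return R_pi + gamma * P_pi @ v

# Solving v = R^pi + gamma P^pi v directly gives the fixed point:
# v_pi = (I - gamma P^pi)^{-1} R^pi.
v_pi = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
print(np.allclose(T_pi(v_pi), v_pi))  # v_pi is a fixed point of T^pi
```

Because the expected Bellman equation is linear in $v_{\pi}$, the fixed point can be computed in closed form here; the operator view below is what justifies finding it iteratively instead.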
Similarly, you can rewrite the Bellman optimality equation,

$$
v_{*}(s) = \max_{a} \left( R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, v_{*}(s') \right),
$$

as the Bellman optimality operator,

$$
T^{*}(v) = \max_{a} \left( R^{a} + \gamma P^{a} v \right),
$$

where $R^{a}$ and $P^{a}$ are the reward vector and transition matrix for action $a$, and the maximum is taken elementwise.
The Bellman operators are "operators" in that they are mappings from one point to another within the vector space of state values, $\mathbb{R}^{n}$.
Rewriting the Bellman equations as operators is useful for proving that certain dynamic programming algorithms (e.g. policy iteration, value iteration) converge to a unique fixed point. This usefulness comes in the form of a body of existing work in operator theory, which allows us to make use of special properties of the Bellman operators.
Specifically, the fact that the Bellman operators are contractions gives the useful results that, for any policy $\pi$ and any initial vector $v \in \mathbb{R}^{n}$,

$$
\lim_{k \rightarrow \infty} (T^{\pi})^{k} v = v_{\pi},
\qquad
\lim_{k \rightarrow \infty} (T^{*})^{k} v = v_{*},
$$

where $v_{\pi}$ is the value of policy $\pi$ and $v_{*}$ is the value of an optimal policy $\pi_{*}$. The proof follows from the contraction mapping theorem.
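These limits can be checked numerically. The sketch below (again with made-up rewards and transitions) applies the Bellman optimality operator repeatedly and verifies both the $\gamma$-contraction property in the sup norm and convergence to a fixed point:

```python
import numpy as np

# A made-up MDP with two states and two actions.
# R[a, s]     : reward for taking action a in state s.
# P[a, s, s'] : probability of s -> s' under action a.
R = np.array([[1.0, 0.5],
              [0.0, 2.0]])
P = np.array([[[0.7, 0.3],
               [0.4, 0.6]],
              [[0.1, 0.9],
               [0.8, 0.2]]])
gamma = 0.9

def T_star(v):
    """Bellman optimality operator: elementwise max over actions."""
    return np.max(R + gamma * P @ v, axis=0)

# gamma-contraction in the sup norm: T^* shrinks distances by >= gamma.
v1, v2 = np.array([5.0, -3.0]), np.array([0.0, 1.0])
assert np.max(np.abs(T_star(v1) - T_star(v2))) <= gamma * np.max(np.abs(v1 - v2))

# Value iteration: repeated application (T^*)^k v converges to v_*.
v = np.zeros(2)
for _ in range(500):
    v = T_star(v)
print(np.allclose(T_star(v), v))  # v is (numerically) the fixed point v_*
```

The loop is exactly value iteration: each pass moves the estimate at least a factor of $\gamma$ closer to $v_{*}$ in the sup norm, which is why the fixed point is reached to machine precision after a few hundred iterations.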