Hamilton–Jacobi–Bellman equation

The Hamilton–Jacobi–Bellman (HJB) equation is a partial differential equation which is central to optimal control theory.

The solution of the HJB equation is the 'value function', which gives the optimal cost-to-go for a given dynamical system with an associated cost function. Classical variational problems, for example the brachistochrone problem, can be solved using this method as well.

The equation is a result of the theory of dynamic programming, which was pioneered in the 1950s by Richard Bellman and coworkers. The corresponding discrete-time equation is usually referred to as the Bellman equation. In continuous time, the result can be seen as an extension of earlier work in classical physics on the Hamilton–Jacobi equation by William Rowan Hamilton and Carl Gustav Jacob Jacobi.

Consider the following problem in deterministic optimal control:

    \min_u \left\{ \int_0^T C[x(t),u(t)] \, dt + D[x(T)] \right\}

subject to

    \dot{x}(t) = F[x(t),u(t)],

where $x(t)$ is the system state, $x(0)$ is assumed given, and $u(t)$ for $0 \le t \le T$ is the control that we are trying to find; $C$ is the scalar running cost and $D$ is the scalar terminal cost. For this simple system, the Hamilton–Jacobi–Bellman partial differential equation is

    \dot{V}(x,t) + \min_u \left\{ \nabla V(x,t) \cdot F(x,u) + C(x,u) \right\} = 0,

subject to the terminal condition

    V(x,T) = D(x).

The unknown in the above PDE is the Bellman value function $V(x,t)$, that is, the cost incurred from starting in state $x$ at time $t$ and controlling the system optimally from then until time $T$. The HJB equation needs to be solved backwards in time, starting from $t = T$ and ending at $t = 0$. (The notation $a \cdot b$ means the inner product of the vectors $a$ and $b$.)
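
As a concrete illustration, here is a minimal worked example; the dynamics and costs below ($F(x,u) = u$, $C(x,u) = x^2 + u^2$, $D(x) = s_T x^2$) are chosen for simplicity and are not part of the general statement above:

    % Worked example: scalar linear-quadratic problem.
    % Quadratic ansatz: V(x,t) = s(t) x^2, so \dot{V} = \dot{s}(t) x^2 and \nabla V = 2 s(t) x.
    % The HJB equation becomes
    \dot{s}(t)\, x^2 + \min_u \left\{ 2 s(t)\, x\, u + x^2 + u^2 \right\} = 0.
    % The inner minimum is attained at u^* = -s(t) x, which yields the Riccati equation
    \dot{s}(t) = s(t)^2 - 1, \qquad s(T) = s_T.

The scalar Riccati equation is integrated backwards from $t = T$, exactly as the general theory prescribes, and the optimal feedback control $u^*(x,t) = -s(t)\,x$ comes out as a by-product.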

The HJB equation is a sufficient condition for an optimum. If we can solve for $V$, then we can find from it a control $u^*$ that achieves the minimum cost: at each state $x$ and time $t$, $u^*$ is a value of $u$ attaining the minimum inside the HJB equation.
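
To make the backward-in-time solution and the recovery of the control concrete, here is a small numerical sketch in Python for the linear-quadratic example above. The grid, step sizes, and the naive finite-difference scheme are illustrative choices, not a recommended solver; practical HJB solvers use monotone or upwind schemes.

    import numpy as np

    # Toy backward-in-time solve of the HJB equation for
    #   minimize  integral_0^T (x^2 + u^2) dt + x(T)^2,   dx/dt = u.
    # Minimizing over u pointwise gives u*(x,t) = -V_x(x,t)/2, so the PDE is
    #   V_t = V_x^2 / 4 - x^2,   with terminal condition V(x,T) = x^2.
    # With this terminal cost the exact solution happens to be stationary,
    # V(x,t) = x^2 and u*(x) = -x (s(T) = 1 is a fixed point of the Riccati
    # equation ds/dt = s^2 - 1), which makes the sketch easy to check.

    T, nt = 1.0, 1000
    xs = np.linspace(-2.0, 2.0, 201)
    dx = xs[1] - xs[0]
    dt = T / nt

    V = xs**2                                    # terminal condition V(x, T)
    for _ in range(nt):                          # march from t = T back to t = 0
        Vx = np.gradient(V, dx, edge_order=2)    # finite-difference estimate of V_x
        V = V - dt * (Vx**2 / 4.0 - xs**2)       # one explicit step backwards in time

    u_star = -np.gradient(V, dx, edge_order=2) / 2.0   # recovered feedback control at t = 0
    print(np.max(np.abs(u_star + xs)))                 # near zero: u*(x) = -x

The only problem-specific ingredients are the pointwise minimization over $u$ (done in closed form here) and the terminal condition; the rest is generic backward time-marching from $t = T$.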

The HJB method can be generalized to stochastic systems as well.
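
For reference, in the common diffusion setting (stated here in its standard textbook form, since the original text does not give it), with dynamics $dx = F(x,u)\,dt + \sigma(x,u)\,dW$ the value function satisfies a second-order HJB equation:

    \dot{V}(x,t) + \min_u \left\{ \nabla V(x,t) \cdot F(x,u)
        + \tfrac{1}{2} \operatorname{tr}\!\left[ \sigma(x,u)\,\sigma(x,u)^{\top} \nabla^2 V(x,t) \right]
        + C(x,u) \right\} = 0, \qquad V(x,T) = D(x).

The additional trace term arises from applying Itô's formula to $V(x(t),t)$.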

In the general case, the HJB equation does not have a classical (smooth) solution. Several notions of generalized solutions have been developed to cover such situations, including the viscosity solution (Pierre-Louis Lions and Michael Crandall), the minimax solution (Andrei Izmailovich Subbotin), and others.

References

  • R. E. Bellman. Dynamic Programming. Princeton University Press, Princeton, NJ, 1957.