Differential dynamic programming
Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne[1] and subsequently analysed in Jacobson and Mayne's eponymous book.[2] The algorithm uses locally quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.[3][4]
Finite-horizon discrete-time problems
The dynamics

- $\mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i, \mathbf{u}_i) \quad (1)$

describe the evolution of the state $\mathbf{x}$ given the control $\mathbf{u}$ from time $i$ to time $i+1$. The total cost $J_0$ is the sum of running costs $\ell$ and final cost $\ell_f$, incurred when starting from state $\mathbf{x}$ and applying the control sequence $\mathbf{U} \equiv \{\mathbf{u}_0, \mathbf{u}_1, \dots, \mathbf{u}_{N-1}\}$ until the horizon is reached:
- $J_0(\mathbf{x},\mathbf{U}) = \sum_{i=0}^{N-1} \ell(\mathbf{x}_i,\mathbf{u}_i) + \ell_f(\mathbf{x}_N),$

where $\mathbf{x}_0 \equiv \mathbf{x}$, and the $\mathbf{x}_i$ for $i > 0$ are given by Eq. 1. The solution of the optimal control problem is the minimizing control sequence $\mathbf{U}^*(\mathbf{x}) \equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x},\mathbf{U})$. Trajectory optimization means finding $\mathbf{U}^*(\mathbf{x})$ for a particular $\mathbf{x}_0$, rather than for all possible initial states.
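For concreteness, the sketch below evaluates $J_0$ for a given control sequence by rolling the controls out through the dynamics. The function names `f`, `running_cost` and `final_cost`, and the double-integrator example, are illustrative assumptions rather than part of the algorithm.

```python
import numpy as np

def rollout_cost(x0, U, f, running_cost, final_cost):
    """Roll out the controls U from x0 through the dynamics f and
    return the visited states and the total cost J_0."""
    x = np.asarray(x0, dtype=float)
    X = [x]
    J = 0.0
    for u in U:
        J += running_cost(x, u)      # running cost l(x_i, u_i)
        x = f(x, u)                  # x_{i+1} = f(x_i, u_i)   (Eq. 1)
        X.append(x)
    J += final_cost(x)               # final cost l_f(x_N)
    return X, J

# Illustrative example: a discretized double integrator with quadratic costs.
dt = 0.1
f = lambda x, u: np.array([x[0] + dt * x[1], x[1] + dt * u[0]])
running_cost = lambda x, u: 0.5 * (x @ x + 0.1 * u @ u)
final_cost = lambda x: 10.0 * (x @ x)
X, J0 = rollout_cost([1.0, 0.0], [np.zeros(1)] * 50, f, running_cost, final_cost)
```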
Dynamic programming
Let $\mathbf{U}_i$ be the partial control sequence $\mathbf{U}_i \equiv \{\mathbf{u}_i, \mathbf{u}_{i+1}, \dots, \mathbf{u}_{N-1}\}$ and define the cost-to-go $J_i$ as the partial sum of costs from $i$ to $N$:

- $J_i(\mathbf{x},\mathbf{U}_i) = \sum_{j=i}^{N-1} \ell(\mathbf{x}_j,\mathbf{u}_j) + \ell_f(\mathbf{x}_N).$

The optimal cost-to-go or value function at time $i$ is the cost-to-go given the minimizing control sequence:

- $V(\mathbf{x},i) \equiv \min_{\mathbf{U}_i} J_i(\mathbf{x},\mathbf{U}_i).$
Setting $V(\mathbf{x},N) \equiv \ell_f(\mathbf{x}_N)$, the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

- $V(\mathbf{x},i) = \min_{\mathbf{u}} \left[\,\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}), i+1)\,\right]. \quad (2)$

This is the Bellman equation.
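The recursion in Eq. 2 can be carried out exactly when the state and control spaces are finite. The sketch below is not DDP itself but the underlying dynamic-programming principle, written for a hypothetical tabular problem; all array names and shapes are assumptions for illustration.

```python
import numpy as np

def tabular_backward_pass(cost, next_state, final_cost):
    """Exact Bellman recursion for a finite problem.
    cost[i, s, a]   : running cost l at time i, state s, control a, shape (N, S, A)
    next_state[s, a]: successor state index f(s, a), shape (S, A)
    final_cost[s]   : terminal cost l_f, shape (S,)
    Returns the value table V of shape (N+1, S) and a greedy policy of shape (N, S)."""
    N, S, A = cost.shape
    V = np.zeros((N + 1, S))
    policy = np.zeros((N, S), dtype=int)
    V[N] = final_cost                           # V(x, N) = l_f(x_N)
    for i in range(N - 1, -1, -1):              # proceed backwards in time
        Q = cost[i] + V[i + 1][next_state]      # Q[s, a] = l(s, a) + V(f(s, a), i+1)
        policy[i] = np.argmin(Q, axis=1)        # minimizing control at each state
        V[i] = Q[np.arange(S), policy[i]]       # V(s, i) = min_a Q[s, a]
    return V, policy
```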
Differential dynamic programming
DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If
- $\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}), i+1)$

is the argument of the $\min[\cdot]$ operator in Eq. 2, let $Q$ be the variation of this quantity around the $i$-th $(\mathbf{x},\mathbf{u})$ pair:

- $\begin{aligned} Q(\delta\mathbf{x}, \delta\mathbf{u}) \equiv{} & \ell(\mathbf{x}+\delta\mathbf{x}, \mathbf{u}+\delta\mathbf{u}) + V(\mathbf{f}(\mathbf{x}+\delta\mathbf{x}, \mathbf{u}+\delta\mathbf{u}), i+1) \\ & - \ell(\mathbf{x},\mathbf{u}) - V(\mathbf{f}(\mathbf{x},\mathbf{u}), i+1) \end{aligned}$
and expand to second order

- $Q(\delta\mathbf{x}, \delta\mathbf{u}) \approx \frac{1}{2} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix}^{\mathsf{T}} \begin{bmatrix} 0 & Q_{\mathbf{x}}^{\mathsf{T}} & Q_{\mathbf{u}}^{\mathsf{T}} \\ Q_{\mathbf{x}} & Q_{\mathbf{x}\mathbf{x}} & Q_{\mathbf{x}\mathbf{u}} \\ Q_{\mathbf{u}} & Q_{\mathbf{u}\mathbf{x}} & Q_{\mathbf{u}\mathbf{u}} \end{bmatrix} \begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix}. \quad (3)$
The $Q$ notation used here is a variant of the notation of Morimoto, where subscripts denote differentiation in denominator layout.[5] Dropping the index $i$ for readability, and using primes to denote the next time-step, $V' \equiv V(i+1)$, the expansion coefficients are
- $\begin{aligned} Q_{\mathbf{x}} &= \ell_{\mathbf{x}} + \mathbf{f}_{\mathbf{x}}^{\mathsf{T}} V'_{\mathbf{x}} \\ Q_{\mathbf{u}} &= \ell_{\mathbf{u}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{x}} \\ Q_{\mathbf{x}\mathbf{x}} &= \ell_{\mathbf{x}\mathbf{x}} + \mathbf{f}_{\mathbf{x}}^{\mathsf{T}} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{x}\mathbf{x}} \\ Q_{\mathbf{u}\mathbf{u}} &= \ell_{\mathbf{u}\mathbf{u}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_{\mathbf{u}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{u}\mathbf{u}} \\ Q_{\mathbf{u}\mathbf{x}} &= \ell_{\mathbf{u}\mathbf{x}} + \mathbf{f}_{\mathbf{u}}^{\mathsf{T}} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_{\mathbf{x}} + V'_{\mathbf{x}} \cdot \mathbf{f}_{\mathbf{u}\mathbf{x}}. \end{aligned}$
The last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation (3) with respect to $\delta\mathbf{u}$ we have

- $\delta\mathbf{u}^{*} = \operatorname{argmin}_{\delta\mathbf{u}} Q(\delta\mathbf{x}, \delta\mathbf{u}) = -Q_{\mathbf{u}\mathbf{u}}^{-1}(Q_{\mathbf{u}} + Q_{\mathbf{u}\mathbf{x}}\,\delta\mathbf{x}), \quad (4)$

giving an open-loop term $\mathbf{k} = -Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}}$ and a feedback gain term $\mathbf{K} = -Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}\mathbf{x}}$. Plugging the result back into (3), we now have a quadratic model of the value at time $i$:
- $\begin{aligned} \Delta V(i) &= -\tfrac{1}{2} Q_{\mathbf{u}}^{\mathsf{T}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}} \\ V_{\mathbf{x}}(i) &= Q_{\mathbf{x}} - Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}} \\ V_{\mathbf{x}\mathbf{x}}(i) &= Q_{\mathbf{x}\mathbf{x}} - Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}\mathbf{x}}. \end{aligned}$
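The following sketch implements one step of the backward pass, assuming that the derivatives of $\ell$ and $\mathbf{f}$ at the nominal pair have already been computed; the argument names and shapes are illustrative conventions, not a fixed interface. The `np.tensordot` calls realize the vector-tensor contractions; omitting them yields the Gauss–Newton (iLQR) variant discussed below.

```python
import numpy as np

def backward_step(lx, lu, lxx, luu, lux, fx, fu, fxx, fuu, fux, Vx, Vxx):
    """One DDP backward-pass step at time i.
    lx (n,), lu (m,), lxx (n,n), luu (m,m), lux (m,n): cost derivatives;
    fx (n,n), fu (n,m): dynamics Jacobians;
    fxx (n,n,n), fuu (n,m,m), fux (n,m,n): dynamics second derivatives
    (first index is the output component);
    Vx (n,), Vxx (n,n): derivatives of V at time i+1."""
    Qx  = lx  + fx.T @ Vx
    Qu  = lu  + fu.T @ Vx
    Qxx = lxx + fx.T @ Vxx @ fx + np.tensordot(Vx, fxx, axes=1)   # V'_x . f_xx
    Quu = luu + fu.T @ Vxx @ fu + np.tensordot(Vx, fuu, axes=1)   # V'_x . f_uu
    Qux = lux + fu.T @ Vxx @ fx + np.tensordot(Vx, fux, axes=1)   # V'_x . f_ux
    # Open-loop and feedback terms from Eq. 4 (solve rather than invert Q_uu).
    k = -np.linalg.solve(Quu, Qu)
    K = -np.linalg.solve(Quu, Qux)
    # Quadratic model of the value function at time i.
    dV    = -0.5 * Qu @ np.linalg.solve(Quu, Qu)
    Vx_i  = Qx  - Qux.T @ np.linalg.solve(Quu, Qu)     # Q_x  - Q_xu Q_uu^{-1} Q_u
    Vxx_i = Qxx - Qux.T @ np.linalg.solve(Quu, Qux)    # Q_xx - Q_xu Q_uu^{-1} Q_ux
    return k, K, dV, Vx_i, Vxx_i
```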
Recursively computing the local quadratic models of $V(i)$ and the control modifications $\{\mathbf{k}(i), \mathbf{K}(i)\}$, from $i = N-1$ down to $i = 1$, constitutes the backward pass. As above, the value is initialized with $V(\mathbf{x},N) \equiv \ell_f(\mathbf{x}_N)$. Once the backward pass is completed, a forward pass computes a new trajectory:
- $\begin{aligned} \hat{\mathbf{x}}(1) &= \mathbf{x}(1) \\ \hat{\mathbf{u}}(i) &= \mathbf{u}(i) + \mathbf{k}(i) + \mathbf{K}(i)\,(\hat{\mathbf{x}}(i) - \mathbf{x}(i)) \\ \hat{\mathbf{x}}(i+1) &= \mathbf{f}(\hat{\mathbf{x}}(i), \hat{\mathbf{u}}(i)) \end{aligned}$
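A matching forward-pass sketch, using 0-based indexing and the same illustrative conventions (`f` is the assumed dynamics function):

```python
import numpy as np

def forward_pass(X, U, k, K, f):
    """Apply the control modifications to the nominal trajectory.
    X: list of N+1 nominal states, U: list of N nominal controls,
    k: list of N open-loop terms, K: list of N feedback gains."""
    X_new = [np.asarray(X[0], dtype=float)]          # x_hat(0) = x(0)
    U_new = []
    for i in range(len(U)):
        u = U[i] + k[i] + K[i] @ (X_new[i] - X[i])   # u_hat(i)
        U_new.append(u)
        X_new.append(f(X_new[i], u))                 # x_hat(i+1) = f(x_hat(i), u_hat(i))
    return X_new, U_new
```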
The backward passes and forward passes are iterated until convergence. If the Hessians $Q_{\mathbf{x}\mathbf{x}}, Q_{\mathbf{u}\mathbf{u}}, Q_{\mathbf{u}\mathbf{x}}, Q_{\mathbf{x}\mathbf{u}}$ are replaced by their Gauss–Newton approximation, the method reduces to the iterative Linear Quadratic Regulator (iLQR).[6]
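Putting the pieces together, a bare-bones outer loop might look as follows. It reuses the hypothetical `rollout_cost` and `forward_pass` sketches above, assumes a `backward_pass(X, U)` routine that sweeps `backward_step` over the whole trajectory, and omits the regularization and line-search discussed in the next section.

```python
def ddp_solve(x0, U, f, running_cost, final_cost, backward_pass,
              iterations=100, tol=1e-6):
    """Alternate backward and forward passes until the cost stops improving.
    `backward_pass(X, U)` is assumed to apply `backward_step` from the last
    step backwards and return the gain sequences (k, K)."""
    X, J = rollout_cost(x0, U, f, running_cost, final_cost)
    for _ in range(iterations):
        ks, Ks = backward_pass(X, U)
        X_new, U_new = forward_pass(X, U, ks, Ks, f)
        J_new = sum(running_cost(x, u) for x, u in zip(X_new[:-1], U_new)) \
                + final_cost(X_new[-1])
        if J - J_new < tol:          # no meaningful improvement: converged
            break
        X, U, J = X_new, U_new, J_new
    return X, U, J
```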
Regularization and line-search
Differential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.[7][8] Regularization in the DDP context means ensuring that the $Q_{\mathbf{u}\mathbf{u}}$ matrix in Eq. 4 is positive definite. Line-search in DDP amounts to scaling the open-loop control modification $\mathbf{k}$ by some $0 < \alpha < 1$.
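One common way to realize both ideas is sketched below; the specific regularization schedule and step-size selection rule vary between implementations and are illustrative assumptions here.

```python
import numpy as np

def regularized_gains(Quu, Qu, Qux, reg=1e-6, max_reg=1e10):
    """Increase reg until Q_uu + reg*I is positive definite, then compute
    the open-loop term k and the feedback gain K."""
    m = Quu.shape[0]
    while reg < max_reg:
        Quu_reg = Quu + reg * np.eye(m)
        try:
            np.linalg.cholesky(Quu_reg)    # raises LinAlgError if not positive definite
        except np.linalg.LinAlgError:
            reg *= 10.0
            continue
        k = -np.linalg.solve(Quu_reg, Qu)
        K = -np.linalg.solve(Quu_reg, Qux)
        return k, K
    raise RuntimeError("Q_uu could not be made positive definite")

# Line-search: in the forward pass, scale only the open-loop term,
#     u_hat(i) = u(i) + alpha * k(i) + K(i) (x_hat(i) - x(i)),
# and backtrack over alpha (e.g. 1, 1/2, 1/4, ...) until the new total cost decreases.
```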
Monte Carlo version
Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.[9][10][11] It is based on treating the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. This way the quantities of DDP can be matched to the statistics of a multidimensional normal distribution. The statistics can be recomputed from sampled trajectories without differentiation.
Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.[12] This creates a link between differential dynamic programming and path integral control,[13] which is a framework of stochastic optimal control.
Constrained problems
Interior Point Differential Dynamic Programming (IPDDP) is an interior-point method generalization of DDP that can address the optimal control problem with nonlinear state and input constraints.[14]
References
- ^ Mayne, D. Q. (1966). "A second-order gradient method of optimizing non-linear discrete time systems". International Journal of Control. 3: 85–95. doi:10.1080/00207176608921369.
- ^ Jacobson, David H.; Mayne, David Q. (1970). Differential Dynamic Programming. New York: American Elsevier Pub. Co. ISBN 978-0-444-00070-5.
- ^ de O. Pantoja, J. F. A. (1988). "Differential dynamic programming and Newton's method". International Journal of Control. 47 (5): 1539–1553. doi:10.1080/00207178808906114. ISSN 0020-7179.
- ^ Liao, L. Z.; C. A. Shoemaker (1992). "Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems". Cornell University. hdl:1813/5474.
- ^ Morimoto, J.; G. Zeglin; C. G. Atkeson (2003). "Minimax differential dynamic programming: Application to a biped walking robot". Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003). Vol. 2. pp. 1927–1932.
- ^ Baumgärtner, K. (2023). A Unified Local Convergence Analysis of Differential Dynamic Programming, Direct Single Shooting, and Direct Multiple Shooting . 2023 European Control Conference (ECC). pp. 1–7. doi:10.23919/ECC57647.2023.10178367.
- ^ Liao, L. Z.; C. A. Shoemaker (1991). "Convergence in unconstrained discrete-time differential dynamic programming". IEEE Transactions on Automatic Control. 36 (6): 692. doi:10.1109/9.86943.
- ^ Tassa, Y. (2011). Theory and implementation of bio-mimetic motor controllers (PDF) (Thesis). Hebrew University. Archived from the original (PDF) on 2016-03-04. Retrieved 2012-02-27.
- ^ "Sampled differential dynamic programming". 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). doi:10.1109/IROS.2016.7759229. S2CID 1338737.
- ^ Rajamäki, Joose; Hämäläinen, Perttu (June 2018). Regularizing Sampled Differential Dynamic Programming. 2018 Annual American Control Conference (ACC). pp. 2182–2189. doi:10.23919/ACC.2018.8430799. S2CID 243932441. Retrieved 2018-10-19.
- ^ Rajamäki, Joose (2018). Random Search Algorithms for Optimal Control. Aalto University. ISBN 978-952-60-8156-4. ISSN 1799-4942.
- ^ Lefebvre, Tom; Crevecoeur, Guillaume (July 2019). "Path Integral Policy Improvement with Differential Dynamic Programming". 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). pp. 739–745. doi:10.1109/AIM.2019.8868359. hdl:1854/LU-8623968 . ISBN 978-1-7281-2493-3. S2CID 204816072.
- ^ Theodorou, Evangelos; Buchli, Jonas; Schaal, Stefan (May 2010). "Reinforcement learning of motor skills in high dimensions: A path integral approach". 2010 IEEE International Conference on Robotics and Automation. pp. 2397–2403. doi:10.1109/ROBOT.2010.5509336. ISBN 978-1-4244-5038-1. S2CID 15116370.
- ^ Pavlov, Andrei; Shames, Iman; Manzie, Chris (2020). "Interior Point Differential Dynamic Programming". IEEE Transactions on Control Systems Technology. 29 (6): 2720. arXiv:2004.12710 . Bibcode:2021ITCST..29.2720P. doi:10.1109/TCST.2021.3049416.
External links
- The open-source software framework acados provides an efficient and embeddable implementation of DDP.