Milstein method
In mathematics, the Milstein method is a technique for the approximate numerical solution of a stochastic differential equation. It is named after Grigori Milstein, who first published it in 1974.[1][2]
Description
Consider the autonomous Itō stochastic differential equation: {\displaystyle \mathrm {d} X_{t}=a(X_{t})\,\mathrm {d} t+b(X_{t})\,\mathrm {d} W_{t}} with initial condition {\displaystyle X_{0}=x_{0}}, where {\displaystyle W_{t}} denotes the Wiener process, and suppose that we wish to solve this SDE on some interval of time {\displaystyle [0,T]}. Then the Milstein approximation to the true solution {\displaystyle X} is the Markov chain {\displaystyle Y} defined as follows:
- Partition the interval {\displaystyle [0,T]} into {\displaystyle N} equal subintervals of width {\displaystyle \Delta t>0}: {\displaystyle 0=\tau _{0}<\tau _{1}<\dots <\tau _{N}=T{\text{ with }}\tau _{n}:=n\Delta t{\text{ and }}\Delta t={\frac {T}{N}}}
- Set {\displaystyle Y_{0}=x_{0};}
- Recursively define {\displaystyle Y_{n+1}} for {\displaystyle 0\leq n\leq N-1} by: {\displaystyle Y_{n+1}=Y_{n}+a(Y_{n})\Delta t+b(Y_{n})\Delta W_{n}+{\frac {1}{2}}b(Y_{n})b'(Y_{n})\left((\Delta W_{n})^{2}-\Delta t\right)} where {\displaystyle b'} denotes the derivative of {\displaystyle b(x)} with respect to {\displaystyle x} and: {\displaystyle \Delta W_{n}=W_{\tau _{n+1}}-W_{\tau _{n}}} are independent and identically distributed normal random variables with expected value zero and variance {\displaystyle \Delta t}. Then {\displaystyle Y_{n}} will approximate {\displaystyle X_{\tau _{n}}} for {\displaystyle 0\leq n\leq N}, and increasing {\displaystyle N} will yield a better approximation.
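A minimal sketch of this recursion for a general scalar SDE might look as follows in Python; the helper name milstein and the arguments a, b, b_prime (the drift, the diffusion and its derivative) are chosen here for illustration and are not taken from the article:

import numpy as np


def milstein(a, b, b_prime, x0, T, N, rng=None):
    """Approximate a scalar Itō SDE dX = a(X) dt + b(X) dW on [0, T]
    with N Milstein steps; returns the time grid and the sample path."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / N
    ts = np.linspace(0.0, T, N + 1)
    ys = np.empty(N + 1)
    ys[0] = x0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment, N(0, dt)
        y = ys[n]
        ys[n + 1] = (y
                     + a(y) * dt
                     + b(y) * dW
                     + 0.5 * b(y) * b_prime(y) * (dW**2 - dt))
    return ts, ys

For instance, geometric Brownian motion with the parameters used in the implementation further below (μ = 3, σ = 1) would correspond to the call milstein(lambda x: 3 * x, lambda x: x, lambda x: 1.0, x0=1.0, T=1.0, N=1000).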
Note that when {\displaystyle b'(Y_{n})=0} (i.e. the diffusion term does not depend on {\displaystyle X_{t}}) this method is equivalent to the Euler–Maruyama method.
The Milstein scheme has both weak and strong order of convergence {\displaystyle \Delta t}. This is superior to the Euler–Maruyama method, which has the same weak order of convergence {\displaystyle \Delta t} but an inferior strong order of convergence {\displaystyle {\sqrt {\Delta t}}}.[3]
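The difference in strong order can be illustrated numerically. The sketch below (with parameter values μ = 1.5, σ = 0.5 chosen here for illustration, not taken from the article) simulates geometric Brownian motion with the Euler–Maruyama and Milstein schemes on the same Brownian paths and compares both against the exact solution at time T, which is available in closed form for GBM (see the derivation in the next section):

import numpy as np

MU, SIGMA, X0, T = 1.5, 0.5, 1.0, 1.0


def strong_errors(n_steps, n_paths=5000, seed=0):
    """Mean |Y_N - X_T| at time T for Euler-Maruyama and Milstein on GBM."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    em = np.full(n_paths, X0)   # Euler-Maruyama approximations
    mil = np.full(n_paths, X0)  # Milstein approximations
    w = np.zeros(n_paths)       # accumulated Brownian motion W_T
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # shared increments
        w += dW
        em += MU * em * dt + SIGMA * em * dW
        mil += (MU * mil * dt + SIGMA * mil * dW
                + 0.5 * SIGMA**2 * mil * (dW**2 - dt))
    exact = X0 * np.exp((MU - 0.5 * SIGMA**2) * T + SIGMA * w)  # exact GBM value
    return np.mean(np.abs(em - exact)), np.mean(np.abs(mil - exact))


for n in (10, 20, 40, 80, 160):
    e_em, e_mil = strong_errors(n)
    print(f"dt = {T / n:7.4f}   EM error = {e_em:.5f}   Milstein error = {e_mil:.5f}")

Halving Δt should roughly halve the Milstein error, while the Euler–Maruyama error shrinks only by about a factor of √2.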
Intuitive derivation
For this derivation, we will only look at geometric Brownian motion (GBM), the stochastic differential equation of which is given by: {\displaystyle \mathrm {d} X_{t}=\mu X_{t}\,\mathrm {d} t+\sigma X_{t}\,\mathrm {d} W_{t}} with real constants {\displaystyle \mu } and {\displaystyle \sigma }. Using Itō's lemma we get: {\displaystyle \mathrm {d} \ln X_{t}=\left(\mu -{\frac {1}{2}}\sigma ^{2}\right)\mathrm {d} t+\sigma \mathrm {d} W_{t}}
Thus, the solution to the GBM SDE is: {\displaystyle {\begin{aligned}X_{t+\Delta t}&=X_{t}\exp \left\{\int _{t}^{t+\Delta t}\left(\mu -{\frac {1}{2}}\sigma ^{2}\right)\mathrm {d} u+\int _{t}^{t+\Delta t}\sigma \mathrm {d} W_{u}\right\}\\&\approx X_{t}\left(1+\mu \Delta t-{\frac {1}{2}}\sigma ^{2}\Delta t+\sigma \Delta W_{t}+{\frac {1}{2}}\sigma ^{2}(\Delta W_{t})^{2}\right)\\&=X_{t}+a(X_{t})\Delta t+b(X_{t})\Delta W_{t}+{\frac {1}{2}}b(X_{t})b'(X_{t})((\Delta W_{t})^{2}-\Delta t)\end{aligned}}} where {\displaystyle a(x)=\mu x,~b(x)=\sigma x}, and where the exponential has been expanded to second order in {\displaystyle \Delta W_{t}}, keeping all terms of order {\displaystyle \Delta t} since {\displaystyle (\Delta W_{t})^{2}} is of order {\displaystyle \Delta t}.
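The truncation in the second line can be checked symbolically. The following sketch (using SymPy, which the article's own code does not use) writes ΔW_t = z√Δt with z a standard normal variable, expands the exponential up to order Δt, and verifies that the result matches the Milstein right-hand side for GBM:

import sympy as sp

mu, sigma, z = sp.symbols("mu sigma z", real=True)
eps = sp.symbols("epsilon", positive=True)  # epsilon stands for sqrt(dt)

# Exponent of the exact GBM solution over one step, with dW = z*eps and dt = eps**2
exponent = (mu - sp.Rational(1, 2) * sigma**2) * eps**2 + sigma * z * eps

# Expand exp(exponent) up to and including order eps**2, i.e. order dt
expansion = sp.series(sp.exp(exponent), eps, 0, 3).removeO()

# The Milstein right-hand side for GBM, divided by X_t
milstein_rhs = (1 + mu * eps**2 + sigma * z * eps
                + sp.Rational(1, 2) * sigma**2 * (z**2 * eps**2 - eps**2))

print(sp.simplify(sp.expand(expansion) - sp.expand(milstein_rhs)))  # prints 0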
The numerical solution is presented in the graphic for three different trajectories.[4]
Computer implementation
The following Python code implements the Milstein method and uses it to solve the SDE describing geometric Brownian motion defined by {\displaystyle {\begin{cases}\mathrm {d} Y_{t}=\mu Y_{t}\,\mathrm {d} t+\sigma Y_{t}\,\mathrm {d} W_{t}\\Y_{0}=Y_{\text{init}}\end{cases}}}
# -*- coding: utf-8 -*-
# Milstein Method

import numpy as np
import matplotlib.pyplot as plt


class Model:
    """Stochastic model constants."""
    mu = 3
    sigma = 1


def dW(dt):
    """Random sample from a normal distribution with mean 0 and variance dt."""
    return np.random.normal(loc=0.0, scale=np.sqrt(dt))


def run_simulation():
    """Return the result of one full simulation."""
    # One second and a thousand grid points
    T_INIT = 0
    T_END = 1
    N = 1000  # Compute 1000 grid points
    DT = float(T_END - T_INIT) / N
    TS = np.linspace(T_INIT, T_END, N + 1)
    Y_INIT = 1

    # Vectors to fill
    ys = np.zeros(N + 1)
    ys[0] = Y_INIT

    for i in range(1, TS.size):
        y = ys[i - 1]
        dw = dW(DT)
        # Sum up terms as in the Milstein method
        ys[i] = y + \
            Model.mu * y * DT + \
            Model.sigma * y * dw + \
            (Model.sigma**2 / 2) * y * (dw**2 - DT)

    return TS, ys


def plot_simulations(num_sims: int):
    """Plot several simulations in one image."""
    for _ in range(num_sims):
        plt.plot(*run_simulation())

    plt.xlabel("time (s)")
    plt.ylabel("y")
    plt.grid()
    plt.show()


if __name__ == "__main__":
    NUM_SIMS = 2
    plot_simulations(NUM_SIMS)
References
[edit ]- ^ Mil'shtein, G. N. (1974). "Приближенное интегрирование стохастических дифференциальных уравнений" [Approximate integration of stochastic differential equations]. Teoriya Veroyatnostei i ee Primeneniya (in Russian). 19 (3): 583–588.
- ^ Mil'shtein, G. N. (1975). "Approximate Integration of Stochastic Differential Equations". Theory of Probability & Its Applications. 19 (3): 557–562. doi:10.1137/1119062.
- ^ Mackevičius, V. (2011). Introduction to Stochastic Analysis. Wiley. ISBN 978-1-84821-311-1.
- ^ Picchini, Umberto. "SDE Toolbox: simulation and estimation of stochastic differential equations with Matlab".
Further reading
- Kloeden, P. E.; Platen, E. (1999). Numerical Solution of Stochastic Differential Equations. Berlin: Springer. ISBN 3-540-54062-8.