First-order second-moment method

From Wikipedia, the free encyclopedia
"FOSM" redirects here. For open geographic database project, see Free Open Street Map.

In probability theory, the first-order second-moment (FOSM) method, also referred to as the mean value first-order second-moment (MVFOSM) method, is a probabilistic method to determine the stochastic moments of a function with random input variables. The name is based on the derivation, which uses a first-order Taylor series and the first and second moments of the input variables.[1]

Approximation

Consider the objective function $g(x)$, where the input vector $x$ is a realization of the random vector $X$ with probability density function $f_X(x)$. Because $X$ is randomly distributed, $g$ is also randomly distributed. Following the FOSM method, the mean value of $g$ is approximated by

$$\mu_g \approx g(\mu)$$

The variance of $g$ is approximated by

$$\sigma_g^2 \approx \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j} \operatorname{cov}\left(X_i, X_j\right)$$

where $n$ is the dimension of $x$ and $\frac{\partial g(\mu)}{\partial x_i}$ is the partial derivative of $g$ with respect to the $i$-th entry of $x$, evaluated at the mean vector $\mu$. More accurate, second-order second-moment approximations are also available.[2]
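
These two formulas are easy to evaluate once the gradient of $g$ at the mean vector is available. The following Python sketch is purely illustrative (the objective function, mean vector and covariance matrix are hypothetical choices, not taken from the references); it exploits that the double sum is the quadratic form $\nabla g(\mu)^T \operatorname{cov}(X)\, \nabla g(\mu)$:

```python
import numpy as np

def fosm(g, grad_g, mu, cov):
    """First-order second-moment approximation of the mean and variance
    of g(X), given the mean vector and covariance matrix of X."""
    mean_g = g(mu)              # mu_g ~ g(mu)
    grad = grad_g(mu)           # gradient of g at the mean vector
    var_g = grad @ cov @ grad   # double sum over cov(X_i, X_j)
    return mean_g, var_g

# Hypothetical example: g(x) = x1*x2 + x1^2
g = lambda x: x[0] * x[1] + x[0] ** 2
grad_g = lambda x: np.array([x[1] + 2.0 * x[0], x[0]])

mu = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

print(fosm(g, grad_g, mu, cov))  # -> (3.0, 0.81)
```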

Derivation

The objective function is approximated by a Taylor series at the mean vector $\mu$:

$$g(x) = g(\mu) + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial^2 g(\mu)}{\partial x_i\,\partial x_j}(x_i - \mu_i)(x_j - \mu_j) + \cdots$$

The mean value of $g$ is given by the integral

$$\mu_g = E[g(x)] = \int_{-\infty}^{\infty} g(x) f_X(x)\,dx$$

Inserting the first-order Taylor series yields

$$\begin{aligned}
\mu_g &\approx \int_{-\infty}^{\infty} \left[ g(\mu) + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) \right] f_X(x)\,dx \\
&= \int_{-\infty}^{\infty} g(\mu) f_X(x)\,dx + \int_{-\infty}^{\infty} \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) f_X(x)\,dx \\
&= g(\mu) \underbrace{\int_{-\infty}^{\infty} f_X(x)\,dx}_{1} + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i} \underbrace{\int_{-\infty}^{\infty} (x_i - \mu_i) f_X(x)\,dx}_{0} \\
&= g(\mu).
\end{aligned}$$

The variance of $g$ is given by the integral

$$\sigma_g^2 = E\left([g(x) - \mu_g]^2\right) = \int_{-\infty}^{\infty} [g(x) - \mu_g]^2 f_X(x)\,dx.$$

According to the computational formula for the variance, this can be written as

$$\sigma_g^2 = E\left([g(x) - \mu_g]^2\right) = E\left(g(x)^2\right) - \mu_g^2 = \int_{-\infty}^{\infty} g(x)^2 f_X(x)\,dx - \mu_g^2$$

Inserting the Taylor series yields

$$\begin{aligned}
\sigma_g^2 &\approx \int_{-\infty}^{\infty} \left[ g(\mu) + \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) \right]^2 f_X(x)\,dx - \mu_g^2 \\
&= \int_{-\infty}^{\infty} \left\{ g(\mu)^2 + 2 g(\mu) \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) + \left[ \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) \right]^2 \right\} f_X(x)\,dx - \mu_g^2 \\
&= \int_{-\infty}^{\infty} g(\mu)^2 f_X(x)\,dx + \int_{-\infty}^{\infty} 2\, g(\mu) \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) f_X(x)\,dx \\
&\quad + \int_{-\infty}^{\infty} \left[ \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i}(x_i - \mu_i) \right]^2 f_X(x)\,dx - \mu_g^2 \\
&= g(\mu)^2 \underbrace{\int_{-\infty}^{\infty} f_X(x)\,dx}_{1} + 2 g(\mu) \sum_{i=1}^{n} \frac{\partial g(\mu)}{\partial x_i} \underbrace{\int_{-\infty}^{\infty} (x_i - \mu_i) f_X(x)\,dx}_{0} \\
&\quad + \int_{-\infty}^{\infty} \left[ \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j}(x_i - \mu_i)(x_j - \mu_j) \right] f_X(x)\,dx - \mu_g^2 \\
&= \underbrace{g(\mu)^2}_{\mu_g^2} + \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j} \underbrace{\int_{-\infty}^{\infty} (x_i - \mu_i)(x_j - \mu_j) f_X(x)\,dx}_{\operatorname{cov}\left(X_i, X_j\right)} - \mu_g^2 \\
&= \sum_{i=1}^{n} \sum_{j=1}^{n} \frac{\partial g(\mu)}{\partial x_i} \frac{\partial g(\mu)}{\partial x_j} \operatorname{cov}\left(X_i, X_j\right).
\end{aligned}$$
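
Since only the first-order Taylor terms are kept, the approximation error grows with the curvature of $g$ and the spread of $X$. The quality of the approximation can be checked numerically, e.g. by comparison with a Monte Carlo simulation; the following sketch does so for the same hypothetical objective function as above, additionally assuming jointly normal inputs (FOSM itself only uses the first two moments of $X$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical objective: g(x) = x1*x2 + x1^2 (vectorized over samples)
g = lambda x: x[..., 0] * x[..., 1] + x[..., 0] ** 2

mu = np.array([1.0, 2.0])
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])

# FOSM: mu_g ~ g(mu), sigma_g^2 ~ grad^T cov grad
grad = np.array([mu[1] + 2.0 * mu[0], mu[0]])
fosm_mean, fosm_var = g(mu), grad @ cov @ grad

# Monte Carlo reference, assuming jointly normal inputs
samples = rng.multivariate_normal(mu, cov, size=1_000_000)
vals = g(samples)
print(fosm_mean, vals.mean())  # 3.0 vs. ~3.05 (neglected curvature term)
print(fosm_var, vals.var())    # 0.81 vs. ~0.82
```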

Higher-order approaches

The following abbreviations are introduced.

$$g_\mu = g(\mu), \qquad g_{,i} = \frac{\partial g(\mu)}{\partial x_i}, \qquad g_{,ij} = \frac{\partial^2 g(\mu)}{\partial x_i\,\partial x_j}, \qquad \mu_{i,j} = E\left[(x_i - \mu_i)^j\right]$$

In the following, the entries of the random vector $X$ are assumed to be independent. Considering also the second-order terms of the Taylor expansion, the approximation of the mean value is given by

$$\mu_g \approx g_\mu + \frac{1}{2} \sum_{i=1}^{n} g_{,ii}\,\mu_{i,2}$$

The incomplete second-order approximation (ISOA)[3] of the variance is given by

$$\begin{aligned}
\sigma_g^2 \approx g_\mu^2 &+ \sum_{i=1}^{n} g_{,i}^2\,\mu_{i,2} + \frac{1}{4} \sum_{i=1}^{n} g_{,ii}^2\,\mu_{i,4} + g_\mu \sum_{i=1}^{n} g_{,ii}\,\mu_{i,2} + \sum_{i=1}^{n} g_{,i}\,g_{,ii}\,\mu_{i,3} \\
&+ \frac{1}{2} \sum_{i=1}^{n} \sum_{j=i+1}^{n} g_{,ii}\,g_{,jj}\,\mu_{i,2}\,\mu_{j,2} + \sum_{i=1}^{n} \sum_{j=i+1}^{n} g_{,ij}^2\,\mu_{i,2}\,\mu_{j,2} - \mu_g^2
\end{aligned}$$

The skewness of $g$ can be determined from the third central moment $\mu_{g,3}$. When considering only linear terms of the Taylor series, but higher-order moments, the third central moment is approximated by

$$\mu_{g,3} \approx \sum_{i=1}^{n} g_{,i}^3\,\mu_{i,3}$$

For the second-order approximations of the third central moment, as well as for the derivation of all higher-order approximations, see Appendix D of Ref.[3] Taking into account the quadratic terms of the Taylor series and the third moments of the input variables is referred to as the second-order third-moment method.[4] However, the full second-order approach to the variance (given above) also includes fourth-order moments of the input parameters,[5] the full second-order approach to the skewness includes moments up to sixth order,[3][6] and the full second-order approach to the kurtosis includes moments up to eighth order.[6]
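
A minimal sketch of these higher-order formulas for independent inputs is given below; the objective function and the normal input moments are illustrative assumptions, and the derivatives and central moments are supplied directly rather than computed:

```python
import numpy as np

def second_order_moments(g_mu, g_i, g_ii, g_ij, mu2, mu3, mu4):
    """Second-order mean, ISOA variance and first-order third central
    moment of g(X), for independent input variables."""
    n = len(g_i)
    # Second-order approximation of the mean value
    mean_g = g_mu + 0.5 * np.sum(g_ii * mu2)
    # Incomplete second-order approximation (ISOA) of the variance
    var_g = (g_mu ** 2 + np.sum(g_i ** 2 * mu2)
             + 0.25 * np.sum(g_ii ** 2 * mu4)
             + g_mu * np.sum(g_ii * mu2) + np.sum(g_i * g_ii * mu3))
    for i in range(n):
        for j in range(i + 1, n):
            var_g += (0.5 * g_ii[i] * g_ii[j]
                      + g_ij[i, j] ** 2) * mu2[i] * mu2[j]
    var_g -= mean_g ** 2
    # First-order approximation of the third central moment
    m3_g = np.sum(g_i ** 3 * mu3)
    return mean_g, var_g, m3_g

# Hypothetical example: g(x) = x1*x2 + x1^2, independent normal inputs
mu, sig = np.array([1.0, 2.0]), np.array([0.2, 0.3])
g_mu = mu[0] * mu[1] + mu[0] ** 2
g_i = np.array([mu[1] + 2.0 * mu[0], mu[0]])  # first derivatives at mu
g_ii = np.array([2.0, 0.0])                   # pure second derivatives
g_ij = np.array([[2.0, 1.0],
                 [1.0, 0.0]])                 # mixed second derivatives
mu2, mu3, mu4 = sig ** 2, 0.0 * sig, 3.0 * sig ** 4  # normal moments

print(second_order_moments(g_mu, g_i, g_ii, g_ij, mu2, mu3, mu4))
# -> (3.04, 0.7368, 0.0); mu_{i,3} = 0 for normal inputs, so the
#    first-order skewness term vanishes in this example
```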

Practical application

There are several examples in the literature where the FOSM method is employed to estimate the stochastic distribution of the buckling load of axially compressed structures (see e.g. Refs.[7][8][9][10]). For structures that are very sensitive to deviations from the ideal structure (such as cylindrical shells), it has been proposed to use the FOSM method as a design approach. Often the applicability is checked by comparison with a Monte Carlo simulation. Two comprehensive application examples of the full second-order method, specifically oriented towards the fatigue crack growth in a metal railway axle, are discussed and checked by comparison with a Monte Carlo simulation in Refs.[5][6]

In engineering practice, the objective function is often not given as an analytic expression, but for instance as the result of a finite-element simulation. Then the derivatives of the objective function need to be estimated, e.g. by the central differences method, so that the number of evaluations of the objective function equals $2n+1$. Depending on the number of random variables, this can still mean a significantly smaller number of evaluations than a Monte Carlo simulation. However, when using the FOSM method as a design procedure, a lower bound has to be estimated, which the FOSM approach does not actually provide. Therefore, a type of distribution needs to be assumed for the objective function, taking into account the approximated mean value and standard deviation.
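
A minimal sketch of this procedure, with a hypothetical closed-form objective standing in for the result of a finite-element simulation:

```python
import numpy as np

def fosm_central_diff(g, mu, cov, h=1e-6):
    """FOSM with the gradient estimated by central differences,
    using 2n + 1 evaluations of g: one at the mean vector plus
    two per random variable."""
    n = len(mu)
    mean_g = g(mu)                  # 1 evaluation at the mean
    grad = np.empty(n)
    for i in range(n):              # 2 evaluations per variable
        e = np.zeros(n)
        e[i] = h
        grad[i] = (g(mu + e) - g(mu - e)) / (2.0 * h)
    return mean_g, grad @ cov @ grad

# Hypothetical black-box objective standing in for an FE result
g = lambda x: x[0] * x[1] + x[0] ** 2

mu = np.array([1.0, 2.0])
cov = np.diag([0.04, 0.09])

mean_g, var_g = fosm_central_diff(g, mu, cov)
std_g = np.sqrt(var_g)
# Assuming, e.g., a normal distribution for g, a lower design bound
# could then be taken as mean_g - k * std_g for a chosen factor k.
print(mean_g, std_g)
```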

References

  1. ^ A. Haldar and S. Mahadevan, Probability, Reliability, and Statistical Methods in Engineering Design. John Wiley & Sons, New York/Chichester, UK, 2000.
  2. ^ Crespo, L. G.; Kenny, S. P. (2005). "A first and second order moment approach to probabilistic control synthesis". AIAA Guidance, Navigation and Control Conference. hdl:2060/20050232742.
  3. ^ a b c B. Kriegesmann, "Probabilistic Design of Thin-Walled Fiber Composite Structures", Mitteilungen des Instituts für Statik und Dynamik der Leibniz Universität Hannover 15/2012, ISSN 1862-4650, Gottfried Wilhelm Leibniz Universität Hannover, Hannover, Germany, 2012, PDF; 10.2 MB.
  4. ^ Y. J. Hong, J. Xing, and J. B. Wang, "A Second-Order Third-Moment Method for Calculating the Reliability of Fatigue", Int. J. Press. Vessels Pip., 76 (8), pp 567–570, 1999.
  5. ^ a b Mallor C, Calvo S, Núñez JL, Rodríguez-Barrachina R, Landaberea A. "Full second-order approach for expected value and variance prediction of probabilistic fatigue crack growth life." International Journal of Fatigue 2020;133:105454. https://doi.org/10.1016/j.ijfatigue.2019.105454.
  6. ^ a b c Mallor C, Calvo S, Núñez JL, Rodríguez-Barrachina R, Landaberea A. "Uncertainty propagation using the full second-order approach for probabilistic fatigue crack growth life." International Journal of Numerical Methods for Calculation and Design in Engineering (RIMNI) 2020:11. https://doi.org/10.23967/j.rimni.202007004.
  7. ^ I. Elishakoff, S. van Manen, P. G. Vermeulen, and J. Arbocz, "First-Order Second-Moment Analysis of the Buckling of Shells with Random Imperfections", AIAA J., 25 (8), pp 1113–1117, 1987.
  8. ^ I. Elishakoff, "Uncertain Buckling: Its Past, Present and Future", Int. J. Solids Struct., 37 (46–47), pp 6869–6889, Nov. 2000.
  9. ^ J. Arbocz and M. W. Hilburger, "Toward a Probabilistic Preliminary Design Criterion for Buckling Critical Composite Shells", AIAA J., 43 (8), pp 1823–1827, 2005.
  10. ^ B. Kriegesmann, R. Rolfes, C. Hühne, and A. Kling, "Fast Probabilistic Design Procedure for Axially Compressed Composite Cylinders", Compos. Struct., 93, pp 3140–3149, 2011.