Mathematics

Questions tagged [hessian-matrix]

The Hessian matrix of a function $f$ is used in the second-derivative test when $f$ has a critical point $x$. If the Hessian is positive definite at $x,ドル then $f$ attains a local minimum at $x$. If the Hessian is negative definite at $x,ドル then $f$ attains a local maximum at $x$. If the Hessian has both positive and negative eigenvalues, then $x$ is a saddle point for $f$.
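As a concrete illustration of the test, here is a minimal sketch in Python/NumPy; the function $f(x,y)=x^2-y^2$ and its hand-computed Hessian are just an example chosen for this demo:

```python
import numpy as np

# Second-derivative test for f(x, y) = x**2 - y**2 at its critical
# point (0, 0); the Hessian is constant here and entered by hand.
H = np.array([[2.0, 0.0],
 [0.0, -2.0]])

eig = np.linalg.eigvalsh(H) # eigenvalues of the symmetric Hessian
if np.all(eig > 0):
 print("positive definite -> local minimum")
elif np.all(eig < 0):
 print("negative definite -> local maximum")
elif eig.min() < 0 < eig.max():
 print("mixed signs -> saddle point") # this branch fires for f
else:
 print("inconclusive (some eigenvalue is zero)")
```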

0 votes
1 answer
55 views

Let $f$ and $g$ be $C^2$ functions $\mathbb{R}^2\to\mathbb{R}$ having the same zero set $\cal C$. I want to ask whether $$\kappa_f:= \frac{\text{Hess}_f(t,t)}{\|\nabla f\|}$$ is equal to (up to a sign)...
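As a sanity check of this curvature formula (my own example, not taken from the question): for $f(x,y)=x^2+y^2-1$ the zero set is the unit circle, whose curvature is 1 everywhere.

```python
import numpy as np

# Check kappa_f = Hess_f(t, t) / ||grad f|| on f(x, y) = x^2 + y^2 - 1,
# whose zero set is the unit circle (curvature 1 at every point).
p = np.array([1.0, 0.0]) # a point on the circle
grad = 2 * p # gradient of f at p
H = 2 * np.eye(2) # Hessian of f (constant)
t = np.array([0.0, 1.0]) # unit tangent to the level set at p
kappa = t @ H @ t / np.linalg.norm(grad)
print(kappa) # 1.0, the circle's curvature
```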
1 vote
1 answer
71 views

Let $\Lambda^{-1}=\operatorname{diag}\left(\frac{1}{\lambda_i}\right),ドル with $\lambda_i \geq 0,ドル $w_i \in \mathbb{R},ドル and $C \in \mathbb{R}_{++}$ (so $C>0$); $\mathbf w$ has length $d,ドル and so does $\lambda$....
1 vote
1 answer
50 views

Page 147, equation (3.9.9) of "The Finite Element Method: Linear Static and Dynamic Finite Element Analysis" by Thomas J. R. Hughes contains the following formula $$ \begin{bmatrix} \dfrac{\...
1 vote
0 answers
47 views

For which triples $(l,s,h) \in \mathbf{Z}^3_{\geq 0}$ does there exist an example of a smooth function $f\colon \mathbf{R}^2 \to \mathbf{R}$ that has $l$ local minima (low points), $s$ saddle points,...
3 votes
1 answer
239 views

For a smooth multivariable function $f \colon \mathbf{R}^2 \to \mathbf{R},ドル a critical point of $f$ is any $(a,b)$ at which $\nabla f(a,b) = \mathbf{0}$. Whether a critical point corresponds to a ...
1 vote
1 answer
116 views

I am studying the convexity properties of the negative log-likelihood in multinomial logistic regression. Let me briefly set up the notation: We have a dataset $$ D = \{(x_n, y_n)\}_{n=1}^N, \quad ...
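The standard route for this kind of question, sketched under the usual softmax parametrization (shapes and data below are made up for the demo): the Hessian of the NLL with respect to the stacked weights is $\sum_n \left(\operatorname{diag}(p_n) - p_n p_n^\top\right) \otimes x_n x_n^\top,ドル which is positive semidefinite, hence the NLL is convex.

```python
import numpy as np

# Numeric PSD check of the multinomial-NLL Hessian at a random point;
# N, d, K and the data are invented purely for the demonstration.
rng = np.random.default_rng(0)
N, d, K = 20, 3, 4
X = rng.normal(size=(N, d))
W = rng.normal(size=(K, d))

logits = X @ W.T
P = np.exp(logits - logits.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True) # softmax probabilities, shape (N, K)

# Hessian w.r.t. vec(W): sum_n (diag(p_n) - p_n p_n^T) kron (x_n x_n^T)
Hess = sum(np.kron(np.diag(p) - np.outer(p, p), np.outer(x, x))
 for p, x in zip(P, X))
print(np.linalg.eigvalsh(Hess).min() >= -1e-9) # True: PSD, so convex
```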
0 votes
0 answers
45 views

In Leonard Dickson's Modern Algebraic Theories (p. 3), the Hessian $h$ of a function $f$ is introduced, where the elements of the $i$th row are $\frac{\partial^2}{\partial x_i \partial x_1}, \frac{\...
2 votes
0 answers
104 views

The mixed partial derivative $\frac{\partial^{2} f}{\partial x ,円 \partial y} := \frac{\partial}{\partial x}\!\left( \frac{\partial f}{\partial y} \right)$ is obtained by first differentiating with ...
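For smooth functions the order of differentiation does not matter (Clairaut/Schwarz); a quick symbolic check with SymPy, on a test function of my own choosing:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x*y) * sp.sin(x + y) # an arbitrary smooth test function

fxy = sp.diff(f, y, x) # first d/dy, then d/dx
fyx = sp.diff(f, x, y) # first d/dx, then d/dy
print(sp.simplify(fxy - fyx)) # 0: the mixed partials agree
```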
1 vote
1 answer
68 views

I have a planar complex projective cubic; call it $F$. I've proven that it's nonsingular and I'm now asked to prove that $D=\det(H(F))$ is again a smooth cubic. ($H$ is the Hessian matrix of $F$....
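One can at least verify the degree count concretely: each second partial of a cubic is a linear form, so the 3×3 Hessian determinant is homogeneous of degree 3. A SymPy sketch with the Fermat cubic as a stand-in for the asker's $F$ (smoothness of $D$ still needs a separate argument):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
F = x**3 + y**3 + z**3 # Fermat cubic, a nonsingular example
H = sp.hessian(F, (x, y, z)) # 3x3 matrix of second partials
D = H.det()
print(sp.expand(D)) # 216*x*y*z
print(sp.Poly(D, x, y, z).total_degree()) # 3: again a cubic
```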
0 votes
1 answer
45 views

Let $f\colon \mathbb{R}^d \to \mathbb{R}$ be twice continuously differentiable. In the theory of Hamilton–Jacobi–Bellman equations, or in convex analysis more generally, one can encounter conditions on the data like $$(\...
2 votes
2 answers
115 views

Setup to the problem: We are going to determine the stationary points of the function 5ドルx^3 - 3yx - 6y^3 - 2$ in the region $-1 \leq x \leq 1, \ -1 \leq y \leq 1$. Calculate the gradient $\nabla f(\...
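A sketch of the interior computation in SymPy (the boundary of the square region would still need to be checked separately):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 5*x**3 - 3*y*x - 6*y**3 - 2

grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, (x, y), dict=True) # interior stationary points

H = sp.hessian(f, (x, y))
for p in crit:
 print(p, list(H.subs(p).eigenvals())) # classify via eigenvalue signs
```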
0 votes
1 answer
75 views

This proof was taken from the book Multivariable Mathematics by Theodore Shifrin. How does the definition of continuity in the second part of the proof connect to the claim that $$f(\mathbf a+\mathbf ...
0 votes
1 answer
58 views

Problem formulation: I would like to generalize the following well-known and super-nice formulas for the gradient $\nabla J(x)$ and Hessian $\nabla^2 J(x)$ of a quadratic cost $J(x)\triangleq x'Qx$ \...
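For reference, for $J(x)=x'Qx$ the standard formulas are $\nabla J(x) = (Q+Q')x$ and $\nabla^2 J(x) = Q+Q'$; a quick finite-difference check on random data (my own sketch, not the asker's generalization):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
Q = rng.normal(size=(n, n)) # deliberately non-symmetric
x = rng.normal(size=n)
J = lambda v: v @ Q @ v # quadratic cost J(x) = x'Qx

grad = (Q + Q.T) @ x # closed-form gradient
eps = 1e-6
fd = np.array([(J(x + eps*e) - J(x - eps*e)) / (2*eps) for e in np.eye(n)])
print(np.allclose(grad, fd, atol=1e-4)) # True
```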
0 votes
1 answer
135 views

Let $\ell$ be the following quadratic loss function: $\frac{1}{2} \theta^t H \theta,ドル where $H$ is the Hessian matrix and $\theta \in \mathbb{R}^d$ is the parameter vector. If we define the regularized version ...
0 votes
0 answers
68 views

Assume $\ell$ is a standard loss function for training a neural network, with differentiability, convexity, and Lipschitz-continuity assumptions. Let's say the largest eigenvalue of the Hessian ...
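The largest Hessian eigenvalue in this setting is usually estimated with Hessian-vector products and power iteration; a toy sketch where an explicit symmetric matrix stands in for an autodiff HVP:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
H = A + A.T # symmetric stand-in for a loss Hessian
hvp = lambda v: H @ v # in practice: an autodiff Hessian-vector product

v = rng.normal(size=5)
for _ in range(200): # power iteration
 v = hvp(v)
 v /= np.linalg.norm(v)
print(v @ hvp(v)) # Rayleigh quotient: eigenvalue of largest magnitude

eigs = np.linalg.eigvalsh(H)
print(eigs[np.abs(eigs).argmax()]) # reference value
```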
