
I would like to compute the residual of the numerical solution to an ordinary differential equation.

Here is the code:

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return 0.5 * x

t_start = 0
t_end = 2
n = 50
t = np.linspace(t_start, t_end, n)
x_init = 1
solution = solve_ivp(f, [t_start, t_end], [x_init], dense_output=True)

Now I would like to numerically differentiate the result at the points t. I tried naively

from scipy.differentiate import derivative
derivative(solution.sol, t)

and got an error saying that the sizes do not match.

I tried to overcome this by defining

def g(t):
    return solution.sol(t)[0]

derivative(g, t)

and this also does not work.

How can I fix these errors?

Added clarification: Please note that I am not asking how to implement numerical differentiation, or how to find these derivatives using other modules. My question is as follows: how can I use the scipy function "derivative" on the output of another scipy function, "solve_ivp"?

asked Jul 3 at 18:29

2 Answers


You need to vectorize the OdeSolution interpolator to make it work, or simply use np.gradient from numpy. In both cases, the derivative is computed numerically.

import numpy as np
import matplotlib.pyplot as plt
from scipy import integrate, differentiate

First we solve the IVP:

def model(t, y):
    return 0.5 * y

t = np.linspace(0, 2, 500)
# dense_output=True is required so that solution.sol is available below:
solution = integrate.solve_ivp(model, [t.min(), t.max()], y0=[1.],
                               t_eval=t, dense_output=True)

We may assess the gradient numerically using the integrated solution with numpy:

gradient = np.gradient(solution.y[0], solution.t)

Or we may vectorize the OdeSolution interpolator:

@np.vectorize
def proxy(t):
    return solution.sol(t)[0]

Then we can use scipy's derivative function, as you wished:

derivative = differentiate.derivative(proxy, t)
# success: [ True True ... True True]
# status: [0 0 ... 0 0]
# df: [ 5.000e-01 5.010e-01 ... 1.356e+00 1.359e+00]
# error: [ 1.112e-09 4.324e-09 ... 3.553e-15 2.665e-15]
# nit: [3 3 ... 2 2]
# nfev: [13 13 ... 11 11]
# x: [ 0.000e+00 4.008e-03 ... 1.996e+00 2.000e+00]

In this second version we receive a _RichResult object that carries additional information about the numerical estimate.

For your model, we simply have exponentials:

fig, axe = plt.subplots()
axe.plot(solution.t, solution.y[0])
axe.plot(solution.t, gradient)
axe.plot(solution.t, derivative.df, "--")
axe.grid()

As we can see, both methods agree and both are indeed numerical.

[Plot of the solution (an exponential) together with both derivative estimates, which overlap.]

answered Jul 4 at 17:26

1 Comment

@Artem That solved your problem: you must vectorize your OdeSolution interpolator in order to make it work. Have a good day.

You can't use scipy.differentiate.derivative here because that procedure expects a function of a continuous variable as an argument, whereas what solve_ivp produces is a discrete set of values.

The most accurate method is simply to evaluate your given gradient function f(t, x) at the values of t and the numerically computed values of x.
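A minimal sketch of this first method, reusing the question's setup (for this model the exact derivative is known, so the residual can be checked directly):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return 0.5 * x

t = np.linspace(0, 2, 50)
solution = solve_ivp(f, [t[0], t[-1]], [1.0], t_eval=t)

# dx/dt at the nodes, from the ODE itself and the numerical x:
deriv = f(solution.t, solution.y[0])

# For this model the exact derivative is 0.5*exp(t/2):
exact = 0.5 * np.exp(0.5 * t)
print(np.max(np.abs(deriv - exact)))
```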

A second method is to use finite-difference approximations from the t, x pairs.

x can be found in solution.y[0]. The t_eval= option can be specified in solve_ivp if you want solutions at the particular nodes in t.

The code below compares both methods with the solution and its derivative for your equation:

x=exp(t/2), dx/dt=(1/2)exp(t/2)

import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    return 0.5 * x

t_start = 0
t_end = 2
n = 50
t = np.linspace(t_start, t_end, n)
x_init = 1
solution = solve_ivp(f, [t_start, t_end], [x_init], t_eval=t)
x = solution.y[0]

dt = (t_end - t_start) / (n - 1)
deriv = np.zeros(n)
deriv[1:n-1] = (x[2:n] - x[0:n-2]) / (2 * dt)                # 2nd-order central differences internally
deriv[0]     = (-x[2] + 4 * x[1] - 3 * x[0]) / (2 * dt)      # 2nd-order one-sided formulas at the ends
deriv[n-1]   = (x[n-3] - 4 * x[n-2] + 3 * x[n-1]) / (2 * dt)

exact_deriv = 0.5 * np.exp(0.5 * t)
print(f"{'time':>7s} {'finite-diff':>11s} {'f(t,x)':>11s} {'exact':>11s}")
for i in range(n):
    print(f"{t[i]:7.4f} {deriv[i]:11.4e} {f(t[i], x[i]):11.4e} {exact_deriv[i]:11.4e}")

Output:

 time finite-diff f(t,x) exact
 0.0000 4.9993e-01 5.0000e-01 5.0000e-01
 0.0408 5.1034e-01 5.1031e-01 5.1031e-01
 0.0816 5.2087e-01 5.2083e-01 5.2083e-01
 0.1224 5.3161e-01 5.3157e-01 5.3157e-01
 0.1633 5.4259e-01 5.4253e-01 5.4253e-01
 0.2041 5.5379e-01 5.5372e-01 5.5371e-01
 0.2449 5.6521e-01 5.6513e-01 5.6513e-01
 0.2857 5.7686e-01 5.7678e-01 5.7678e-01
 0.3265 5.8875e-01 5.8868e-01 5.8867e-01
 0.3673 6.0088e-01 6.0082e-01 6.0081e-01
 0.4082 6.1326e-01 6.1320e-01 6.1320e-01
 0.4490 6.2589e-01 6.2585e-01 6.2584e-01
 0.4898 6.3879e-01 6.3875e-01 6.3875e-01
 0.5306 6.5195e-01 6.5192e-01 6.5192e-01
 0.5714 6.6538e-01 6.6536e-01 6.6536e-01
 0.6122 6.7909e-01 6.7908e-01 6.7907e-01
 0.6531 6.9309e-01 6.9308e-01 6.9308e-01
 0.6939 7.0738e-01 7.0737e-01 7.0737e-01
 0.7347 7.2197e-01 7.2195e-01 7.2195e-01
 0.7755 7.3686e-01 7.3684e-01 7.3683e-01
 0.8163 7.5205e-01 7.5203e-01 7.5203e-01
 0.8571 7.6757e-01 7.6753e-01 7.6753e-01
 0.8980 7.8340e-01 7.8336e-01 7.8336e-01
 0.9388 7.9956e-01 7.9951e-01 7.9951e-01
 0.9796 8.1606e-01 8.1599e-01 8.1599e-01
 1.0204 8.3289e-01 8.3282e-01 8.3282e-01
 1.0612 8.5007e-01 8.4999e-01 8.4999e-01
 1.1020 8.6760e-01 8.6751e-01 8.6751e-01
 1.1429 8.8549e-01 8.8540e-01 8.8540e-01
 1.1837 9.0374e-01 9.0365e-01 9.0365e-01
 1.2245 9.2237e-01 9.2229e-01 9.2228e-01
 1.2653 9.4138e-01 9.4130e-01 9.4130e-01
 1.3061 9.6079e-01 9.6071e-01 9.6071e-01
 1.3469 9.8060e-01 9.8052e-01 9.8051e-01
 1.3878 1.0008e+00 1.0007e+00 1.0007e+00
 1.4286 1.0214e+00 1.0214e+00 1.0214e+00
 1.4694 1.0425e+00 1.0424e+00 1.0424e+00
 1.5102 1.0640e+00 1.0639e+00 1.0639e+00
 1.5510 1.0859e+00 1.0859e+00 1.0858e+00
 1.5918 1.1083e+00 1.1082e+00 1.1082e+00
 1.6327 1.1312e+00 1.1311e+00 1.1311e+00
 1.6735 1.1545e+00 1.1544e+00 1.1544e+00
 1.7143 1.1783e+00 1.1782e+00 1.1782e+00
 1.7551 1.2026e+00 1.2025e+00 1.2025e+00
 1.7959 1.2274e+00 1.2273e+00 1.2273e+00
 1.8367 1.2527e+00 1.2526e+00 1.2526e+00
 1.8776 1.2785e+00 1.2784e+00 1.2784e+00
 1.9184 1.3049e+00 1.3048e+00 1.3048e+00
 1.9592 1.3318e+00 1.3317e+00 1.3317e+00
 2.0000 1.3590e+00 1.3591e+00 1.3591e+00
answered Jul 4 at 8:45

6 Comments

Thanks. However, your first sentence makes no sense from a programming point of view. What do you mean that a function "expects a continuous function"? According to the documentation it expects a callable object, which (correct me if I am wrong) OdeSolution is. For instance, I can easily use the old "derivative" function from scipy.misc on it. The goal of computing this derivative is to compare it with f(t,x), to see what kind of error the numerical method produces. So your first method is not really a help. The second method is indeed useful (but can be obtained more easily with numpy.gradient).
So the question remains: how can one use the flexible, already-implemented function "derivative" on the output produced by solve_ivp?
In this context a continuous function would be one that can return a value for any value of t, not just the nodal values at which you happen to have the solution. The definition of a derivative in mathematics is a limiting one: you have to be able to move smoothly through all values of t, not just discrete ones.
This is exactly what the option dense_output=True does: it returns an object that interpolates the solution at any given point. I also highly doubt that the derivative function uses a "limiting procedure" to compute the derivative. It is just a numerical approximation (you used second order; by default derivative uses eighth order). So there should be a way to apply derivative to the output of solve_ivp.
Sure, you can simply apply f(t,x) to any output you get from solve_ivp. It will be discrete.
