Since I could not get numpy.gradient() to compute a derivative successfully, I wrote a script to compute it manually. Running the script below outputs a plot of the two functions f(x) = sin(x) and f'(x) = cos(x) over the interval 0 ≤ x ≤ 2π.
```python
import numpy as np
import matplotlib.pyplot as plt

def compute_numerical_derivative(func, x, method='custom'):
    y = func(x)

    if method == 'custom':
        size = len(y0)
        res = np.zeros(size, 'd')  # 'd' for double

        # centered differences
        for idx in range(1, size-1):
            res[idx] = (y[idx+1] - y[idx-1]) / (x[idx+1] - x[idx-1])

        # one-sided differences
        res[0] = (y[1] - y[0]) / (x[1] - x[0])
        res[-1] = (y[size-1] - y[size-2]) / (x[size-1] - x[size-2])

    # elif method == 'numpy':
    #     res = np.gradient(y)

    return res

x = np.linspace(0, 2*np.pi, 100)
y0 = np.sin(x)
y1_true = np.cos(x)  # exactly d/dx (y0)
y1_cust = compute_numerical_derivative(np.sin, x, method='custom')
# y1_nump = compute_numerical_derivative(np.sin, x, method='numpy')

plt.plot(x, y1_true, 'r', marker='o', linestyle=None, alpha=0.5)
plt.plot(x, y1_cust, 'b', marker='^', linestyle=None, alpha=0.5)
# plt.plot(x, y1_nump, 'k', marker='*', linestyle='-', alpha=0.5)

plt.show()
```
Can the speed and/or accuracy of this algorithm be improved? Is it proper to use centered differences at interior points and one-sided differences at the boundaries?
2 Answers
You can use `np.roll` to compute the centered differences as a vectorised operation rather than in a for loop:

```python
res = (np.roll(y, -1) - np.roll(y, 1)) / (np.roll(x, -1) - np.roll(x, 1))
```
You then still need to account for the incorrect values at the boundaries: `np.roll` wraps around, so the first and last entries are computed from the wrong neighbours.
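A minimal sketch of the full roll-based approach, with the wrapped boundary entries overwritten by one-sided differences afterwards (the function name here is illustrative):

```python
import numpy as np

def derivative_roll(y, x):
    # Centered differences via np.roll; the first and last entries
    # wrap around the array and are wrong at this point.
    res = (np.roll(y, -1) - np.roll(y, 1)) / (np.roll(x, -1) - np.roll(x, 1))
    # Overwrite the wrapped boundary values with one-sided differences.
    res[0] = (y[1] - y[0]) / (x[1] - x[0])
    res[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])
    return res

x = np.linspace(0, 2 * np.pi, 100)
print(np.max(np.abs(derivative_roll(np.sin(x), x) - np.cos(x))))
```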
Also note that you have a bug on the line

```python
size = len(y0)
```

It should be `len(y)`. But you don't need `size` at all, since you can access the last values as `y[-1]` or `x[-2]`.

Lastly, consider putting your test code under an `if __name__ == '__main__'` clause.
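A sketch of that layout, using a condensed, vectorised version of the question's function for brevity:

```python
import numpy as np

def compute_numerical_derivative(func, x):
    # Condensed version: centered differences in the interior,
    # one-sided differences at the two boundaries.
    y = func(x)
    res = np.empty_like(y)
    res[1:-1] = (y[2:] - y[:-2]) / (x[2:] - x[:-2])
    res[0] = (y[1] - y[0]) / (x[1] - x[0])
    res[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])
    return res

if __name__ == '__main__':
    # Runs when the file is executed as a script, but not on import,
    # so other modules can reuse the function without side effects.
    x = np.linspace(0, 2 * np.pi, 100)
    print(np.max(np.abs(compute_numerical_derivative(np.sin, x) - np.cos(x))))
```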
You can vectorize the calculation:

```python
def compute_numerical_derivative3(func, x, method='custom'):
    y = func(x)

    if method == 'custom':
        res = np.zeros_like(y)

        # centered differences
        x_dif = x[2:] - x[:-2]
        y_dif = y[2:] - y[:-2]
        res[1:-1] = y_dif / x_dif

        # one-sided differences
        res[0] = (y[1] - y[0]) / (x[1] - x[0])
        res[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])

    # elif method == 'numpy':
    #     res = np.gradient(y)

    return res
```
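A quick check of the vectorised version against both the exact derivative and np.gradient (the function is repeated here, without the unused `method` branch, so the snippet runs on its own; the tolerances are illustrative):

```python
import numpy as np

def compute_numerical_derivative3(func, x):
    # The vectorised scheme from the answer above.
    y = func(x)
    res = np.zeros_like(y)
    res[1:-1] = (y[2:] - y[:-2]) / (x[2:] - x[:-2])
    res[0] = (y[1] - y[0]) / (x[1] - x[0])
    res[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])
    return res

x = np.linspace(0, 2 * np.pi, 100)
y1_cust = compute_numerical_derivative3(np.sin, x)
# On a uniform grid, np.gradient uses the same centered/one-sided scheme.
y1_nump = np.gradient(np.sin(x), x)
print(np.max(np.abs(y1_cust - y1_nump)))
```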
Comments:

What about passing x as the second parameter of np.gradient?

*varargs is just the Python notation to mean a variable number of arguments; just call np.gradient(y, x) and Python will figure it out just fine.
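For instance, a short sketch of that call:

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
# Passing x as the second argument makes np.gradient use the actual
# sample spacing instead of assuming unit spacing between points.
dydx = np.gradient(y, x)
print(np.max(np.abs(dydx - np.cos(x))))
```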