
Computing higher-order derivatives with a recursive (nested) central-difference function can be quite time-consuming, so I tried to use parallel programming to speed up the nested calls.
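For reference, the original nested function is essentially a recursive central difference roughly along these lines (a minimal sketch using mpmath's mp; the name numerical_derivative is only used here for the comparison, the exact original is not reproduced):

from mpmath import mp

def numerical_derivative(f, x, n=1, h=mp.mpf("1e-6")):
    # n-th derivative as the central difference of the (n-1)-th derivative
    if n == 0:
        return f(x)
    return (numerical_derivative(f, x + h, n - 1, h)
            - numerical_derivative(f, x - h, n - 1, h)) / (mp.mpf("2") * h)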

from concurrent.futures import ThreadPoolExecutor
import os
from mpmath import mp

def numerical_derivative_par(f, x, n=1, h=mp.mpf("1e-6"), num_workers=None):
    """
    Compute the numerical derivative of a function at a given point
    using the central difference method.

    Parameters:
    - f: The function to differentiate.
    - x: The point at which to compute the derivative.
    - n: The order of the derivative to compute (default: 1).
    - h: The step size for the finite difference approximation (default: 1e-6).
    - num_workers: The number of parallel workers to use
      (default: None, which uses all available CPU cores).

    Returns:
    - The numerical derivative of the function at the given point.
    """
    if num_workers is None:
        num_workers = os.cpu_count()
    if n == 0:
        return f(x)
    elif n == 1:
        return (f(x + h) - f(x - h)) / (mp.mpf("2") * h)
    else:
        # Evaluate the two (n-1)-th order derivatives at x + h and x - h
        # in separate threads, then combine them.
        with ThreadPoolExecutor(max_workers=num_workers) as executor:
            futures = []
            for sign in [+1, -1]:
                future = executor.submit(
                    numerical_derivative_par, f, x + sign * h, n - 1, h, num_workers
                )
                futures.append(future)

            results = [future.result() for future in futures]

            return (results[0] - results[1]) / (mp.mpf("2") * h)
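For concreteness, a call looks like this (the test function sin, the order n=3, the precision, and the step size are arbitrary example values). Note that each additional order doubles the number of recursive branches, so the recursion performs on the order of 2^n evaluations of f:

from mpmath import mp, sin

mp.dps = 30  # work with 30 significant digits

# 3rd derivative of sin at x = 1 (exact value: -cos(1))
approx = numerical_derivative_par(sin, mp.mpf("1"), n=3, h=mp.mpf("1e-4"))
print(approx)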

However, the parallel function turned out to be much slower than the original nested function. I suspected that something went wrong with the with ThreadPoolExecutor(max_workers=num_workers) as executor: block, but changing num_workers made no difference: it is still much slower than the original nested function.
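The comparison was along these lines (a sketch; numerical_derivative is the sequential version sketched above, and the test function, point, and order are arbitrary):

import time
from mpmath import mp, sin

x0 = mp.mpf("1")

t0 = time.perf_counter()
numerical_derivative(sin, x0, n=6)
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
numerical_derivative_par(sin, x0, n=6)
t_par = time.perf_counter() - t0

print(f"sequential: {t_seq:.3f} s  threaded: {t_par:.3f} s")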

Is the original nested function already parallelized by Python automatically, i.e. are the nested calls automatically separated into different threads? What went wrong with the code above, and how can it be improved?

Is it necessary to parallelize the nested function in Python?

  • This does not work numerically for high-order derivatives. The truncation error of symmetric difference quotients is always O(h^2), while the floating-point evaluation error of the divided difference is ~ mu/h^p, where p is the order of the derivative and mu is the precision of the number type. This gives an optimal step size h ~ mu^(1/(p+2)), with the step size and the (optimal) error moving towards non-small numbers.
  • @LutzLehmann Hello again! (Thank you.) Could you give me some links/references about the truncation error, please? (I posted another related question: math.stackexchange.com/questions/4718876/…) There are limited references about this numerical differentiation method online, and I could not find the error (accuracy) or precision calculations. (A small numerical check of the claimed error behaviour is sketched below these comments.)
  • You get the full error expansion, but due to the symmetry there are no odd-degree terms, so the first non-trivial term is O(h^2). One could doubt whether it is really non-trivial. Using Taylor-shift operators exp(hD), one can write the divided difference as [(sinh(hD)/h)^p f](x), which expands to [D^p f](x) + p/6 h^2 [D^(p+2) f](x) + ...
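A quick numerical check of the error behaviour described in these comments, assuming mpmath; the helper central_diff, the test function exp, and the step sizes are just illustrative choices:

from mpmath import mp, exp

mp.dps = 15  # machine-like precision, mu ~ 1e-15

def central_diff(f, x, p, h):
    # p-th order symmetric difference quotient, built recursively
    if p == 0:
        return f(x)
    return (central_diff(f, x + h, p - 1, h)
            - central_diff(f, x - h, p - 1, h)) / (2 * h)

p = 4
exact = mp.mpf(1)  # p-th derivative of exp at 0
for h in [mp.mpf("1e-1"), mp.mpf("1e-2"), mp.mpf("3e-3"),
          mp.mpf("1e-3"), mp.mpf("1e-4")]:
    err = abs(central_diff(exp, mp.mpf(0), p, h) - exact)
    print(f"h = {h}  error = {err}")

# The error first shrinks roughly like h^2 (truncation) and then grows
# again roughly like mu / h^p (rounding), with the minimum near
# h ~ mu^(1/(p+2)), as the comments describe.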
