Timeline for Sine approximation, did I beat Remez?
Current License: CC BY-SA 4.0
33 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Sep 8, 2025 at 6:17 | comment | added | Juha P | | I made a visual comparison against my Neville-Thiele hybrid interpolation method. Interpolation-based methods tend to be slow, but it seems the accuracy can be good. I'd like to derive a polynomial from the Neville-Thiele hybrid, but my math skills are not good enough for that task. |
| May 14, 2025 at 11:27 | history | edited | minorlogic | CC BY-SA 4.0 | deleted 3 characters in body |
| May 11, 2025 at 11:40 | history | edited | minorlogic | CC BY-SA 4.0 | tan |
| May 11, 2025 at 11:39 | comment | added | minorlogic | | @MartinBrown Take a look: `inline float r(float x, float k) { return x / (1.0f + k * x * x); } inline float tan_approx(float x) { /* maxulperr = 3.82904, maxrelerr = 2.70230e-07, maxabserr = 2.28229e-07 */ float a = -0.078562208; float b = -0.0089859413; float c = -0.24578492; return r(r(r(x, a), b), c); }` (reformatted as a compilable sketch below the table). |
| May 10, 2025 at 18:08 | history | edited | minorlogic | CC BY-SA 4.0 | TAN approximation form |
| May 3, 2025 at 12:47 | comment | added | Martin Brown | | @chux The Remez absolute equal-ripple approximation with 4 coefficients, using my modified Julia version of the ARM Remez code, is iter = 5, maxerr = 5.89E-07 +/- 6.39E-53: `x*(0.999996614147336117544+x2*(-0.16664827666950056+x2*(0.008306318264244135091+x2*(-0.00018363465680338492851))))`. If I forced the x coefficient to be exactly 1.0 then it would be even worse. I could make my Julia code available on the understanding that it is warts-and-all work in progress (i.e. it doesn't always work). Remez to x^9 just wins out with peak error 3.3e-9 (and 5 coeffs). |
| May 2, 2025 at 19:29 | answer | added | njuffa | | timeline score: 7 |
| May 2, 2025 at 17:47 | answer | added | chux | | timeline score: 6 |
| May 2, 2025 at 16:23 | comment | added | chux | | @minorlogic "better than Remez" --> do you have (or link to) a real Remez code against which to compare? Else we are comparing against a fuzzing reference. |
| May 2, 2025 at 14:14 | comment | added | Steve Summit | | @minorlogic Njuffa knows his stuff — you can be sure he's determining ULP differences with respect to authoritative values. Me, I'm impressed by your technique, but I know I don't know enough to really know how impressed I should be. But if njuffa is impressed — now that's high praise indeed! :-) |
| May 2, 2025 at 12:03 | history | edited | minorlogic | CC BY-SA 4.0 | added 45 characters in body |
| May 2, 2025 at 11:58 | comment | added | Martin Brown | | I have very specific requirements for my own function approximations. I'm using a hybrid method - numerical analysis for the trig approximation (etc.) and analytical algebraic solutions to create approximate but considerably more accurate starting guesses for tricky-to-solve non-linear equations. For it to work I want at most a rational cubic/quadratic to remain sane. I could in principle do quartic/cubic, but the difficulty of solving a quartic numerically outweighs any accuracy gains. |
| May 2, 2025 at 11:32 | history | edited | minorlogic | CC BY-SA 4.0 | 0 -64pi range for chart |
| May 2, 2025 at 11:27 | comment | added | minorlogic | | @Martin Brown I'm an applied engineer, far away from academic publications, but thanks for the advice. Did you try a compositional approach to tangent approximation? |
| May 2, 2025 at 11:24 | comment | added | minorlogic | | @njuffa For use in a real math lib, a lot of work would still be needed. I'm more interested in the general approach. |
| May 2, 2025 at 9:24 | comment | added | Martin Brown | | I concur with @njuffa that, if you haven't already done so, you should try to publish it somewhere much more visible, maybe ACM Algorithms? |
| May 2, 2025 at 8:38 | history | edited | minorlogic | CC BY-SA 4.0 | added approximation chart out of minimization range |
| May 2, 2025 at 8:28 | history | edited | minorlogic | CC BY-SA 4.0 | added 312 characters in body |
| May 2, 2025 at 8:21 | comment | added | minorlogic | | @Martin Brown This approach has general application; I tested it with several function approximations. Thanks for sharing your results. |
| May 2, 2025 at 8:17 | comment | added | minorlogic | | @chux The range is only for testing, demonstration, and comparison to Remez; real code will not use [0, pi/2]. For example, this trick is also good for "cos" evaluation. As you can see, my "practical" coefficients only serve to demonstrate the method's application. Please share results if you dig into it. |
| May 2, 2025 at 7:58 | comment | added | Martin Brown | | @chux Since his expression has a decent polynomial expansion out to O(x^9), we might reasonably expect ~300x improvement in accuracy for each halving of the range. It may do better. I'm also curious to see how well it actually does on your suggested ranges. I came back to enquire of the OP whether the same method, when applied to the badly behaved alternating series for log(1+x), yields equally impressive results, or if it actually depends on there being a series in x^2, in which case one would use log[(1-x)/(1+x)]. Thanks! |
| May 2, 2025 at 0:51 | comment | added | chux | | @minorlogic I am especially interested in the maximum ULP error. Per my test it is: `x:0x1.00c894p-2 sin(x):0x1.fc3393027p-3 sine_approximate(x):0x1.fc3398p-3 ulp error:2.495` -- 2.495 is very respectable for a 3-term float sine(). Yet given that error, a double version seems unneeded. |
| May 1, 2025 at 20:52 | comment | added | chux | | @minorlogic Why the range [0,pi/2] versus, say, [0,pi/4]? Usually sin(x) is over a wide x range and then the argument, through various trig properties, is reduced to a sub-range like [0,pi/4]. Is [0,pi/2] the only range of interest? What is the larger use case? |
| May 1, 2025 at 18:27 | comment | added | minorlogic | | Also try removing the "f" suffix from the floating-point coefficients (if using a C++ compiler). They will then be read as "double" and cast to float32. |
| May 1, 2025 at 18:16 | comment | added | minorlogic | | Try this link: colab.research.google.com/drive/… |
| May 1, 2025 at 16:34 | history | edited | minorlogic | | edited tags |
| May 1, 2025 at 16:32 | comment | added | minorlogic | | It was noted before, but the approximation above uses a nice L_INF minimization, with lower errors, closer to Remez. |
| May 1, 2025 at 15:27 | history | edited | minorlogic | CC BY-SA 4.0 | added 36 characters in body |
| May 1, 2025 at 13:31 | comment | added | minorlogic | | Also interesting is the behavior outside the [0, pi/2] range. The biggest problem is that no parallel version of the calculation is available. |
| May 1, 2025 at 13:27 | history | edited | minorlogic | CC BY-SA 4.0 | added 251 characters in body |
| May 1, 2025 at 13:19 | comment | added | Martin Brown | | I can confirm that absolute error in doubles is about 7.4e-9 on 0-pi/2. Remez terms up to x^7 ~ 5.9e-7 and to x^9 ~ 3.3e-9. Your recursion P1() approximates the Remez coefficient in x^9 and higher quite well. It is an interesting compact form that I haven't seen before. The latency won't be great because of data dependencies, but the accuracy here is impressive for so few coefficients. |
| May 1, 2025 at 12:46 | history | edited | minorlogic | CC BY-SA 4.0 | deleted 35 characters in body |
| May 1, 2025 at 12:22 | history | asked | minorlogic | CC BY-SA 4.0 | |
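
The compositional tangent approximation quoted inline in minorlogic's May 11, 2025 comment, reformatted here as a self-contained sketch. The constants and the quoted error figures are taken verbatim from the comment; the test range [0, pi/4] and the small harness around the two functions are assumptions, not part of the original.

```c
/* Compositional tan(x) approximation from the May 11, 2025 comment.
 * Error figures quoted there: maxulperr = 3.82904,
 * maxrelerr = 2.70230e-07, maxabserr = 2.28229e-07.
 * The test range and harness below are assumptions. */
#include <math.h>
#include <stdio.h>

static inline float r(float x, float k) { return x / (1.0f + k * x * x); }

static inline float tan_approx(float x) {
    float a = -0.078562208;   /* unsuffixed (double) literals, converted to float, as in the comment */
    float b = -0.0089859413;
    float c = -0.24578492;
    return r(r(r(x, a), b), c);   /* three nested rational steps */
}

int main(void) {
    for (float x = 0.0f; x <= 0.7853982f; x += 0.1f)
        printf("x = %.4f  tan_approx = %.8f  tanf = %.8f\n",
               (double)x, (double)tan_approx(x), (double)tanf(x));
    return 0;
}
```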
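
For the comparison side, the degree-7 (4-coefficient) Remez polynomial that Martin Brown posted on May 3, 2025 can be evaluated with Horner's scheme in x^2, as below. The coefficients are copied from his comment; assuming the fit interval is the question's [0, pi/2], the brute-force scan in the harness (an addition, not from the comment) should reproduce an error on the order of the quoted 5.89e-07.

```c
/* Degree-7 Remez fit to sin(x); coefficients quoted from the
 * May 3, 2025 comment. Harness and fit-interval assumption are added here. */
#include <math.h>
#include <stdio.h>

static double sin_remez7(double x) {
    double x2 = x * x;                       /* Horner's scheme in x^2 */
    return x * (0.999996614147336117544 +
           x2 * (-0.16664827666950056 +
           x2 * (0.008306318264244135091 +
           x2 * (-0.00018363465680338492851))));
}

int main(void) {
    double maxerr = 0.0;
    for (double x = 0.0; x <= 1.5707963267948966; x += 1e-4) {
        double e = fabs(sin_remez7(x) - sin(x));
        if (e > maxerr) maxerr = e;
    }
    printf("max abs error on [0, pi/2] ~ %.3e\n", maxerr);   /* expect roughly 6e-7 */
    return 0;
}
```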
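
chux's May 1, 2025 comment about reducing the argument to [0, pi/4] refers to the standard quadrant-based reduction; a deliberately naive sketch of it follows. Everything here is illustrative: `sin_kernel`/`cos_kernel` are placeholders standing in for a polynomial kernel such as the one in the question, and the single-step reduction is not accurate for large |x| (a real library would use an extended-precision, Payne-Hanek style reduction).

```c
/* Naive quadrant-based argument reduction for sin(x):
 * x = k*pi/2 + r with |r| <= ~pi/4, then pick the kernel by quadrant.
 * Placeholder kernels and no large-argument handling (illustration only). */
#include <math.h>
#include <stdio.h>

static double sin_kernel(double r) { return sin(r); }   /* stand-in for a polynomial kernel */
static double cos_kernel(double r) { return cos(r); }   /* stand-in for a polynomial kernel */

static double sin_reduced(double x) {
    const double pio2 = 1.5707963267948966;
    double k = nearbyint(x / pio2);          /* nearest multiple of pi/2 */
    double r = x - k * pio2;                 /* reduced argument, |r| <= ~pi/4 */
    int q = (int)((long long)k % 4);         /* quadrant; overflows for huge |x| */
    if (q < 0) q += 4;
    switch (q) {
        case 0:  return  sin_kernel(r);
        case 1:  return  cos_kernel(r);
        case 2:  return -sin_kernel(r);
        default: return -cos_kernel(r);
    }
}

int main(void) {
    for (double x = -10.0; x <= 10.0; x += 2.5)
        printf("x = %6.2f  sin_reduced = % .9f  sin = % .9f\n", x, sin_reduced(x), sin(x));
    return 0;
}
```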
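
The series that Martin Brown contrasts in the May 2, 2025 7:58 comment are the textbook expansions below, stated here only for reference (nothing in them comes from the question's own derivation):

```latex
% Alternating, slowly converging series:
\ln(1+x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots, \qquad |x| < 1
% Odd series (x times a series in x^2), the pattern the comment alludes to:
\ln\frac{1-x}{1+x} = -2\,\operatorname{artanh}(x)
                   = -2\left(x + \frac{x^3}{3} + \frac{x^5}{5} + \cdots\right), \qquad |x| < 1
```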
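
minorlogic's May 1, 2025 comment about dropping the `f` suffix is just the C/C++ rule that an unsuffixed floating literal has type double and is converted when assigned to a float. A tiny C11 sketch of that rule, using one of the coefficients from the comments purely as an example constant:

```c
/* An unsuffixed floating literal is a double; the "f" suffix makes it a float.
 * _Generic (C11) reports the literal's type; the assignment then converts. */
#include <stdio.h>

#define TYPE_NAME(x) _Generic((x), float: "float", double: "double", default: "other")

int main(void) {
    printf("-0.078562208f is a %s literal\n", TYPE_NAME(-0.078562208f));  /* float  */
    printf("-0.078562208  is a %s literal\n", TYPE_NAME(-0.078562208));   /* double */

    float k = -0.078562208;   /* double literal, implicitly converted to float32 */
    printf("k = %a\n", (double)k);
    return 0;
}
```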