The implementation your instructor suggested has two problems:

  1. It does a lot of unnecessary multiplication. At iteration i+1, it essentially recomputes berechneFak(i) and Math.pow(x, i) on the way to computing them for i+1.
  2. Terms like berechneFak(i) and Math.pow(x, i) can get very big, very fast. That's not a problem in pure mathematics, but the range and precision of computer floating-point numbers are limited. If a term overflows, it can demolish your results. And when you have something like x = y/z, where y and z are both very big, you may lose precision in the quotient x even though x is nice and small and theoretically perfectly representable.

Here, there's a great way to address both problems. If you've already computed the factorial berechneFak(i), then on the next iteration you can simply multiply it by i+1 to get berechneFak(i+1). If you've already computed Math.pow(x, i), then you can simply multiply it by x again to get Math.pow(x, i+1). And if you perform both operations on a single running quotient variable term, as you did, you minimize the magnitude of the numbers involved, which reduces the possibilities for both overflow and precision loss.
So, based on these arguments, your implementation should perform quite a bit better than the one suggested by your instructor.
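
In code, that running-term technique looks something like the sketch below. This is a hedged illustration: the method name taylorSin, the variable names, and the fixed ten-term cutoff are my own choices, not code from the question or from this answer.

```java
// A sketch of the running-term technique (illustrative names, not the
// original code). Computes sin(x) from its Taylor series; assumes x
// has already been range-reduced to a modest interval.
static double taylorSin(double x) {
    double term = x;      // first term: x^1 / 1!
    double sum = term;
    for (int i = 1; i <= 10; i++) {
        // Turn x^(2i-1)/(2i-1)! into x^(2i+1)/(2i+1)! with two
        // multiplications and one division, so no huge factorial or
        // power is ever formed as an intermediate value.
        term *= -x * x / ((2 * i) * (2 * i + 1));
        sum += term;
    }
    return sum;
}
```

Note that term shrinks rapidly as i grows, so nothing overflows, and no big-numerator-over-big-denominator division ever happens.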

But for this particular Taylor series, the problem with the arguments I've presented is that, in my experience, they don't end up making much difference in practice. The efficiency argument is probably real. But it ends up being hard to show that the hypothetical inaccuracies due to overflow and precision loss will actually occur.

Assuming you've reduced the range of the input x properly, x won't be large, and so Math.pow(x, i) won't grow so fast. And when you're computing y/z, even when y and z are large, the properties of division and of IEEE-754 floating point mean that you usually get a good result after all; you don't lose as much precision as you might fear. Finally, as I mentioned in a comment, the Taylor series for sin and cos are so darn good that even a naïve implementation tends to converge to a good answer, and quickly. (That's why, for the problem you chose, your implementation and the professor's gave practically identical results.)
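
For completeness, here is one hedged sketch of that range reduction. The helper reduceRange is hypothetical, not something from the question, and a naive remainder-based reduction like this loses accuracy for very large |x|, but it shows the idea.

```java
// Hypothetical helper: fold x into [-pi, pi] before summing the series.
// Math.IEEEremainder(x, 2*pi) returns a value in [-pi, pi]; for very
// large |x| the reduction itself loses precision, so treat this as an
// illustration rather than a production-quality reducer.
static double reduceRange(double x) {
    return Math.IEEEremainder(x, 2.0 * Math.PI);
}
```

With that in place, one would call taylorSin(reduceRange(x)).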

(2) In general, well-chosen rearrangements, which are mathematically equivalent in theory but work around the various limitations of floating point, can be an excellent idea.

(3) It sounds like you might not always want to take this particular instructor's teachings to heart.
