Jelly, 15 bytes
³²:2_²:Ẹ¡:2+µƬṪ
An arbitrary-precision Jelly answer that uses the Newton-Raphson method to find the correct answer. It uses only integer arithmetic operations, so the intermediate values are all Python big ints rather than being cast to floats, which would lose precision. The integer result is the floor of what the exact real-valued answer would be.
A full program that takes a (possibly negative) integer as its argument and returns an integer.
Now correctly handles inputs of 0 and 1; previously these threw an error because integer division of 0 by 0 is not permitted.
Useful comment from @PeterCordes about efficiency of this method and some detail on Python’s big integer implementation:
Newton-Raphson converges quickly, like twice as many correct bits per iteration, given a decent first estimate. e.g. one step refines a 12-bit-precision rsqrtps(x) FP result into nearly 24-bit. (In this case apparently the original input is close enough). You only pay Python interpreter overhead per operation, not per limb (aka chunk) of a very long number; extended-precision division is not cheap but it is implemented in C on chunks of 2^30 stored in an array of 32-bit integers. (I forget if Python uses 64-bit on 64-bit machines.)
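To see why staying in integers matters, here is a small Python check (my own illustration, not part of the answer): a value around 10⁶⁰ already exceeds the 53-bit float mantissa, so routing it through a float drops the low digits, while integer arithmetic (and `math.isqrt`) stays exact.

```python
import math

n = (10**30) ** 2 // 2     # the Newton target x²//2 for x = 10**30
# A 64-bit float keeps only 53 bits of mantissa, so this round-trips inexactly:
assert int(float(n)) != n
# Integer arithmetic is exact; math.isqrt gives the true integer square root:
r = math.isqrt(n)
assert r * r <= n < (r + 1) * (r + 1)
```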
Explanation
µƬ  | Do the following as a monad until no new values are seen, collecting the intermediate values:
³   | - Original argument to the program
²   | - Squared
:2  | - Integer divide by 2
_²  | - Subtract the square of the current estimate
Ẹ¡  | - If non-zero:
:   |   - Integer divide by the current estimate
:2  | - Integer divide by 2
+   | - Add to the current estimate
Ṫ   | Finally, take the tail of the list of estimates
Note that Ẹ¡ literally means "repeat the number of times given by applying the any function to the current value", but it is effectively used here to mean "if non-zero".
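The integer-only Newton-Raphson loop above can be sketched in Python as follows (an illustrative re-implementation of the Jelly chain, not part of the original answer; the function name is mine):

```python
def floor_div_sqrt2(x):
    """Integer-only Newton-Raphson for floor(x / sqrt(2)).

    Mirrors the Jelly chain: iterate est -> est + ((x²//2 - est²) // est) // 2,
    starting from x, until an estimate repeats, then return the last new one.
    All arithmetic stays in Python big ints, so no precision is lost.
    """
    n = x * x // 2                  # ³²:2 — target value: floor(x² / 2)
    estimates = [x]                 # µƬ collects the intermediate values
    while True:
        est = estimates[-1]
        diff = n - est * est        # _²  — how far off the estimate is
        if diff:                    # Ẹ¡  — guard against 0 // 0
            diff //= est            # :   — integer divide by the estimate
        nxt = est + diff // 2       # :2+ — halve and add to the estimate
        if nxt in estimates:        # Ƭ stops when no new value appears
            return estimates[-1]    # Ṫ   — tail of the collected list
        estimates.append(nxt)
```

Like the Jelly program, this handles 0, 1 and negative inputs, and the result is the floor of the exact real-valued quotient.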
A much shorter answer that is only accurate to float precision is:
Jelly, 4 bytes
2½:@
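A Python analogue of that short float-precision approach (my own sketch, not from the answer) is simply:

```python
import math

def approx_div_sqrt2(x):
    # Divide by √2 in floating point, then floor — concise, but only
    # accurate to float precision, unlike the integer Newton-Raphson version.
    return math.floor(x / math.sqrt(2))
```

For very large inputs the float rounding makes this differ from the exact floored answer.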