Apparently this algorithm is well known.
The algorithm builds the square-root answer bit by bit, from the most significant bit down to the least significant.
Let's say we support square roots of 8-bit unsigned integers, for simplicity.
let x be an 8-bit unsigned integer
let res be the result, initially 0
let n index the bits of res, where bit 0 is the least significant bit
for n from 3 down to 0:
    set the nth bit of res to 1
    if res * res > x then:   <- The Comparison
        restore the nth bit back to 0
return res
The returned result is a 4-bit unsigned integer.
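The pseudocode above can be sketched directly in Python (the function name `isqrt8` is my own choice, not from the question):

```python
def isqrt8(x):
    """Integer square root of an 8-bit unsigned integer (0..255).

    Builds the 4-bit result one bit at a time, from the most
    significant bit (bit 3) down to bit 0, as in the pseudocode.
    """
    assert 0 <= x <= 255
    res = 0
    for n in range(3, -1, -1):          # bits 3, 2, 1, 0
        candidate = res | (1 << n)      # tentatively set the nth bit
        if candidate * candidate <= x:  # keep the bit only if it fits
            res = candidate
    return res
```

For example, `isqrt8(4)` returns `2`, and `isqrt8(255)` returns `15` (the largest 4-bit value).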
For example, finding the square root of 4:
(image: step-by-step trace of the algorithm for x = 4)
Is this perhaps the digit-by-digit calculation in the binary numeral system (base 2)? Wiki Link here
Thank you for your time!
- I'm not sure what the question is here. — Paul92, Sep 29, 2016
- I would describe it as an optimized binary search. — Stack Exchange Broke The Law, Sep 30, 2016
- What is shown in the question is a digit-by-digit calculation of the square root (I would claim not a particularly fast one, and therefore of little practical use). What is shown at the Wiki link is the classical longhand digit-by-digit square-rooting method based on the binomial theorem, which goes back to the Middle Ages (possibly Fibonacci's "Liber Abaci"?) and until a few decades ago was taught to high-school students. — njuffa, Nov 29, 2016
3 Answers
I might call it successive approximation, or maybe binary search to invert a monotonic function. If you are able to evaluate a function f(x), and you know that f(x) is strictly monotonic (strictly increasing or strictly decreasing) on the relevant domain of x, then you can always invert f(x) to whatever finite degree of precision you want with this binary search: one iteration for every bit you need in your result.
Yes, that's essentially the same algorithm described in the article that you linked.
The main difference is your pseudocode's use of multiplication, which is more expensive than necessary; as the same article points out, "... the operations needed to compute ... e * e can be replaced with faster bit shift operations."
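One common shift-based formulation of the same digit-by-digit method (a sketch in that spirit, not code taken from the article) avoids general multiplication entirely, using only shifts, adds, subtracts, and compares:

```python
def isqrt_shift(x):
    """Integer square root of an 8-bit unsigned value using only
    shifts, adds, and comparisons (no general multiplication)."""
    assert 0 <= x <= 255
    res = 0
    bit = 1 << 6            # 64: the highest power of 4 within 8 bits
    while bit > x:
        bit >>= 2           # skip leading zero digit-pairs
    while bit:
        if x >= res + bit:  # can the current bit be part of the root?
            x -= res + bit
            res = (res >> 1) + bit
        else:
            res >>= 1
        bit >>= 2
    return res
```

Extending it to wider integers only requires starting `bit` at the highest power of four within the input's range.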
HTH,
Put simply: given that X == Y^2 (a reformulation of sqrt(X) == Y), suppose you would like to solve (N*N*X + m) == (N*Y + k)^2 for k, where the value of N is fixed for the entire algorithm. You can simply substitute the values k = 0, 1, ..., N-1 and compare the magnitude of the left-hand side (which is fixed) against the right-hand side (which depends on the value substituted for k).
For N = 2, the possible values of k are simply 0 and 1, so the method reduces to a binary search.
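This reformulation can be sketched for an arbitrary fixed base N (the function name and variable names are my own; expanding (N*Y + k)^2 shows the comparison against N*N*X + m is equivalent to testing (2*N*Y + k)*k against the running remainder):

```python
def isqrt_digitwise(x, N=2):
    """Digit-by-digit integer square root in base N.
    For N = 2 this is the question's algorithm; for N = 10 it is
    the classical longhand method mentioned in the comments."""
    # split x into groups of two base-N digits, most significant first
    groups = []
    while x:
        groups.append(x % (N * N))
        x //= N * N
    groups.reverse()
    if not groups:
        groups = [0]
    y, r = 0, 0                 # y: root so far; r: remainder so far
    for m in groups:
        r = N * N * r + m       # "bring down" the next digit pair
        # largest k in 0..N-1 with (N*y + k)^2 <= N*N*(x so far) + m,
        # i.e. with (2*N*y + k) * k <= r
        for k in range(N - 1, -1, -1):
            if (2 * N * y + k) * k <= r:
                break
        r -= (2 * N * y + k) * k
        y = N * y + k           # append digit k to the root
    return y
```

With N = 2 each step tries only k = 1 and k = 0, which is exactly the binary search described above.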