I came across this thread and I am working on the same problem. My best algorithm passes 6/11 cases, while the algorithm in the other thread does indeed pass 8/11. In timing trials with random input, my algorithm is significantly quicker (8 to 12 times, depending on input) than the one in the other thread, so I am pretty confused by these results.
Aside from empirical testing, these are the reasons I think my code should run better than the other implementation:

- Appending to a StringBuilder and printing once should be more efficient than multiple print statements.
- Better reduction in the logic for binary addition. Also, starting the addition from the least significant 0 bit in both BitSets, a and b, cuts out unnecessary computation.
- Using BufferedReader has a lot less overhead than using Scanner.
- BitSet really didn't offer much of a performance difference over a character array, but reversing the binary strings is wasted work, and StringBuilder's reverse is not the fastest.
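To illustrate the BufferedReader point, here is a minimal sketch (the FastInput name and structure are my own, not from either post) that reads the same whitespace-separated token stream without Scanner's regex machinery:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.StringTokenizer;

// Hypothetical replacement for the Scanner calls in floatSum():
// BufferedReader pulls whole lines, and StringTokenizer splits them
// into tokens without per-token regex matching.
final class FastInput {
    private final BufferedReader reader;
    private StringTokenizer tokens = new StringTokenizer("");

    FastInput(BufferedReader reader) {
        this.reader = reader;
    }

    String next() {
        try {
            // Assumes well-formed input: readLine() is never called past EOF.
            while (!tokens.hasMoreTokens()) {
                tokens = new StringTokenizer(reader.readLine());
            }
            return tokens.nextToken();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    int nextInt() {
        return Integer.parseInt(next());
    }
}
```

For stdin you would construct it as `new FastInput(new BufferedReader(new InputStreamReader(System.in)))`.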
Can anyone shed some light on why the seemingly slower algorithm does better in the testing?
Version 2
private static String floatSum()
{
    final Scanner input = new Scanner(System.in);
    final int number_bits = input.nextInt();
    final int number_queries = input.nextInt();
    final StringBuilder builder = new StringBuilder();
    // reversed so that index 0 is the least significant bit
    final char[] a = new StringBuilder(input.next()).reverse().toString().toCharArray();
    final char[] b = new StringBuilder(input.next()).reverse().toString().toCharArray();
    for(int queries = 0; queries < number_queries; queries++)
    {
        switch(input.next().charAt(4))
        {
        case 'a':
            a[input.nextInt()] = input.next().charAt(0);
            break;
        case 'b':
            b[input.nextInt()] = input.next().charAt(0);
            break;
        default:
            final int index = input.nextInt();
            int carry_bit = 0;
            // scan down for the nearest position where the bits agree;
            // that bit decides the carry into index
            for(int iter = index - 1; iter >= 0; iter--)
            {
                if(a[iter] == b[iter])
                {
                    carry_bit = a[iter] - 48;
                    break;
                }
            }
            builder.append(index == number_bits ? carry_bit : ((a[index] - 48) + (b[index] - 48) + carry_bit) % 2);
            break;
        }
    }
    return builder.toString();
}
Answer
Consider this case:
A: 1111111111
B: 1111111111
Query: n+1
The other post's method terminates very quickly. It knows that since both numbers end in 1, it must carry, and thus doesn't have to scan through the entire string. On the other hand, your method tries to find clear bits, but since there aren't any, it ends up going through the whole bitstring.
Yours is probably faster overall, but this pathological case happens to have been put first in the test cases, and so your program fails.
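A throwaway generator for that pathological input (my own sketch; the input format follows the code in the question) makes it easy to benchmark against the all-ones case directly:

```java
// Hypothetical harness: n bits, q queries, both operands all 1s,
// every query asking for the top carry bit.
final class WorstCase {
    static String build(int numberBits, int numberQueries) {
        StringBuilder sb = new StringBuilder();
        sb.append(numberBits).append(' ').append(numberQueries).append('\n');
        String ones = "1".repeat(numberBits);           // both operands all 1s
        sb.append(ones).append('\n').append(ones).append('\n');
        for (int q = 0; q < numberQueries; q++) {
            sb.append("get_c ").append(numberBits).append('\n');
        }
        return sb.toString();
    }
}
```

Feeding this to each implementation (e.g. via a ByteArrayInputStream set as System.in) should show the asymmetry that random inputs hide.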
if(carry && a.get(iter) && b.get(iter))
{
    carry = true;
    value = 1;
}
else if((carry && a.get(iter)) || (carry && b.get(iter)) || (a.get(iter) && b.get(iter)))
{
    carry = true;
    value = 0;
}
else if(carry || a.get(iter) || b.get(iter))
{
    carry = false;
    value = 1;
}
else
{
    carry = false;
    value = 0;
}
This whole bit is kinda hard to follow; you could instead do something like:
int bits = (a.get(iter) ? 1 : 0) + (b.get(iter) ? 1 : 0) + (carry ? 1 : 0);
value = bits % 2;
carry = bits / 2 == 1;
I think it's clearer.
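As a sanity check, the counting trick can be packaged as a tiny full adder and verified against all eight input combinations (the class and method names here are my own, not from the post):

```java
// Full adder built on the bits % 2 / bits / 2 idea from above;
// returns {sum bit, carry out}.
final class FullAdder {
    static int[] add(boolean a, boolean b, boolean carry) {
        // count how many of the three inputs are set
        int bits = (a ? 1 : 0) + (b ? 1 : 0) + (carry ? 1 : 0);
        return new int[] { bits % 2, bits / 2 };
    }
}
```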
int a_index = a.nextClearBit(number_bits - index);
int b_index = b.nextClearBit(number_bits - index);
Yeah, that's not gonna fly. nextClearBit has to scan through the bit set bit by bit in order to do that. You're going to need to come up with a way to look through the bits without looking at each individual bit. I've solved this problem, and I used a specialized data structure to do it.
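One candidate, sketched below, is to index the positions where the two operands agree (this is my own illustration, not necessarily the structure the answer used, and a commenter below reports a TreeMap/TreeSet variant being too slow in practice): the carry into a given index is decided by the highest agreeing position below it, which becomes a single floor() lookup instead of a linear scan.

```java
import java.util.TreeSet;

// Hypothetical sketch: maintain the set of positions where a and b hold
// the same bit. At an agreeing position, 1 + 1 generates a carry and
// 0 + 0 absorbs one, so the nearest such position below `index` settles
// the carry regardless of the bits in between.
final class CarryIndex {
    private final char[] a;
    private final char[] b;                    // index 0 = least significant bit
    private final TreeSet<Integer> equalPositions = new TreeSet<>();

    CarryIndex(char[] a, char[] b) {
        this.a = a;
        this.b = b;
        for (int i = 0; i < a.length; i++) {
            if (a[i] == b[i]) {
                equalPositions.add(i);
            }
        }
    }

    void setA(int i, char bit) { a[i] = bit; update(i); }

    void setB(int i, char bit) { b[i] = bit; update(i); }

    private void update(int i) {
        if (a[i] == b[i]) {
            equalPositions.add(i);
        } else {
            equalPositions.remove(i);
        }
    }

    // Carry propagating into bit `index`, in O(log n) per query.
    int carryInto(int index) {
        Integer j = equalPositions.floor(index - 1);
        return j == null ? 0 : a[j] - '0';
    }
}
```

Updates and queries are both O(log n); if the TreeSet's constant factors are still too high, the same idea can be carried over to a segment tree or a bitmask-per-word index.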
- Going to have to ponder your points. Since it doesn't seem that raw performance is the key to passing each test case, is there a better metric for rating performance? As of now I loop through a number of inputs, creating a text file for each input, then load the file into System.in and record the milliseconds it took for each method to complete. — ntin, Mar 1, 2012 at 3:51
- @ntin, well, raw performance is the right metric. The problem is that there are test cases where your algorithm does really, really badly. You've got to measure performance on those test cases. — Winston Ewert, Mar 1, 2012 at 3:53
- I see what you mean about the data structure; TreeMap/Set is too slow with the inserts/deletes to find the pair closest to the get_c index. — ntin, Mar 3, 2012 at 0:36