The 8 in the NO_OF_BITS definition should be replaced with <limits.h>'s CHAR_BIT (though the chance that you run into an architecture where it is different is very small). Or the entire definition can be replaced with std::numeric_limits<unsigned int>::digits.
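A definition along those lines might look like the sketch below (the name NO_OF_BITS is taken from the original code; the static_assert is just a sanity check, not part of the suggestion):

```cpp
#include <climits>   // CHAR_BIT
#include <limits>    // std::numeric_limits

// Bits per storage word, without the hard-coded 8.
constexpr int NO_OF_BITS = sizeof(unsigned int) * CHAR_BIT;

// The two approaches agree as long as unsigned int has no padding bits,
// which is the case on all common platforms.
static_assert(NO_OF_BITS == std::numeric_limits<unsigned int>::digits,
              "unexpected padding bits in unsigned int");
```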
The generic parameter is the size of the bitset; however, it is off by a factor of 8. Instead you can do

unsigned int array[(n + NO_OF_BITS - 1) / NO_OF_BITS];

This way the size is exactly the generic parameter. The logic is that integer division rounds down, but we want to round up so there is enough space. We could just add 1 to the result, but that would waste space whenever n is a multiple of NO_OF_BITS. So we pull the addition inside the division and subtract 1.
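The rounding-up division can be factored into a small helper to make the intent explicit (the function name here is mine, not from the original code):

```cpp
#include <cstddef>

// Number of words of `bits_per_word` bits needed to hold `n` bits,
// i.e. ceil(n / bits_per_word) using only integer arithmetic.
constexpr std::size_t words_needed(std::size_t n, std::size_t bits_per_word) {
    return (n + bits_per_word - 1) / bits_per_word;
}
```

For 32-bit words this gives words_needed(32, 32) == 1 and words_needed(33, 32) == 2, exactly the round-up behaviour described above.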
For count you can do a repeated bitcount:

int sum = 0;
for (int i = 0; i < (n + NO_OF_BITS - 1) / NO_OF_BITS; i++) {
    sum += bitcount(array[i]);
}
Searching for a bitcount implementation on SO will show you this question with a fast implementation.
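One such fast implementation is the well-known parallel ("SWAR") bit count; the sketch below assumes unsigned int is exactly 32 bits:

```cpp
// Counts set bits by summing adjacent groups of 1, 2, then 4 bits in
// parallel, then adding the four byte sums together via the multiply.
// Assumes unsigned int is exactly 32 bits.
int bitcount(unsigned int v) {
    v = v - ((v >> 1) & 0x55555555u);                  // 2-bit sums
    v = (v & 0x33333333u) + ((v >> 2) & 0x33333333u);  // 4-bit sums
    return (((v + (v >> 4)) & 0x0F0F0F0Fu) * 0x01010101u) >> 24;
}
```

With compiler support you can also use an intrinsic such as GCC/Clang's __builtin_popcount, which maps to a single instruction on most modern CPUs.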
All and none can be implemented by checking whether any of the array elements is not 0xffffffff or 0, respectively (making an allowance for the last array element).
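As a sketch of that check (free functions here for brevity; array and n play the same roles as in the class, and the mask is the allowance for the partial last element):

```cpp
#include <cstddef>
#include <limits>

constexpr std::size_t NO_OF_BITS = std::numeric_limits<unsigned int>::digits;

// True if none of the first n bits are set.
bool none(const unsigned int* array, std::size_t n) {
    std::size_t words = (n + NO_OF_BITS - 1) / NO_OF_BITS;
    for (std::size_t i = 0; i < words; ++i)
        if (array[i] != 0u) return false;
    return true;
}

// True if all of the first n bits are set.
bool all(const unsigned int* array, std::size_t n) {
    std::size_t full = n / NO_OF_BITS;          // fully used words
    for (std::size_t i = 0; i < full; ++i)
        if (array[i] != ~0u) return false;
    std::size_t rest = n % NO_OF_BITS;          // bits in the last word
    if (rest != 0) {
        unsigned int mask = (1u << rest) - 1u;  // low `rest` bits only
        if ((array[full] & mask) != mask) return false;
    }
    return true;
}
```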