Looking at the problem statement, a few obvious solutions come to mind:

- Count how many of each number there is and take the smallest with 'k' entries. This is what you do, and as you are using a std::map with log(m) insertion, where 'm' is the number of distinct elements (bounded by 1000), you get \$O(n\log(m))\$ run time and \$O(m)\$ memory.
- Sort the input vector and scan linearly from the start until you find something that repeats 'k' times. As you have to sort the whole thing, you get \$O(n\log n)\$ time and \$O(n)\$ memory, unless you can modify the input array.
But there is a better way:
/**
 * Will find the smallest value in 'vec' that is repeated 'k' times.
 *
 * It is assumed that values of 'vec' are [1, 1000[ as per
 * the problem description.
 */
void findElement(const std::vector<unsigned int>& vec, int k){
    constexpr auto max_value = 1000U;
    std::array<unsigned int, max_value> freq;
    freq.fill(k);
    for(const auto& value : vec){
        freq.at(value)--; // .at() throws on out-of-range input
    }
    for(std::size_t f = 0; f < freq.size(); ++f){
        if(0 == freq[f]){ // exactly k occurrences leave the slot at zero
            std::cout << f << '\n';
            return;
        }
    }
    std::cerr << "No such element\n";
}
The raw array is actually faster as this code only performs \$O(n+m)\$ work which is less than \$O(nlogm)\$. Yes it's not as "generic" as the template method and won't work for all T
. But it doesn't need to be able to do that for the task.
One could solve this using std::unordered_map as well, and the big O time is the same:
template<typename Container>
void findElement(const Container& container, int k){
    std::unordered_map<typename Container::value_type, int> freq;
    for(const auto& value : container){
        freq[value]++;
    }
    // Track an iterator to the best candidate so far; freq.end() means "none yet".
    auto smallest = freq.end();
    for(auto it = freq.begin(); it != freq.end(); ++it){
        if(it->second == k && (smallest == freq.end() || it->first < smallest->first)){
            smallest = it;
        }
    }
    if(smallest != freq.end()){
        std::cout << smallest->first << '\n';
    }
    else{
        std::cerr << "No such element\n";
    }
}
The algorithm is essentially the same, with one minor difference: the std::unordered_map approach cannot terminate early on finding the first element with exactly 'k' occurrences, as the array-based algorithm can. The unordered_map version must visit all 'M < 1000' unique values in 'vec', with some overhead from traversing a sparse hash map, while the array version on average only needs to visit '1000/2' of the possible values.
So each is faster in different cases. For example, a degenerate case such as k=3, {999, 999, 999} is faster with unordered_map, as you only visit one unique element, while the array solution has to (very quickly) scan through the 998 preceding entries in the array. But on average, over random inputs, the array solution is expected to be slightly faster by a constant factor, as both have the same big O time.