concurrently iterating over a parallel_flat_hash_map #247

Answered by greg7mdp
vondele asked this question in Q&A

I would like to concurrently iterate over a parallel_flat_hash_map (read-only access). I imagine this is possible if I could assign each thread a non-overlapping begin()/end() range, and that it would be efficient if I could get a begin() and end() for each of the submaps.

Is there a public API for that, or otherwise an easy way to do it?

Replies: 2 comments · 1 reply

greg7mdp:

Yes, this is definitely possible.
You can use with_submap or with_submap_m (see here); use the second version, with the _m suffix, if you mutate the submaps.
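
For the read-only case in the question, here is a minimal sketch of the with_submap approach (the map type, its contents, and the OpenMP pragma are illustrative assumptions; subcnt() and with_submap are the actual API):

#include <cstddef>
#include <cstdio>
#include <parallel_hashmap/phmap.h>

int main() {
    phmap::parallel_flat_hash_map<int, long long> map;
    for (int i = 0; i < 1000; ++i)
        map[i] = (long long)i * i;

    // Each iteration handles one whole submap. with_submap takes that
    // submap's lock (a no-op with the default NullMutex, which is fine here
    // because nothing writes concurrently) and passes the submap to the
    // callback as a const reference.
    #pragma omp parallel for
    for (size_t i = 0; i < map.subcnt(); ++i) {
        map.with_submap(i, [i](const auto& submap) {
            long long local_sum = 0;
            for (const auto& [key, value] : submap)
                local_sum += value;
            std::printf("submap %zu sum: %lld\n", i, local_sum);
        });
    }
}

Since each thread only ever sees distinct submaps, no two threads iterate the same underlying table.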

You can also use get_inner. I don't have a current example of it, but here is a code fragment:

template <class K, class V, size_t N, class Mutex>
using Map = phmap::parallel_flat_hash_map<K, V, phmap::priv::hash_default_hash<K>, phmap::priv::hash_default_eq<K>,
                                          phmap::priv::Allocator<phmap::priv::Pair<const K, V>>, N, Mutex>;

using OutputTable = Map<Bitset, double, 8, std::mutex>;

OutputTable test(const Table& table1, const Table& table2) {
    OutputTable table;
    #pragma omp parallel for // schedule(static, 1)
    for (size_t i = 0; i < table1._table.subcnt(); ++i) {
        const auto& set1 = table1._table.get_inner_m(i).set_;
        const auto& set2 = table2._table.get_inner_m(i).set_;
        for (const auto& [index, vec1] : set1) {
            auto it = set2.find(index);
            if (it != set2.end()) {
                const auto& vec2 = it->second;
                // create a new hashmap to contain all results sharing the common pattern
                for (const auto& e1 : vec1) {
                    for (const auto& e2 : vec2) {
                        Bitset b(e1.first);
                        b |= e2.first;
                        table.lazy_emplace_l(
                            b, [&](OutputTable::value_type& entry) { entry.second += e1.second * e2.second; },
                            [&](const OutputTable::constructor& c) { c(std::move(b), e1.second * e2.second); });
                    }
                }
            }
        }
    }
    return table;
}
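
(A note on why the fragment above is safe: table1 and table2 are only read, and each OpenMP iteration reads submap i of both inputs, so no two threads ever touch the same input submap. The shared output table is itself a parallel map with a std::mutex per submap, and lazy_emplace_l performs its find-or-construct while holding that submap's lock, so concurrent insertions from different threads don't race. Its first lambda runs when the key already exists; the second receives a constructor object used to build the new element in place.)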
vondele:

I went for the with_submap variant; it works very nicely, with minimal changes to the code:
https://github.com/vondele/cdbsubtree/blob/5ec757bf7ab1a7a5b1ed1270ae1b1872c60cc145/main.cpp#L337-L354

Answer selected by vondele
greg7mdp:

Looks good!
