I created a SafeQueue class, which stores pointers. It has two methods:
- push: adds a new pointer to the queue
- next: if the queue is empty, returns nullptr; otherwise it returns the front element and pops the queue
Most of the time I have one producer and one consumer, but there may be more. The producer simply calls .push(ptr). The consumer(s) call .next() until a nullptr is returned, or they continue the loop forever.
The reason I did this is to eliminate an .isEmpty function, so that between the .empty() and .front() calls no other thread can pop the queue. It works, but I don't think it is the preferable or an optimal solution to the problem.
The code is:
```cpp
template<class T>
class SafeQueue {
    std::queue<T*> q;
    std::mutex m;
public:
    SafeQueue() {}

    void push(T* elem) {
        m.lock();
        if (elem != nullptr) {
            q.push(elem);
        }
        m.unlock();
    }

    T* next() {
        T* elem = nullptr;
        m.lock();
        if (!q.empty()) {
            elem = q.front();
            q.pop();
        }
        m.unlock();
        return elem;
    }
};
```
I see some things that may help you improve your code.
List all required #includes

The code needs the following #includes to actually compile. Since they form part of the interface, they should be listed in the file and in a code review:

```cpp
#include <mutex>
#include <queue>
```
Be clear about ownership
If the intention is to have a thread-safe queue, then passing pointers in and out is definitely not the way to go. The problem is object ownership. Even if your thread-safe queue works perfectly, its safety is all too easily bypassed because you're using raw pointers. For example:
```cpp
SafeQueue<std::string> sq;
{
    std::string msg1{"this message exists"};
    sq.push(&msg1);
} // msg1 is now destroyed, but the queue still holds a pointer to it
std::cout << *sq.next() << " no longer\n"; // kaboom!
```
The problem is that the queue doesn't actually own the object (or at least hold a std::shared_ptr to it), so there's not much point in perfecting the queue until that's addressed.
Choose better names

I have never thought of push and next as inverse operations, and I'll bet you never have either. The push member function name is OK, since it actually does that, but next is just a strange name. I'd say pop or pop_front would be a better name.
Minimize locking duration
In the push code, we have this:
```cpp
void push(T* elem) {
    m.lock();
    if (elem != nullptr) {
        q.push(elem);
    }
    m.unlock();
}
```
But why acquire a lock if you don't need it? It just slows things down. You could instead write that like this:
```cpp
void push(T* elem) {
    if (elem == nullptr) {
        return;
    }
    m.lock();
    q.push(elem);
    m.unlock();
}
```
Or better, see the next suggestion:
Use RAII to reduce errors
If you happened to forget to release the lock in the code above, Bad Things would likely happen. Fortunately, there's a handy idiom, used very often in C++, called Resource Acquisition Is Initialization (RAII). In this context, we use a std::lock_guard like this:
```cpp
void push(T elem) {
    std::lock_guard<std::mutex> lock(m);
    q.push(elem);
}
```
The lock_guard locks the mutex on construction and unlocks it on destruction, so the lock is released when the guard goes out of scope even if you forget.
Return an indicator of success
Since we've already established that passing pointers in or out is a problem, I'd suggest changing the interface of the next() function. Have it take a reference (so the caller must supply one) and return a bool to indicate success. That might look like this:
```cpp
bool next(T& elem) {
    std::lock_guard<std::mutex> lock(m);
    if (q.empty()) {
        return false;
    }
    elem = q.front();
    q.pop();
    return true;
}
```
Comment:

- Good point about minimizing lock duration. – Surt (Dec 13, 2016)
For this kind of queue I often use something like this:
```cpp
std::queue<T*> q;
std::mutex mtx;
std::condition_variable cv;

void push(T* elem) {
    if (elem == nullptr) {
        return;
    }
    std::unique_lock<std::mutex> lck(mtx);
    q.push(elem);
    cv.notify_one();
}

T* next() {
    std::unique_lock<std::mutex> lck(mtx);
    // The predicate must be callable so it can be re-checked after
    // spurious wakeups (passing the bool !q.empty() would not compile
    // against the predicate overload and would be evaluated only once).
    cv.wait(lck, [&] { return !q.empty(); });
    T* elem = q.front();
    q.pop();
    return elem;
}
```
The consumers block on the empty queue, and one is woken for each push. That way you don't waste CPU cycles spinning when there is nothing to process, but there is a certain cost to the locking and the task switch. So measure your performance.
Comments:

- Why did you use unique_lock in the push method too? Isn't it only required in the next function? Is it possible to use a lock_guard instead of unique_lock in the push? – Iter Ator (Dec 13, 2016)
- @IterAtor Easy answer: habit; using the same lock style makes me feel safe. There are some requirements on the CV lock, which I couldn't remember, that required using the same lock in other places. – Surt (Dec 13, 2016)
- You always have to pass a lambda or a function to std::condition_variable::wait so the expression gets evaluated when a spurious wakeup happens. – JulianW (Mar 30, 2019)
- In your push() you need to use a shared_lock, not a unique_lock. The unique_lock is a deferred lock. – edwinc (Aug 3, 2020)
- @edwinc I can't find a place where it says std::defer_lock is the default; could you add a reference please. – Surt (Aug 7, 2020)