thread_specific_ptr thread safe access - boost

I have the following code snippet
class MCSLock
{
    static boost::thread_specific_ptr<mcs_lock> tls_node;
public:
    MCSLock()
    {
        if( tls_node.get() == 0 )
            tls_node.reset( new mcs_lock() );
    }
};
My understanding is that each thread has its own storage slot for tls_node. This means
the constructor, in which we call get() and reset(), is thread safe.
Is my understanding correct?
Thanks.

Yes. Each thread sees its own (thread-local) tls_node instance, so the get() and reset() calls in the constructor operate on that thread's private object and cannot race with calls from other threads.
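To make the per-thread behaviour concrete, here is a minimal, self-contained sketch (the empty mcs_lock and the free function init_once_per_thread are placeholders, not the poster's actual code). Each thread finds its own pointer null on first use and allocates its own object:

#include <boost/thread/thread.hpp>
#include <boost/thread/tss.hpp>
#include <iostream>

struct mcs_lock { };   // placeholder element type

static boost::thread_specific_ptr<mcs_lock> tls_node;

void init_once_per_thread()
{
    // Each thread sees its own pointer: the first call in a thread finds it
    // null and allocates; later calls in the same thread reuse that object.
    if (tls_node.get() == 0)
        tls_node.reset(new mcs_lock());
    std::cout << "this thread's node: " << tls_node.get() << "\n";
}

int main()
{
    boost::thread t1(init_once_per_thread);
    boost::thread t2(init_once_per_thread);
    t1.join();
    t2.join();   // the two threads print different addresses
}

Each thread's object is deleted automatically when that thread exits, which is the default cleanup behaviour of thread_specific_ptr.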


Does CoroutineScope(SupervisorJob()) run in the Main scope?

I was working through this codelab
https://developer.android.com/codelabs/android-room-with-a-view-kotlin#13
and I have a question.
class WordsApplication : Application() {
    // No need to cancel this scope as it'll be torn down with the process
    val applicationScope = CoroutineScope(SupervisorJob())

    // Using by lazy so the database and the repository are only created when they're needed
    // rather than when the application starts
    val database by lazy { WordRoomDatabase.getDatabase(this, applicationScope) }
    val repository by lazy { WordRepository(database.wordDao()) }
}

private class WordDatabaseCallback(
    private val scope: CoroutineScope
) : RoomDatabase.Callback() {
    override fun onCreate(db: SupportSQLiteDatabase) {
        super.onCreate(db)
        INSTANCE?.let { database ->
            scope.launch {
                var wordDao = database.wordDao()
                // Delete all content here.
                wordDao.deleteAll()
                // Add sample words.
                var word = Word("Hello")
                wordDao.insert(word)
                word = Word("World!")
                wordDao.insert(word)
                // TODO: Add your own words!
                word = Word("TODO!")
                wordDao.insert(word)
            }
        }
    }
}
This is the code I found; as you can see, it calls scope.launch(...) directly.
My question is:
Aren't all Room operations supposed to run in a non-UI scope? Could someone help me understand this? Thanks so much!
Does CoroutineScope(SupervisorJob()) run in the Main scope?
No. By default CoroutineScope() uses Dispatchers.Default, as stated in the documentation:
CoroutineScope() uses Dispatchers.Default for its coroutines.
Aren't all Room operations supposed to run in a non-UI scope?
I'm not very familiar with Room specifically, but generally speaking it depends on whether the operation is suspending or blocking. You can run suspend functions from any dispatcher/thread. The deleteAll() and insert() functions in the example are marked as suspend, so you can call them from both UI and non-UI threads.
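For illustration, a small sketch (not part of the codelab) of how the dispatcher can be made explicit; the ioScope and seedDatabase names are just examples, while CoroutineScope, SupervisorJob, Dispatchers and launch are the usual kotlinx.coroutines APIs:

import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.SupervisorJob
import kotlinx.coroutines.launch

// CoroutineScope(SupervisorJob()) defaults to Dispatchers.Default, not Dispatchers.Main.
val applicationScope = CoroutineScope(SupervisorJob())

// If you have blocking (non-suspending) work, pick the dispatcher explicitly:
val ioScope = CoroutineScope(SupervisorJob() + Dispatchers.IO)

fun seedDatabase() {
    ioScope.launch {
        // Blocking work runs on the IO thread pool here; suspend functions such as
        // Room's deleteAll()/insert() could be called from either scope.
    }
}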

Is it possible to cancel a blocking call in C++ 11 or 14?

My gcc compiler supports C++14.
Scenario:
I want to know if there is a way to break out of a blocking call and stop my std::thread safely.
Code:
// Member vars declared in MyClass.hpp
std::atomic<bool> m_continue_polling{false};
std::thread m_thread;

void StartThread()
{
    m_continue_polling = true;
    m_thread = std::thread{ [this] {
        while (m_continue_polling) {
            int somevalue = ReadValue(); // This is a blocking call and can take minutes
        }
    }};
}

void StopThread()
{
    m_continue_polling = false;
    try {
        if (m_thread.joinable()) {
            m_thread.join();
        }
    }
    catch (const std::exception& /*e*/) {
        // Log it out
    }
}
In the above code, ReadValue is a blocking call that goes into a library and reads on a file descriptor owned by device-driver-related code that I have no control over.
Question:
I need StopThread to be able to stop the thread and cancel the call that is blocking on ReadValue. How can I do this? Is there some way in C++11 or 14?
PS:
Perhaps std::async could be a solution? But I would like to know if there are better ways. If std::async is the best approach, how can I use it effectively in this scenario without causing bad side effects?
On Linux, you can get the native thread handle and use the pthread_cancel function, provided the thread has not disabled cancelability and is blocked in a cancellation point. You will have to read the documentation carefully to understand all the caveats.
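A minimal, self-contained sketch of the idea (the pipe, polling_loop and stop names are illustrative, not the poster's API): the thread blocks in read(), which is a POSIX cancellation point, and is cancelled from the outside.

#include <atomic>
#include <iostream>
#include <thread>
#include <pthread.h>
#include <unistd.h>

std::atomic<bool> g_running{true};

void polling_loop(int fd)
{
    char buf[64];
    while (g_running) {
        // read() is a POSIX cancellation point, so pthread_cancel can interrupt
        // the thread while it is blocked here. Code after this point may never
        // run, so the loop must not hold resources that need explicit cleanup.
        ssize_t n = read(fd, buf, sizeof(buf));
        if (n <= 0) break;
    }
}

void stop(std::thread& t)
{
    g_running = false;
    pthread_cancel(t.native_handle());   // cancel at the next cancellation point
    t.join();
}

int main()
{
    int fds[2];
    pipe(fds);                            // nothing is ever written, so read() blocks
    std::thread t(polling_loop, fds[0]);
    sleep(1);
    stop(t);                              // the blocked read() is cancelled
    std::cout << "stopped\n";
}

Keep the cancellable region free of code that relies on destructors or other cleanup running, and read the pthread_cancel / pthread_setcancelstate documentation before using this in production code; those are the caveats mentioned above.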

queue underlying c clear not support on vs2008

#include <queue>

struct model_shell_t
{
    model_shell_t() { clear(); }
    void clear();
    void Release() { clear(); model_pool._delete(this); }

    void SetArgument(byte4 _type, byte4 _arg1, byte4 _arg2)
    {
        switch( _type )
        {
        case 100:
            if( tail_enable && !_arg1 )
            {
                tail_deque.clear(); //tail_queue.c.clear(); // error C2248: cannot access protected member of queue
            }
            tail_enable = _arg1;
            break;
        case 101:
            tail_interval = _arg1;
            tail_count = _arg2;
            break;
        }
    }

    queue<model_t> tail_queue;
    byte4 tail_enable;
    byte4 tail_interval;
    byte4 tail_count;
    deque<model_t> tail_deque;
};
Another place:
for( byte4 i = 0; i < ms->tail_queue.size(); i++ )
{
    //ms->tail_queue.c[i].bind_model = &bind_ms->tail_queue.c[i];
    ms->tail_deque[i].bind_model = &bind_ms->tail_deque[i];
}
error C2248: 'std::queue<_Ty>::c' : cannot access protected member
declared in class 'std::queue<_Ty>'
When I upgraded the solution from VS2003 to VS2008, I could no longer use the clear() function.
How can I use it?
Thank you.
EDITED: With help from Igor Tandetnik I have changed the code as shown above.
VS2003 was wrong, VS2008 is right. The underlying container should be a protected member, and shouldn't be accessible. Your code relies on a non-conforming implementation detail - essentially, on a compiler bug that has since been fixed.
As you apparently need direct access to elements, you are breaking queue abstraction anyway. It'd be easiest to just use std::deque directly, without a wrapper.
If for some reason you insist on using std::queue, another approach would be for model_shell_t to privately derive from it, rather than having it as a member. This way it'll have direct access to c. Though it's unclear why you would want to take such a circuitous route to getting a std::deque member.
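For example, a small sketch of the private-inheritance route (model_t is given a placeholder definition here just so the sketch compiles; the member function names are illustrative):

#include <cstddef>
#include <deque>
#include <queue>

struct model_t { };   // placeholder for the poster's element type

struct model_shell_t : private std::queue<model_t>
{
    // Deriving privately from std::queue makes the protected member 'c'
    // (the underlying std::deque) visible, so it can be cleared directly.
    void clear_tail()                 { c.clear(); }
    void push_tail(const model_t& m)  { push(m); }
    std::size_t tail_size() const     { return size(); }
};

Using a std::deque member directly, as in the edited code above, avoids the need for this workaround altogether.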

How to use lambda capture with lock?

In the code below, the member variable match_room_list_ is a shared resource, so I use a mutex.
But the locking confuses me.
Should I also take the lock inside the inner lambda function or not?
void room::join_room(int user_id)
{
    std::lock_guard<std::recursive_mutex> lock(mutex_);

    std::shared_ptr<match_room> find_room = nullptr;
    for (auto iter : match_room_list_)
    {
        if (false == iter.second->is_public_room())
        {
            continue;
        }
        if (true == iter.second->is_full())
        {
            continue;
        }

        find_room = iter.second;
        break;
    }

    // Something

    // async ( another thread calls this lambda function )
    // call database
    database_manager::get_instance().join_room(find_room->get_room_id(), user_id, [=](bool success, std::string room_id, std::vector<int> user_id_list)
    {
        // Should I take the lock here?
        // std::lock_guard<std::recursive_mutex> lock(mutex_);

        // shared resource
        match_room_list_[room_id].join_user(user_id);
        response_join_room_ntf(room_id, user_id_list);
    });
}
If your lambda function will ever run on a different thread (and that's likely) then yes, you do need to lock it.
The only possible problem with doing so could be if it was called on the same thread from a function that had already locked the mutex. But that won't be an issue here since your mutex is a recursive one and a given thread can lock that as many times as it wants.
You may also want to look into conditions such as:
if (false == iter.second->is_public_room())
if (true == iter.second->is_full())
They would be far better written as:
if (! iter.second->is_public_room())
if (iter.second->is_full())
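To make the locking point concrete, here is a small self-contained sketch of the pattern; Registry, async_join and the vector are stand-ins for the poster's room, database_manager and match_room_list_. The callback runs on another thread, so it takes the same recursive mutex before touching the shared container:

#include <chrono>
#include <cstddef>
#include <functional>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

class Registry
{
    std::recursive_mutex mutex_;
    std::vector<int> users_;          // stands in for match_room_list_
    std::thread worker_;

    // Stand-in for database_manager::join_room: runs the callback on another thread.
    void async_join(int user_id, std::function<void(int)> on_done)
    {
        worker_ = std::thread([user_id, on_done] { on_done(user_id); });
    }

public:
    ~Registry() { if (worker_.joinable()) worker_.join(); }

    void join(int user_id)
    {
        std::lock_guard<std::recursive_mutex> lock(mutex_);   // protects the lookup phase

        async_join(user_id, [this](int id)
        {
            // This runs on the worker thread, so lock again before touching shared state.
            std::lock_guard<std::recursive_mutex> lock(mutex_);
            users_.push_back(id);
        });
    }

    std::size_t size()
    {
        std::lock_guard<std::recursive_mutex> lock(mutex_);
        return users_.size();
    }
};

int main()
{
    Registry r;
    r.join(42);
    std::this_thread::sleep_for(std::chrono::milliseconds(100));
    std::cout << r.size() << "\n";    // prints 1 once the callback has run
}

Because the mutex is recursive, this also stays safe if the callback ever happens to run synchronously on the calling thread, which is exactly the case the answer mentions.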

Thread safe queue with front() + pop()

I have created a thread-safe queue (see code). The class seems to work, but now I want to make the combination of front() plus pop() thread safe, in such a way that a thread first gets the element and is then guaranteed to remove that same element. I can come up with some solutions, but they are either not elegant for the user or they lose the strong exception-safety guarantee.
The first solution is that the user simply locks the ThreadQueue, then calls front() and pop(), and unlocks the ThreadQueue. However, the whole idea of the class is that the user does not have to think about thread safety.
The second solution is to lock the queue inside the overloaded function front() and only unlock it in pop(). However, in this case the user is no longer allowed to call only front() or only pop(), which is not very user friendly.
The third option I came up with is to create a public function in the class (frontPop) which returns the front element and removes it. However, in this case the exception safety is gone.
What is the solution that is both user friendly (elegant) and maintains the exception safety?
class ThreadQueue : private std::queue<std::string>
{
    mutable std::mutex d_mutex;

public:
    void pop()
    {
        std::lock_guard<std::mutex> lock(d_mutex);
        std::queue<std::string>::pop();
    }

    std::string &front()
    {
        std::lock_guard<std::mutex> lock(d_mutex);
        return std::queue<std::string>::front();
    }

    // All other functions
private:
};
The usual solution is to provide a combined front & pop that accepts a reference into which to store the popped value, and returns a bool that is true if a value was popped:
bool pop(std::string& t)
{
    std::lock_guard<std::mutex> lock(d_mutex);
    if (std::queue<std::string>::empty()) {
        return false;
    }
    t = std::move(std::queue<std::string>::front());
    std::queue<std::string>::pop();
    return true;
}
Any exceptions thrown by the move assignment happen before the queue is modified, maintaining the exception guarantee provided by the value type's move assignment operator.
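Putting it together, a minimal self-contained version of the class with a push() added here (push() is not shown in the answer) so the combined pop() can be exercised:

#include <iostream>
#include <mutex>
#include <queue>
#include <string>
#include <utility>

class ThreadQueue : private std::queue<std::string>
{
    mutable std::mutex d_mutex;

public:
    void push(std::string value)
    {
        std::lock_guard<std::mutex> lock(d_mutex);
        std::queue<std::string>::push(std::move(value));
    }

    bool pop(std::string& t)
    {
        std::lock_guard<std::mutex> lock(d_mutex);
        if (std::queue<std::string>::empty()) {
            return false;
        }
        t = std::move(std::queue<std::string>::front());
        std::queue<std::string>::pop();
        return true;
    }
};

int main()
{
    ThreadQueue q;
    q.push("hello");

    std::string item;
    while (q.pop(item)) {              // atomically fetch-and-remove
        std::cout << item << "\n";
    }
    // pop() returned false: the queue is empty, and no front()/pop() race is possible.
}

Because pop(t) performs the emptiness test, the move assignment, and the removal under a single lock, no other thread can slip in between getting the element and removing it.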
