In the code below, the member variable match_room_list_ is a shared resource, so I use a mutex. But locking confuses me: should I take the lock inside the inner lambda function or not?
void room::join_room(int user_id)
{
    std::lock_guard<std::recursive_mutex> lock(mutex_);

    std::shared_ptr<match_room> find_room = nullptr;
    for (auto iter : match_room_list_)
    {
        if (false == iter.second->is_public_room())
        {
            continue;
        }
        if (true == iter.second->is_full())
        {
            continue;
        }
        find_room = iter.second;
        break;
    }
    if (nullptr == find_room)
    {
        return; // no joinable room found
    }

    // Something
    // async ( another thread calls this lambda )
    // call database
    database_manager::get_instance().join_room(find_room->get_room_id(), user_id,
        [=](bool success, std::string room_id, std::vector<int> user_id_list)
    {
        // Should I take the lock here?
        // std::lock_guard<std::recursive_mutex> lock(mutex_);

        // shared resource
        match_room_list_[room_id]->join_user(user_id);
        response_join_room_ntf(room_id, user_id_list);
    });
}
If your lambda function will ever run on a different thread (and that's likely), then yes, you do need to take the lock inside it.
The only possible problem with doing so could be if it was called on the same thread from a function that had already locked the mutex. But that won't be an issue here since your mutex is a recursive one and a given thread can lock that as many times as it wants.
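If it helps, here is a minimal sketch of the callback with the lock taken (my assumptions: the callback may fire on a database thread, and the room object outlives the pending call):

database_manager::get_instance().join_room(find_room->get_room_id(), user_id,
    [=](bool success, std::string room_id, std::vector<int> user_id_list)
{
    // May run on another thread: take the lock before touching shared state.
    // If this thread already holds mutex_, that is fine - it is recursive.
    std::lock_guard<std::recursive_mutex> lock(mutex_);
    match_room_list_[room_id]->join_user(user_id);
    response_join_room_ntf(room_id, user_id_list);
});

Also be aware that [=] captures the this pointer, so you must guarantee the room object is still alive whenever the callback runs.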
You may also want to look into conditions such as:
if (false == iter.second->is_public_room())
if (true == iter.second->is_full())
They would be far better written as:
if (! iter.second->is_public_room())
if (iter.second->is_full())
I use a semaphore for two processes that share a resource (a REST API endpoint) that can't be called concurrently. I do:
let tokenSemaphore = null;

class restApi {
    async getAccessToken() {
        let tokenResolve;
        if (tokenSemaphore) {
            await tokenSemaphore;
        }
        tokenSemaphore = new Promise((resolve) => tokenResolve = resolve);
        return new Promise(async (resolve, reject) => {
            // ...
            resolve(accessToken);
            tokenResolve();
            tokenSemaphore = null;
        });
    }
}
But this looks too complicated. Is there a simpler way to achieve the same thing? And how would I do it for more concurrent processes?
Note that this is not a server-side semaphore. For locking processes that run independently of each other, you need interprocess communication; in that case the API must support something like that on the server side, and the approach here is not for you.
As this was the first hit when googling for "JavaScript Promise Semaphore", here is what I came up with:
function Semaphore(max, fn, ...a1)
{
    let run = 0;        // number of currently running invocations
    const waits = [];   // resolvers of queued invocations

    // Starts the next queued invocation if a slot is free; passes x through.
    function next(x)
    {
        if (run < max && waits.length)
            waits.shift()(++run);
        return x;
    }

    return (...a2) => next(
        new Promise(ok => waits.push(ok))   // wait until a slot is free
        .then(() => fn(...a1, ...a2))       // run the wrapped function
        .finally(_ => run--)                // free the slot
        .finally(next));                    // start the next waiter
}
Example use (the above is nearly verbatim from my code; the following was typed in directly and hence is not tested):
// do not execute more than 20 fetches in parallel:
const fetch20 = Semaphore(20, fetch);

async function retry(...a)
{
    for (let retries = 0;; retries++)
    {
        if (retries)
            await new Promise(ok => setTimeout(ok, 100 * retries)); // back off
        try {
            return await fetch20(...a);
        } catch (e) {
            console.log(`retry ${retries}`, a[0], e);
        }
    }
}
and then
for (let i=0; ++i<10000000; ) retry(`https://example.com/?${i}`);
My browser handles thousands of asynchronous parallel calls to retry very well. However, when using fetch directly, the tab crashes almost instantly.
For your usage you probably need something like:
async function access_token_api_call()
{
    // assume this takes 10s and must not be called in parallel for setting the cookie
    return fetch('https://api.example.com/nonce').then(r => r.json());
}

const get_access_token = Semaphore(1, access_token_api_call);

// both processes need to use the same(!) Semaphore, of course
async function process(...args)
{
    const token = await get_access_token();
    // processing args here
    return /* something */;
}

const proc1 = process(1);
const proc2 = process(2);
Promise.all([proc1, proc2]).then(/* etc. */);
YMMV.
Notes:
This assumes that your two processes are just asynchronous functions of the same single JS script (i.e. running in the same tab).
A browser usually does not open more than about 5 concurrent connections to a backend and pipelines excess requests. fetch20 is my workaround for a real-world problem where a JS frontend needs to queue, say, 5000 fetches in parallel, which crashes my browser (for an unknown reason). It's 2021; that should not be a problem, right?
"But this looks too complicated."
Not complicated enough, I'm afraid. Currently, if multiple code paths call getAccessToken while the semaphore is taken, they'll all await the same tokenSemaphore instance, and when the semaphore is released, they'll all be released and resolve at roughly the same time, allowing concurrent access to the API.
In order to write an asynchronous lock (or semaphore), you'll need a collection of futures (tokenResolvers). When one is released, it should only remove and resolve a single future from that collection.
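To make the mechanism concrete, here is a rough sketch of that idea in C++ terms (a hypothetical async_lock class, not code from my Gist): each lock() call receives a future, and unlock() removes and resolves exactly one queued promise.

#include <deque>
#include <future>
#include <mutex>

class async_lock
{
    std::mutex m_;                            // guards the state below
    bool held_ = false;                       // is the lock currently owned?
    std::deque<std::promise<void>> waiters_;  // parked lock() calls, FIFO

public:
    // Returns a future that becomes ready once the caller owns the lock.
    std::future<void> lock()
    {
        std::lock_guard<std::mutex> g(m_);
        std::promise<void> p;
        std::future<void> f = p.get_future();
        if (!held_) {
            held_ = true;
            p.set_value();                    // free: acquire immediately
        } else {
            waiters_.push_back(std::move(p)); // busy: park this waiter
        }
        return f;
    }

    void unlock()
    {
        std::lock_guard<std::mutex> g(m_);
        if (waiters_.empty()) {
            held_ = false;                    // nobody waiting: release
        } else {
            waiters_.front().set_value();     // wake exactly one waiter;
            waiters_.pop_front();             // the lock stays held by it
        }
    }
};

The crucial detail matches the critique above: releasing the lock never wakes all waiters, only the single one at the head of the queue.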
I played around with it a bit in TypeScript a few years ago, but never tested or used the code. My Gist is also C#-ish (using "disposables" and whatnot); it needs some updating to use more natural JS patterns.
In an IOCP Winsock2 client, after ConnectEx() times-out on an unsuccessful connection attempt, the following happens:
An "IO completion" is queued to the associated IO Completion Port.
GetQueuedCompletionStatus() returns FALSE.
WSAGetOverlappedResult() returns WSAETIMEDOUT.
What determines the timeout period between calling ConnectEx() and step 1 above? How can I shorten this timeout period?
I know that it is possible to wait on ConnectEx() by passing it an OVERLAPPED structure with hEvent = WSACreateEvent() and then waiting on this event, e.g. with WaitForSingleObject(Overlapped.hEvent, millisec), to time out after no connection has been made within the millisec period. BUT that solution is outside the scope of this question, because it does not use the IOCP notification model.
Unfortunately, there seems to be no built-in option for setting a socket connect timeout. At least I have not found one, and based on this question - How to configure socket connect timeout - nobody else has found one either.
One possible solution is to pass an event handle with the I/O request and, if we get ERROR_IO_PENDING, to call RegisterWaitForSingleObject on that event. If this call succeeds, our WaitOrTimerCallback callback function will be invoked, either because the I/O completed (with any final status), at which point the event (which we passed both to the I/O request and to RegisterWaitForSingleObject) is set, or because the timeout (dwMilliseconds) expired - in which case we need to call the CancelIoEx function.
So say we have a class IO_IRP : public OVERLAPPED with reference counting (we need to save the pointer to the OVERLAPPED used in the I/O request in order to pass it to CancelIoEx, and we need to be sure that this OVERLAPPED is not yet reused in another new I/O, i.e. not yet freed). A possible implementation:
class WaitTimeout
{
    IO_IRP* _Irp;
    HANDLE _hEvent, _WaitHandle, _hObject;

    static VOID CALLBACK WaitOrTimerCallback(
        __in WaitTimeout* lpParameter,
        __in BOOLEAN TimerOrWaitFired
        )
    {
        UnregisterWaitEx(lpParameter->_WaitHandle, NULL);
        if (TimerOrWaitFired)
        {
            // the lpOverlapped is unique here (because we hold a reference
            // on it) - it is not in use by any other I/O
            CancelIoEx(lpParameter->_hObject, lpParameter->_Irp);
        }
        delete lpParameter;
    }

    ~WaitTimeout()
    {
        if (_hEvent) CloseHandle(_hEvent);
        _Irp->Release();
    }

    WaitTimeout(IO_IRP* Irp, HANDLE hObject) : _hEvent(0), _Irp(Irp), _hObject(hObject)
    {
        Irp->AddRef();
    }

    BOOL Create(PHANDLE phEvent)
    {
        if (HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, NULL))
        {
            *phEvent = hEvent;
            _hEvent = hEvent;
            return TRUE;
        }
        return FALSE;
    }

public:
    static WaitTimeout* Create(PHANDLE phEvent, IO_IRP* Irp, HANDLE hObject)
    {
        if (WaitTimeout* p = new WaitTimeout(Irp, hObject))
        {
            if (p->Create(phEvent))
            {
                return p;
            }
            delete p;
        }
        return NULL;
    }

    void Destroy()
    {
        delete this;
    }

    // the object must not be accessed after this call
    void SetTimeout(ULONG dwMilliseconds)
    {
        if (RegisterWaitForSingleObject(&_WaitHandle, _hEvent,
            (WAITORTIMERCALLBACK)WaitOrTimerCallback, this,
            dwMilliseconds, WT_EXECUTEONLYONCE|WT_EXECUTEINWAITTHREAD))
        {
            // WaitOrTimerCallback will be called and will delete this object
            return;
        }
        // failed to register the wait -
        // just cancel the I/O and delete self
        CancelIoEx(_hObject, _Irp);
        delete this;
    }
};
And use it something like this:
DWORD err = NOERROR;
if (IO_IRP* Irp = new IO_IRP(...))
{
    WaitTimeout* p = 0;
    if (dwMilliseconds)
    {
        if (!(p = WaitTimeout::Create(&Irp->hEvent, Irp, (HANDLE)socket)))
        {
            err = ERROR_NO_SYSTEM_RESOURCES;
        }
    }
    if (err == NOERROR)
    {
        DWORD dwBytes;
        err = ConnectEx(socket, RemoteAddress, RemoteAddressLength,
            lpSendBuffer, dwSendDataLength, &dwBytes, Irp) ?
            NOERROR : WSAGetLastError();
    }
    if (p)
    {
        if (err == ERROR_IO_PENDING)
        {
            p->SetTimeout(dwMilliseconds);
        }
        else
        {
            p->Destroy();
        }
    }
    Irp->CheckErrorCode(err);
}
Another possible solution is to set a timer via CreateTimerQueueTimer and, if the timer expires, call CancelIoEx or close the I/O handle from there. The difference from the event solution: if the I/O completes before the timer expires, the WaitOrTimerCallback callback will not be invoked automatically. With an event, the I/O subsystem sets the event when the I/O completes (after the initial pending status), and thanks to that (the event becoming signaled) the callback runs. With a timer, there is no way to pass it to the I/O request as a parameter (the I/O accepts only an event handle). As a result, we need to save the pointer to the timer object ourselves and free it manually when the I/O completes. So there will be two pointers to the timer object - one from the pool (saved by CreateTimerQueueTimer) and one from our own object (socket) class (we need it to dereference the object when the I/O completes). This requires reference counting on the object that encapsulates the timer, too. On the other hand, we can use the timer not just for a single I/O operation but for several (because it is not directly bound to a particular I/O).
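To make the timer variant concrete, here is a minimal, untested sketch (ConnectContext and the helper names are hypothetical; real code still needs the reference counting described above so the OVERLAPPED outlives a late-firing callback):

#include <winsock2.h>
#include <windows.h>

// Hypothetical context: holds the socket, the OVERLAPPED used by ConnectEx,
// and the timer handle. Lifetime management (reference counting) is omitted.
struct ConnectContext
{
    SOCKET      s;
    OVERLAPPED* ov;    // must stay valid until the I/O completes
    HANDLE      timer; // set by ArmConnectTimeout
};

static VOID CALLBACK ConnectTimeoutCallback(PVOID parameter, BOOLEAN /*timerFired*/)
{
    // Timer expired before the connect completed: cancel the pending I/O.
    // CancelIoEx fails harmlessly if the I/O has already completed.
    ConnectContext* ctx = static_cast<ConnectContext*>(parameter);
    CancelIoEx(reinterpret_cast<HANDLE>(ctx->s), ctx->ov);
}

BOOL ArmConnectTimeout(ConnectContext* ctx, DWORD milliseconds)
{
    // One-shot timer on the default timer queue (period 0 = fire once).
    return CreateTimerQueueTimer(&ctx->timer, NULL, ConnectTimeoutCallback,
                                 ctx, milliseconds, 0, WT_EXECUTEONLYONCE);
}

void DisarmConnectTimeout(ConnectContext* ctx)
{
    // Call this when the I/O completes. INVALID_HANDLE_VALUE makes the call
    // wait for a callback that may be running right now, so ctx cannot be
    // freed while ConnectTimeoutCallback is still using it.
    if (ctx->timer)
        DeleteTimerQueueTimer(NULL, ctx->timer, INVALID_HANDLE_VALUE);
}

Note that DisarmConnectTimeout must not be called from inside the timer callback itself: DeleteTimerQueueTimer with INVALID_HANDLE_VALUE would deadlock waiting for its own completion.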
My gcc compiler supports C++14.
Scenario:
I want to know if there is a way to force-cancel a blocking call and stop my std::thread safely.
Code:
// Member vars declared in MyClass.hpp
std::atomic<bool> m_continue_polling{false};
std::thread m_thread;

void StartThread() {
    m_continue_polling = true;
    m_thread = std::thread{[this] {
        while (m_continue_polling) {
            int somevalue = ReadValue(); // This is a blocking call and can take minutes
        }
    }};
}

void StopThread() {
    m_continue_polling = false;
    try {
        if (m_thread.joinable()) {
            m_thread.join();
        }
    }
    catch (const std::exception& /*e*/) {
        // Log it out
    }
}
In the above code, ReadValue is a blocking call that goes into a library and reads on an fd tied to some device-driver-related code that I have no control over.
Question:
I need StopThread to be able to stop the thread and cancel the call blocking on ReadValue. How can I do this? Is there some way in C++11 or C++14?
PS:
Probably std::async could be a solution? But I wish to know if there are better ways. If std::async is the best approach, how do I use it effectively in this scenario without causing bad side effects?
On Linux, you can get the native thread handle and use the pthread_cancel function, provided the thread has not disabled cancelability and is blocked in a cancellation point. You will have to read the documentation carefully to understand all the caveats.
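For illustration, a minimal sketch of that approach (assumptions: gcc/Linux, where std::thread::native_handle() yields a pthread_t, and ReadValue eventually blocks in a cancellation point such as read(2); the helper name is made up):

#include <pthread.h>
#include <thread>

// Hypothetical helper: forcibly cancel a thread stuck in a blocking call.
void cancel_and_join(std::thread& t)
{
    if (t.joinable()) {
        pthread_cancel(t.native_handle()); // takes effect at the next cancellation point
        t.join();                          // a cancelled thread must still be joined
    }
}

One caveat worth spelling out: on glibc, cancellation is implemented as a forced stack unwind, so destructors of locals do run, but any catch (...) on the way must rethrow, otherwise the process terminates.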
This question involves boost::asio but is a pure C++11 question.
I am new to C++11 and lambda techniques, which I am trying to use with boost::asio::async_connect for network communication.
The following is my function, which attempts an asynchronous connect to the host.
bool MyAsyncConnectFunction() {
    // some logic here to check validity of host
    if (ip_is_not_resolved)
        return false;

    the_socket.reset(new tcp::socket(the_io_service));

    auto my_connection_handler = [this]
        (const boost::system::error_code& errc, const tcp::resolver::iterator& itr)
    {
        if (errc) {
            // Set some variables to false as we are not connected
            return false;
        }
        // Do some stuff as we are successfully connected at this point
        return true;
    };

    // How can async_connect take a lambda that returns a value?
    boost::asio::async_connect(*the_socket, IP_destination, tcp::resolver::iterator(), my_connection_handler);
    return true;
}
All works fine; there are no functional issues at all. However, I am wondering: boost::asio::async_connect takes a ConnectHandler without a return type as its last parameter, yet I am passing a lambda, i.e. my_connection_handler, which returns a value.
How is it possible that I can pass a lambda with a return value when boost::asio::async_connect's 4th parameter takes a callback without a return value?
boost::asio::async_connect is a function template that takes a callable as its fourth argument. It does not use the return value of that callable, nor does it care about it. Just as you could write:
auto f = []() { return true; };
f(); // Return value is discarded
The example from #m.s. is good too. Since it is a template, the function resolves the handler argument according to the template argument deduction rules.
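To see why this compiles, here is a small self-contained sketch (names are made up) of a function template that, like async_connect, invokes its callable and simply discards whatever it returns:

#include <iostream>

template <typename Handler>
void call_and_ignore(Handler h)
{
    h(); // any return value is discarded here
}

int main()
{
    call_and_ignore([] { return true; });          // returns bool: ignored
    call_and_ignore([] { std::cout << "hi\n"; });  // returns void: also fine
}

The same deduction happens with my_connection_handler: the template parameter binds to your lambda's type, and nothing in async_connect ever looks at the result of calling it.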
I have the following code snippet:
class MCSLock
{
    static boost::thread_specific_ptr<mcs_lock> tls_node;

public:
    MCSLock()
    {
        if (tls_node.get() == 0)
            tls_node.reset(new mcs_lock());
    }
};
My understanding is that each thread has its own storage slot for tls_node. This means the get() and reset() calls in the constructor are thread-safe.
Is my understanding correct?
Thanks.
Yes, each call will be served by a different (thread-local) object, so the threads never touch the same pointer.
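A small self-contained demo of that (variable names made up; note that the static member in your snippet would also need an out-of-class definition, e.g. boost::thread_specific_ptr<mcs_lock> MCSLock::tls_node; in one .cpp file):

#include <boost/thread/thread.hpp>
#include <boost/thread/tss.hpp>
#include <iostream>

struct mcs_lock { };

boost::thread_specific_ptr<mcs_lock> tls_node; // one slot per thread

void worker()
{
    if (tls_node.get() == 0)
        tls_node.reset(new mcs_lock()); // first use in this thread only
    // Each thread prints a different address: one object per thread.
    std::cout << static_cast<void*>(tls_node.get()) << std::endl;
}

int main()
{
    boost::thread t1(worker);
    boost::thread t2(worker);
    t1.join();
    t2.join();
}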