How is CopyFileEx's pbCancel parameter safe? - winapi

There are several questions posted (like Send flag to cancel CopyFileEx after it has started) that reference the ability to use the pbCancel parameter of the Win32 CopyFileEx() function to cancel an in-progress copy. What is not clear to me is why it is safe to set that boolean from another thread without any sort of synchronization mechanism (mutex, etc.). This functionality is really only useful if another thread sets that boolean to true, as CopyFileEx() blocks until the file copy is finished.
Is this relying on a peculiarity of the Windows platform?

If you are simply setting a boolean-style flag (a value that is either 0 or non-zero), with no connection to any other data, you do not need any synchronization. One thread stores TRUE into the variable; the other thread reads either 0 or non-zero from it. Even if you did the write and the read inside a critical section, what would that change? Nothing: the reading thread still loads either 0 or non-zero.
Synchronization is needed in other cases. Usually it is needed when we write to other memory locations before storing TRUE into the flag, and we want all of those other modifications to be visible by the time another thread reads TRUE from the flag. In the case of the cancel flag there is no other associated data.
It is also needed when we write complex data to the variable (not just 0 or non-zero) and the write is not atomic, because then synchronization prevents a reader from observing a partial state. Here, any "partial state" is impossible by design.
For those who do not understand:
It does not matter whether the write to or read from pbCancel is atomic. In any case some value will eventually be read from pbCancel. If that value is interpreted as TRUE during the copy operation, the operation is canceled; otherwise the copy continues to completion. The documentation says exactly this:
If this flag is set to TRUE during the copy operation, the operation is canceled. Otherwise, the copy operation will continue to completion.
Even if some "transient state" were read, so what? The value is only ever used in an if/else test, and as a result the copy operation is either canceled or continues. Nowhere is it specified (and it would contradict common sense) that a strict check for 0 (FALSE) and 1 (TRUE) is performed, with an exception or undefined behavior for any other value. On the contrary, it is clearly stated that otherwise (i.e. if the flag is not set to TRUE) the copy operation continues to completion; there is not a word about exceptions, UB, etc.
If you look at the declaration of CopyFileExW, the SAL annotation shows in more detail how the pbCancel value is interpreted:
_When_(pbCancel != NULL, _Pre_satisfies_(*pbCancel == FALSE))
_Inout_opt_ LPBOOL pbCancel,
so the check is (and this is the most natural form):
if (pbCancel == NULL || *pbCancel == FALSE)
    // continue copy
else
    // cancel copy
There is no "transient state" here: the value is either 0 or non-zero. Even if you write 1 to pbCancel but another thread reads, say, 0x5FD38FCA from it, that value is interpreted as TRUE and the copy operation is canceled.
In any case, if you write TRUE (strictly speaking, 1) to the variable, the other thread will sooner or later read 1 from it. Doing this inside a critical section changes nothing: the other thread still reads the value sooner or later, and no faster.
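To make that concrete, here is a minimal sketch (my own illustration, not from the question: the file paths, the plain global BOOL and the thread layout are all assumed) of one thread running CopyFileEx while another thread simply flips the flag:

#include <windows.h>
#include <cstdio>

// Shared cancel flag; the copy only ever tests it for zero / non-zero.
static BOOL g_cancel = FALSE;

static DWORD WINAPI CopyWorker(LPVOID)
{
    // CopyFileExW blocks until the copy finishes, fails, or is canceled.
    if (!CopyFileExW(L"C:\\src\\big.bin", L"C:\\dst\\big.bin",
                     nullptr,       // no progress callback
                     nullptr,       // no callback context
                     &g_cancel,     // pbCancel, polled during the copy
                     0))
    {
        // ERROR_REQUEST_ABORTED means the copy was canceled.
        printf("copy stopped, GetLastError() = %lu\n", GetLastError());
    }
    return 0;
}

int main()
{
    HANDLE worker = CreateThread(nullptr, 0, CopyWorker, nullptr, 0, nullptr);

    Sleep(2000);      // pretend the user clicks "Cancel" two seconds in
    g_cancel = TRUE;  // plain store, no lock; any non-zero value cancels

    WaitForSingleObject(worker, INFINITE);
    CloseHandle(worker);
    return 0;
}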


Cypress check that text either does not exist or is invisible

I want to check that a piece of text either does not exist in the DOM at all, or, if it does exist, that it is invisible.
cy.contains(text).should("not.be.visible") handles the second case and cy.contains(text).should("not.exist") the first, but each of them fails in the case covered by the other.
Before trying a conditional solution, have a read through this paragraph
https://docs.cypress.io/guides/core-concepts/conditional-testing#Error-Recovery
This is a feature that they intentionally made unavailable:
Enabling this would mean that for every single command, it would recover from errors, but only after each applicable command timeout was reached. Since timeouts start at 4 seconds (and exceed from there), this means that it would only fail after a long, long time.
Every cy...should() has a built-in timeout, so if you have multiple of them, the wait times stack.
TL;DR:
If you can get around having to use a conditional, try that approach first.
Alternatively, you can use this trick (at your peril 😉).
cy.get("body").then(($body) => {
if ($body.find(":contains(texta)").length > 0) {
cy.contains("texta").should("not.be.visible");
} else {
cy.contains("texta").should("not.exist");
}
});
cy.get("body").then(($body) => { will get the copy of body(DOM) in the current state and make it available for synchronous querying using jQuery. With jQuery we can determine synchronously whether an element contains the text string with $body.find(":contains(text)")
using the result's length you can make a condition that will then fire off cypress' asynchronous assertions.

What exactly is kqueue's EV_RECEIPT for?

The kqueue mechanism has an event flag, EV_RECEIPT, which according to the linked man page:
... is useful for making bulk changes to a kqueue without draining any pending events. When passed as input, it forces EV_ERROR to always be returned. When a filter is successfully added the data field will be zero.
My understanding, however, is that it is trivial to make bulk changes to a kqueue without draining any pending events, simply by passing 0 for the nevents parameter to kevent and thus drawing no events from the queue. With that in mind, why is EV_RECEIPT necessary?
Some sample code in Apple documentation for OS X actually uses EV_RECEIPT:
kq = kqueue();
EV_SET(&changes, gTargetPID, EVFILT_PROC, EV_ADD | EV_RECEIPT, NOTE_EXIT, 0, NULL);
(void) kevent(kq, &changes, 1, &changes, 1, NULL);
But, seeing as the changes array is never examined after the kevent call, it's totally unclear to me why EV_RECEIPT was used in this case.
Is EV_RECEIPT actually necessary? In what situation would it really be useful?
If you are making bulk changes and one of them causes an error, then the event will be placed in the eventlist with EV_ERROR set in flags and the system error in data.
Therefore it is possible to identify which changelist element caused the error.
If you set nevents to zero, you get the error code but no indication of which event caused the error.
So EV_RECEIPT allows you to set nevents to a non-zero value without draining any pending events.
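As an illustrative sketch (not from the answer; the fds and filter choice are placeholders), the call below registers several read filters in one kevent() call with EV_RECEIPT. Every change comes straight back through the eventlist with EV_ERROR set, and data tells you per change whether it succeeded (0) or why it failed, while events that were already pending stay on the queue:

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main()
{
    int kq = kqueue();
    int fds[3] = {0, 1, 2};   // stdin/stdout/stderr, purely as placeholders

    struct kevent changes[3];
    for (int i = 0; i < 3; ++i)
        EV_SET(&changes[i], fds[i], EVFILT_READ, EV_ADD | EV_RECEIPT, 0, 0, nullptr);

    // Because each change carries EV_RECEIPT, every slot of the eventlist is
    // used for a receipt rather than for a pending event.
    struct kevent results[3];
    int n = kevent(kq, changes, 3, results, 3, nullptr);

    for (int i = 0; i < n; ++i) {
        if ((results[i].flags & EV_ERROR) && results[i].data != 0)
            fprintf(stderr, "change for fd %d failed: %s\n",
                    (int)results[i].ident, strerror((int)results[i].data));
    }

    close(kq);
    return 0;
}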

Ruby - Redis based mutex with expiration implementation

I'm trying to implement a memory-based, multi-process shared mutex that supports timeouts, using Redis.
I need the mutex to be non-blocking, meaning that I just need to know whether I was able to acquire the mutex or not, and if not, simply continue with the fallback code.
Something along these lines:
if lock('my_lock_key', timeout: 1.minute)
  # Do some job
else
  # exit
end
An un-expiring mutex could be implemented using redis's setnx mutex 1:
if redis.setnx("#{mutex}", '1')
  # Do some job
  redis.del("#{mutex}")
else
  # exit
end
But what if I need a mutex with a timeout mechanism (in order to avoid, for example, a situation where the Ruby code fails before the redis.del command, leaving the mutex locked forever; but not only for that reason)?
Doing something like this obviously doesn't work:
redis.multi do
  redis.setnx("#{mutex}", '1')
  redis.expire("#{mutex}", key_timeout)
end
since I'm setting an expiration on the mutex EVEN if I wasn't able to acquire it (setnx returned 0).
Naturally, I would've expected to have something like setnxex which atomically sets a key's value with an expiration time, but only if the key does not exist already. Unfortunately, Redis does not support this as far as I know.
I did, however, find renamenx key otherkey, which lets you rename a key to some other key only if the other key does not already exist.
I came up with something like this (for demonstration purposes I wrote it monolithically and didn't break it into methods):
result = redis.multi do
  dummy_key = "mutex:dummy:#{Time.now.to_f}#{key}"
  redis.setex dummy_key, key_timeout, 0
  redis.renamenx dummy_key, key
end
if result.length > 1 && result.second == 1
  # do some job
  redis.del key
else
  # exit
end
Here, I'm setting an expiration on a dummy key and trying to rename it to the real key (in one transaction).
If the renamenx operation fails, then we weren't able to obtain the mutex, but no harm done: the dummy key will expire (it can be optionally deleted immediately by adding one line of code) and the real key's expiration time will remain intact.
If the renamenx operation succeeded, then we were able to obtain the mutex, and the mutex will get the desired expiration time.
Can anyone see any flaw with the above solution? Is there a more standard solution for this problem? I would really hate using an external gem in order to solve this problem...
If you're using Redis 2.6+, you can do this much more simply with the Lua scripting engine. The Redis documentation says:
A Redis script is transactional by definition, so everything you can do with a Redis transaction, you can also do with a script, and usually the script will be both simpler and faster.
Implementing it is trivial:
LUA_ACQUIRE = "return redis.call('setnx', KEYS[1], 1) == 1 and redis.call('expire', KEYS[1], KEYS[2]) and 1 or 0"
def lock(key, timeout = 3600)
  if redis.eval(LUA_ACQUIRE, key, timeout) == 1
    begin
      yield
    ensure
      redis.del key
    end
  end
end
Usage:
lock("somejob") { do_exclusive_job }
Starting from redis 2.6.12 you can do: redis.set(key, 1, nx: true, ex: 3600) which is actually SET key 1 NX EX 3600.
I was inspired by the simplicity of both Chris's and Mickey's solutions, and created a gem, simple_redis_lock, with this code (plus some features and RSpec tests):
def lock(key, timeout)
  if @redis.set(key, Time.now, nx: true, px: timeout)
    begin
      yield
    ensure
      release key
    end
  end
end
I explored some other awesome alternatives:
mlanett/redis-lock
PatrickTulskie/redis-lock
leandromoreira/redlock-rb
dv/redis-semaphore
but they had too many features around blocking to acquire the lock, and they didn't use the single atomic SET key 1 NX EX 3600 Redis command.

boost::unique_lock/upgrade_to_unique_lock && boost::shared_lock can exist at the same time? It worries me

I did experiments with boost::upgrade_to_unique_lock/unique_lock && boost::shared_lock. The scenario is:
1 write thread, which holds a boost::unique_lock on a boost::shared_mutex; inside that thread I write to a global AClass.
3 read threads, each holding a boost::shared_lock on the same boost::shared_mutex; they loop reading the global AClass.
I observed that all the threads hold their locks (1 unique, 3 shared) at the same time, and they all keep running their data access loops.
My concern is that AClass is not thread-safe. If I can read and write it at the same time from different threads, the reads could crash. Even if it is not AClass but primitive types, reading them surely will not crash, but the data could be dirty, couldn't it?
boost::shared_lock<boost::shared_mutex>(gmutex);
This is not an "unnamed lock." This creates a temporary shared_lock object which locks gmutex, then that temporary shared_lock object is destroyed, unlocking gmutex. You need to name the object, making it a variable, for example:
boost::shared_lock<boost::shared_mutex> my_awesome_lock(gmutex);
my_awesome_lock will then be destroyed at the end of the block in which it is declared, which is the behavior you want.
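For completeness, here is a small self-contained sketch (my own example with invented names; the question's AClass is replaced by a plain int) in which the writer and every reader hold a named lock for the whole critical section, so they genuinely exclude each other:

#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/thread.hpp>
#include <iostream>

boost::shared_mutex gmutex;
int g_value = 0;   // stand-in for the global AClass

void writer()
{
    for (int i = 0; i < 1000; ++i) {
        boost::unique_lock<boost::shared_mutex> write_lock(gmutex); // exclusive
        ++g_value;
    }   // write_lock released here
}

void reader()
{
    for (int i = 0; i < 1000; ++i) {
        boost::shared_lock<boost::shared_mutex> read_lock(gmutex);  // shared
        int snapshot = g_value;  // safe: no writer can hold the mutex right now
        (void)snapshot;
    }   // read_lock released here
}

int main()
{
    boost::thread w(writer);
    boost::thread r1(reader), r2(reader), r3(reader);
    w.join(); r1.join(); r2.join(); r3.join();
    std::cout << g_value << "\n";   // always 1000
}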

Can somebody explain this remark in the MSDN CreateMutex() documentation about the bInitialOwner flag?

The MSDN CreateMutex() documentation (http://msdn.microsoft.com/en-us/library/ms682411%28VS.85%29.aspx) contains the following remark near the end:
Two or more processes can call CreateMutex to create the same named mutex. The first process actually creates the mutex, and subsequent processes with sufficient access rights simply open a handle to the existing mutex. This enables multiple processes to get handles of the same mutex, while relieving the user of the responsibility of ensuring that the creating process is started first. When using this technique, you should set the bInitialOwner flag to FALSE; otherwise, it can be difficult to be certain which process has initial ownership.
Can somebody explain the problem with using bInitialOwner = TRUE?
Earlier in the same documentation it suggests a call to GetLastError() will allow you to determine whether a call to CreateMutex() created the mutex or just returned a new handle to an existing mutex:
Return Value
If the function succeeds, the return value is a handle to the newly created mutex object.
If the function fails, the return value is NULL. To get extended error information, call GetLastError.
If the mutex is a named mutex and the object existed before this function call, the return value is a handle to the existing object, GetLastError returns ERROR_ALREADY_EXISTS, bInitialOwner is ignored, and the calling thread is not granted ownership. However, if the caller has limited access rights, the function will fail with ERROR_ACCESS_DENIED and the caller should use the OpenMutex function.
Using bInitialOwner combines two steps into one: creating the mutex and acquiring the mutex. If multiple people can be creating the mutex at once, the first step can fail while the second step can succeed.
As the other answerers mentioned, this isn't strictly a problem, since you'll get ERROR_ALREADY_EXISTS if someone else creates it first. But then you have to differentiate between the cases of "failed to create or find the mutex" and "failed to acquire the mutex; try again later" just by using the error code. It'll make your code hard to read and easier to screw up.
In contrast, when bInitialOwner is FALSE, the flow is much simpler:
result = create mutex()
if result == error:
    // die

result = try to acquire mutex()
if result == error:
    // try again later
else:
    // it worked!
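As a rough Win32 rendering of that pseudocode (a hypothetical sketch: error handling is trimmed and the mutex name is just an example):

#include <windows.h>
#include <cstdio>

int main()
{
    // bInitialOwner == FALSE: create (or open) the mutex without acquiring it.
    HANDLE h = CreateMutexW(nullptr, FALSE, L"Global\\MyAppMutex");
    if (h == nullptr) {
        printf("could not create/open mutex: %lu\n", GetLastError());
        return 1;   // die
    }
    // It does not matter here whether we created the mutex or merely opened it.

    DWORD wait = WaitForSingleObject(h, 0);   // try to acquire without blocking
    if (wait == WAIT_OBJECT_0 || wait == WAIT_ABANDONED) {
        // it worked! ... do the protected work ...
        ReleaseMutex(h);
    } else {
        // try again later
    }

    CloseHandle(h);
    return 0;
}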
Well, I'm not sure there's a real problem. But if you set the argument to TRUE in both processes, then you have to check the value of GetLastError() to see whether you actually ended up with ownership. It's first-come, first-served. It is perhaps useful only if you use a named mutex to implement a single-instance ("singleton") process.
The flag is used to create the mutex in an owned state - the successful caller will atomically create the synchronisation object and also acquire the lock before returning in the case that the caller needs to be certain that no race condition can form between creating the object and acquiring it.
Your protocol will determine whether you ever need to do this in one atomic operation.
