I have a threaded application in which each thread may insert its own specific item into a shared map, or erase that item from the map; all other threads only call find or traverse the whole map with an iterator.
Again: each thread will only ever insert or erase its one specific item in the map.
In such a case, should I add a lock before insert or erase to avoid a race? If so, how?
Without looking at the code, I can only suggest that you use a ConcurrentHashMap for your needs. You may also want to read this: What's the difference between ConcurrentHashMap and Collections.synchronizedMap(Map)?
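To illustrate, a minimal sketch of that pattern (the Long/String types and the key scheme are just assumptions for the example):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerThreadItems {
    // Each writer thread touches only its own key; readers may call
    // get() or iterate concurrently without any extra locking.
    private static final Map<Long, String> items = new ConcurrentHashMap<>();

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            long myKey = Thread.currentThread().getId();
            items.put(myKey, "my item");   // insert this thread's item
            items.remove(myKey);           // later, erase it again
        });
        Thread reader = new Thread(() -> {
            // Iteration is weakly consistent: it never throws
            // ConcurrentModificationException, but may or may not
            // observe concurrent inserts and removals.
            for (Map.Entry<Long, String> e : items.entrySet())
                System.out.println(e.getKey() + " -> " + e.getValue());
        });
        writer.start(); reader.start();
        writer.join(); reader.join();
    }
}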
I was trying to have an offchain storage that stores a collection of data (likely a Vec), and I was planning to keep this vector growing.
One promising approach was to use the StorageValueRef::mutate() function, but I later found that we can't use it in an extrinsic (or maybe we can and I'm just not aware of it).
Another simple approach is to use the BlockNumber to build a storage key, and to use the BlockNumber from the offchain worker to reference that value.
But in my case multiple pieces of data can arrive within a single block, so being restricted to one value per block doesn't fit the requirements either.
You could create a map like this:
// MyDataItem is your own item type stored per block.
#[pallet::storage]
pub type MyData<T: Config> =
    StorageMap<_, Twox64Concat, T::BlockNumber, Vec<MyDataItem>>;
Then you can do MyData::<T>::append(block_number, data) in your pallet as often as you want.
But I would propose that you introduce some "pruning" window. Let's say 10: only keep the data of the latest 10 blocks in state. For that you can just call MyData::<T>::remove(block_number - 10) in your on_initialize, as sketched below.
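A minimal sketch of that hook, assuming the MyData map above and the usual pallet_prelude imports (the window size and the returned weight are illustrative, not a drop-in implementation):

#[pallet::hooks]
impl<T: Config> Hooks<T::BlockNumber> for Pallet<T> {
    fn on_initialize(now: T::BlockNumber) -> Weight {
        // Keep only the data of the latest 10 blocks; drop the entry
        // that falls out of the window (if any).
        let window: T::BlockNumber = 10u32.into();
        if now > window {
            MyData::<T>::remove(now - window);
        }
        // Account for the one potential write; use benchmarked
        // weights in real code.
        T::DbWeight::get().writes(1)
    }
}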
But if it is really just about data that you want to set from the runtime for the offchain worker, you could use sp_io::offchain_index::set(b"key", b"data");. This is a more low-level interface. Here you could also prefix the key with the block number to make it unique per block, but you would need to come up with your own way of storing multiple values per block.
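If you go that route, one sketch of making keys unique per block while still allowing several values per block (the prefix and the helper are made up for the example):

use codec::Encode;

// Derive a unique key per (block, index) pair, so a single block can
// carry several entries; the offchain worker rebuilds the same keys
// to read the values back from the offchain database.
fn index_value<T: frame_system::Config>(
    block: T::BlockNumber,
    idx: u32,
    data: &[u8],
) {
    let key = (b"my-ocw/", block, idx).encode();
    sp_io::offchain_index::set(&key, data);
}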
I am using list_for_each_entry_rcu and inside the loop I want to delete an element from the list. How can I do that if there is no list_for_each_entry_rcu_safe?
I saw that in the past there was a macro list_for_each_safe_rcu. Why was it removed? Is there any alternative?
From the point of view of RCU lists, all threads are divided into two groups:
readers
modifiers
At any given time, the list may be accessed by several readers but only a single modifier.
Readers should use _rcu primitives for list traversal.
In case of several modifiers, their access should be protected by a lock or other synchronization means.
So a hypothetical list_for_each_safe_rcu would be useless:
If your thread is a modifier, no other thread may modify the list at the same time. So _rcu protection isn't needed for its traversal, and the modifier may use list_for_each_safe (see the sketch below).
If your thread is a reader, it shouldn't modify a list, so it may use list_for_each_rcu.
Accessing an RCU list from two concurrent modifiers is generally unsafe. E.g., concurrent list_del_rcu on adjacent elements may corrupt the list.
I'm not sure why list_for_each_safe_rcu existed in kernel 2.6.25 and before. In any case, it wasn't used anywhere.
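For the modifier side, a minimal sketch under these rules (the item type, list, and lock names are illustrative):

#include <linux/list.h>
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
    int key;
    struct list_head node;
    struct rcu_head rcu;
};

static LIST_HEAD(items);
static DEFINE_SPINLOCK(items_lock);   /* serializes modifiers */

static void remove_key(int key)
{
    struct item *it, *tmp;

    spin_lock(&items_lock);
    /* Sole modifier under the lock: plain _safe traversal is fine. */
    list_for_each_entry_safe(it, tmp, &items, node) {
        if (it->key == key) {
            list_del_rcu(&it->node);   /* readers may still see it */
            kfree_rcu(it, rcu);        /* free after a grace period */
        }
    }
    spin_unlock(&items_lock);
}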
In Go, can we synchronize each key of a map using a lock per key?
Is map level global lock always required?
The documentation says that concurrent access to a map is not thread-safe. But if a key exists, can it be locked individually?
Not exactly, but if you are only reading pointers off a map and modifying the referents, then you aren't modifying the map itself.
This is a simple implementation of what you want: mapmutex.
Basically, a mutex is used to guard the map and each item in the map is used like a 'lock'.
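A minimal sketch of that idea (my own illustration, not the mapmutex package itself):

package main

import (
    "fmt"
    "sync"
)

// KeyLocks hands out one mutex per key. The outer mutex guards only
// the map of locks; the work done under each per-key lock runs
// concurrently with work on other keys. Note that locks are never
// deleted here, so the map grows with the number of distinct keys.
type KeyLocks struct {
    mu    sync.Mutex
    locks map[string]*sync.Mutex
}

func NewKeyLocks() *KeyLocks {
    return &KeyLocks{locks: make(map[string]*sync.Mutex)}
}

// Lock acquires and returns the mutex for key, creating it on demand.
func (k *KeyLocks) Lock(key string) *sync.Mutex {
    k.mu.Lock()
    m, ok := k.locks[key]
    if !ok {
        m = &sync.Mutex{}
        k.locks[key] = m
    }
    k.mu.Unlock()
    m.Lock()
    return m
}

func main() {
    kl := NewKeyLocks()
    var wg sync.WaitGroup
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            m := kl.Lock("shared-key") // serialize per key, not globally
            defer m.Unlock()
            fmt.Println("worker", i, "holds the lock")
        }(i)
    }
    wg.Wait()
}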
Would a Win32 Mutex be the most efficient way to limit thread access to the linked lists in a hash table? I didn't want to create a lot of handles, and the size of the hash table is variable; it could potentially hold thousands of entries. I didn't want to lock the whole table down when only one entry's list is being changed, so that would call for multiple Mutexes (one per list), but I figured I could probably get away with pooling about 20 Mutex handles and reusing them, since there shouldn't be that many threads accessing it simultaneously. Is there an alternative to Mutex locks for this case?
A lot here depends on the details of your hash table. My immediate reaction would be to avoid a mutex/critical section at all, at least if you can.
At least for adding an item to the linked list, it's pretty easy to avoid it by using an InterlockedExchangePointer instead. Presumably you have a struct something like:
struct LL_item {
    LL_item *next;
    std::string key;
    whatever_type value;
};
To insert an item of this type into a linked list, you do something like:
LL_item *item = new LL_item;
// set key and value here
item->next = item;   // self-pointer marks the node as not yet linked
item->next = (LL_item *)InterlockedExchangePointer(
    (PVOID volatile *)&bucket->head, item);
Prior to the InterlockedExchangePointer, bucket->head contains the address of the first item currently in the list. We initialize our new item with its own address in its next pointer, then atomically exchange bucket->head with the address of our new node; the call returns the previous head, which we store into the new node's next pointer. After the exchange, the new node's next pointer contains the address of the previously-first item in the list, and the head of the list contains the address of our new node.
I believe you can probably use an exchange to remove an item from a list as well, but I'm not sure; I haven't thought that through quite as thoroughly. Quite a few hash tables don't (even try to) support deletion anyway, so you may not care about that.
I'd suggest a slim reader writer lock. Sure, it locks the entire data structure when you're doing updates, but typically you'll have a lot more reads than writes to the hash table. My experience with SRW locks is that it works quite well and performance is very good. You probably should give it a try. That'll get your program working. Then you can profile the code to determine if there are bottlenecks and if so where the bottlenecks are. It's quite possible that the SRW lock is plenty fast enough.
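For example, a minimal sketch of a table guarded by an SRW lock (using std::unordered_map as a stand-in for your hash table; names are illustrative):

#include <windows.h>
#include <string>
#include <unordered_map>

class Table {
    SRWLOCK lock_ = SRWLOCK_INIT;
    std::unordered_map<std::string, int> map_;
public:
    void put(const std::string& key, int value) {
        AcquireSRWLockExclusive(&lock_);   // writers are exclusive
        map_[key] = value;
        ReleaseSRWLockExclusive(&lock_);
    }
    bool get(const std::string& key, int* out) {
        AcquireSRWLockShared(&lock_);      // readers run in parallel
        auto it = map_.find(key);
        bool found = (it != map_.end());
        if (found) *out = it->second;
        ReleaseSRWLockShared(&lock_);
        return found;
    }
};

Readers take the shared lock, so lookups proceed in parallel; only inserts and removals serialize against everything else.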
Is NSObject's retain method atomic?
For example, when retaining the same object from two different threads, is it promised that the retain count has gone up twice, or is it possible for the retain count to be incremented just once?
Thanks.
NSObject as well as object allocation and retain count functions are thread-safe — see Appendix A: Thread Safety Summary in the Thread Programming Guide.
Edit: I’ve decided to take a look at the open-source part of Core Foundation. In CFRuntime.c, __CFDoExternRefOperation() is the function responsible for updating the retain counters. It checks whether the process has more than one thread and, if so, acquires a spin lock before updating the retain count, hence making this operation thread-safe.
Interestingly enough, the retain count is not an attribute (or instance variable) of the object in the struct (class) sense. The runtime keeps a separate structure with retain counters. In fact, if I understand it correctly, this structure is an array of hash tables, and there’s a spin lock for each hash table. This means one lock covers the multiple objects that land in the same hash table, i.e., the lock is neither global (one for all instances) nor per instance.
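To make the shape of that structure concrete, here is a purely illustrative sketch in C (not the actual CFRuntime.c code; the bucket count, the table layout, and the pthread mutex standing in for the spin lock are all assumptions):

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

enum { NUM_BUCKETS = 8, PER_BUCKET = 64 };

// One lock per bucket: a lock covers every object whose address
// hashes into that bucket, neither global nor per instance.
struct bucket {
    pthread_mutex_t lock;
    const void *obj[PER_BUCKET];
    long count[PER_BUCKET];
};

static struct bucket buckets[NUM_BUCKETS];

static struct bucket *bucket_for(const void *p) {
    return &buckets[((uintptr_t)p >> 4) % NUM_BUCKETS];
}

static void retain(const void *p) {
    struct bucket *b = bucket_for(p);
    pthread_mutex_lock(&b->lock);       // serializes counts in this bucket
    for (int i = 0; i < PER_BUCKET; i++) {
        if (b->obj[i] == p || b->obj[i] == NULL) {
            b->obj[i] = p;
            b->count[i]++;              // two threads can never race here
            break;
        }
    }
    pthread_mutex_unlock(&b->lock);
}

int main(void) {
    for (int i = 0; i < NUM_BUCKETS; i++)
        pthread_mutex_init(&buckets[i].lock, NULL);
    int dummy;
    retain(&dummy);
    retain(&dummy);   // two retains always yield a count of 2
    printf("count: %ld\n", bucket_for(&dummy)->count[0]);
    return 0;
}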