I need to store many page-aligned entries, each of which is page-sized; basically I need to collect/bind memory pages together. The only requirement is that, upon adding, I must be able to check whether an entry already exists by matching a machine-word-sized key. It is not possible to overwrite an entry: if the same key is used, the existing entry must be found.
The function that adds an entry receives some machine-word-sized key (32/64 bits) and checks whether there is already a page-aligned entry containing the same key. If there is no such entry, one is created via mmap and added under the required key. The C declaration of an entry looks like this:
struct entry {
uintptr_t key; /* machine-word-sized key */
unsigned char meta[]; /* this space may be used for storing the entry in a data structure */
};
The caller supplies the key and decides whether to use the existing entry or to allocate a new one. That is, the key must be looked up on insertion; beyond that, the pages merely need to be collected so they can be removed in a loop.
All I need is adding an entry and removing all entries (no specific order imposed); since I use each entry as epoll.data.ptr, I don't even need fast lookup once an entry has been added. Given that each entry has some space for metadata, I'm OK with dedicating part of that space to whatever payload is required to store the entry in the data structure.
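To illustrate, here is roughly the allocation side of what I have in mind, as a sketch (the function names are placeholders; only mmap() and the struct above are given):

#include <stdint.h>
#include <unistd.h>
#include <sys/mman.h>

/* Placeholder interface: look up the key and return the existing entry,
   or allocate a fresh page and insert it under that key. */
struct entry *entry_find_or_add(uintptr_t key);
void entries_remove_all(void);

/* One page-aligned, page-sized entry obtained via mmap(): */
static struct entry *entry_alloc(uintptr_t key)
{
    void *page = mmap(NULL, (size_t)sysconf(_SC_PAGESIZE),
                      PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED)
        return NULL;
    struct entry *e = page;        /* mmap() returns page-aligned memory */
    e->key = key;
    return e;
}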
I thought about using a hash table. I have no math or crypto background, so generating a good hash is a problem. I tried looking at several well-known hashes, but they seem quite generic, i.e. intended to work with any data. My case, however, seems very specific: there is no way a user would use the table directly, and the conditions (page-aligned and page-sized entries plus a word-sized key) are unlikely to change.
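For example, here is the kind of generic word-sized hash I mean: a multiplicative ("Fibonacci") hash, where the constants are the classic golden-ratio ones (2^32/φ and 2^64/φ):

#include <stdint.h>
#include <stddef.h>

/* Multiply the key by 2^N/φ and keep the top `bits` bits;
   `bits` is the log2 of the number of buckets. */
static inline size_t hash_word(uintptr_t key, unsigned bits)
{
#if UINTPTR_MAX == 0xFFFFFFFF
    return (size_t)((key * UINT32_C(0x9E3779B9)) >> (32 - bits));
#else
    return (size_t)((key * UINT64_C(0x9E3779B97F4A7C15)) >> (64 - bits));
#endif
}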
The questions are:
Am I right that a hash table is OK for this case? If yes, what kind of hash would you suggest? If the hash table is linked-list based, it had better be intrusive (i.e. all required metadata should live inside the entry, not outside, like the Linux kernel's struct list_head).
I was also looking at page tables, like those described at https://wiki.osdev.org/Paging. However, that page concentrates mostly on how the MMU does its job, and I'm not sure whether I can adapt it to a purely software implementation or how to apply those concepts. Since a machine-word-sized key must be used for inserting an entry, the concepts from that link only show how to organize pages efficiently for page-to-page mappings.
I currently need to care only about 4096-byte pages, but the generic case is better (i.e. some algorithm which operates on PAGE_SIZE, be it 4K, 8K or whatever). It would also be nice if the data structure did not assume page-sized entries (page alignment, though, is a strict requirement, since all memory is obtained via mmap).
I am writing code which rarely creates/removes objects (up to several thousand) but modifies them very frequently in soft-IRQ context. These objects are also rarely read (and probably will also be rarely modified) from task context (via procfs: one file per object). Currently my code contains global per-CPU data blocks, each guarded by a spinlock. Such a block contains a fixed-size hashtable for object storage.
Obviously the current design is not optimal, especially under very high object-update loads: reading objects from procfs will cause data loss in the updating soft IRQs. I need to rewrite the synchronisation scheme to get rid of the global locks. The most obvious choice, a spinlock for each hashtable bucket, should scale well. The problem is that I'll probably need to use my own hashtable implementation, or at least reimplement several top-level macros (I didn't find any in linux/hashtable.h for spinlock-protected buckets). Should I also look towards an RCU-enabled hashtable (though I have no solid understanding of this synchronisation approach yet)?
Buckets with lock protection are declared in the header linux/list_bl.h. They use the lowest bit of the head pointer as a lock bit.
RCU-protected access to the buckets is defined along with the other hash table functions in the header linux/hashtable.h (the functions have an _rcu suffix).
Choosing between locks and RCU is up to you. Note that RCU by itself cannot resolve modify-modify conflicts, and it mostly helps with frequently-read data, which does not seem to be your case.
As only one locking function, hlist_bl_lock(), is declared for struct hlist_bl_head, and this function is not IRQ-aware, additional actions should be performed when the hash table can be used in IRQs or bottom halves:
spin_lock_irqsave:
    local_irq_save(flags);
    hlist_bl_lock(...);

spin_unlock_irqrestore:
    hlist_bl_unlock(...);
    local_irq_restore(flags);

spin_lock_bh:
    local_bh_disable();
    hlist_bl_lock(...);

spin_unlock_bh:
    hlist_bl_unlock(...);
    local_bh_enable();
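For example, the spin_lock_bh()-style pair can be wrapped in small helpers (the names bucket_lock_bh()/bucket_unlock_bh() are mine, not kernel API):

#include <linux/list_bl.h>
#include <linux/bottom_half.h>

/* Hypothetical wrappers: take the bucket's bit-spinlock safely against
   soft IRQs, mirroring spin_lock_bh()/spin_unlock_bh(). */
static inline void bucket_lock_bh(struct hlist_bl_head *head)
{
    local_bh_disable();      /* bottom halves cannot preempt us now */
    hlist_bl_lock(head);     /* bit 0 of the head pointer is the lock */
}

static inline void bucket_unlock_bh(struct hlist_bl_head *head)
{
    hlist_bl_unlock(head);
    local_bh_enable();
}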
I am reading Code Complete 2, Chapter 7.1, and I don't understand the point the author makes below.
7.1 Valid Reasons to Create a Routine
Hide pointer operations
Pointer operations tend to be hard to read and error prone. By isolating them in routines (or a class, if appropriate), you can concentrate on the intent of the operation rather than the mechanics of pointer manipulation. Also, if the operations are done in only one place, you can be more certain that the code is correct. If you find a better data type than pointers, you can change the program without traumatizing the routines that would have used the pointers.
Please explain or give an example of this purpose.
Essentially, the advice is a specific example of data hiding. It boils down to this:
Stick to object-oriented design and hide your data within objects.
In the case of pointers, the norm is to NEVER expose pointers to "internal" data structures as public members. Rather, make them private and expose ONLY certain meaningful manipulations that are allowed to be performed on the pointers as public member functions.
Portable / Easy to maintain
The added advantage (as explained in the quoted section) is that a change in the internal data structures never forces the external API to change. Only the internal implementation of the publicly exposed member functions needs to be modified to handle the changes.
Code re-use / Easy to debug
Also, pointer manipulations are now NOT copy-pasted and littered all around the code with no indication of what exactly they do. They are confined to the member functions, which are written with full knowledge of how exactly the internal data structures are being manipulated.
For example, if we have a table of data into which the user is allowed to add rows, do NOT expose:
pointers to the head/tail of the table;
pointers to the individual elements.
Instead create a table object that exposes the functions
addNewRowTop(newData)
addNewRowBottom(newData)
addNewRow(position, newData)
To take this further, we implement addNewRowTop() and addNewRowBottom() simply by calling addNewRow() with the proper position (taken from another internal variable of the table object).
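The same idea sketched in C with an opaque type (all names here are illustrative), so the pointer manipulation is confined to one translation unit:

#include <stddef.h>

/* table.h: the public interface never hands out a pointer into the table. */
typedef struct Table Table;                  /* opaque; layout lives in table.c */

Table *tableCreate(void);
void tableAddNewRowTop(Table *t, const char *newData);
void tableAddNewRowBottom(Table *t, const char *newData);
void tableAddNewRow(Table *t, size_t position, const char *newData);

/* table.c (excerpt): the convenience routines delegate to tableAddNewRow(),
   so the row-linking pointer code exists in exactly one place. */
void tableAddNewRowTop(Table *t, const char *newData)
{
    tableAddNewRow(t, 0, newData);           /* top == position 0 */
}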
I need your advice on Redis data types for my project. The project is a torrent tracker (Ruby, simple Sinatra-based) with a pure in-memory data store for the current information about peers. I feel like this is what Redis is made for. But I'm stuck at choosing the proper data types for it. For now I tend towards the following setup:
Use a list for seeders. Actually I'd rather need a ring buffer, to get a sequential range of seeders (with a given size and start position) and save the new start position for the next time.
Use a sorted set for leechers. The score for each leecher is downloaded/(downloaded+left), so I can also extract a range for any specific case.
All values in the set and the list are string (bencoded) representations of peer data.
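For concreteness, the leecher scoring could look like this; the sketch is in C with hiredis purely for illustration, and the key scheme leechers:<info_hash> is an assumption:

#include <stdio.h>
#include <hiredis/hiredis.h>

/* Illustrative only: add a leecher under score downloaded/(downloaded+left).
   The key name "leechers:<info_hash>" is hypothetical. */
int add_leecher(redisContext *c, const char *info_hash,
                const char *peer, long long downloaded, long long left)
{
    char score[32];
    snprintf(score, sizeof score, "%.6f",
             (double)downloaded / (double)(downloaded + left));
    redisReply *reply = redisCommand(c, "ZADD leechers:%s %s %s",
                                     info_hash, score, peer);
    if (reply == NULL)
        return -1;                 /* connection-level error */
    freeReplyObject(reply);
    return 0;
}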
What I actually lack in the setup above:
I have to store the offset for seeders somewhere, so data access needs synchronisation.
I have no way of finding a specific seeder in the list. Here I might benefit from a set, but then I wouldn't be able to extract a range of items at once.
(General problem) I need a TTL for set/list members (for the case where a client shuts down without sending any data beforehand). A possible option is to make each peer an ordinary key/value (string or hash), give it a TTL, subscribe to its expiry and then delete it from the corresponding list or set.
What could you suggest? Any practical advice?
I've been reading some books on Windows programming in C++ lately, and I have some confusion about a few of the recurring concepts in the WinAPI. For example, there are tons of data types that start with the handle prefix 'H'. Are these supposed to be used like pointers? But then there are other data types that start with the pointer prefix 'P', so I guess not. Then what exactly is a handle? And why were pointers to some data types given separate data types in the first place? For example, PCHAR could easily have been designed as CHAR*.
Handles used to be pointers in early versions of Windows but are not anymore. Think of them as a "cookie": a unique value that allows Windows to find a resource that was allocated earlier. For example, CreateFile() returns a new handle; you later use it in SetFilePointer() and ReadFile() to read data from that same file, and CloseHandle() to clean up the internal data structure, closing the file as well. This is the general pattern: one API function to create the resource, one or more to use it, and one to destroy it.
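A minimal sketch of that pattern (the file name is just an example):

#include <windows.h>

/* Create/use/destroy: the HANDLE is an opaque cookie; we never dereference it. */
int main(void)
{
    HANDLE h = CreateFileA("example.txt", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;                               /* create failed */

    char buf[64];
    DWORD bytesRead = 0;
    SetFilePointer(h, 0, NULL, FILE_BEGIN);     /* use the handle... */
    ReadFile(h, buf, sizeof buf, &bytesRead, NULL);

    CloseHandle(h);                             /* ...and destroy it */
    return 0;
}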
Yes, the types that start with P are pointer types. And yes, they are superfluous; it works just as well if you use the * yourself. I'm not actually sure why C programmers like to declare them; I personally think they reduce code readability and I always avoid them. But do note the compound types, like LPCWSTR, a "long pointer to a constant wide string". The L doesn't mean anything anymore; it dates back to the 16-bit versions of Windows. But pointer, const and wide are important. I do use that typedef; not doing so risks future portability problems, which is the core reason these typedefs exist.
A handle is the same as a pointer only insofar as both identify a particular item. A pointer is obviously the address of the item, so if you know its structure you can start accessing its fields. A handle may or may not be a pointer; even if it is one, you don't know what it points to, so you can't get at the fields.
The best way to think of a handle is as a unique ID for something in the system. When you pass it to the system, the system knows what to cast it to (if it is a pointer) or how to treat it (if it is just some ID or index).
Would a Win32 mutex be the most efficient way to limit thread access to a linked list in a hash table? I didn't want to create a lot of handles, and the size of the hash table is variable; it could potentially be thousands of buckets. I didn't want to lock the whole table down when only one bucket's list is being changed, so that would call for multiple mutexes (one per list), but I figured I could probably get away with pooling about 20 mutex handles and reusing them, since there shouldn't be that many threads accessing the table simultaneously. Is there an alternative to mutex locks for this case?
A lot here depends on the details of your hash table. My immediate reaction would be to avoid a mutex/critical section entirely, at least if you can.
At least for adding an item to the linked list, it's pretty easy to avoid one by using InterlockedExchangePointer instead. Presumably you have a struct something like:
struct LL_item {
LL_item *next;
std::string key;
whatever_type value;
};
To insert an item of this type into a linked list, you do something like:
LL_item *item = new LL_item;
// set key and value here
item->next = item;   // self-reference marks the node as not fully linked yet
item->next = (LL_item *)InterlockedExchangePointer(
                 (PVOID volatile *)&bucket->head, item);
Prior to the InterlockedExchangePointer, bucket->head contains the address of the first item currently in the list. We initialize our new item with its own address in its next pointer, so a concurrent reader can tell the node has not been fully linked in yet. We then (atomically) exchange the head pointer with the address of our new node. After the exchange, the pointer to the head of the list contains the address of our new node, and the exchange's return value, the address of the previously-first item in the list, ends up in the new node's next pointer.
I believe you can (probably) normally use an exchange to remove an item from a list as well, but I'm not sure -- I haven't thought through that quite as thoroughly. Quite a few hash tables don't (even try to) support deletion anyway, so you may not care about that though.
I'd suggest a slim reader/writer (SRW) lock. Sure, it locks the entire data structure when you're doing updates, but typically you'll have a lot more reads than writes to a hash table. My experience with SRW locks is that they work quite well and performance is very good. You should probably give it a try. That'll get your program working; then you can profile the code to determine whether there are bottlenecks and, if so, where they are. It's quite possible that the SRW lock is plenty fast enough.
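A minimal sketch of that approach (the surrounding functions are placeholders):

#include <windows.h>

SRWLOCK tableLock = SRWLOCK_INIT;   /* one lock guarding the whole table */

void lookup_item(void)
{
    AcquireSRWLockShared(&tableLock);       /* many readers may hold this */
    /* ... search the hash table ... */
    ReleaseSRWLockShared(&tableLock);
}

void insert_item(void)
{
    AcquireSRWLockExclusive(&tableLock);    /* writers are exclusive */
    /* ... add to / remove from the hash table ... */
    ReleaseSRWLockExclusive(&tableLock);
}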