In William Stallings' book on Operating Systems, he defines a strong semaphore as one that has a FIFO queuing discipline, and a weak semaphore as one whose queue is unordered. Surely there are other queuing disciplines for strong semaphores, such as ordering by priority? Or would this no longer be a strong semaphore, since starvation would become possible? (Stallings says that strong semaphores do not allow starvation.) Is the primary distinction between strong and weak that the queue is ordered versus unordered, or that starvation is impossible versus possible?
Yes, one non-FIFO, non-starving possibility (among many) is to select the next process in round-robin order. For example, if the round-robin order is 1, 2, 3, 4, and while 1 is holding the semaphore first 4 and then 3 request it, then the next process up is 3, even though 4 asked first. No process P starves because, after P requests the semaphore, each other process enters its critical section at most once before P's request is granted.
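A minimal sketch of that selection rule in C (the process count, the waiting array, and the function name are illustrative assumptions, and processes are 0-indexed here):

```c
/* Round-robin wakeup for a semaphore shared by N known processes. */
#include <stdbool.h>

#define N 4

static bool waiting[N];   /* waiting[i] == true if process i is blocked on P() */
static int  last = 0;     /* the process that most recently held the semaphore */

/* Pick the next process to wake: scan forward from 'last' in cyclic order.
 * Non-FIFO (a later requester can be served first), yet starvation-free:
 * every waiter is reached after at most N-1 other grants. */
static int next_holder(void)
{
    for (int k = 1; k <= N; k++) {
        int i = (last + k) % N;
        if (waiting[i])
            return i;
    }
    return -1;   /* no process is waiting */
}
```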
Definitions of "strong semaphore" in the first pages of hits from Google are split between "no starvation" and "FIFO". Which one is "right" is a matter of taste – given this mess (and the general overuse of strong as an adjective in mathematical writing), I'd probably use neither.
In the literature on semaphores, I have never seen (with my limited knowledge) anyone use FIFO or any other ordering as the criterion for the weak/strong distinction. In fact, starvation-freedom is not always the criterion either. The early literature (viz. Morris ('79), Martin and J. R. Burch ('85), Udding ('86), Friedberg and Peterson ('87), and Haldar and Subramanian ('88)) used certain characteristics of the 'P' and 'V' operations to define a weak semaphore. Interestingly, all of the definitions from the cited researchers eventually imply that starvation is possible with a weak semaphore.
Further, although FIFO guarantees starvation-freedom, defining strength in terms of FIFO or some other specific ordering, in my opinion, over-constrains the behavior of the semaphore. One such constraint is that a FIFO ordering implies the semaphore has some kind of buffer attached to it to keep track of the processes/threads blocked on the 'P' operation; for a hardware implementation of semaphores, this may be too restrictive. Another is that, instead of treating all ordering schemes with the same bounded overtaking by k (i.e. no process is overtaken more than k times) as equivalent, one would have to treat each scheme as a different kind of semaphore.
Thus, my personal preference is to define a weak semaphore as one that does NOT guarantee starvation-freedom (but does guarantee deadlock-freedom). However, if you're doing research-grade work, then by all means use a more mathematical and/or fine-grained definition as you prefer.
I think there is no starvation with a priority queue that has a predefined priority on the queue elements. As you can see, it's just a regular queue except that the next element served is the one with the highest priority. So if you implement the priorities with FIFO logic (first in has the highest priority), there will be no starvation; otherwise starvation is possible.
So I was having some (arguably) fun with sockets (in C) and came across the problem of receiving asynchronously.
As stated here, select and poll do a linear search across the sockets, which does not scale very well. Then I thought: can I do better by knowing application-specific behaviour of the sockets?
For instance, if
X_n: the time of arrival of the n-th datagram for socket X (for simplicity, let's assume time is discrete)
Pr(X_n = x_n | X_{n-1} = x_{n-1}, X_{n-2} = x_{n-2}, ...): the probability of X_n = x_n given the previous arrival times
is known, by statistics or assumption or whatever, then I could implement an algorithm that polls the sockets in order of decreasing probability of having data ready.
The question is, is this an insane attempt? Does the library poll/select have some advantage that I can't beat from user space?
EDIT: to clarify, I don't mean to duplicate the semantics of poll and select; I just want a working way of finding at least one socket that is ready to receive.
Also, stuff like epoll exists and all that, which I think is most likely superior, but I want to seek out any possible alternatives first.
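A minimal sketch of the kind of thing I mean (the ranked fd array is assumed to come from the probability model above; recv with MSG_PEEK | MSG_DONTWAIT is just one way to probe without blocking):

```c
/* Hypothetical: probe sockets one at a time in a caller-chosen order
 * (e.g. most-likely-ready first). Returns the first ready fd, or -1. */
#include <sys/socket.h>
#include <errno.h>

int first_ready(const int *fds_ranked, int nfds)
{
    char byte;
    for (int i = 0; i < nfds; i++) {
        ssize_t r = recv(fds_ranked[i], &byte, 1, MSG_PEEK | MSG_DONTWAIT);
        if (r >= 0)
            return fds_ranked[i];   /* data available (or orderly shutdown) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return fds_ranked[i];   /* real error: let the caller handle it */
    }
    return -1;                      /* nothing ready right now */
}
```

Note that each probe here is its own system call, which turns out to matter (see the answer below).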
Does the library poll/select have some advantage that I can't beat from user space?
The C library runs in user space too, but its select() and poll() functions are almost certainly wrappers for system calls (the details vary from system to system). That they wrap single system calls (where in fact they do so) gives them a distinct advantage over any scheme involving multiple system calls, such as I imagine would be required for the kind of approach you have in mind: system calls have high overhead.
All of that is probably moot, however, if you have in mind to duplicate the semantics of select() and poll(): specifically, that when they return, they provide information on all the files that are ready. To do that, they must test or somehow watch every specified file, and so, therefore, must your hypothetical replacement. Since you need to scan every file anyway, it doesn't much matter what order you scan them in; a linear scan is probably an ideal choice because it has very low per-file overhead.
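For comparison, here is a minimal sketch of the single-syscall baseline described above (the fd array, count, and timeout are whatever the caller supplies):

```c
/* Baseline: one poll() system call watching all sockets at once.
 * Returns the index of one ready socket, or -1 on timeout/error. */
#include <poll.h>

int poll_for_ready(const int *fds, int nfds, int timeout_ms)
{
    struct pollfd pfds[nfds];               /* assumes nfds is modest (VLA) */
    for (int i = 0; i < nfds; i++) {
        pfds[i].fd = fds[i];
        pfds[i].events = POLLIN;
        pfds[i].revents = 0;
    }
    int n = poll(pfds, nfds, timeout_ms);   /* one syscall for all sockets */
    if (n <= 0)
        return -1;
    for (int i = 0; i < nfds; i++)
        if (pfds[i].revents & POLLIN)
            return i;                       /* linear scan of the results */
    return -1;
}
```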
I recently encountered the bakery algorithm in my studies and just need to clarify some things.
Would it be possible for the bakery algorithm to violate mutual exclusion if processes did not pick a ticket number larger than that of all existing tickets?
Is setting number[i] to zero after the critical section important for success in the absence of contention?
And is one of the reasons the bakery algorithm is not used in practice that the process of finding the maximum value of an array is non-atomic? I thought this was not the case, as that isn't the correct reason for it.
Would it be possible for the bakery algorithm to violate mutual exclusion if processes did not pick a ticket number larger than that of all existing tickets?
It wouldn't violate mutual exclusion as long as two or more different processes don't end up with the same number. But it would violate fairness, since a process that arrived at the critical section later could be given precedence over another process that has been waiting longer. So it's not critical, but it's also not ideal.
Is setting number[i] to zero after the critical section important for success in the absence of contention?
I don't think it's important. The reset serves to indicate that the process no longer wishes to enter the critical section. Not resetting the value may cause others to think the process still wishes to enter the critical section, which may not be good, but I don't see it linked to a performance issue.
And is one of the reasons for the bakery algorithm not being used in practice because the process of finding the maximum value of an array is non-atomic? I thought this was not the case, as that isn't the correct reason for it.
I thought it certainly was, until I read that "as that isn't the correct reason for it." If you could share some more knowledge on this third point, I'd be thankful!
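For reference, here is a minimal sketch of Lamport's bakery lock in C (the thread count, array names, and use of C11 atomics are illustrative assumptions); it shows both the doorway step that picks a ticket larger than every existing one and the reset to zero after the critical section discussed above.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define N 8                      /* number of threads (assumption) */

atomic_bool choosing[N];
atomic_int  number[N];

static void bakery_lock(int i)
{
    atomic_store(&choosing[i], true);
    /* Doorway: pick a ticket larger than every ticket currently held. */
    int max = 0;
    for (int j = 0; j < N; j++) {
        int n = atomic_load(&number[j]);
        if (n > max) max = n;
    }
    atomic_store(&number[i], max + 1);
    atomic_store(&choosing[i], false);

    for (int j = 0; j < N; j++) {
        /* Wait until thread j has finished choosing its ticket. */
        while (atomic_load(&choosing[j]))
            ;
        /* Wait while j holds a smaller ticket (ties broken by thread id). */
        while (atomic_load(&number[j]) != 0 &&
               (atomic_load(&number[j]) < atomic_load(&number[i]) ||
                (atomic_load(&number[j]) == atomic_load(&number[i]) && j < i)))
            ;
    }
}

static void bakery_unlock(int i)
{
    atomic_store(&number[i], 0);   /* the reset discussed above */
}
```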
How is a priority queue a queue data structure? Since it doesn't follow FIFO, shouldn't it be called a priority array or a priority linked list, given that priority queues don't behave like a FIFO queue?
In a priority queue, an element with high priority is served before an element with low priority.
'If two elements have the same priority, they are served according to their order in the queue'
I think this will answer your question.
If you look at the most commonly used implementations, priority queues are essentially heaps: the elements are arranged in a particular fashion based on a priority defined by the programmer; in a simple example, ascending or descending order of integers.
Think of a priority queue as a queue where, rather than retrieving elements based on when you added them, you retrieve them based on how they compare with each other. In textbook examples the comparison is simply ascending or descending order. You can get a feel for the ADT from an analogy in another StackOverflow answer:
You're running a hospital and patients are coming in. There's only one doctor on staff. The first man walks in, and he's served immediately. Next, a man with a cold comes in and requires assistance. You add him to the queue and he waits in line for the doctor to become available. Next, a man with an axe in his head comes through the door. He is assigned a higher priority because he is a higher medical liability. So the man with the cold is bumped down in line. Next, someone comes in with breathing problems. So, once again, the man with the cold is bumped down in priority. This is called triaging in the real world, but in this case it's a medical line.
Implementing this in code would use a priority queue and a worker thread (the doctor) to perform work on the consumable units of work (the patients).
In a real scenario, instead of patients, you might have processes waiting to be scheduled on the CPU.
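To make the heap implementation concrete, here is a minimal sketch of a binary min-heap priority queue in C (the fixed capacity, the integer priorities, and the function names are illustrative assumptions; the smallest value is treated as the highest priority):

```c
#include <stdio.h>

#define CAP 64

static int heap[CAP];
static int size = 0;

static void swap(int *a, int *b) { int t = *a; *a = *b; *b = t; }

void pq_push(int x)                 /* O(log n): insert, then sift up */
{
    int i = size++;
    heap[i] = x;
    while (i > 0 && heap[(i - 1) / 2] > heap[i]) {
        swap(&heap[(i - 1) / 2], &heap[i]);
        i = (i - 1) / 2;
    }
}

int pq_pop(void)                    /* O(log n): remove root, sift down */
{
    int top = heap[0];
    heap[0] = heap[--size];
    int i = 0;
    for (;;) {
        int l = 2 * i + 1, r = 2 * i + 2, m = i;
        if (l < size && heap[l] < heap[m]) m = l;
        if (r < size && heap[r] < heap[m]) m = r;
        if (m == i) break;
        swap(&heap[i], &heap[m]);
        i = m;
    }
    return top;
}

int main(void)
{
    /* Insertion order does not determine service order: */
    pq_push(3); pq_push(1); pq_push(2);
    for (int i = 0; i < 3; i++)
        printf("%d ", pq_pop());    /* prints: 1 2 3 */
    printf("\n");
    return 0;
}
```

The point of the example is exactly the one in the analogy: the element served next is chosen by comparison (priority), not by arrival time.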
Read:
When would I use a priority queue?
In a queue, the natural ordering given by how long an element has been waiting in line can be considered the fairest: when you enter a line waiting for something, first come, first served.
Sometimes, however, there is something special about some elements that suggests they should be served sooner than others that have waited longer. For example, we don't always read our emails in the order we received them; we often skip newsletters or "funny" jokes from friends to read work-related messages first.
Likewise, when you build or test an app and bugs are found, teams prioritize and work on those bugs based on severity. New bugs are discovered all the time, so new items keep being added to the list. Say a nasty authentication bug is found: you'd need to have it solved by yesterday! Moreover, the priority of a bug can change over time. For instance, your CEO might decide that you are going after the market share that's mostly using browser X, and you have a big feature launch next Friday, so you really need to solve that bug at the bottom of the list within a couple of days.
Priority queues are especially useful when we need to consume elements in a certain order from a dynamically changing list (such as the list of tasks to run on a CPU), so that at any time we can get the next element (according to a certain criterion), remove it from the list, and (usually) not worry about adjusting the other elements.
That's the idea behind priority queues: they behave like regular, plain queues, except that the front of the queue is determined dynamically based on some kind of priority. The differences that priority introduces into the implementation are profound, enough to deserve a special kind of data structure.
Section 3.3.6 of "The Part-Time Parliament" suggests that membership in the parliament (and thus the quorum for decisions) can be changed safely "by letting the membership of Parliament used in passing decree n be specified by the law as of decree n-3".
Translated into more common MultiPaxos terms, that means that the set of acceptors becomes part of the replicated state machine's state, changed by proposals to add or remove acceptors.
The quorum for slot N would be taken from the set of acceptors defined in the state when slot N-3 was decided.
Lamport offers no justification for this decision, and while his next paragraph says that changes must be handled with care and describes the ultimate failure of the algorithm, it fails for reasons unrelated to this particular issue.
Is this an adequate safeguard to ensure consistency? If so, what literature supports it?
I maintain a Paxos system that is a core component of several large web services. The system runs Basic Paxos, not Multi-Paxos. In that system, changes to the set of acceptors can be proposed like any other transition; the set of acceptors for Paxos instance N is the one that was approved in instance N-1.
I am unsure whether any literature supports this, but it is trivial to see that it works. Because Paxos guarantees consensus on transition N-1, it is guaranteed that hosts agree on which of them can act as acceptors for transition N.
However, things get a little more complicated with Multi-Paxos and Raft, or any pipelined consensus algorithm. According to the Raft video lecture, this must be a two-phase approach, but I don't recall that he explains why.
On further reading of the Paxos slides for the Raft user study linked by Michael, I see that my suggestion is close, but in fact every decision needs to be made in a view that is agreed on by all participants. If we choose that view to be the one in effect at slot N-1, that limits the whole machine to lock-step: each slot can only be decided once the previous slot has been decided.
However, N-1 can be generalized to N-α, where Lamport sets α=3. As long as all participants agree on α, they agree on the view for each slot, which means that the rest of the algorithm's correctness holds.
This adds a fairly trivial amount of storage overhead: leaders must track the view for the most recent slot executed at the replica and the preceding α-1 slots. That is sufficient information either to determine the view for slot N (slot_views[N-α]) or to know that the view is undefined (slot N-α or some earlier slot is not yet decided) and thus ignore the proposal.
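A minimal sketch of that bookkeeping (not from the paper; the constants, struct, and function names are illustrative assumptions):

```c
/* The N-alpha view rule: the acceptor set ("view") used to decide slot n
 * is the one recorded in the replicated state as of slot n - ALPHA. */
#include <stdbool.h>

#define ALPHA 3          /* Lamport's choice in section 3.3.6 */
#define MAX_SLOTS 1024   /* illustrative bound */

typedef struct {
    bool executed;       /* true once this and every earlier slot is decided and applied */
    int  view_id;        /* id of the acceptor set in effect after executing this slot */
} slot_state;

static slot_state slots[MAX_SLOTS];

/* Returns the view id to use for deciding slot n, or -1 if it is not yet
 * known (slot n - ALPHA has not been executed), in which case a proposal
 * for slot n must be ignored or deferred. */
int view_for_slot(int n)
{
    if (n < ALPHA)
        return 0;                     /* the initial membership applies */
    if (!slots[n - ALPHA].executed)
        return -1;                    /* view undefined: wait */
    return slots[n - ALPHA].view_id;
}
```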
I've got OpenMP and MPI at my disposal and was wondering if anyone has come across a parallel version of any flood fill algorithm (preferably in C). If not, I'd be interested in sketches of how to parallelise it. Is it even possible, given that it's based on recursion?
Wikipedia's got a pretty good article if you need to refresh your memory on flood fills.
Many thanks for your help.
There's nothing "inherently" recursive about flood fill; to do some work, you just need some information about previously discovered "frontier" cells. If you think of it that way, it's clear that parallelism is eminently possible: even with a single queue, you could use four threads (one for each direction), and only move the tail of the queue when the cell has been examined by each thread. Or, equivalently, four queues. Thinking this way, one might even imagine partitioning the space into multiple queues, bucketed by coordinate ranges, perhaps.
One basic problem is that the problem definition usually includes the proviso that no cell is ever revisited. This implies that each worker needs an up-to-date map of which cells have been considered (globally). Mutable global information is problematic, performance-wise, though it's not hard to think of ways to limit how often updates need to be propagated globally.
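For what it's worth, here is a minimal iterative (worklist-based, non-recursive) flood fill in C as a starting point; the grid layout and names are assumptions, and the explicit worklist is exactly the structure you would partition across threads or MPI ranks:

```c
/* Iterative flood fill over a W x H grid stored row-major in 'img'.
 * Replaces the connected region of 'old_colour' containing (sx, sy)
 * with 'new_colour', using an explicit worklist instead of recursion. */
#include <stdlib.h>

typedef struct { int x, y; } cell;

void flood_fill(int *img, int W, int H, int sx, int sy,
                int old_colour, int new_colour)
{
    if (old_colour == new_colour || img[sy * W + sx] != old_colour)
        return;

    cell *work = malloc((size_t)W * H * sizeof *work);  /* the "frontier" */
    size_t top = 0;
    img[sy * W + sx] = new_colour;      /* mark on push, so no cell is revisited */
    work[top++] = (cell){ sx, sy };

    while (top > 0) {
        cell c = work[--top];
        const int dx[4] = { 1, -1, 0, 0 };
        const int dy[4] = { 0, 0, 1, -1 };
        for (int d = 0; d < 4; d++) {
            int nx = c.x + dx[d], ny = c.y + dy[d];
            if (nx >= 0 && nx < W && ny >= 0 && ny < H &&
                img[ny * W + nx] == old_colour) {
                img[ny * W + nx] = new_colour;  /* marking doubles as the visited map */
                work[top++] = (cell){ nx, ny };
            }
        }
    }
    free(work);
}
```

From here, one parallel variant (as the answer above suggests) could give each OpenMP thread or MPI rank its own local worklist, bucketed by coordinate range, and periodically exchange cells that fall outside a worker's own region.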