Queue/Waiting List position calculator - algorithm

The language isn't really important for this (I will translate to C#); what I'm after is a queue or waiting-list position algorithm.
So I effectively have three queues in a store: Staff 1, Staff 2, Staff 3.
People in the queue can choose to book with an individual staff member or with whoever is first available. So the queue would look something like this:
Staff 1
Staff 3
First Available
Staff 3
First Available
Staff 2
First Available
Staff 1
First Available
Staff 3
So, for the next person who comes into the store, how would I calculate their queue position if they selected
a) a specific staff member (1, 2 or 3)
b) First available

Given that at each passing of a constant interval every staff-linked queue gets one entry shorter simultaneously, we can determine the waiting time for a new client as follows:
If the client wishes to be served by a specific member of staff, their waiting time (or rank) is the current length of the queue linked with that particular member of staff.
If the client wishes to be served by whichever staff member becomes available first, they should be added to the shortest staff-linked queue. Their staff assignment can therefore be made immediately, and their waiting time follows as in the previous point.
With 3 staff members you would implement this with three (FIFO) queues. Put each client in a queue according to the considerations above. Reporting the length of that queue at the moment of joining gives the client's rank: 0 means they get served when the next interval starts.
At each passing of the fixed interval, you pull one entry from each of the queues; these represent the clients that are getting served by each member of staff.
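A minimal C# sketch of this bookkeeping, following the three-queue model above (the class and method names are my own, not from the question):

```csharp
using System;
using System.Collections.Generic;

class StoreQueues
{
    private readonly Queue<string>[] queues;

    public StoreQueues(int staffCount)
    {
        queues = new Queue<string>[staffCount];
        for (int i = 0; i < staffCount; i++) queues[i] = new Queue<string>();
    }

    // Client books a specific staff member (0-based index).
    // Returns the rank: 0 means served when the next interval starts.
    public int JoinSpecific(string client, int staff)
    {
        int rank = queues[staff].Count;
        queues[staff].Enqueue(client);
        return rank;
    }

    // Client books "first available": join the currently shortest queue.
    public int JoinFirstAvailable(string client)
    {
        int shortest = 0;
        for (int i = 1; i < queues.Length; i++)
            if (queues[i].Count < queues[shortest].Count) shortest = i;
        return JoinSpecific(client, shortest);
    }

    // At each fixed interval every staff member serves one client.
    public void Tick()
    {
        foreach (var q in queues)
            if (q.Count > 0) Console.WriteLine("Now serving " + q.Dequeue());
    }
}
```

The key simplification is that a "first available" client is committed to a concrete queue at booking time, so their position can be reported immediately.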

Related

Which data structure is suitable: queue or stack?

I have been given the task of developing an application that will manage the appointments of patients coming in for treatment. There are two types of patients, normal patients and emergency patients. The system takes each patient's information and decides the patient's turn based on the patient type.
If the patient is in the normal category, they get an appointment based on arrival order, while an emergency patient gets an appointment earlier than normal patients. All normal and emergency patients are added to the same system, but appointments are given differently: every emergency patient gets an appointment immediately.
Which data structure, stack or queue, would be the most efficient choice for developing the required application?
Note: you are not allowed to use any other data structures, such as a priority queue or a double-ended queue.
To my mind, a queue is the better option for calling normal patients because it follows FIFO. But how do I deal with emergency patients without using a priority queue?
Are you prohibited from using two queues? If not, use two: one for emergency patients, the other for regular patients.
Which data structure, stack or queue, would be the most efficient choice for developing the required application?
A stack won't be of much help since that's a LIFO ("last in, first out") container. You should use a queue.
For the simple case, you could use two separate regular FIFO ("first in, first out") queues. When it's time to pick a patient:
If the emergency queue is not empty, pick one from that queue.
Otherwise, pick from the other queue.
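A minimal sketch of that two-queue approach in C# (the `Register`/`NextPatient` names are illustrative, not part of the question):

```csharp
using System;
using System.Collections.Generic;

class PatientScheduler
{
    private readonly Queue<string> emergency = new Queue<string>();
    private readonly Queue<string> normal = new Queue<string>();

    public void Register(string patient, bool isEmergency)
    {
        if (isEmergency) emergency.Enqueue(patient);
        else normal.Enqueue(patient);
    }

    // Emergency patients always go first; normal patients in arrival order.
    public string NextPatient()
    {
        if (emergency.Count > 0) return emergency.Dequeue();
        if (normal.Count > 0) return normal.Dequeue();
        throw new InvalidOperationException("No patients waiting.");
    }
}
```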
A more generic solution is to use a priority queue.
Upon initial inspection of a new patient, you assign a priority to the patient, which can be a number from 0-100, for example. The patient is then placed in the priority queue, which automatically puts the patient before all other patients with a lower priority.
When it's time to pick a patient you will always pick the first in line since the patients are lined up in the queue based on the priority of the matter.
A second sorting parameter for the priority queue could be the time of arrival, so that two patients that have been assigned the same priority are ordered according to when they came to seek help.
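A sketch of that priority-queue variant, assuming .NET 6+'s built-in `PriorityQueue<TElement, TPriority>`; the 0-100 urgency scale and arrival-time tie-break come from the answer, everything else is illustrative:

```csharp
using System.Collections.Generic;

class TriageQueue
{
    // PriorityQueue dequeues the smallest priority value first, so we store
    // (-urgency, arrivalOrder): higher urgency wins, earlier arrival breaks ties.
    private readonly PriorityQueue<string, (int NegUrgency, long Arrival)> queue = new();
    private long arrivalCounter;

    public void Register(string patient, int urgency)   // urgency in 0..100
        => queue.Enqueue(patient, (-urgency, arrivalCounter++));

    public string NextPatient() => queue.Dequeue();
}
```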

Solana - leader validator and incrementing field

As I understand it, Solana will elect a leader each round and there will be multiple validators handling the transactions independently. The leader will then consolidate all the transactions.
From this understanding, I'm curious how Solana actually handles programs that increment a field. So let's say we have a counter field which increases by 1 each time the program is called. What happens if 10 different users call this program at the same time? How does this work if the 10 transactions are handled by ten validators independently? For example, at the start of the round counter=50, and during the round ten different validators handle the transactions separately, so each validator will set counter=51. When the leader gets back all the txns, it will say counter=51. What happens in this scenario?
I feel like there is something missing in my assumptions.
So my understanding here seems to be incorrect. It is actually the leader who executes the transactions, and the validators who verify them.
Source
Page 2 - Section 3 - https://solana.com/solana-whitepaper.pdf
As shown in Figure 1, at any given time a system node is designated as Leader to generate a Proof of History sequence, providing the network global read consistency and a verifiable passage of time. The Leader sequences user messages and orders them such that they can be efficiently processed by other nodes in the system, maximizing throughput. It executes the transactions on the current state that is stored in RAM and publishes the transactions and a signature of the final state to the replications nodes called Verifiers.
Verifiers execute the same transactions on their copies of the state, and publish their computed signatures of the state as confirmations. The published confirmations serve as votes for the consensus algorithm.
The "recent blockhash" is another important part of this. A transaction references a recent blockhash, which is part of the Proof of History sequence. If two transactions reference the same blockhash, they are counted as duplicates by the network, even if they come from two different users.
More information can be found at https://docs.solana.com/developing/programming-model/transactions#recent-blockhash
There is only one PoH generator (block producer) at a time; the other nodes are just validating.
I cannot comment on Jon C's answer, but it is wrong: you can use the same recent blockhash, otherwise there is no way Solana could handle 50,000 TPS when the block time is around 0.4 s.

Link saturation/capacity optimization algorithm

My question is related to telecommunications, but it's still a pure programming challenge, since I'm using a Soft-switch.
Goal:
create an algorithm, used by the call routing engine, to fully saturate the available link capacity with traffic sold at the highest possible rate
Situation:
there is a communications link (E1/T1) with a fixed capacity of 30 voice channels (1 channel = one voice call between end users, so we can have at most 30 concurrent calls on each link)
the link has a fixed monthly running cost, so it's best when it's fully utilized all the time (the fixed cost divided over more minutes results in higher profit)
there are users "fighting" for link capacity by sending calls to the Call Routing Engine
each user can consume a random amount of link capacity at any given time; it's possible for one user to take the whole capacity at one time (i.e. peak hours) but consume no capacity in off-peak hours
each user has a different call rate per minute
ideal situation: the link is fully utilized (24/7/365) with calls made by the users with the highest call rate per minute
Available control:
the call routing engine can accept a call and send it over this link, or reject the call
Available data:
current link usage
user rate per minute
recent calls per minute per user
user call history (access is costly, but possible)
Example:
user A has a rate of 1 cent per minute, B 0.8 cents, C 0.7 cents
it's best to accept user A's calls and reject the others if user A can fill the full link capacity
BUT user A usually can't fill the whole link capacity, and we need to accept calls from others to fill the gap
we have no control over how many calls users will send at any given moment, so it's hard to plan which calls to accept and which to reject
Any ideas or suggested approach to this problem?
I suspect that the simplest algorithm you can come up with may be the best. For example, if you get a call from a type B or C user, simply check whether there are any type A calls competing for the capacity and, if not, accept the call.
The reasons why it may be best to go with the simplest approach:
It's easier!
Rejecting calls like this may not be allowed by the regulator, depending on the area.
If there really is a strong business opportunity here, then a VoIP solution is likely going to be easier, and if your client does not ask you to do this, someone else will likely do it anyway. VoIP as an alternative transport for the high-cost TDM legs of calls is a very common approach.
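A minimal C# sketch of that accept/reject check. The idea of reserving some headroom for the highest-rate user, and all names and numbers below, are my own assumptions layered on the answer, not something given in the question:

```csharp
using System;

class LinkRouter
{
    private const int Capacity = 30;   // E1 link: 30 voice channels
    private int channelsInUse;

    // Rough guess of how many channels the top-rate user (user A) tends to
    // need, e.g. derived from their recent calls-per-minute figure.
    public int ReservedForTopRate { get; set; } = 5;

    public bool TryAcceptCall(bool isTopRateUser)
    {
        int free = Capacity - channelsInUse;
        if (free <= 0) return false;                        // link saturated
        if (!isTopRateUser && free <= ReservedForTopRate)
            return false;                                   // keep headroom for user A
        channelsInUse++;
        return true;
    }

    public void OnCallEnded() => channelsInUse = Math.Max(0, channelsInUse - 1);
}
```

The reserve would have to be re-estimated continuously from the recent calls-per-minute data, since user A's demand swings between zero and the whole link.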

Writing a weighted load balancing algorithm

I have to write a weighted load balancing algorithm and I'm looking for some references. Is there a book you can suggest for understanding such algorithms?
Thanks!
A simple algorithm here isn't that complicated.
Let's say you have a list of servers with the following weights:
A 10
B 20
C 30
where a higher weight means the server can handle more traffic.
Just divide the amount of traffic sent to each server by the weight and sort smallest to largest. The server that comes out on top gets the user.
For example, let's say each server starts with 10 users; then the order is going to be:
C - 10 / 30 = 0.33
B - 10 / 20 = 0.50
A - 10 / 10 = 1.00
Which means the next 5 requests will go to server C. The 6th request will go to either C or B. The 7th will go to whichever one didn't handle the 6th.
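A minimal C# sketch of that pick-the-smallest-ratio rule (the class and method names are made up for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

class WeightedBalancer
{
    // server name -> (weight, requests sent so far)
    private readonly Dictionary<string, (int Weight, int Sent)> servers = new();

    public void AddServer(string name, int weight, int alreadySent = 0)
        => servers[name] = (weight, alreadySent);

    // Pick the server with the smallest sent/weight ratio and record the request.
    public string NextServer()
    {
        string pick = servers
            .OrderBy(kv => (double)kv.Value.Sent / kv.Value.Weight)
            .First().Key;
        var (weight, sent) = servers[pick];
        servers[pick] = (weight, sent + 1);
        return pick;
    }
}
```

Seeding it with `AddServer("A", 10, 10)`, `AddServer("B", 20, 10)`, `AddServer("C", 30, 10)` reproduces the example above: the next five calls to `NextServer()` return C, and over many requests the traffic settles at the 1:2:3 weight ratio.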
To complicate things, you might want the balancer to be more intelligent, in which case it needs to keep track of how many requests are currently being serviced by each server and decrement the count when a request is completely fulfilled.
Further complications include adding session stickiness, which means the balancer has to inspect each request for the session ID and keep track of where that session went last time.
On the whole, if you can, just buy a product from a company that already does this.
Tomcat's balancer app and the tutorial here serve as good starting points.

Google transit is too idealistic. How would you change that?

Suppose you want to get from point A to point B. You use Google Transit directions, and it tells you:
Route 1:
1. Wait 5 minutes
2. Walk from point A to Bus stop 1 for 8 minutes
3. Take bus 69 till stop 2 (15 minutes)
4. Wait 2 minutes
5. Take bus 6969 till stop 3 (12 minutes)
6. Walk from stop 3 to point B (3 minutes).
Total time = 5 wait + 40 minutes.
Route 2:
1. Wait 10 minutes
2. Walk from point A to Bus stop I for 13 minutes
3. Take bus 96 till stop II (10 minutes)
4. Wait 17 minutes
5. Take bus 9696 till stop 3 (12 minutes)
6. Walk from stop 3 to point B (8 minutes).
Total time = 10 wait + 50 minutes.
All in all, Route 1 looks way better. However, what really happens in practice is that bus 69 is 3 minutes behind due to traffic, and I end up missing bus 6969. The next bus 6969 comes at least 30 minutes later, which amounts to 5 wait + 70 minutes (including a 30-minute wait in the cold or heat). Wouldn't it be nice if Google actually advertised this possibility? My question now is: what is a better algorithm for displaying the top 3 routes, given uncertainty in the schedule?
Thanks!
How about adding weightings that express a level of uncertainty for different types of journey element?
Bus services in Dublin City are notoriously untimely, so you could add a 40% margin of error to anything involving the Dublin Bus schedule, giving a best and worst case scenario. You could also factor in the chronic traffic delays at rush hour. Then a user could see whether they have, say, a 20% or an 80% chance of actually making a connection.
You could sort "best" journeys by the "most probably correct" factor, and include this data in the results shown to the user.
My two cents :)
For the UK rail system, each interchange node has an associated 'minimum transfer time to allow'. The interface to the route planner here then has an Advanced option allowing the user to either accept the default, or add half hour increments.
In your example, setting a 'minimum transfer time to allow' of, say, 10 minutes at step 2 would prevent Route 1 as shown from being suggested. Of course, this means that the minimum possible journey time is increased, but that's the trade-off.
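A sketch of that filter in C#, assuming each candidate route is simply a list of legs with departure and arrival times (the `Leg` record and the names here are illustrative assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

record Leg(string Description, DateTime Departure, DateTime Arrival);

static class TransferFilter
{
    // Keep only routes where every transfer leaves at least the user's
    // "minimum transfer time to allow" between arrival and the next departure.
    public static IEnumerable<List<Leg>> Acceptable(
        IEnumerable<List<Leg>> candidateRoutes, TimeSpan minTransfer)
        => candidateRoutes.Where(legs =>
               legs.Zip(legs.Skip(1), (a, b) => b.Departure - a.Arrival)
                   .All(slack => slack >= minTransfer));
}
```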
If you take uncertainty into account, then there is no longer a "best route"; instead there can be a "best strategy" that minimizes the total time in transit. However, it can't be represented as a linear sequence of instructions; it takes the form of a general plan, i.e. "go to bus station X, wait until 10:00 for bus Y, and if it does not arrive, walk to station Z...". This would be notoriously difficult to present to the user (in addition to being computationally expensive to produce).
For a fixed sequence of instructions it is possible to calculate the probability that it actually works out; but what level of certainty would users want to accept? Would you be content with, say, an 80% success rate? When you then miss one of your connections, the house of cards falls down in the worst case, e.g. if you miss a train that leaves only every second hour.
Many years ago I wrote a similar program to calculate long-distance bus journeys in Finland, and I just reported the transfer times, assuming every bus was on schedule. Then basically every plan with less than about 15 minutes of transfer time was disregarded because it was too risky (there were sometimes only one or two long-distance buses per day on a given route).
Empirically. Record the actual arrival times vs scheduled arrival times, and compute the mean and standard deviation for each. When considering possible routes, calculate the probability that a given leg will arrive late enough to make you miss the next leg, and make the average wait time P(on time)*T(first bus) + (1-P(on time))*T(second bus). This gets more complicated if you have to consider multiple legs, each of which could be late independently, and multiple possible next legs you could miss, but the general principle holds.
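A C# sketch of that calculation, assuming leg delays are roughly normally distributed with the recorded mean and standard deviation (that distributional assumption, and every name below, is mine, not the answer's):

```csharp
using System;

static class ConnectionRisk
{
    // Probability of making the connection: the incoming leg's delay must be
    // smaller than the transfer slack. Delay ~ Normal(meanDelay, sd), estimated
    // from the recorded actual-vs-scheduled arrival times.
    public static double ProbOnTime(double slackMin, double meanDelayMin, double sdMin)
        => NormalCdf((slackMin - meanDelayMin) / sdMin);

    // Average wait = P(on time) * T(first bus) + (1 - P(on time)) * T(second bus).
    public static double AverageWait(double pOnTime, double waitFirstMin, double waitSecondMin)
        => pOnTime * waitFirstMin + (1 - pOnTime) * waitSecondMin;

    // Standard normal CDF via the Abramowitz & Stegun erf approximation
    // (there is no built-in erf in .NET's Math class).
    private static double NormalCdf(double z)
    {
        double x = Math.Abs(z) / Math.Sqrt(2.0);
        double t = 1.0 / (1.0 + 0.3275911 * x);
        double erf = 1.0 - ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
                             - 0.284496736) * t + 0.254829592) * t * Math.Exp(-x * x);
        return z < 0 ? 0.5 * (1.0 - erf) : 0.5 * (1.0 + erf);
    }
}
```

With made-up numbers, a 2-minute scheduled transfer against a leg whose delay averages 3 minutes with a 4-minute standard deviation gives `ProbOnTime(2, 3, 4)` of roughly 0.40, i.e. the "best" route misses its connection more often than not.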
Catastrophic failure should be the first check.
This is especially important when you are trying to connect to the last bus of the day, which is a critical part of the route. The rider needs to know that this is what is happening, so he doesn't get too distracted and knows the risk.
After that it could evaluate worst-case single misses.
And then, if you really wanna get fancy, take a look at the crime stats for the neighborhood or transit station where the waiting point is.
