How to spread processes over time to get the minimum number of "collisions" - algorithm

I'm developing a scheduler for an embedded system.
This scheduler will call each process every X milliseconds; this time can be configured separately for each process, of course.
Everything is coded and calls every process as it should; the problem I'm facing is this:
Imagine I set 4 processes to be called every 10, 15, 5 and 30 milliseconds respectively:
A: 10ms
B: 15ms
C: 5ms
D: 30ms
The resulting calling over time will be:
t (ms)  0    5    10   15   20   25   30   35 ...
A                 A         A         A
B                      B              B
C            C    C    C    C    C    C    C
D                                     D
(processes being called; each column is a 5 ms tick, and assuming each process is first called one period after start, everything lines up again at t = 30 ms)
The problem is that when 30 ms is reached, all processes are called at the same moment (one after another), and this can delay their correct execution from that point on.
This can be solved by adding an initial delay to each process (while preserving its calling frequency), so their call times stop lining up. My problem is that I don't know how to calculate the delay to apply to each process so that the number of collisions is minimized.
Is there any known algorithm for this, or some mathematical guidance?
Thank you.

Given a set of intervals, you can find the time at which the start times would coincide (assuming no offsets) by finding the least common multiple as mentioned by Jason in a comment to your post. You can find the LCM by doing the prime factorization of the intervals for a set of tasks.
It seems, though, that the greatest common divisor (or greatest common factor, GCF) might be the most useful number to compute. That number gives you the interval at which the collision pattern repeats. In your example, the GCF is 5. With a GCF of 5, it is possible to add an initial offset of 1, 2, 3, etc. to each task to avoid overlapping start times. Thus, with a GCF of 5, you can have up to 5 tasks whose start times never overlap. With a GCF of 20, you could have up to 20 tasks scheduled with no overlapping start times. If two (or more) tasks have relatively prime intervals (GCF = 1), then an overlap will definitely occur no matter what offsets you use for those tasks, as long as the intervals never change.
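For concreteness, here is a minimal sketch (in Python, purely illustrative; the function name is made up) of computing the GCF and handing out one distinct offset per task:

    from functools import reduce
    from math import gcd

    def assign_offsets(intervals_ms):
        # Give each task a distinct initial offset modulo g = GCF of all intervals.
        # Since every interval is a multiple of g, each task's call times stay
        # congruent to its offset modulo g, so distinct offsets modulo g
        # guarantee the call times never coincide; at most g tasks fit.
        g = reduce(gcd, intervals_ms)
        if len(intervals_ms) > g:
            raise ValueError(f"only {g} collision-free offsets exist")
        return [(interval, offset) for offset, interval in enumerate(intervals_ms)]

    # The example from the question: GCF(10, 15, 5, 30) = 5, so up to 5 tasks fit.
    print(assign_offsets([10, 15, 5, 30]))  # [(10, 0), (15, 1), (5, 2), (30, 3)]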

There is no perfect solution for this; they will collide from time to time.
I would suggest adding a tiny (0.01-0.1 ms) random value to each cycle length, so that in the long run they will only rarely be called at the same time.
Alternatively, if you have 5 ms scheduler granularity, the first thread is always called at X+1 ms, the second at X+2 ms, etc., so that each one is always guaranteed 1 ms of uninterrupted run (if you have 10 threads, it would be X+0.5, X+1, X+1.5). But this might get quite tricky to implement.
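For illustration, a minimal sketch of the random-jitter idea (the function name is just a placeholder; the jitter range mirrors the 0.01-0.1 ms suggestion above):

    import random

    # Next deadline = nominal period plus a tiny random jitter, so that
    # initially aligned phases drift apart over time.
    def next_call_time(now_ms, period_ms):
        return now_ms + period_ms + random.uniform(0.01, 0.1)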

This kind of problem relates directly to the domain of real-time programming and scheduling algorithms. I took a class on this subject in college, and if I remember correctly, rate-monotonic scheduling is the kind of algorithm you are looking for.
The idea is that you assign priorities to jobs that are inversely proportional to their period, i.e. the smaller the period, the higher the priority. This works best if you can interrupt your jobs and resume them later.
There are other alternatives, though, like EDF (earliest deadline first), but these are dynamic scheduling algorithms (i.e. the priorities are assigned during execution).
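As a tiny illustration of the rate-monotonic priority assignment (just the priority ordering, not a full preemptive scheduler), using the periods from the question:

    # Rate-monotonic assignment: the shorter the period, the higher the priority.
    periods_ms = {"A": 10, "B": 15, "C": 5, "D": 30}
    rm_priority_order = sorted(periods_ms, key=periods_ms.get)
    print(rm_priority_order)  # ['C', 'A', 'B', 'D'] - C gets the highest priority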

The easy solution is to change the schedule on which you call the subroutines. I.e. instead of 5, 10, 15, and 30 ms, can you live with e.g. 5, 15, 15, and 30 ms? Then you can use the following pattern (A = the 5 ms proc, B, C = the 15 ms procs, D = the 30 ms proc):
t (ms)  0    5    10   15   20   25   30   35   40   45 ...
A       A    A    A    A    A    A    A    A    A    A
B            B              B              B
C                 C              C              C
D                      D                             D
(B and C keep a 15 ms period with offsets of 5 and 10 ms, D a 30 ms period with an offset of 15 ms, so besides A at most one process fires per tick)
I'm sure you can generalize this idea, but it works only if you can actually change the static intervals.
If you can't change the intervals, and you also need to obey them strictly, then you are kind of out of luck, as there are no parameters to change :)
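For what it's worth, here is a quick sanity check of the staggered pattern above (the (period, offset) pairs are my reading of the diagram): it verifies that no 5 ms tick ever fires more than one of B, C, D alongside A.

    # period and offset in ms, matching the pattern above
    schedule = {"A": (5, 0), "B": (15, 5), "C": (15, 10), "D": (30, 15)}

    for t in range(0, 300, 5):  # simulate 300 ms of 5 ms ticks
        fired = [p for p, (period, offset) in schedule.items()
                 if t >= offset and (t - offset) % period == 0]
        assert sum(p != "A" for p in fired) <= 1  # at most one of B, C, D per tick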

Related

SWI Prolog CLP(FD) scheduling

I am solving a scheduling task in SWI-Prolog using the CLP(FD) library. Since this is the first time I am solving something more serious than the classic sendmory puzzle (SEND+MORE=MONEY), I could probably use some good advice from more experienced users. Let me briefly describe the domain/the task.
Domain
I have a "calendar" for a month. Every day there are 2 services for the whole day and 2 for the whole night (long 12-hour services). There are also, Mon-Fri only, 10 more 8-hour services (short services).
The domain constraints are, obviously:
There are no consecutive services (no day service right after a night one and vice versa, and no short day service after a night one)
One worker can serve up to 2 consecutive night services in a row
Each worker has a limited amount of hours for a month
There are 19 workers available
My approach is as following:
Variables
For every field in the calendar I have a variable defined:
DxD_y, where x is the number of the day and y is 1 or 2, for the long day services
DxN_y, where x is the number of the day and y is 1 or 2, for the long night services
DxA_y, where x is the number of the day and y is 0..9, for the short day services
SUM_x, where x is a worker number (1..19), denoting the sum of hours for that worker
Each of the D variables has a domain 1..19. To simplify it for now, SUM_X #=< 200 for each X.
Constraints
all_distinct/1 over all variables of the same day - each worker can serve only one service per day
global_cardinality/2 to count the number of occurrences of each worker number 1..19 in the lists of long and short services - this defines the variables LSUM_X and SSUM_X, the numbers of occurrences of worker X in long or short services
SUM_X #= 12*LSUM_X + 8*SSUM_X for each worker
DxN_y #\= D(x+1)D_z - to avoid a long day service right after a night one
a bunch of similar constraints like the one above to cover all the domain constraints
DxN_y #= D(x+1)N_y #==> DxN_y #\= D(x+2)N_y - to avoid three consecutive night services; there is such a constraint for each combination of x and y
Notes
All the variables and constraints are stated directly in the .pl script. I do not use Prolog predicates to generate the constraints, because I have a model in a .NET application (frontend) and I can easily generate all of it from the .NET code into Prolog code.
I think my approach is overall good. Running the scheduler on some smaller example works well (7 days, 4 long services, 1 short service, 8 workers). Also I was able to get some valid results on the full blown case - 30 days, 19 workers, 4 long and 10 short services per day.
However, I am not completely satisfied with the current status. Let me explain why.
Questions
I read some articles about modelling scheduling problems, and some of them use a somewhat different approach - introducing boolean variables for each combination of my variables (calendar fields) and workers, flagging whether the worker is assigned to a particular calendar field. Is this a better approach?
If you compare the workers' total hour limits with the total hours in the calendar, you find that the workers are not 100% utilized. But the solver most likely builds the solution this way: utilize the first worker 100%, then grab the next one. So the SUMs in the solution appear like [200,200,200...200,160,140,80,50,0]. I would be glad if the workers were more or less equally utilized. Is there some simple/efficient way to achieve that? I considered defining the differences between workers and minimizing them, but that sounds very complex to me and I am afraid it would take ages to compute. I use labeling([random_variable(29)], Vars), but it only reorders the variables, so the gaps are still there, only in a different order. Probably I want the labeling procedure to take the values in some order other than straight up or down (in some pseudo-random way).
How should I order the constraints? I think the order of the constraints matters with respect to the efficiency of the labeling.
How to debug/optimize the performance of labeling? I hoped solving this type of task would take a few seconds, or at most a couple of minutes with very tight conditions on the sums. For example, labeling with the bisect option took ages.
I can provide some more code examples if needed.
That's a lot of questions, let me try to address some.
... introducing only boolean variables for each combination of my variable (calendar field) and worker to flag if the worker is assigned to a particular calendar field. Is this a better approach?
This is typically done when a MILP (Mixed Integer Linear Programming) solver is used, where higher-level concepts (such as alldifferent etc) have to be expressed as linear inequalities. Such formulations then usually require lots of boolean variables. Constraint Programming is more flexible here and offers more modelling choices, but unfortunately there is no simple answer, it depends on the problem. Your choice of variables influences both how hard it is to express your problem constraints, and how efficiently it solves.
So the SUMs in the solution appears like [200,200,200...200,160,140,80,50,0,]. I would be glad if the workers will be more or less equally utilized. Is there some simple/efficient way how to achieve that?
You already mention the idea of minimizing differences, and this is how such a balancing requirement would usually be implemented. It does not need to be complicated. If originally we have this unbalanced first solution:
?- length(Xs,5), Xs#::0..9, sum(Xs)#=20, labeling(Xs).
Xs = [0, 0, 2, 9, 9]
then simply minimizing the maximum of list elements will already give you (in combination with the sum-constraint) a balanced solution:
?- length(Xs,5), Xs#::0..9, sum(Xs)#=20, Cost#=max(Xs), minimize(labeling(Xs),Cost).
Xs = [4, 4, 4, 4, 4]
Cost = 4
You could also minimize the difference between minimum and maximum:
?- length(Xs,5), Xs#::0..9, sum(Xs)#=20, Cost#=max(Xs)-min(Xs), minimize(labeling(Xs),Cost).
Xs = [4, 4, 4, 4, 4]
Cost = 0
or even the sum of squares.
[Sorry, my examples are for ECLiPSe rather than SWI/clpfd, but should show the general idea.]
How should I order the constraints? I think the order of constraints matters with respect to efficiency of the labeling.
You should not worry about this. While it might have some influence, it is too unpredictable and depends too much on implementation details to allow for any general recommendations. This is really the job of the solver implementer.
How to debug/optimize performance of labeling?
For realistic problems, you will often need (a) a problem-specific labeling heuristic, and (b) some variety of incomplete search.
Visualization of the search tree or the search progress can help with tailoring heuristics. You can find some discussion of these issues in chapter 6 of this online course.

Density of time events

I am working on an assignment where I am supposed to compute the density of an event. Let's say that a certain event happens 5 times within a few seconds; that would mean it has a higher density than if it were to happen 5 times over several hours.
I have the times at which each occurrence of the event happens.
I was first thinking about computing the elapsed time between each two successive events and then play with the average and mean of these values.
My problem is that I do not know how to accurately represent this notion of density through mathematics. Let's say that I have 5 events happening really close to each other, and then a long break, and then again 5 events happening really close to each other. I would like to be able to represent this as high density. How should I go about it?
In the last example, I understand that my mean won't be truly representative but that my standard deviation will show that. However, how could I have a single density value (let's say between 0 and 1) with which I could rank different events?
Thank you for your help!
I would try the harmonic mean, which represents the rate at which your events happen while still giving you an averaged time value. For intervals x_1, ..., x_n between successive events, it is defined by:
H = n / (1/x_1 + 1/x_2 + ... + 1/x_n)
I think its behaviour is close to what you expect, as it measures what you want, though not between 0 and 1 and with inverse tendencies (small values mean dense, large values mean sparse). Let us go through a few of your examples:
~5 events in an hour. Let us suppose for simplicity there are 10 minutes between events, i.e. six 10-minute intervals. Then we have H = 6 / (6 * 1/10) = 10.
~5 events in 10 minutes, then nothing until the end of the hour (50 minutes). Let us suppose all short intervals are 2.5 minutes, then H = 6 / (5/2.5 + 1/50) = 6 * 50 / 101 = 2.97
~5 events in 10 minutes, but this cycle restarts every half hour, so we have 20 minutes as the last interval instead of 50. Then we get H = 6 / (5/2.5 + 1/20) = 6 * 20 / 41 = 2.93.
As you can see, the effect of the longer and rarer intervals in a set is diminished by the fact that we use inverses, giving less weight to the "in between bursts" behaviour. You can also compare behaviours with the same "burst density" that do not happen at the same frequency, and you will get numbers that are close but whose ordering still reflects this difference.
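In code, the three worked examples look like this (a small sketch, with the intervals in minutes as above):

    def harmonic_mean(intervals):
        # Harmonic mean of inter-event intervals: small = dense, large = sparse.
        return len(intervals) / sum(1.0 / x for x in intervals)

    print(harmonic_mean([10] * 6))          # 10.0  - evenly spread over the hour
    print(harmonic_mean([2.5] * 5 + [50]))  # ~2.97 - one burst, then a long break
    print(harmonic_mean([2.5] * 5 + [20]))  # ~2.93 - bursts every half hour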
For density to make sense you need to define 2 things:
the range where you look at it,
and the unit of time
After that you can say for example, that from 12:00 to 12:10 the density of the event was an average of 10/minute.
What makes sense in your case obviously depends on your input data. If your measurement lasts for 1 hour and you have millions of entries, then seconds or milliseconds are probably the better choice of unit. If you measure for a week and have only a few entries, then a day is a better unit.
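As a minimal sketch of this definition (the names are illustrative; times in seconds):

    def density(event_times, start, end, unit=60.0):
        # Events per 'unit' seconds within the half-open range [start, end).
        count = sum(1 for t in event_times if start <= t < end)
        return count / ((end - start) / unit)

    # e.g. 100 events between 12:00:00 and 12:10:00 (600 s) -> 10.0 events/minute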

load balancing algorithms - special example

Let's pretend I have two buildings in which I can build different units.
A building can only build one unit at a time, but it has a FIFO queue of max 5 units, which will be built in sequence.
Every unit has a build time.
I need to know the fastest way to get my units built, considering the units already in the build queues of my buildings.
"Famous" algorithms like round-robin don't work here, I think.
Are there any algorithms which can solve this problem?
This reminds me a bit of StarCraft :D
I would just add an integer to each building which represents the time it is busy.
Of course you have to update this variable once per time unit. (Time units are "s" here, for seconds.)
So let's say we have a building and we submit 3 units to it, each taking 5 s to complete, which sums up to 15 s total. We are at time = 0.
Then we have another building where we submit 2 units that need 6 s each to complete.
So we can have a table like this:
Time 0
Building 1: 3 units, 15 s to complete.
Building 2: 2 units, 12 s to complete.
Time 1
Building 1: 3 units, 14 s to complete.
Building 2: 2 units, 11 s to complete.
Now say we want to add another unit that takes 2 s. We simply loop through the buildings and pick the one with the lowest time to complete - in this case building 2, whose total becomes 11 s + 2 s = 13 s. This leads to time 2:
Time 2
Building 1: 3 units, 13 s to complete.
Building 2: 3 units, 12 s to complete.
...
Time 5
Building 1: 2 units, 10 s to complete (5 s are over, the first unit pops out).
Building 2: 3 units, 9 s to complete.
And so on.
Of course you have to take care of the upper bounds of your production facilities: if a building already has 5 elements queued, don't assign to it; pick the next building with the lowest time to complete.
I don't know if you can implement this easily in your engine, or whether it even supports some kind of time units.
This just means updating all production facilities once per time unit, O(n) where n is the number of buildings that can produce something. Submitting a unit takes O(1), assuming you keep the buildings sorted by time to complete, lowest first - just a first-element lookup. You then have to re-sort the list after manipulating the queues, e.g. after cancelling or adding units.
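A minimal sketch of this bookkeeping (class and function names are placeholders, not from any engine API):

    from collections import deque

    MAX_QUEUE = 5

    class Building:
        def __init__(self, name):
            self.name = name
            self.queue = deque()   # remaining build time of each queued unit
            self.busy_time = 0.0   # sum of the queue = seconds until idle

        def tick(self, dt=1.0):
            # Advance time by dt; only the front unit is actually being built.
            if self.queue:
                self.queue[0] -= dt
                self.busy_time -= dt
                if self.queue[0] <= 0:
                    self.queue.popleft()  # the finished unit pops out

    def submit(buildings, build_time):
        # Pick the non-full building with the lowest time to complete.
        open_buildings = [b for b in buildings if len(b.queue) < MAX_QUEUE]
        if not open_buildings:
            return None  # all queues full
        best = min(open_buildings, key=lambda b: b.busy_time)
        best.queue.append(build_time)
        best.busy_time += build_time
        return best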
Otherwise, amit's answer seems possible, too.
This is an NP-complete problem (proof at the end of the answer), so your best hope of finding the ideal solution is trying all possibilities (2^n of them, where n is the number of tasks).
A possible heuristic was suggested in a comment (and improved in the comments by AShelly): sort the tasks from biggest to smallest and put them in one queue; each building then takes the next element from the queue whenever it finishes its current one.
This is of course not always optimal, but I think it will give good results in most cases.
Proof that the problem is NP-complete:
Let S = {u | u is a unit that needs to be produced} (S is the set containing all 'tasks').
Claim: if a perfect split is possible (both queues finish at the same time), it is optimal. Let this finishing time be HalfTime.
This is true because the two queues together must do all the work, so in any schedule at least one queue finishes at t >= HalfTime; no schedule can beat the perfect split.
Proof:
Assume we had an algorithm A producing the best solution in polynomial time. Then we could solve the partition problem in polynomial time with the following algorithm:
1. Run A on the input.
2. If the 2 queues finish exactly at HalfTime, return True.
3. Else, return False.
This solves the partition problem because of the claim: if a perfect partition exists, A will find it, since it is optimal. Steps 1, 2 and 3 all run in polynomial time (step 1 by assumption, steps 2 and 3 trivially). So the suggested algorithm would solve the partition problem in polynomial time; thus our problem is NP-complete.
Q.E.D.
Here's a simple scheme:
Let U be the list of units you want to build, and F the set of factories that can build them. For each factory, track the total time-til-complete, i.e. how long until its queue is completely empty.
Sort U by decreasing time-to-build; maintain the sort order when inserting new items.
At the start, and at the end of any time tick in which a factory completes a unit or runs out of work:
Make a ready list of all the factories with space in their queue
Sort the ready list by increasing time-til-complete
Get the factory that will be done soonest
Take the first item from U and add it to that factory
Repeat until U is empty or all queues are full.
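A compact sketch of this scheme (in Python; the queue capacity of 5 comes from the question, everything else is illustrative):

    def assign_jobs(build_times, factories, queue_cap=5):
        # factories: dict name -> [time_til_complete, units_queued]; mutated in place.
        pending = sorted(build_times, reverse=True)  # decreasing time-to-build
        while pending:
            ready = [f for f, (_, used) in factories.items() if used < queue_cap]
            if not ready:
                break  # all queues are full; stop until a factory frees up
            soonest = min(ready, key=lambda f: factories[f][0])
            job = pending.pop(0)  # first (longest) remaining item of U
            factories[soonest][0] += job
            factories[soonest][1] += 1
        return pending  # jobs that did not fit yet

    factories = {"F1": [0, 0], "F2": [0, 0]}
    leftover = assign_jobs([5, 5, 5, 6, 6, 2], factories)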
Googling "minimum makespan" may give you some leads into other solutions. This CMU lecture has a nice overview.
It turns out that if you know the set of work ahead of time, this problem is exactly multiprocessor scheduling, which is NP-complete. Apparently the algorithm I suggested is called "Longest Processing Time" (LPT), and it will always give a result no longer than 4/3 of the optimal time.
If you don't know the jobs ahead of time, it is a case of online Job-Shop Scheduling
The paper "The Power of Reordering for Online Minimum Makespan Scheduling" says:
for many problems, including minimum makespan scheduling, it is reasonable to not only provide a lookahead to a certain number of future jobs, but additionally to allow the algorithm to choose one of these jobs for processing next and, therefore, to reorder the input sequence.
Because you have a FIFO on each of your factories, you essentially do have the ability to buffer the incoming jobs: you can hold them until a factory is completely idle, instead of trying to keep all the FIFOs full at all times.
If I understand the paper correctly, the upshot of the scheme is to:
Keep a fixed-size buffer of incoming jobs. In general, the bigger the buffer, the closer to ideal scheduling you get.
Assign a weight w to each factory according to a given formula, which depends on the buffer size. In the case where buffer size = number of factories + 1, use weights of (2/3, 1/3) for 2 factories and (5/11, 4/11, 2/11) for 3.
Once the buffer is full, whenever a new job arrives, remove the job with the least time-to-build from the buffer and assign it to a factory with time-to-complete < w*T, where T is the total time-to-complete of all factories.
If there are no more incoming jobs, schedule the remainder of the jobs in U using the first algorithm I gave.
The main problem in applying this to your situation is that you don't know when (if ever) there will be no more incoming jobs. But perhaps just replacing that condition with "if any factory is completely idle", and then restarting, will give decent results.

Why does task X appear two times for unit 0 at clock cycles 4 and 5?

In the image below, why does task X appear two times for unit 0 at clock cycles 4 and 5?
I have to make a program for the arrangement of the pipeline, but I need to know why the above happens in order to complete it.
Is it just because the author wants it to repeat??
I'm pretty certain it just means that a task takes two clocks in unit 0 the second time through. The fact that it takes seven clocks in total alludes to this: 1 in unit0, 1 in unit1, 1 in unit2, 1 in unit3, 2 more in unit0, and finally 1 in unit4.
It may well just be a contrived example so that there was a conflict when shifting by one clock (the author had to do something to ensure that task 2 would catch up to task 1 and that seems the easiest solution) or unit0 may well be a non-linear processor of some sort.
Another example would have been trying to pump in a task at the point where the previous task was re-entering unit0.
What they're trying to show is that, given a maximum duration within a unit of N cycles in a pipeline, you have to limit your injections of work to one every N cycles to be sure of no conflict.
My bet (based on the small number of authors I know) would be on the author doing the minimal amount of work to describe the problem :-)

Weighted, load-balancing resource scheduling algorithm

A software application that I'm working on needs to be able to assign tasks to a group of users based on how many tasks they presently have, where the users with the fewest tasks are the most likely to get the next task. However, the current task load should be treated as a weighting, rather than an absolute order definition. IOW, I need to implement a weighted, load-balancing algorithm.
Let's say there are five users, with the following number of tasks:
A: 4
B: 5
C: 0
D: 7
E: 9
I want to prioritize the users for the next task in the order CABDE, where C is most likely to get the assignment and E, the least likely. There are two important things to note here:
The number of users can vary from 2 to dozens.
The number of tasks assigned to each user can vary from 1 to hundreds.
For now, we can treat all tasks as equal, though I wouldn't mind including task difficulty as a variable that I can use in the future - but this is purely icing on the cake.
The ideas I've come up with so far aren't very good in some situations. They might weight users too closely together if there are a large number of users, or they might fall flat if a user has no current tasks, or....
I've tried poking around the web, but haven't had much luck. Can anyone give me a quick summary of an algorithm that would work well? I don't need an actual implementation - I'll do that part - just a good description. Alternatively, is there a good web site that's freely accessible?
Also, while I certainly appreciate quality, this need not be statistically perfect. So if you can think of a good but not great technique, I'm interested!
As you point out, this is a load-balancing problem. It's not really a scheduling problem, since you're not trying to minimise anything (total time, number of concurrent workers, etc.). There are no special constraints (job duration, time clashes, skill sets to match etc.) So really your problem boils down to selecting an appropriate weighting function.
You say there are some situations you want to avoid, like user weightings that are too close together. Can you provide more details? For example, what's wrong with making the chance of assignment proportional to each user's spare capacity, normalised across all workers? You can visualise this as a sequence of blocks of different lengths (the tasks) being packed into a set of bins (the workers), where you're trying to keep the total height of the bins as even as possible.
With more information, we could make specific recommendations of functions that could work for you.
Edit: example load-balancing functions
Based on your comments, here are some example of simple functions that can give you different balancing behaviour. A basic question is whether you want deterministic or probabilistic behaviour. I'll give a couple of examples of each.
To use the example in the question - there are 4 + 5 + 0 + 7 + 9 = 25 jobs currently assigned. You want to pick who gets job 26.
1) Simple task farm. For each job, always pick the worker with the least jobs currently pending. Fast workers get more to do, but everyone finishes at about the same time.
2) Guarantee fair workload. If workers work at different speeds, and you don't want some doing more than others, then track the number of completed + pending jobs for each worker. Assign the next job to keep this number evenly spread (fast workers get free breaks).
3) Basic linear normalisation. Pick a maximum number of jobs each worker can have. Each worker's workload is normalised to that number. For example, if the maximum number of jobs per worker is 15, then 5 * 15 - 25 = 50 more jobs can be added before you reach capacity. So for each worker, the probability of being assigned the next job is
P(A) = (15 - 4)/50 = 0.22
P(B) = (15 - 5)/50 = 0.2
P(C) = (15 - 0)/50 = 0.3
P(D) = (15 - 7)/50 = 0.16
P(E) = (15 - 9)/50 = 0.12
If you don't want to use a specific maximum threshold, you could use the worker with the highest current number of pending jobs as the limit. In this case, that's worker E, so the probabilities would be
P(A) = (9 - 4)/20 = 0.25
P(B) = (9 - 5)/20 = 0.2
P(C) = (9 - 0)/20 = 0.45
P(D) = (9 - 7)/20 = 0.1
P(E) = (9 - 9)/20 = 0
Note that in this case, the normalisation ensures worker E can't be assigned any jobs - he's already at the limit. Also, just because C doesn't have anything to do doesn't mean he is guaranteed to be given a new job (it's just more likely).
You can easily implement the choice function by generating a random number r between 0 and 1 and comparing it to the cumulative boundaries. So if r < 0.25, A gets the job; if 0.25 <= r < 0.45, B gets the job; etc.
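Here is a minimal sketch of that choice function (the fixed capacity of 15 matches example 3; passing max(pending.values()) instead reproduces the highest-worker variant):

    import random

    def pick_worker(pending, capacity=15):
        # Weight each worker by remaining capacity, then draw proportionally.
        weights = {w: capacity - n for w, n in pending.items()}
        total = sum(weights.values())
        r = random.uniform(0, total)
        for worker, weight in weights.items():
            if r < weight:
                return worker
            r -= weight
        return worker  # guard against floating-point edge effects

    pending = {"A": 4, "B": 5, "C": 0, "D": 7, "E": 9}
    print(pick_worker(pending))  # "C" about 30% of the time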
4) Non-linear normalisation. Using a log function (instead of the linear subtraction) to weight your numbers is an easy way to get a non-linear normalisation. You can use this to skew the probabilities, e.g. to make it much more likely that workers without many jobs are given more.
The point is, the number of ways of doing this is practically unlimited. Which weighting function you use depends on the specific behaviour you're trying to enable. Hopefully that's given you some ideas you can use as a starting point.
