Maximise subset selection under constraints - algorithm

I am working on a scheduling problem for a team of volunteers. I have boiled my problem down to the following algorithmic problem:
I have a matrix with ~60 rows representing volunteers and ~14 columns representing days. Each entry is an integer in the range 0 to 3 inclusive representing how free the volunteer is on that day. I want to choose exactly 4 entries from each column (4 volunteers a day) such that (in order of importance):
A 0-entry is never chosen.
The workload is as spread out as possible (first give everyone one shift, then start giving out second shifts, etc. We can expect that most volunteers will only have one shift per 2-week period, and some may even have none.)
The sum over selected entries is maximised (volunteers get days that they prefer).
I want to output a decision matrix that has a 1 whenever a volunteer is chosen for a day, and 0 otherwise. I believe this is an instance of the nurse-rostering problem, so I'm not expecting a fast solution, but I just want to make a brute force algorithm that will work in a reasonable time for my ~60 person team. I'm just really not sure how to start tackling this problem. Is it suited for backtracking, or is there some way to calculate the best placement of each volunteer based on the distribution of his/her day-scores?
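One possible starting point is a single greedy pass: for each day, take the four available volunteers with the fewest shifts so far, breaking ties by preference; backtracking or local search could then repair the cases it gets wrong. A minimal sketch, with made-up names:
def assign(avail, per_day=4):
    # avail[v][d] in 0..3: how free volunteer v is on day d (0 = unavailable)
    n_vol, n_days = len(avail), len(avail[0])
    shifts = [0] * n_vol                           # shifts handed out so far
    chosen = [[0] * n_days for _ in range(n_vol)]  # the decision matrix
    for d in range(n_days):
        candidates = [v for v in range(n_vol) if avail[v][d] > 0]
        # fewest shifts first (spread the workload), then highest preference
        candidates.sort(key=lambda v: (shifts[v], -avail[v][d]))
        for v in candidates[:per_day]:
            chosen[v][d] = 1
            shifts[v] += 1
    return chosen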

Related

Algorithm development and optimization

I have this problem:
You need to develop a meal regime based on the data entered by the user. We have a database with meals and their prices (each meal is also flagged as breakfast or not, since what we eat for breakfast usually differs from lunch and dinner). The input is the amount of money (the currency is not important) and the number of days. The output must be a meal regime for the given number of days. Conditions:
The final price does not differ from the given amount by more than 3%.
Meals must not repeat more than once every 5 days.
I found this inefficient solution: we compute an average price per day = amount of money / number of days. Then, until we reach the given number of days, we iterate through each breakfast, then lunch, then dinner (three nested for loops), and if the price is not too far off, we end the search and add that day to the result list. So the design now looks like this:
while (daysCounter < days) {
    for (Meal breakfast : breakfasts) {
        for (Meal lunch : lunches) {
            for (Meal dinner : dinners) {
                // if the day's total is close enough to the average price,
                // add the day to the result and stop searching
            }
        }
    }
    daysCounter++;
}
It looks scary, although there is not a lot of data (about 150 meals). I suspect there is a more efficient solution. I have also thought about dynamic programming, but so far I have no idea how to apply it.
Dynamic programming won't work, because a necessary part of your state is the meals from the last 5 days, and the number of possibilities for that is astronomical.
However, it is likely that there are many solutions, not just a few, that it is easy to find a working solution by being greedy, and that an existing solution can be improved fairly easily.
So I'd solve it like this. Replace days with an array of meals. Take the total you want to spend. Divide it up among your meals, in proportion to the median price of the options for that meal. (Breakfast is generally cheaper than dinner.) Now add up those per-meal costs to get ideal running totals.
Then, for each meal slot, choose the meal you have not had in the last 5 days that brings the running total of what has been spent as close as possible to the ideal total. Choose all of your meals, one at a time.
This is the greedy approach. Normally, but not always, it will come fairly close to the target.
And now, to a maximum of n tries or until you have the target within 3%, pick a random meal, consider all options that are not eaten within the last or next 5 days, and randomly pick one (if any such options exist) that brings the overall amount spent closer to the target.
(For a meal plan that is likely to vary more over a long period, I'd suggest trying simulated annealing. That will produce more interesting results, but it will also take longer.)
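A rough sketch of that greedy pass, under a few assumptions of my own (meals given as (name, price) lists, the repeat rule read as "not within the last five days", and a fallback when no allowed meal exists):
import statistics
def plan_meals(budget, days, breakfasts, lunches, dinners):
    # breakfasts/lunches/dinners: lists of (name, price)
    slots = [breakfasts, lunches, dinners] * days        # one options-list per meal slot, in order
    medians = [statistics.median(p for _, p in options) for options in slots]
    scale = budget / sum(medians)
    ideal = 0.0                                          # ideal running total so far
    spent = 0.0
    plan = []
    for options, med in zip(slots, medians):
        ideal += med * scale
        recent = {name for name, _ in plan[-5 * 3:]}     # meals eaten in the last 5 days
        allowed = [m for m in options if m[0] not in recent] or options
        name, price = min(allowed, key=lambda m: abs(spent + m[1] - ideal))
        spent += price
        plan.append((name, price))
    return plan, spent
A random-improvement loop (or simulated annealing) can then swap individual meals as described above until the total lands within 3% of the budget.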

Algorithm needed - Benelux contest 2007

This question (the last one) appeared in the Benelux Algorithm Programming Contest 2007:
http://www.cs.duke.edu/courses/cps149s/spring08/problems/bapc07/allprobs.pdf
Problem Statement in short:
A company needs to figure out a strategy of when to buy, sell, or do nothing on each day of the input so as to maximise profit. Input is of the form:
6
4 4 2
2 9 3
....
....
It means input is given for 6 days.
Day 1: You get 4 shares, each priced at $4, and you can sell at most 2 of them.
Day 2: You get 2 shares, each priced at $9, and you can sell at most 3 of them.
.
We need to output the maximum profit which can be achieved.
I'm thinking about how to approach this problem. It seems to me that brute force will take too much time. Can this be converted to some DP problem, like 0-1 knapsack? Any help will be highly appreciated.
It can be solved by DP.
Suppose there are n days, and the total number of stock shares is m.
Let f[i][j] mean: at the ith day, with j shares remaining, the maximum profit is f[i][j].
Then f[i][j] = max(f[i-1][j+k] + k*price_per_day[i]) for 0 <= k <= maximum_shares_sell_per_day[i].
It can be further optimized: since f[i][...] only depends on f[i-1][...], a rolling array can be used, so you only need f[2][m] to save space.
Total time complexity is O(n*m*maximum_shares_sell_per_day).
Perhaps it can be further optimized to save time; any feedback is welcome.
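A sketch of that DP in code, with my own variable names; I fold the shares received each morning into the shares in hand before choosing how many to sell, which is one way to read the recurrence above:
def max_profit(days):
    # days: list of (shares_received, price, max_sell) per day
    total = sum(a for a, _, _ in days)
    NEG = float("-inf")
    best = [NEG] * (total + 1)   # best[j] = max profit so far while holding j shares
    best[0] = 0
    for received, price, limit in days:
        nxt = [NEG] * (total + 1)
        for held, profit in enumerate(best):
            if profit == NEG:
                continue
            have = held + received            # shares in hand this morning
            for k in range(min(limit, have) + 1):
                j = have - k                  # shares left after selling k today
                nxt[j] = max(nxt[j], profit + k * price)
        best = nxt
    return max(best)
# e.g. max_profit([(4, 4, 2), (2, 9, 3)]) for the first two example days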
Your description does not quite match the last problem in the PDF - in the PDF you receive the number of shares specified in the first column (or are forced to buy them; since there is no decision to make, it does not matter) and can only decide how many shares to sell. Since it does not say otherwise, I presume that short selling is not allowed (else ignore everything except the price and go make so much money on the derivatives market that you can afford to bribe the SEC or Congress and retire :-)).
This looks like a dynamic program, where the state at each point in time is the total number of shares you have in hand. So at time n you have an array with one element for each possible number of shares you might have ended up with at that time, and in that element you have the maximum amount of money you can make up to then while ending up with that number of shares. From this you can work out the same information for time n+1. When you reach the end, then all your shares are worthless so the best answer is the one associated with the maximum amount of money.
We can't do better than selling the maximum number of shares we can on the day with the highest price, so I was thinking the following (it may be somewhat difficult to implement efficiently):
It may be a good idea to calculate the total number of shares received so far for each day to improve the efficiency of the algorithm.
Process the days in decreasing order of price.
For a day, sell amount = min(daily sell limit, shares available) (for the max price day (the first processed day), shares available = shares received to date).
For all subsequent days, shares available -= sell amount. For preceding days, we binary search for (shares available - shares sold) and all entries between that and the day just processed = 0.
We might not need to physically set the values (at least not at every step), just calculate them on-the-fly from the history thus-far (I'm thinking interval tree or something similar).
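An unoptimized O(n^2) sketch of that price-ordered greedy, replacing the binary search / interval tree with a plain scan over the suffix (names are mine):
def greedy_profit(days):
    # days: list of (shares_received, price, max_sell), in chronological order
    n = len(days)
    cum, total = [], 0
    for a, _, _ in days:
        total += a
        cum.append(total)                     # shares received up to and including day i
    sold = [0] * n
    profit = 0
    for d in sorted(range(n), key=lambda i: -days[i][1]):   # highest price first
        # largest sale on day d that keeps, for every day e >= d, the shares
        # sold on days <= e within the shares received by day e
        prefix, cap = 0, float("inf")
        for e in range(n):
            prefix += sold[e]
            if e >= d:
                cap = min(cap, cum[e] - prefix)
        k = min(days[d][2], cap)
        sold[d] = k
        profit += k * days[d][1]
    return profit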

What class of algorithms can be used to solve this?

EDIT: Just to make sure no one is breaking their head over the problem... I am not looking for the optimal algorithm. Some heuristic that makes sense is fine.
I made a previous attempt at formulating this and realized I did not do a great job at it so I removed that question. I have taken another shot at formulating my problem. Please feel free to provide any constructive criticism that can help me improve this.
Input:
N people
k announcements that I can make
Distance that my voice can be heard (say 5 meters) i.e. I may decide to announce or not depending on the number of people within these 5 meters
Goal:
Maximize the total number of people who have heard my k announcements and (optionally) minimize the time in which I can finish announcing all k announcements
Constraints:
Once a person hears my announcement, he is removed from the total, i.e. if he heard my first announcement, I do not count him even if he hears my second announcement.
The same person, and even the same set of people, may show up within my proximity more than once.
Example:
Let us consider 10 people numbered from 1 to 10 and the following pattern of arrival:
Time slot 1: 1 (payoff = 1)
Time slot 2: 2 3 4 5 (payoff = 4)
Time slot 3: 5 6 7 8 (payoff = 4 if no announcement was made previously in time slot 2, 3 if an announcement was made in time slot 2)
Time slot 4: 9 10 (payoff = 2)
and I am given 2 announcements to make. Now if I were an oracle, I would choose time slots 2 and 3, because then 7 people would have heard an announcement (since 5 already heard my announcement in time slot 2, I do not count him again). I am looking for an online algorithm that will help me decide whether or not to make an announcement, and if so, based on what factors. Does anyone have any ideas on what algorithms can be used to solve this, or a simpler version of this problem?
There should be an approach relying upon a max-flow algorithm. In essence, you're trying to push the maximum number of messages from start to end. Though it would be multidimensional, you could have a super-source which connects to each value of t, then have each value of t connect to the people you can reach at that time, and then have a super-sink. This way, you simply have to compute a max-flow (with the added constraint of no more than k shouts, which should be solvable with a bit of dynamic programming). It's a terrifically dirty way to solve it, but it should get the job done deterministically and without the use of heuristics.
I don't know that there is really a way to solve this or an algorithm to do it the way you have formulated it.
It seems like basically you are trying to reach the maximum number of people with exactly 2 announcements. But without knowing any information about the groups of people in advance, you can't really make any kind of intelligent decision about whether or not to use your first announcement. Your second one at least has the benefit of knowing when not to be used (i.e. if the group has no new members, then you know it's not worth wasting the announcement). But it still has basically the same problem.
The only real way to solve this is to use knowledge about the type of data or the desired outcome to make guesses. If you know that groups average 100 people with a standard deviation of 10, then you could just refuse to announce if less than 90 people are present. Or, if you know you need to reach at least 100 people with two announcements, you could choose never to announce to less than 50 at once. Obviously those approaches risk never announcing at all if the actual data does not meet what you would expect. But that's always going to be a risk, since you could get 1 person in the first group and then 0 in all of the rest, no matter what you do.
Or you could try defining the problem more clearly; I have a hard time figuring out how to relate this to computers.
Let's start by trying to solve the simplest possible variant of the problem: assume N people and K timeslots, but only one possible announcement. Let's also assume that each person will only ever stay for one timeslot and that each person who hasn't yet shown up has an equally probable chance of showing up at any future timeslot.
Given these simplifications, at each timeslot you look at the payoff of announcing at the current timeslot and compare it to the chance of a future timeslot having a higher payoff. E.g., let's assume 4 people and 3 timeslots:
Timeslot 1: Person 1 shows up, so you know you could get a payoff of 1 by announcing, but then you have 3 people still to show up in 2 remaining timeslots, so at least one of those timeslots is guaranteed to have 2 people, so don't announce.
So at each timeslot, you can calculate the chance that a later timeslot will have a higher payoff than the current one by treating the remaining N people and K timeslots as N independent random numbers, each from 1..K, and calculating the chance of at least one value k being hit at least current-payoff times (similar to the birthday problem, but for more than 1 collision). Then you need to decide how much to discount based on expected variances (a bird in the hand, etc.).
Generalization of this solution to the original problem is left as an exercise for the reader.
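For the simplified variant itself, that "chance that some later timeslot does at least as well" can be estimated with a quick Monte Carlo under the same assumptions (the helper name is made up):
import random
def prob_later_is_at_least(people_left, slots_left, payoff, trials=10000):
    # chance that some later timeslot ends up with at least `payoff` people,
    # assuming each remaining person picks a remaining timeslot uniformly at random
    hits = 0
    for _ in range(trials):
        counts = [0] * slots_left
        for _ in range(people_left):
            counts[random.randrange(slots_left)] += 1
        if counts and max(counts) >= payoff:
            hits += 1
    return hits / trials
# prob_later_is_at_least(3, 2, 2) covers the example above: with 3 people left and
# 2 slots left, some slot always gets at least 2 people, so the estimate is 1.0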

Algorithm to Rate Objects with Numerous Comparisons

Let's say I have a list of 500 objects. I need to rate each one out of 10.
At random I select two and present them to a friend. I then ask the friend which they prefer. I then use this comparison (i.e. OBJECT1 is better than OBJECT2) to alter the two objects' ratings out of ten.
I then repeat this random selection and comparison thousands of times with a group of friends until I have a list of 500 objects with a reliable rating out of ten.
I need to figure out an algorithm which takes the two objects' current ratings and alters them depending on which is thought to be better...
Each object's rating could be (number of victories)/(number of contests entered) * 10. So the rating of the winner goes up a bit and the rating of the loser goes down a bit, according to how many contests they've previously entered.
For something more complicated and less sensitive to the luck of the draw with smaller numbers of trials, I'd suggest http://en.wikipedia.org/wiki/Elo_rating_system, but it's not out of 10. You could rescale everyone's scores so that the top score becomes 10, but then a match could affect everyone's rating, not just the rating of the two involved.
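For reference, a minimal sketch of the standard Elo update for one comparison (the 400 scale and K = 32 are the conventional chess constants, not anything mandated here):
def elo_update(winner_rating, loser_rating, k=32):
    # expected score of the eventual winner, per the standard Elo formula
    expected = 1 / (1 + 10 ** ((loser_rating - winner_rating) / 400))
    winner_rating += k * (1 - expected)
    loser_rating -= k * (1 - expected)
    return winner_rating, loser_rating
# ratings could afterwards be rescaled so that the current maximum maps to 10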
It all sort of depends on what "reliable" means. Different friends' judgements will not be consistent with each other, and possibly not even consistent over time for the same person, so there's no "real" sorted order for you to sanity-check the rankings against.
On a more abstruse point, Arrow's Impossibility Theorem states some nice properties that you'd like to have in a system that takes individual preferences and combines them to form an aggregated group preference. It then proceeds to prove that they're mutually inconsistent - you can't have them all. Any intuitive idea of a "good" overall rating runs a real risk of being unachievable.

Algorithm for most recently/often contacts for auto-complete?

We have an auto-complete list that's populated when you send an email to someone, which is all well and good until the list gets really big and you need to type more and more of an address to get to the one you want, which goes against the purpose of auto-complete.
I was thinking that some logic should be added so that the auto-complete results should be sorted by some function of most recently contacted or most often contacted rather than just alphabetical order.
What I want to know is if there's any known good algorithms for this kind of search, or if anyone has any suggestions.
I was thinking just a point system thing, with something like same day is 5 points, last three days is 4 points, last week is 3 points, last month is 2 points and last 6 months is 1 point. Then for most often, 25+ is 5 points, 15+ is 4, 10+ is 3, 5+ is 2, 2+ is 1. No real logic other than those numbers "feel" about right.
Other than just arbitrarily picked numbers, does anyone have any input? Other numbers are also welcome if you can give a reason why you think they're better than mine.
Edit: This would be primarily in a business environment where recentness (yay for making up words) is often just as important as frequency. Also, past a certain point there really isn't much difference between say someone you talked to 80 times vs say 30 times.
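For what it's worth, the point scheme proposed above translates almost literally into code (the helper names are made up; the thresholds are the ones listed):
def recency_points(days_since_last_send):
    if days_since_last_send < 1:    return 5   # same day
    if days_since_last_send <= 3:   return 4   # last three days
    if days_since_last_send <= 7:   return 3   # last week
    if days_since_last_send <= 30:  return 2   # last month
    if days_since_last_send <= 180: return 1   # last six months
    return 0
def frequency_points(times_contacted):
    if times_contacted >= 25: return 5
    if times_contacted >= 15: return 4
    if times_contacted >= 10: return 3
    if times_contacted >= 5:  return 2
    if times_contacted >= 2:  return 1
    return 0
# sort the contact list by recency_points(...) + frequency_points(...), descending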
Take a look at Self organizing lists.
A quick and dirty look:
Move to Front Heuristic:
A linked list, such that whenever a node is selected, it is moved to the front of the list.
Frequency Heuristic:
A linked list, such that whenever a node is selected, its frequency count is incremented, and then the node is bubbled towards the front of the list, so that the most frequently accessed is at the head of the list.
It looks like the move to front implementation would best suit your needs.
EDIT: When an address is selected, add one to its frequency, and move it to the front of the group of nodes with the same weight (or (weight div x) for coarser groupings). I see aging as a real problem with your proposed implementation, in that it requires calculating a weight on each and every item. A self-organizing list is a good way to go, but the algorithm needs a bit of tweaking to do what you want.
Further Edit:
Aging refers to the fact that weights decrease over time, which means you need to know each and every time an address was used. That means you have to have the entire email history available when you construct your list.
The issue is that we want to perform calculations (other than search) on a node only when it is actually accessed -- This gives us our statistical good performance.
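A rough sketch of that tweaked frequency heuristic (bump the count on selection, then bubble the entry to the front of its weight group); the class and method names are just for illustration:
class AutoCompleteList:
    def __init__(self):
        self.entries = []                      # [address, count], most relevant first
    def select(self, address):
        for i, entry in enumerate(self.entries):
            if entry[0] == address:
                entry[1] += 1
                break
        else:
            self.entries.append([address, 1])
            i = len(self.entries) - 1
        # bubble the entry past entries with an equal or lower count,
        # so it ends up at the front of its weight group
        while i > 0 and self.entries[i - 1][1] <= self.entries[i][1]:
            self.entries[i - 1], self.entries[i] = self.entries[i], self.entries[i - 1]
            i -= 1
    def suggest(self, prefix):
        return [addr for addr, _ in self.entries if addr.startswith(prefix)]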
This kind of thing seems similar to what Firefox does when suggesting the site you are typing.
Unfortunately I don't know exactly how Firefox does it. A point system seems good as well; maybe you'll need to balance your points :)
I'd go for something similar to:
NoM = number of mails
(NoM sent to X today) + 1/2 * (NoM sent to X during the last week)/7 + 1/3 * (NoM sent to X during the last month)/30
Contacts you did not write to during the last month (the cutoff could be changed) will have 0 points. You could sort those by total NoM sent (since they are on the contact list :). They will be shown after the contacts with points > 0.
It's just an idea; the point is to give different weight to the most-mailed and the most recently mailed contacts.
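Read literally, that score could be computed as follows (the mail-count arguments would come from your own history data):
def contact_score(sent_today, sent_last_week, sent_last_month):
    return sent_today + (sent_last_week / 7) / 2 + (sent_last_month / 30) / 3
# contacts scoring 0 fall back to being sorted by total mails sent, below the scored ones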
If you want to get crazy, mark the most 'active' emails in one of several ways:
Last access
Frequency of use
Contacts with pending sales
Direct bosses
Etc
Then, present the active emails at the top of the list. Pay attention to which "group" your user uses most. Switch to that sorting strategy exclusively after enough data is collected.
It's a lot of work but kind of fun...
Maybe count the number of emails sent to each address. Then:
ORDER BY EmailCount DESC, LastName, FirstName
That way, your most-often-used addresses come first, even if they haven't been used in a few days.
I like the idea of a point-based system, with points for recent use, frequency of use, and potentially other factors (prefer contacts in the local domain?).
I've worked on a few systems like this, and neither "most recently used" nor "most commonly used" works very well. "Most recent" can be a real pain if you accidentally mis-type something once. Alternatively, "most used" doesn't evolve much over time if, for example, you had a lot of contact with somebody last year but your job has since changed.
Once you have the set of measurements you want to use, you could create an interactive application to test out different weights and see which ones give you the best results for some sample data.
This paper describes a single-parameter family of cache eviction policies that includes least recently used and least frequently used policies as special cases.
The parameter, lambda, ranges from 0 to 1. When lambda is 0 it performs exactly like an LFU cache, when lambda is 1 it performs exactly like an LRU cache. In between 0 and 1 it combines both recency and frequency information in a natural way.
In spite of an answer having been chosen, I want to submit my approach for consideration, and feedback.
I would account for frequency by incrementing a counter on each use, but by some larger-than-one value, like 10 (to add precision to the second point).
I would account for recency by multiplying all counters at regular intervals (say, 24 hours) by some diminisher (say, 0.9).
Each use:
UPDATE `addresslist` SET `favor` = `favor` + 10 WHERE `address` = 'foo@bar.com'
Each interval:
UPDATE `addresslist` SET `favor` = FLOOR(`favor` * 0.9)
In this way I collapse both frequency and recency to one field, avoid the need for keeping a detailed history to derive {last day, last week, last month} and keep the math (mostly) integer.
The increment and diminisher would have to be adjusted to preference, of course.
