Algorithm for optimal packing with known inventory

Hospitals are changing the way they sterilize their equipment. Previously, local surgeons kept all their own equipment and assembled their own surgery trays. Now they have to conform to a country-wide standard. They want to know how many of the new trays they can assemble from their existing stock, and how much new equipment they need to buy.
The inventory of medical equipment looks like this:
http://pastebin.com/rstWSurU
Each hospital has codes for various pieces of medical equipment and a number for how many of the corresponding item it has.
Three surgery trays with their corresponding items are shown in this dictionary.
http://pastebin.com/bUAZhanK
There are a total of 144 different operation trays.
The hospitals will be told they need 25 of tray x, 30 of tray y, etc.
They would like to maximize the number of trays they can finish with their current stock. They would also like to know what equipment they need to purchase in order to finish the remaining trays.
I have thought about two possible solutions: one is representing the problem as a linear programming problem; the other is solving the first 90% of the problem with a round-robin brute-force approach and the remaining 10% with a randomized algorithm run several times, keeping the best of those tries.
I would love to hear if anyone knows a smart way to tackle this problem!

If I understand this correctly we can optimize for each hospital separately. My guess is that the following would be a good start for an MIP (Mixed Integer Programming) model:
I use the following indices: i for items and t for trays. x(t,i) indicates how many units of item i we assign to trays of type t, and y(t) counts the number of trays of each type that we can compose from the available items. From the solution we can calculate the shortages that we need to order.
Of course we are just maximizing the number of trays we can make. There is no consideration of balancing (many trays of one type and few or zero of another). I mitigate this a little by not allowing more trays to be created than required (if we have more items, they need to go to other tray types). This requirement is formulated as an upper bound on y(t).
For large problems we can restrict the (t,i) combinations to the ones that are actually possible; this makes the model smaller. In more precise math notation, the model could look something like this:
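(The following is my sketch of the formulation, reconstructed from the description above; demand(t), avail(i) and need(t,i) are names I am introducing for the required number of trays of type t, the stock of item i, and the number of items i in one tray of type t.)

max   sum(t, y(t))
s.t.  x(t,i) = need(t,i) * y(t)        for all t,i
      sum(t, x(t,i)) <= avail(i)       for all i
      0 <= y(t) <= demand(t),  y(t) integer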
A further optimization would be to substitute out the variables x(t,i).
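As a rough illustration only, here is how such a model could be coded with the PuLP library in Python; the dictionaries below are invented stand-ins for the pastebin inventory and tray data, and the model follows the sketch above with x(t,i) substituted out:

# Minimal sketch of the per-hospital tray MIP using PuLP (pip install pulp).
# stock, trays and demand are made-up stand-ins for the real data.
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger, value

stock  = {"item1": 10, "item2": 4, "item3": 7}               # items on hand
trays  = {"trayA": {"item1": 2, "item2": 1},                  # items needed per tray type
          "trayB": {"item1": 1, "item3": 2}}
demand = {"trayA": 3, "trayB": 5}                             # trays required of each type

prob = LpProblem("tray_packing", LpMaximize)

# y[t] = trays of type t built from current stock, bounded above by the demand
y = {t: LpVariable(f"y_{t}", lowBound=0, upBound=demand[t], cat=LpInteger) for t in trays}
prob += lpSum(y.values())                                     # maximize total trays built

# items used over all trays cannot exceed the stock (x(t,i) substituted out)
for i in stock:
    prob += lpSum(trays[t].get(i, 0) * y[t] for t in trays) <= stock[i]

prob.solve()
built = {t: int(value(y[t])) for t in trays}

# leftover stock after building, and shortages needed to finish the remaining trays
used = {i: sum(trays[t].get(i, 0) * built[t] for t in trays) for i in stock}
shortage = {i: max(0, sum(trays[t].get(i, 0) * (demand[t] - built[t]) for t in trays)
                      - (stock[i] - used[i]))
            for i in stock}
print(built, shortage)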
Adding the option of shipping surplus items to other hospitals would make the model more difficult. In that case we could end up with a model that needs to look at all hospitals simultaneously. That may be an interesting case for some decomposition approach.


Algorithm for picking orders from warehouses

I'll explain my problem with an example.
Let's say we have:
An order from a certain store for five products. We will name those products A, B, C, D, and E, with their quantities in the order: A(19), B(25), C(6), D(33), E(40).
A single truck that can fit a different amount of each product: A(30), B(40), C(25), D(50), E(30).
Example: transporting A and B together, I load the truck with A(19), which is roughly two thirds of what the truck can hold of A. That leaves one third of the capacity for B, which means I can only transport 1/3 of B's maximum truck capacity, i.e. 40/3 ≈ 13.
A set of warehouses which contain different amounts of each product.
I made an Excel spreadsheet with more useful info about those warehouses (quantities, distance from each other, distance from the store).
I want to deliver this order to the store with the least number of trips and the least distance traveled.
Is there an algorithm for this kind of problem, or something close that I can modify?
I would advise not to reinvent the wheel as the very first step of your work. Developing or adapting a custom algorithm for such a problem would be a very painful venture in my opinion. I would suggest using either a constraint satisfaction programming (CSP) toolkit or a mixed integer programming (MIP) solver directly.
My point is that it would be much easier to encode your problem using such tools. If the performance/accuracy isn't good enough for you, you can design a custom solution based on your preliminary results.
For CSP I would suggest MiniZinc, which has decent documentation and examples.
You could start your MIP research with GLPK. It's not very powerful, but it's definitely capable of dealing with some toy examples.

Clustering + Regression - the right approach or not?

I have the task of forecasting how quickly goods will sell (for example, within one category). E.g., the client inputs the price at which he wants his item to be sold, and the algorithm should display that it will be sold at the entered price within n days. It should also have three intervals: quick, medium and long sell, like in the picture.
The question: how exactly should I design the algorithm?
My suggestion: use clustering techniques to identify these three price ranges and then solve a regression task for each cluster to predict the number of days. Is this the right approach?
There are two questions here, and I think the answer to each lies in a different domain:
Given an input price, predict how long it will take to sell the item. This is a well defined prediction problem, and can be tackled using ML algorithms, e.g. use your entire dataset to train and test a regression model for prediction.
Translate the prediction into a class: quick-, medium- or slow-sell. This problem is product oriented - there doesn't seem to be any concrete data allowing you to train a classifier on this translation; and I agree with #anony-mousse that using unsupervised learning might not yield easy-to-use results.
You can either consult your users or a product manager on reasonable thresholds to use (there might be considerations here like the type of item, season etc.), or try getting some additional data in order to train a supervised classifier.
E.g. you could ask your users, post-sale, whether they think the sale was quick, medium or slow. Then you'll have some data to use for thresholding or for classification.
I suggest you simply define thresholds of 10 days and 31 days. Keep it simple.
Because these are the values the users will want to understand. If you use clustering, you may end up with 0.31415 days or similar nonintuitive values that you cannot explain to the user anyway.
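To make the combination of the two answers above concrete, here is a small sketch with scikit-learn; the column names, toy data and the 10/31-day thresholds are placeholders, not something fixed by the question:

# Sketch: regression predicting days-to-sell from price, then translation into
# quick/medium/slow using fixed thresholds. All data below is invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
prices = rng.uniform(10, 500, size=1000).reshape(-1, 1)            # toy prices
days = (prices[:, 0] / 10 + rng.normal(0, 5, size=1000)).clip(1)   # toy days-to-sell

X_train, X_test, y_train, y_test = train_test_split(prices, days, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
print("R^2 on held-out data:", model.score(X_test, y_test))

def sell_speed(price, quick_days=10, slow_days=31):
    # translate the predicted number of days into a user-facing label
    predicted = model.predict([[price]])[0]
    if predicted <= quick_days:
        return "quick"
    if predicted <= slow_days:
        return "medium"
    return "slow"

print(sell_speed(120.0))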

Algorithm for cross-selling - like Amazon's "people who bought this"

I'm new to this and it's been a long time since I've done any programming or used forums...
However, this is really getting under my skin.
I've been looking around at the algorithms used by Amazon etc. for the recommendations they make about products which have an affinity to the ones people have selected - clearly this works very well.
Here is what I am wondering....
A - why would this be limited to affinity? Is there never a situation where a product would be exclusive of the original selection, and perhaps a parallel but not-alike product might make sense?
B - why would a neural network not make sense? Could this not work well to provide a good link, or would you just end up with a few products which have a very low weighting, which therefore perpetuates their non-selection?
Thanks for your views.
James
Question A: You do not need to limit it to affinity. However, you do need to "package up" all other pertinent information in a way that you can present it to the algorithm. You should read up on "association rules", "frequent itemsets" and recommender algorithms. Most of these algorithms analyze transaction data and learn rules like {peanuts, hot dogs} => {beer}. As for moving away from affinity, you can produce multiple item sets where you reduce beer to {alcoholic beverage}, then use multiple frequent itemsets at different levels of specificity, and some sort of ensemble algorithm to combine them.
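For the frequent-itemset part, a minimal sketch with the mlxtend library; the transactions are invented toy data:

# Sketch: mining association rules from transaction data with mlxtend
# (pip install mlxtend). The transactions below are made-up examples.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

transactions = [
    ["peanuts", "hot dogs", "beer"],
    ["peanuts", "hot dogs", "beer", "chips"],
    ["hot dogs", "soda"],
    ["peanuts", "beer"],
]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

itemsets = apriori(onehot, min_support=0.5, use_colnames=True)      # frequent itemsets
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])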
Question B: A neural network, or any other similar model, would not work due to the dimensionality. Say you are Amazon.com and you want to input an item set of up to 5 items. You might encode that as one input neuron for each item, times 5. How many items are on Amazon? I have no idea, but I am guessing it's over 100k. Even if you get creative with dimensionality reduction, this is going to be a MASSIVE neural network, or even support vector machine or random forest.

Combinatorial optimization - maximize profit when creating furniture

A firm is supplied with large wooden panels. These panels are cut into required pieces. To make, for example, a bookshelf, they have to cut pieces from the large panel. In most cases the big panel is not used 100%; there will be some loss, some remainder pieces which cannot be used. So to minimize the loss, they have to find the optimal layout of the separate pieces on the big panel/panels. I think this is called the "two-dimensional rectangle bin packing problem".
Now it is getting more interesting.
Not all panels are the same; they can have slightly different tones. The ideal bookshelf is made from pieces all cut from one panel, or from multiple panels with the same color tone. But a bookshelf can be produced in different qualities (the ideal one; one piece with a different tone; two pieces...; three different color plates used; etc.). Each quality has its own price (the higher the quality, the more expensive).
Now we have some wooden panels in stock and a request for some furniture (e.g. 100 bookshelves). The goal is to maximize the profit (e.g. create some in ideal quality and some in lower quality to keep the material loss low).
How do I solve this problem? How do I combine it with the bin packing problem? Any hints, papers or articles would be appreciated. I know I can minimize/maximize a function subject to inequalities with integer linear programming, but I really do not know how to solve this.
(Please do not consider the real scenario, where for example it might be best to create only ideal ones... imagine that the loss from remaining material is X money per cm^2 and Y is the price for a specific product quality, and that X and Y can be "arbitrary".)
I can give an idea of how these problems are solved and why yours is particularly difficult.
In a typical optimization problem, you want to maximize or minimize a function (e.g. energy) with respect to a set number of variables (e.g. length). For example, how long should a spring be in order to minimize the stored energy? The answer is just a number: the equilibrium length of the spring. Another example would be "what price should we set our product at to maximize profit?" (Too expensive and no one will buy anything; too cheap and you won't cover your costs.) Again, the answer is just a number, the optimal price. Optimizations like that are handled with ordinary calculus.
A much more difficult optimization problem is one where the answer isn't a number but a function, like a shape. An example is: what shape will a hanging chain take in order to minimize its gravitational potential energy? Or: what shape should we cut out of these boards in order to maximize profit? This type of problem is solved using variational calculus, which is very difficult.
In any case, when solving optimization problems numerically, there are a few basic steps to follow. First you have to define a function, for example profit(cuts,params) that you want to maximize with respect to some variables 'cuts', with other parameters 'params' fixed. 'params' stores information like the amount and type of wood that you have, and the amount of money different type of furniture is worth.
The second step is to come up with a guess for the best set of cuts, we'll call it cuts_guess. In order to do this you need to come up with an algorithm that will suggest a set of furniture you could actually make using the supplies that you have. For example, if you can make at least one bookshelf from each board, then that could be your initial guess for the best way to use the wood.
The third phase is the optimization. For the initialization, set cuts_best=cuts_guess and profit_best=profit_guess=profit(cuts_guess, params). Then you need an algorithm to make small pseudo-random changes to 'cuts' and check whether the profit increases or decreases. Record the best set of cuts that you find, and the corresponding profit. Usually it's best if there is some randomness involved, in order to explore the largest number of possibilities and not get 'stuck' on a poor choice. You'll find examples of this if you look up 'Monte Carlo algorithm'.
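Here is a bare-bones sketch of that loop in Python; the profit and perturbation functions are toy placeholders, since for the real cutting problem encoding a layout and a 'small change' to it is exactly the hard part:

# Sketch of a simple random local search (Monte Carlo style) loop.
# profit() and perturb() are toy stand-ins for the real layout problem.
import random

def profit(cuts, params):
    # toy objective: reward using the available area, penalize overshooting it
    used = sum(cuts)
    return used if used <= params["area"] else params["area"] - 2 * (used - params["area"])

def perturb(cuts):
    # small pseudo-random change: nudge one randomly chosen piece size
    new = list(cuts)
    i = random.randrange(len(new))
    new[i] = max(0.0, new[i] + random.uniform(-1.0, 1.0))
    return new

def optimize(cuts_guess, params, iterations=20000):
    best = current = list(cuts_guess)
    best_profit = current_profit = profit(current, params)
    for _ in range(iterations):
        candidate = perturb(current)
        p = profit(candidate, params)
        # accept improvements, and occasionally a worse move to avoid getting stuck
        if p >= current_profit or random.random() < 0.01:
            current, current_profit = candidate, p
        if p > best_profit:
            best, best_profit = candidate, p
    return best, best_profit

print(optimize([1.0, 1.0, 1.0], {"area": 10.0}))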
Anyway, all of this will be very difficult for your problem. It's easy to come up with a guess for a simple variable (e.g. a length), and then to change that guess (e.g. increase or decrease the length a bit). It's not at all obvious how to make a 'guess' for how to place a cut-out on a board, or how to make a small change to it.

Which data mining algorithm would you suggest for this particular scenario?

This is not a directly programming related question, but it's about selecting the right data mining algorithm.
I want to infer the age of people from their first names, from the region they live, and if they have an internet product or not. The idea behind it is that:
there are names that are old-fashioned or popular in a particular decade (celebrities, politicians etc.) (this may not hold in the USA, but in the country of interest that's true),
young people tend to live in highly populated regions whereas old people prefer countrysides, and
Internet is used more by young people than by old people.
I am not sure if those assumptions hold, but I want to test that. So what I have is 100K observations from our customer database with
approx. 500 different names (nominal input variable with too many classes)
20 different regions (nominal input variable)
Internet Yes/No (binary input variable)
91 distinct birth years (numerical target variable with range: 1910-1992)
Because I have so many nominal inputs, I don't think regression is a good candidate. Because the target is numerical, I don't think a decision tree is a good option either. Can anyone suggest a method that is applicable to such a scenario?
I think you could design discrete variables that reflect the split you are trying to determine. It doesn't seem like you need a regression on their exact age.
One possibility is to cluster the ages, and then treat the clusters as discrete variables. Should this not be appropriate, another possibility is to divide the ages into bins of equal distribution.
One technique that could work very well for your purposes is, instead of clustering or partitioning the ages directly, to cluster or partition the average age per name. That is to say, generate a list of all of the average ages and work with this instead. (There may be some statistical problems in the classifier if the discrete categories here are too fine-grained, though.)
However, the best case is if you have a clear notion of what age range you consider appropriate for 'young' and 'old'. Then, use these directly.
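As a small sketch of both options with pandas and scikit-learn (the column names and toy values are invented): binning the target into age groups, and clustering the average birth year per name:

# Sketch: discretize the target and cluster the average birth year per name.
# Column names and values below are assumptions, not the real customer data.
import pandas as pd
from sklearn.cluster import KMeans

df = pd.DataFrame({
    "name":      ["Gertrude", "Jennifer", "Gertrude", "Mary", "Jennifer"],
    "birthyear": [1955, 1989, 1950, 1977, 1985],
})

# Option 1: equal-frequency bins over the raw birth years
df["age_group"] = pd.qcut(df["birthyear"], q=2, labels=["older", "younger"])

# Option 2: cluster the average birth year per name, then use the cluster as a feature
avg_by_name = df.groupby("name")["birthyear"].mean()
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    avg_by_name.to_numpy().reshape(-1, 1))
df["name_cluster"] = df["name"].map(dict(zip(avg_by_name.index, clusters)))
print(df)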
New answer
I would try using regression, but in the manner that I specify. I would try binarizing each variable (if this is the correct term). The Internet variable is binary, but I would make it into two separate binary values. I will illustrate with an example because I feel it will be more illuminating. For my example, I will just use three names (Gertrude, Jennifer, and Mary) and the internet variable.
I have 4 women. Here are their data:
Gertrude, Internet, 57
Jennifer, Internet, 23
Gertrude, No Internet, 60
Mary, No Internet, 35
I would generate a matrix, A, like this (each row represents a respective woman in my list):
[[1,0,0,1,0],
[0,1,0,1,0],
[1,0,0,0,1],
[0,0,1,0,1]]
The first three columns represent the names and the latter two Internet/No Internet. Thus, the columns represent
[Gertrude, Jennifer, Mary, Internet, No Internet]
You can keep doing this with more names (500 columns for the names), and for the regions (20 columns for those). Then you will just be solving the standard linear algebra problem A*x=b where b for the above example is
b=[[57],
[23],
[60],
[35]]
You may be worried that A will now be a huge matrix, but it is a huge, extremely sparse matrix and can thus be stored very efficiently in sparse matrix form. Each row has only three 1's in it (name, region, Internet flag) and the rest are 0. You can then just solve this with a sparse matrix solver. You will want to do some sort of correlation test on the resulting predicted ages to see how effective it is.
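To make that concrete, here is a sketch using the four-woman example above with SciPy's sparse least-squares solver; with 500 names, 20 regions and the two Internet columns the same matrix is simply built with more columns:

# Sketch: least-squares fit of age on the one-hot columns of the example above.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# columns: [Gertrude, Jennifer, Mary, Internet, No Internet]
A = csr_matrix(np.array([
    [1, 0, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 0, 1, 0, 1],
], dtype=float))
b = np.array([57, 23, 60, 35], dtype=float)

x = lsqr(A, b)[0]          # least-squares solution of A*x = b
predicted = A @ x
print("coefficients:", x.round(2))
print("predicted ages:", predicted.round(1), "actual ages:", b)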
You might check out the babynamewizard. It shows the changes in name frequency over time and should help convert your names to a numeric input. Also, you should be able to use population density from census.gov data to get a numeric value associated with your regions. I would suggest an additional flag regarding the availability of DSL access - many rural areas don't have DSL coverage. No coverage = less demand for internet services.
My first inclination would be to divide your response into two groups, those very likely to have used computers in school or work and those much less likely. The exposure to computer use at an age early in their career or schooling probably has some effect on their likelihood to use a computer later in their life. Then you might consider regressions on the groups separately. This should eliminate some of the natural correlation of your inputs.
I would use a classification algorithm that accepts nominal attributes and a numeric class, like M5 (for trees or rules). Perhaps I would combine it with the bagging meta-classifier to reduce variance. The original M5 algorithm was invented by R. Quinlan, and Yong Wang made improvements.
The algorithm is implemented in R (library RWeka).
It can also be found in the open source machine learning software Weka.
For more information see:
Ross J. Quinlan: Learning with Continuous Classes. In: 5th Australian Joint Conference on Artificial Intelligence, Singapore, 343-348, 1992.
Y. Wang, I. H. Witten: Induction of model trees for predicting continuous classes. In: Poster papers of the 9th European Conference on Machine Learning, 1997.
I think slightly differently from you: I believe that trees are excellent algorithms for dealing with nominal data, because they help you build a model that you can easily interpret, and identify the influence of each of these nominal variables and their different values.
You can also use regression with dummy variables to represent the nominal attributes; this is also a good solution.
But you can also use other algorithms, such as SVM (SMO), with a prior transformation of the nominal variables to binary dummy ones, the same as in regression.
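A quick sketch of both routes with scikit-learn, using invented toy data in place of the real name/region/Internet columns: dummy-encode the nominal inputs, then fit a tree and an SVM regressor on the same encoding:

# Sketch: dummy (one-hot) encoding of nominal inputs, then a tree and an SVR.
# The toy data below is invented; the real inputs are name, region and Internet.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR

df = pd.DataFrame({
    "name":      ["Gertrude", "Jennifer", "Gertrude", "Mary"],
    "region":    ["rural", "urban", "rural", "urban"],
    "internet":  ["no", "yes", "no", "no"],
    "birthyear": [1950, 1989, 1955, 1977],
})

X = pd.get_dummies(df[["name", "region", "internet"]])   # binary dummy variables
y = df["birthyear"]

tree = DecisionTreeRegressor(max_depth=3).fit(X, y)      # interpretable tree model
svr  = SVR(kernel="linear").fit(X, y)                    # SVM on the same dummies
print(tree.predict(X))
print(svr.predict(X))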
