Fulfilling maximum customer orders - algorithm

There is an inventory of products, e.g. A: 10 units, B: 15 units, C: 20 units, and so on. We have customer orders for some of these products, like customer1 {A: 10 units, B: 15 units}, customer2 {A: 5 units, B: 10 units}, customer3 {A: 5 units, B: 5 units}. The task is to fulfill the maximum number of customer orders with the limited inventory we have. In this example the result should be to fill customer2's and customer3's orders instead of just customer1's. [The background for this problem is a real-time online retail scenario, where we have millions of customers and millions of products and are trying to fulfill orders as efficiently as possible.]
How do I solve this? Is there an algorithm for this kind of problem, something like optimisation?
Edit: The requirement here is fixed. The only aim is maximizing the number of fulfilled orders regardless of order value. But we have millions of users and millions of products.

This problem contains a knapsack problem as a special case. To see why, consider only one product A: the stock level of the product is your knapsack capacity, the order quantities are the item weights, and each item has value 1. Your problem is to maximize the total value you can fit in the knapsack.
Don't expect an exact solution for your problem in polynomial time...
An approach I'd go for is a random search: make a list of the orders and compute a solution (i.e. fulfill the orders in sequence, skipping the orders you cannot fulfill). Then change the solution by applying a permutation to the orders and see whether it is better.
Keep searching until time runs out or you're happy with the solution.
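A minimal sketch of that random search (the data layout and function names are illustrative, not from the question; orders and inventory are assumed to be product -> quantity maps):

import random

def fulfilled_count(order_sequence, inventory):
    # Greedily fulfill orders in the given sequence; return how many could be filled.
    remaining = dict(inventory)
    count = 0
    for order in order_sequence:
        if all(remaining.get(p, 0) >= q for p, q in order.items()):
            for p, q in order.items():
                remaining[p] -= q
            count += 1
    return count

def random_search(orders, inventory, iterations=10000):
    current = list(orders)
    best, best_score = list(current), fulfilled_count(current, inventory)
    for _ in range(iterations):
        candidate = list(current)
        i, j = random.sample(range(len(candidate)), 2)
        candidate[i], candidate[j] = candidate[j], candidate[i]   # try a nearby permutation
        score = fulfilled_count(candidate, inventory)
        if score >= best_score:                                   # accept equal or better orderings
            current = candidate
            if score > best_score:
                best, best_score = candidate, score
    return best_score, best

# Example from the question (two orders can be filled, not just customer1's):
inventory = {"A": 10, "B": 15}
orders = [{"A": 10, "B": 15}, {"A": 5, "B": 10}, {"A": 5, "B": 5}]
print(random_search(orders, inventory)[0])   # 2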

It can be solved by DP.
Firstly, sort all your orders with respect to A in increasing order.
Use this recurrence, where (a, b, c) are the quantities of A, B and C requested by the order being considered:
DP[n][m][o] = DP[n-a][m-b][o-c] + 1, where n-a >= 0, m-b >= 0 and o-c >= 0
DP[0][0][0] = 1
Do the computation bottom up:
Set DP[i][j][k] = 0 for all i = 0 to Amax, j = 0 to Bmax, k = 0 to Cmax
For each n : 0 to Amax
    For each m : 0 to Bmax
        For each o : 0 to Cmax
            if (n >= a && m >= b && o >= c)
                DP[n][m][o] = DP[n-a][m-b][o-c] + 1
You will then have to find the maximum value of DP[i][j][k] over all possible i, j, k. This is your answer. - O(n^3)
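For a small number of products and small stock levels, one way to make a DP along these lines concrete is to let the state be the remaining inventory and process the orders one by one. The sketch below assumes exactly that (names and data layout are illustrative):

from functools import lru_cache

def max_fulfilled(orders, inventory):
    # DP over (order index, remaining inventory); only practical when the
    # number of products and the stock levels are small.
    products = sorted(inventory)
    start = tuple(inventory[p] for p in products)
    demands = [tuple(o.get(p, 0) for p in products) for o in orders]

    @lru_cache(maxsize=None)
    def best(i, remaining):
        if i == len(demands):
            return 0
        result = best(i + 1, remaining)                      # skip this order
        d = demands[i]
        if all(r >= q for r, q in zip(remaining, d)):        # or fulfill it if stock allows
            left = tuple(r - q for r, q in zip(remaining, d))
            result = max(result, 1 + best(i + 1, left))
        return result

    return best(0, start)

print(max_fulfilled([{"A": 10, "B": 15}, {"A": 5, "B": 10}, {"A": 5, "B": 5}],
                    {"A": 10, "B": 15}))   # 2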

Reams have been written about order fulfillment and yet no one has come up with a standard answer. The reason being that companies have different approaches and different requirements.
There are so many variables that a one size solution that fits all is not possible.
You would have to sit down and ask hundreds of questions before you could even start to come up with an approach tailored to your customer's needs.
Indeed those needs might also vary, based on the time of year, the day of the week, what promotions are currently being run, whether customers are ranked, numbers of picking and packing staff/machinery currently employed, nature, size, weight of products, where products are in the warehouse, whether certain products are in fast/automated picking lines, standard picking faces or in bulk. The list can appear endless.
Then consider whether all orders are to be filled or are you allowed to partially fill an order and back-order out of stock products.
Does the entire order have to fit in a single box or are multiple box orders permitted.
Are you dealing with multiple warehouses and if so can partial orders be sent from each or do they have to be transferred for consolidation.
Should precedence be given to local or overseas orders.
The amount of information that you need at your fingertips before you can even start to plan a methodology to fit your customer's specific requirements can be enormous and sadly, you are not going to get a definitive answer. It does not exist.
Whilst I realise that this is not a) an answer or b) necessarily a welcome post, the hard truth is that you will require your customer to provide you with immense detail as to what it is that they wish to achieve, how and when.
Your job, initially, is to play devil's advocate in attempting to nail them down.
P.S. Welcome to S.O.

Algorithms to process a list of transactions

This is a type of question I usually encounter in SWE internship OAs for trading/payment services companies: given a list of transactions of the form "Action-Customer ID-Transaction ID-Amount", we are asked to process them and return a list of actions to take or the amount of money remaining.
Here is a specific example. The input is a list of strings of the form "Action-Order ID-Price-Amount-Buy/Sell" with only 2 actions, SUB (Submit) and CXL (Cancel).
For Buy, a higher price has higher priority, while for Sell, a lower price has higher priority. Within the same Buy or Sell category, if the prices are equal then the order that came earlier has higher priority.
If a Buy price >= a Sell price, then the two orders are matched with Amount = min(Sell amount, Buy amount), and Buy or Sell is reported accordingly.
If an order has already been filled, it will not be cancelled.
If Order ID in CXL action does not exist, there will be no effect (no error).
We are asked to return a list of actions to take, output should be a list of strings with the following format: "Order ID-Buy/Sell-Amount to Buy/Sell"
Input and output examples:
Input: ["SUB-hghg-10-400-B", "SUB-abab-15-500-B", "SUB-abcd-10-400-S"]
Output: ["abcd-S-400"]
Explanation: "abab" has higher Buy price than "hghg" even though it came later, so it will be processed first. Amount of abab = 500, Amount of abcd = 400 => Amount = 400
Input: ["SUB-hghg-10-400-B", "CXL-hghg"]
Output: []
Explanation: do nothing, because the order was cancelled before it was filled.
I have attempted the problem using hash maps but it gets too complicated for me. Before, I also encountered a similar problem but the difference is that the Cancel will remove the order regardless of whether it has been filled or not, and in that case I use a LinkedList to keep track of the orders.
I want to ask if there are any general approaches to optimally solve these kinds of problems. I have wandered LeetCode for some time, practicing some Medium questions but have not encountered this problem type. If there is any typical data structure to efficiently store the information of each order, I would like to know also. I have also searched the Internet for some keywords like algorithmic trading, algorithms to process payments/transactions but I have not found anything useful yet. Any help is greatly appreciated!
Thank you so much for reading my lengthy post.
So your input is a series of submitted orders and cancels and your output is a sequence of the trades that happen when orders are matched, right?
I'd approach the problem as follows: Create an order book data structure that contains all open (unmatched) orders, buys and sells separately, ordered by price.
Init: The order book is of course initialized empty. Initialize your list of resulting trades empty as well.
Loop: Then process the incoming requests (submit or cancel) one by one and apply them to the order book. Each request either adds an order to the book, or the order matches one or more existing orders and new trades are generated. Append the resulting trades to your trades list.
That's basically it. However, please note that matching a new order against the book is not completely trivial: an order in the book may be matched only partially and remain in the book, or the new order may be matched only partially and the remaining amount has to be added to the book. I'd recommend writing unit tests for this single step so you are sure your orders are sorted and matched as intended.
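A rough sketch of that loop (the trade output convention is assumed from the examples, since the question leaves it ambiguous, and partial-fill/cancel corner cases are simplified):

import heapq

def process(requests):
    # Order book: buys as a max-heap (negative price), sells as a min-heap.
    buys, sells = [], []                  # entries: (priority, arrival_seq, order_id)
    orders = {}                           # order_id -> [price, remaining_amount, side]
    cancelled = set()
    trades = []

    def best(book):
        # Lazily skip cancelled or fully filled entries; return the best live order id.
        while book:
            oid = book[0][2]
            if oid not in cancelled and orders[oid][1] > 0:
                return oid
            heapq.heappop(book)
        return None

    for seq, req in enumerate(requests):
        parts = req.split("-")
        if parts[0] == "CXL":
            cancelled.add(parts[1])       # no effect if the id is unknown or already filled
            continue
        _, oid, price, amount, side = parts
        price, amount = int(price), int(amount)
        orders[oid] = [price, amount, side]
        opposite = sells if side == "B" else buys
        while orders[oid][1] > 0:
            other = best(opposite)
            crosses = other is not None and (
                orders[other][0] <= price if side == "B" else orders[other][0] >= price)
            if not crosses:
                break
            qty = min(orders[oid][1], orders[other][1])
            orders[oid][1] -= qty
            orders[other][1] -= qty
            trades.append(f"{oid}-{side}-{qty}")    # report the incoming order (assumed convention)
        if orders[oid][1] > 0:                      # anything unmatched rests in the book
            heapq.heappush(buys if side == "B" else sells,
                           (-price if side == "B" else price, seq, oid))
    return trades

print(process(["SUB-hghg-10-400-B", "SUB-abab-15-500-B", "SUB-abcd-10-400-S"]))   # ['abcd-S-400']
print(process(["SUB-hghg-10-400-B", "CXL-hghg"]))                                 # []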

System Design of Google Trends?

I am trying to figure out system design behind Google Trends (or any other such large scale trend feature like Twitter).
Challenges:
Need to process large amounts of data to calculate trends.
Filtering support - by time, region, category, etc.
Need a way to store the data for archiving/offline processing. Filtering support might require multi-dimensional storage.
This is what my assumption is (I have zero practical experience with MapReduce/NoSQL technologies):
Each search item from a user will carry a set of attributes that will be stored and eventually processed, as well as a list of searches by timestamp, region of search, category, etc.
Example:
Searching for Kurt Cobain term:
Kurt-> (Time stamp, Region of search origin, category ,etc.)
Cobain-> (Time stamp, Region of search origin, category ,etc.)
Question:
How do they efficiently calculate frequency of search term ?
In other words, given a large data set, how do they find top 10 frequent items in distributed scale-able manner ?
Well... finding the top K terms is not really a big problem. One of the key ideas in this field has been "stream processing", i.e., performing the operation in a single pass over the data and sacrificing some accuracy to get a probabilistic answer. So, assume you get a stream of data like the following:
A B K A C A B B C D F G A B F H I B A C F I U X A C
What you want is the top K items. Naively, one would maintain a counter for each item, and at the end sort by the count of each item. This takes O(U) space and O(max(U*log(U), N)) time, where U is the number of unique items and N is the number of items in the list.
In case U is small, this is not really a big problem. But once you are in the domain of search logs with billions or trillions of unique searches, the space consumption starts to become a problem.
So, people came up with the idea of "count-sketches" (you can read up more here: count min sketch page on wikipedia). Here you maintain a hash table A of length n and create two hashes for each item:
h1(x) = 0 ... n-1 with uniform probability
h2(x) = +1 or -1, each with probability 0.5
You then do A[h1[x]] += h2[x]. The key observation is that since each value randomly hashes to +/-1, E[ A[h1[x]] * h2[x] ] = count(x), where E is the expected value of the expression, and count is the number of times x appeared in the stream.
Of course, the problem with this approach is that each estimate still has a large variance, but that can be dealt with by maintaining a large set of hash counters and taking the average or the minimum count from each set.
With this sketch data structure, you are able to get an approximate frequency of each item. Now, you simply maintain a list of 10 items with the largest frequency estimates till now, and at the end you will have your list.
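A small sketch of that idea (a few rows of sign counters plus a running top-K list; the sizes, hash choices and class name are illustrative):

import random

class CountSketchTopK:
    # Approximate per-item frequencies with d rows of +/-1 counters; keep the k largest.
    def __init__(self, d=5, n=1 << 16, k=10, seed=42):
        rng = random.Random(seed)
        self.n = n
        self.salts = [(rng.getrandbits(32), rng.getrandbits(32)) for _ in range(d)]
        self.tables = [[0] * n for _ in range(d)]
        self.k = k
        self.top = {}                                # item -> latest estimate

    def add(self, x):
        estimates = []
        for (s1, s2), table in zip(self.salts, self.tables):
            h1 = hash((s1, x)) % self.n              # bucket h1(x) in 0..n-1
            h2 = 1 if hash((s2, x)) & 1 else -1      # random sign h2(x)
            table[h1] += h2
            estimates.append(table[h1] * h2)
        estimates.sort()
        self.top[x] = estimates[len(estimates) // 2]  # median estimate to cut variance
        if len(self.top) > self.k:                    # keep only the k best candidates
            del self.top[min(self.top, key=self.top.get)]

    def topk(self):
        return sorted(self.top.items(), key=lambda kv: -kv[1])

sketch = CountSketchTopK(k=3)
for term in "A B K A C A B B C D F G A B F H I B A C F I U X A C".split():
    sketch.add(term)
print(sketch.topk())   # approximately [('A', 6), ('B', 5), ('C', 4)]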
How exactly a particular private company does it is likely not publicly available, and how to evaluate the effectiveness of such a system is at the discretion of the designer (be it you or Google or whoever)
But many of the tools and research is out there to get you started. Check out some of the Big Data tools, including many of the top-level Apache projects, like Storm, which allows for the processing of streaming data in real-time
Also check out some of the Big Data and Web Science conferences like KDD or WSDM, as well as papers put out by Google Research
How to design such a system is challenging with no correct answer, but the tools and research are available to get you started

Rating Algorithm

I'm trying to develop a rating system for an application I'm working on. Basically, the app allows you to rate an object from 1 to 5 (represented by stars). But I of course know that just keeping a count of ratings and adding each new rating to a single running number is not feasible.
So the first thing that came to my mind was dividing each received rating by the total number of ratings given. For example, if the object has received a rating of 2 from a user and has been rated 100 times, maybe adding 2/100. However, I believe this method is not good enough because 1) it is a naive approach, and 2) in order to get the number of times the object has been rated I have to do a lookup in the DB, which might end up having time complexity O(n).
So I was wondering what alternative and possibly better ways to approach this problem?
You can keep 2 additional values in the DB: the number of times the object was rated and the total sum of all ratings. This way, to update the object's rating you only need to:
Add the new rating to the total sum (and increment the count).
Divide the total sum by the total number of times it was rated.
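A tiny sketch of that bookkeeping (the field names are illustrative; the idea is just two stored counters per object):

def add_rating(obj, new_rating):
    # Update the stored sum and count, then recompute the average in O(1).
    obj["rating_sum"] += new_rating
    obj["rating_count"] += 1
    obj["average"] = obj["rating_sum"] / obj["rating_count"]
    return obj["average"]

item = {"rating_sum": 0, "rating_count": 0, "average": 0.0}
for r in (5, 4, 2):
    add_rating(item, r)
print(item["average"])   # 3.666...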
There are many approaches to this, but before choosing one, check:
Whether all feedback givers are treated as equal or some have more weight than others (like a panel review, etc.)
Whether the objective is to provide only an average, or a score band or something similar. Consider a scenario like this website, which shows a total reputation score.
And yes, if an average is to be computed, you need the total and the count of feedback and then have to compute it - that's plain maths. But if you need any other method, be prepared for more compute cycles. Balance database hits against compute cycles, but that's the next stage of design. First get your requirements and approach to the solution in place.
I think you should keep separate counters for 1 star, 2 stars, and so on. To calculate the rating, you'd compute rating = (1*numOneStars + 2*numTwoStars + 3*numThreeStars + 4*numFourStars + 5*numFiveStars) / (numOneStars + numTwoStars + numThreeStars + numFourStars + numFiveStars)
This way you can, like Amazon, also show how many people voted 1 star and how many voted 5 stars...
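A short sketch of that star-histogram version (counter names are illustrative):

def rate(counters, stars):
    counters[stars] += 1            # one counter per star value, like Amazon's breakdown

def average(counters):
    votes = sum(counters.values())
    weighted = sum(stars * count for stars, count in counters.items())
    return weighted / votes if votes else 0.0

counters = {1: 0, 2: 0, 3: 0, 4: 0, 5: 0}
for s in (5, 5, 4, 1):
    rate(counters, s)
print(average(counters), counters)   # 3.75 plus the full distribution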
Have you considered a vote up/down mechanism over numbers of stars? It doesn't directly solve your problem but it's worth noting that other sites such as YouTube, Facebook, StackOverflow etc all use +/- voting as it is often much more effective than star based ratings.

optimal allocation of products to maximize time before restocking

Stock allocation problem.
I have a problem where each of a known set of products, with various rates of sale, needs to be allocated into one or more of a fixed number of buckets.
Each product must be in at least one bucket and buckets cannot share products.
All buckets must be filled, and products will usually be in more than one bucket.
My problem is to optimize the allocation of products into all of the buckets such that it maximises the amount of time before any one product sells out.
To complicate matters, each type of bucket may hold differing amounts of each type of product.
This is not necessarily related to the size of the product (which is not known), but may be arbitrary.
Eg,
Bucket A holds 10 Product 1, Bucket B holds 20 product 2, however
Bucket A holds 5 Product 2, Bucket B holds 8 Product 1.
So, as inputs we have a set of products and their sales velocity eg
Product 1 Sells 6 per day
Product 2 Sells 5 per day
Product 3 Sells 4 per day
Product 4 Sells 7 per day
A set of Buckets
Bucket A
Bucket B
Bucket C
Bucket D
Bucket E
Bucket F
Bucket G
And a Product-Bucket lookup table to determine each bucket's capacity for each product, e.g.
Prod 1 Bucket A = 40;
Prod 1 Bucket B = 45;
Prod 1 Bucket C = 40;
...
Prod 2 Bucket A = 35;
...
Prod 2 Bucket E = 20;
...
etc
Approaches I have tried so far include:
Reduce the products per bucket to a common factor - until I realised the product-bucket size relationship was arbitrary.
Place products into buckets at random, then iterate through each product, swapping it for an existing product in a bucket and testing whether that improves the time taken until sell-out.
My concerns with this approach are that it may take a path that is optimal at decision time but obscures a later, more optimal choice.
Or perhaps the optimal decision requires multiple product changes that will never occur because the individual choices are not optimal.
An exhaustive search - it turns out this produces a very large number of combinations even for a not-so-large set of products and buckets.
I initially thought the optimum solution would be to allocate products in the same ratio as their sale rates, but discovered this is not true: a configuration holding a very small amount of stock that matches the sales ratios perfectly would be less desirable than a configuration holding much more stock and thus lasting longer before the first sell-out.
Any c# or pseudo code appreciated
I suggest a variant of approach 2 based on simulated annealing -- great approach to optimization where your underlying strategy is based on steepest-descent or the like. Wikipedia does a good job explaining the idea; the crucial conceptual part is:
each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process
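A minimal sketch of simulated annealing for this problem, assuming capacity[bucket][product] and rate[product] lookups (all names and data are illustrative, and the move/temperature choices are just one reasonable starting point):

import math
import random

def time_to_first_sellout(assignment, capacity, rate):
    # assignment maps bucket -> product; the objective is min over products of stock / sales rate.
    stock = {p: 0 for p in rate}
    for bucket, product in assignment.items():
        stock[product] += capacity[bucket][product]
    return min(stock[p] / rate[p] for p in rate)    # 0 if some product has no bucket at all

def anneal(buckets, products, capacity, rate, steps=20000, t0=5.0):
    current = {b: random.choice(products) for b in buckets}
    best = dict(current)
    current_val = best_val = time_to_first_sellout(current, capacity, rate)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9                            # temperature decreases over time
        candidate = dict(current)
        candidate[random.choice(buckets)] = random.choice(products)   # a random "nearby" solution
        val = time_to_first_sellout(candidate, capacity, rate)
        if val >= current_val or random.random() < math.exp((val - current_val) / t):
            current, current_val = candidate, val
            if val > best_val:
                best, best_val = dict(candidate), val
    return best_val, best

products = ["P1", "P2"]
buckets = ["A", "B", "C"]
capacity = {"A": {"P1": 40, "P2": 35}, "B": {"P1": 45, "P2": 20}, "C": {"P1": 40, "P2": 30}}
rate = {"P1": 6, "P2": 5}   # units sold per day
print(anneal(buckets, products, capacity, rate))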
I think this problem may be NP-complete and that you may have to resort to the usual methods (GA/SA/breadth/depth-first searches) and/or settle for non-optimal solutions, depending on how many buckets and products you have.
Assuming that you have enough product to fit all your buckets (which you don't say), you may be able to brute force a single product with every bucket to determine which product is the best for each bucket. I somehow doubt that this is the case, but in case it is, here is the general algorithm.
(extremely pseudo-code python. This does not run unmodified!!)
index = {}   # a hash table mapping each bucket to a hash table of product longevities
for bucket in buckets:
    index[bucket] = {}
    for product in products:
        capacity = find_capacity(bucket, product)
        sell_rate = 1 / sales_velocity[product]   # assuming sales_velocity values are not fractions
        longevity = capacity * sell_rate          # how long this bucket lasts if filled with this product
        index[bucket][product] = longevity

for bucket in buckets:
    product = find_maximum_longevity(index, bucket)
    print(bucket, product)
Simulated annealing sounds good, although you have to be careful choosing the parameters and the mutation functions to get a good solution.
You could also specify the problem as a series of equations and call an Integer Programming (IP) package such as http://www.coin-or.org/ to find an optimal or near-optimal solution.
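For instance, here is a hedged sketch of one possible formulation (maximize the number of days t before any product sells out, with binary variables assigning exactly one product per bucket), written with the PuLP modelling library rather than COIN-OR directly; all data and names are illustrative:

from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

buckets = ["A", "B", "C"]
products = ["P1", "P2"]
capacity = {("A", "P1"): 40, ("A", "P2"): 35,
            ("B", "P1"): 45, ("B", "P2"): 20,
            ("C", "P1"): 40, ("C", "P2"): 30}
rate = {"P1": 6, "P2": 5}                       # units sold per day

prob = LpProblem("stock_allocation", LpMaximize)
x = LpVariable.dicts("assign", [(b, p) for b in buckets for p in products], cat=LpBinary)
t = LpVariable("days_until_first_sellout", lowBound=0)
prob += t                                       # objective: maximize t

for b in buckets:                               # each bucket holds exactly one product
    prob += lpSum(x[(b, p)] for p in products) == 1
for p in products:
    prob += lpSum(x[(b, p)] for b in buckets) >= 1           # every product gets a bucket
    # stock allocated to p must cover rate[p] units/day for t days
    prob += lpSum(capacity[(b, p)] * x[(b, p)] for b in buckets) >= rate[p] * t

prob.solve()
print(value(t), [(b, p) for b in buckets for p in products if x[(b, p)].value() > 0.5])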

Finding a subset of numbers that equals a single number

The reason I am posting this is that I am looking to reconcile customer accounts receivable, where "payments" are posted to accounts instead of being matched against the open invoices and cleared. So here is my issue:
I have a single number (a payment) that should equal the sum of a subset of a given set of numbers (the invoice amounts). Simple example:
Payment $10,002
Invoices values:
5001
2932
876
98
21
9923
2069
123
432
765
I would want a way to pull out 5001, 2932 and 2069 from this set.
Being a non-programmer, I find an Excel spreadsheet application easiest to create. Ideas?
You're talking about an NP-Complete problem called Subset-sum.
Basically, this means that in general it is very computationally hard to compute the subset of prices that sums to your grand total. It is, however, very easy to check your answer since you merely sum your answers together.
My guess is that if you want to examine N prices, you're going to need about 2^N cells in Excel to calculate this. The Wikipedia article linked above gives some heuristics for approximating it.
The bottom line is, if you need to do this on a large scale (N in, say, the hundreds or thousands), you should rethink why you need to do this.
If you can find out a way to do it very efficiently, there may be a prize involved.
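A minimal brute-force sketch for small N (this is exactly the exponential search described above, so it is only practical for a few dozen invoices at most):

from itertools import combinations

def find_subset(invoices, payment):
    # Try subsets of increasing size; worst case ~2^N combinations.
    for size in range(1, len(invoices) + 1):
        for combo in combinations(invoices, size):
            if sum(combo) == payment:
                return combo
    return None

invoices = [5001, 2932, 876, 98, 21, 9923, 2069, 123, 432, 765]
print(find_subset(invoices, 10002))   # (5001, 2932, 2069)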
I worked on a very similar Java application that mapped receipts to accounts receivable transactions. We did not try to programmatically link summed receipts to single transactions or vice versa, for a number of reasons. However, we did allow users to do that mapping manually. We just mapped receipt figures to transaction figures that matched; if there were multiple receipts and transactions with the same amount, we only matched when there were the same number of duplicate amounts.
