Progressive non-linear algorithm for an increasing discount

A system has to support 100 users, and the price for support is 3 per user.
A system has to support 10 000 users, and the price for support is 1 per user.
I have to devise an algorithm that gives the per-user price in between, so that total income gradually rises with the number of users.
I tried to multiply the number of users by 0.0002 to get the discount value and I got
300 users * 0.0002 = 0.06 discount, so price for support = 2.94
total income = 300 * 2.94 = 882
5000 users * 0.0002 = 1 discount, so price for support = 2
total income = 5000 * 2 = 10 000
8000 users * 0.0002 = 1.6 discount, so price for support = 1.4
total income = 8000 * 1.4 = 11 200
10 000 users * 0.0002 = 2 discount, so price for support = 1
total income = 10 000 * 1 = 10 000
So you see, after a given point I actually have more users but receive less payment.
I am not a mathematician, and I know this is not really a programming question, but I don't know where else to ask. I would appreciate any help. Thanks!

price = n * (5 - log10(n)) will work for 100 < n < 10000.
Just make sure you're using base-10 log and not natural (base-e) log. If your language doesn't have base-10 log normally, you can calculate it like this:
function log10(x) { return log(x)/log(10); }.
For 100 users, that's 100 * (5 - log10(100)), which gives 100 * (5 - 2), which is 300.
For 1000 users, that's 1000 * (5 - log10(1000)), which gives 1000 * (5 - 3), which is 2000.
For 10000 users, that's 10000 * (5 - log10(10000)), which gives 10000 * (5 - 4), which is 10000.
Let's pick some more random figures.
2500 users: 2500 * (5 - log10(2500)) gives us 2500 * (5 - 3.39794), which is 4005.
6500 users: 6500 * (5 - log10(6500)) gives us 6500 * (5 - 3.81291), which is 7716.
8000 users: 8000 * (5 - log10(8000)) gives us 8000 * (5 - 3.90309), which is 8775.
Should work out about right for what you're modelling.
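The formula above can be sketched in Python. Note that `price = n * (5 - log10(n))` is really the *total* income; the implied per-user price is `5 - log10(n)`:

```python
import math

def support_price(n):
    # Per-user price implied by the formula above: 5 - log10(n)
    # (valid for 100 <= n <= 10000)
    return 5 - math.log10(n)

def total_income(n):
    # n * (5 - log10(n)), the "price" quantity in the answer above
    return n * support_price(n)
```

At the endpoints this gives 3 per user for 100 users and 1 per user for 10 000 users, matching the question.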

Scaling the price per user linearly didn't work, as you showed, but you can try scaling the total income linearly instead.
total income for 100 users = 300
total income for 10000 users = 10000
total income for n users = (n-100) / (10000-100) * (10000-300) + 300
You know that the total income for n users is the per-user support price times the number of users; that means you now have to find the function f(n) such that f(n) * n = (n-100) / (10000-100) * (10000-300) + 300.
To show that the per-user price always decreases while the total income always increases, just show that f'(n) ≤ 0 for 100 ≤ n ≤ 10000.
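A minimal sketch of this linear-interpolation approach (Python, using the endpoints from the question):

```python
def total_income(n):
    # Linear interpolation between (100 users, 300) and (10000 users, 10000)
    return (n - 100) / (10000 - 100) * (10000 - 300) + 300

def price_per_user(n):
    # f(n) such that f(n) * n = total_income(n)
    return total_income(n) / n
```

By construction the total income rises linearly from 300 to 10 000, and the per-user price falls monotonically from 3 to 1.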

Calculate shipping costs on the basis of product weight

My math is not so good; can you guys help me with this?
Problem statement:
Suppose I have 4 books with weights and prices.
Book1, 0.5KG
Book2, 0.8KG
Book3, 1KG
Book4, 0.3KG
I have a base price (shipping cost) based on weight, which is 30 Rs Per 0.5KG.
Now when I select "book 1", the shipping cost will be 30 Rs, but how can I get the shipping cost for book 2, book 3 and book 4?
This isn't really a programming or algorithm question, but anyway:
if
30Rs -> 0.5KG
x -> 0.8Kg
then simply for Book2
x = (30Rs*0.8KG)/0.5KG = 48Rs
similarly for book3 and book4:
book3 = (30Rs*1KG)/0.5KG = 60Rs
book4 = (30Rs*0.3KG)/0.5KG = 18Rs
Another way to solve it: if every 30 Rs corresponds to 0.5 KG, then dividing both sides by 5 gives 6 Rs per 0.1 KG.
Book2 is 0.8 KG, which is 8 times 0.1 KG, so it must cost 8 times 6 Rs: 8 * 6 = 48 Rs. Similarly for Book3 and Book4:
Book3 = 10 * 6 = 60 Rs
Book4 = 3 * 6 = 18 Rs
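The proportional rule works out to a one-liner; a sketch in Python, with `rate` and `per_kg` taken from the question's figures:

```python
def shipping_cost(weight_kg, rate=30.0, per_kg=0.5):
    # Cost scales linearly: `rate` rupees for every `per_kg` kilograms
    return rate * weight_kg / per_kg
```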
If the pricing is in brackets (that is, 0 - 0.5 kg costs 30 Rs, 0.5 - 1 kg costs 60 Rs), then you have to do as follows:
First find how many brackets you have:
weight / bracketSize
For your books, this will be:
Book1: 0.5/0.5 // 1
Book2: 0.8/0.5 // 1.6
Book3: 1/0.5 // 2
Book4: 0.3/0.5 // 0.6
Then, you need to round that value up to the nearest whole number. How you do this will depend on what language you're using, but it's often called Ceiling or ceil:
Book1: Ceiling(1) // 1
Book2: Ceiling(1.6) // 2
Book3: Ceiling(2) // 2
Book4: Ceiling(0.6) // 1
Then multiply by price to get your answer.
Book1: 1 * 30 // 30
Book2: 2 * 30 // 60
Book3: 2 * 30 // 60
Book4: 1 * 30 // 30
In one line:
result = Ceiling(weight / bracketSize) * pricePerBracket
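As a runnable sketch of the bracket rule (Python):

```python
import math

def bracket_shipping(weight, bracket_size=0.5, price_per_bracket=30):
    # Any partially used bracket is charged as a full one, hence the ceiling
    return math.ceil(weight / bracket_size) * price_per_bracket
```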

Algorithm: Fill different baskets

Let's assume I have 3 different baskets, each with a fixed capacity,
and n products which contribute a different value to each basket; you can only pick whole products.
Each product is limited to a maximum amount (i.e. you can pick product A at most 5 times).
Every product adds 0 or more value to every basket, and products come in all kinds of variations.
Now I want a list of all possible combinations of products fitting in the baskets, ordered by accuracy (e.g. if basket 1 is 5% overfull, that combination is 5% less accurate).
Edit: Example
Basket A capacity 100
Basket B capacity 80
Basket C capacity 30
fake products
Product 1 (A: 5, B: 10, C: 1)
Product 2 (A: 20, B: 0, C: 0)
There might be hundreds more products
Best fit with max 5 each would be
5 times Product 1
4 times Product 2
Result
A: 105
B: 50
C: 5
Accuracy: (total filled / total capacity) * 100 = (160 / 210) * 100 ≈ 76.190%
Next would be another combination with less accuracy
Any pointer in the right direction is highly appreciated. Thanks!
Edit:
Instead of the above method, accuracy should be expressed as error, and the list should be in ascending order of error.
Error(Basket x) = (|max_qty(x) - qty_used(x)| / max_qty(x)) * 100
and the overall error should be the weighted average of the errors of all baskets.
Total Error = [Σ (Error(x) * max_qty(x))] / [Σ (max_qty(x))]
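Reading `max_qty(x)` as basket x's capacity and `qty_used(x)` as the amount filled, the error metric from the edit can be sketched as:

```python
def basket_error(capacity, used):
    # Per-basket error in percent: how far the basket is from exactly full
    return abs(capacity - used) / capacity * 100

def total_error(capacities, used):
    # Capacity-weighted average of the per-basket errors
    weighted = sum(basket_error(c, u) * c for c, u in zip(capacities, used))
    return weighted / sum(capacities)
```

For the example combination (A: 105, B: 50, C: 5 against capacities 100, 80, 30) this gives roughly 28.6% total error.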

Quasi-Simple computation in program

I did not know whether I should post this on Math.SE or Stack Overflow, but since it involves code and some basic algorithms, I went for SO.
My question comes from a program that I have to do based on this article:
Article
The problem is that I cannot seem to allocate or understand some of the variables and how they fit together. I personally think this is very sloppy mathematics, and some rigorous stats would probably have benefited the article, but that's just me.
Anyway, this is my pseudo-code/algorithm for the computation, and it works:
/* Algorithm
 *
 * 1. Avg amount of sale - coupon face value
 *    85 - 75 = 10 additional $ spent per redeemed coupon
 *
 * 2. Nbr coupons sold * redemption percentage
 *    3000 * 0.85 = 2550 coupons redeemed
 *
 * 3. Nbr coupons sold * sale price * percent taken by Groupon
 *    3000 * 35 * 0.50 = 52500 Groupon revenue
 *
 * 4. Nbr coupons redeemed * additional $
 *    2550 * 10 = 25500 additional money spent by customers
 *
 * 5. Additional money spent by customers + Groupon revenue
 *    25500 + 52500 = 78000 gross income
 *
 * Expenses
 *
 * 6. Nbr coupons redeemed * avg amount of sale * percent incremental cost of sales
 *    2550 * 85 * 0.40 = 86700 total expense
 *
 * 7. Nbr coupons redeemed / avg number of coupons purchased per customer
 *    2550 / 2 = 1275 customers
 *
 * 8. Nbr customers * percent who are new customers (1 - percent existing customers)
 *    1275 * 0.60 = 765 new customers
 *
 * 9. New customers * percentage of new customers who return
 *    765 * 0.10 = 76.5 new repeat customers
 *
 * 10. Net cost / new repeat customers
 *     8700 / 76.5 ≈ 114 amount paid for each new regular
 *
 */
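For reference, a direct runnable translation of the steps above (Python; all figures are the article's assumptions):

```python
coupons_sold = 3000
face_value = 75
sale_price = 35
avg_sale = 85
redemption_rate = 0.85
groupon_cut = 0.50
incremental_cost = 0.40       # incremental cost of sales
coupons_per_customer = 2
existing_customer_rate = 0.40
repeat_rate = 0.10            # new customers who come back

redeemed = coupons_sold * redemption_rate                 # step 2: 2550
groupon_income = coupons_sold * sale_price * groupon_cut  # step 3: 52500
extra_spend = redeemed * (avg_sale - face_value)          # steps 1+4: 25500
gross_income = groupon_income + extra_spend               # step 5: 78000
expenses = redeemed * avg_sale * incremental_cost         # step 6: 86700
net_cost = expenses - gross_income                        # 8700
customers = redeemed / coupons_per_customer               # step 7: 1275
new_customers = customers * (1 - existing_customer_rate)  # step 8: 765
repeat_customers = new_customers * repeat_rate            # step 9: 76.5
cost_per_regular = net_cost / repeat_customers            # step 10: ~113.7
```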
The question is: where the heck does that 60% come from, and is it a fixed value? Technically 40% + 10% is 50%, and the 40% is the existing customers. Second, what about:
"7. What is the advertising value of having your business promoted to 900,000 people — that’s the number on Groupon’s Chicago list — even if they don’t buy a coupon? $1,000 advertising value."
Why do I need that? I am already comparing how much each new customer costs me with Groupon versus traditional advertising, so why is it there? Do I need it as part of my computation?
It's a good project, but the way the author of the document explains the math is really weird!
The 60% comes from the assumption "4. 40 percent used by existing customers." Implicit seems to be the assumption that the "average number of coupons bought by each customer" does not differ significantly between new and existing customers. This is not mentioned explicitly, but since 2,550 is the number of redeemed coupons and the percentage is multiplied by 2,550 / 2 (assumed numbers of customers associated with these coupons) this seems to be a necessary assumption.
Edit: Sorry, I overlooked your second question. The $1,000 is mentioned only in the revenue but not included in the calculation of the cost. In theory you could subtract it from the cost, but this is only sensible if you would have spent that money on advertising anyway, so that it can be considered a cost external to the deal. It is prudent to simply mention this additional benefit (which you get on top of the new customers) but still count the full cost, since it definitely has to be paid for.

How to find maximum profit while selling some garbage items

I have a problem statement for which I need to write an algorithm. Can somebody help me?
The problem is:
I have iron rods of different lengths, say {26, 103, 59}. I want to sell rods of equal length so that I can earn the maximum profit. There is also a cutting charge, let's say 10 Rs per unit.
Case 1: if the cutting charge is Rs 100 and I sell lengths of 51 feet at a price of 100 per unit length:
103 / 51 = 2 lengths: (51 * 100 * 2) - ((1 * 100) + 200) = 9900
59 / 51 = 1 length: (51 * 100 * 1) - ((8 * 100) + 1 * 100) = 4200
26 / 51 = 0 lengths: (0 * 100) - (26 * 100) = -2600
Now the total profit is 9900 + 4200 - 2600 = 11500.
But if the cutting charge varies, this calculation fails. Can someone tell me how to develop an algorithm to find the maximum profit?
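One way to attack this is brute force over candidate sell lengths: for each integer length, compute the profit and keep the best. A hedged sketch, assuming integer lengths and the costing convention shown in the example (leftover material counts as a loss at the unit price, and each piece cut off incurs one cutting charge):

```python
def profit_for_length(rods, sell_len, unit_price, cut_charge):
    # Profit if every rod is cut into pieces of length sell_len:
    # revenue for sold pieces, minus leftover counted as a loss,
    # minus one cutting charge per cut made.
    total = 0
    for rod in rods:
        pieces = rod // sell_len
        leftover = rod - pieces * sell_len
        cuts = pieces if leftover > 0 else max(pieces - 1, 0)
        total += pieces * sell_len * unit_price - leftover * unit_price - cuts * cut_charge
    return total

def best_sell_length(rods, unit_price, cut_charge):
    # Try every integer candidate length up to the longest rod
    return max(range(1, max(rods) + 1),
               key=lambda L: profit_for_length(rods, L, unit_price, cut_charge))
```

With the question's figures, `profit_for_length([26, 103, 59], 51, 100, 100)` reproduces the 11500 total; `best_sell_length` then searches all lengths for a better one, and the search stays correct when the cutting charge changes.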

Basic Velocity Algorithm?

Given the following dataset for a single article on my site:
Article 1
2/1/2010 100
2/2/2010 80
2/3/2010 60
Article 2
2/1/2010 20000
2/2/2010 25000
2/3/2010 23000
where column 1 is the date and column 2 is the number of pageviews for an article. What is a basic velocity calculation that can be done to determine if this article is trending upwards or downwards for the most recent 3 days?
Caveat: the articles will not know the total number of pageviews, only their own totals. Ideally the result would be a number between 0 and 1. Any pointers to what this class of algorithms is called?
Thanks!
Update: your data actually already is a list of velocities (pageviews/day). The following simply shows how to find the average velocity over the past three days. See my other answer for how to calculate pageview acceleration, which is the real statistic you are probably looking for.
Velocity is simply the change in a value (delta pageviews) over time:
For article 1 on 2/3/2010:
delta pageviews = 100 + 80 + 60
= 240 pageviews
delta time = 3 days
pageview velocity (over last three days) = [delta pageviews] / [delta time]
= 240 / 3
= 80 pageviews/day
For article 2 on 2/3/2010:
delta pageviews = 20000 + 25000 + 23000
= 68000 pageviews
delta time = 3 days
pageview velocity (over last three days) = [delta pageviews] / [delta time]
= 68,000 / 3
= 22,666 + 2/3 pageviews/day
Now that we know the maximum velocity, we can scale all the velocities to get relative velocities between 0 and 1 (or between 0% and 100%):
relative pageview velocity of article 1 = velocity / MAX_VELOCITY
= 80 / (22,666 + 2/3)
~ 0.0035294
~ 0.35294%
relative pageview velocity of article 2 = velocity / MAX_VELOCITY
= (22,666 + 2/3)/(22,666 + 2/3)
= 1
= 100%
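The average-velocity and scaling steps above, as a short Python sketch:

```python
def avg_velocity(daily_pageviews):
    # Mean pageviews per day over the window
    return sum(daily_pageviews) / len(daily_pageviews)

article1 = [100, 80, 60]
article2 = [20000, 25000, 23000]

v1, v2 = avg_velocity(article1), avg_velocity(article2)
max_v = max(v1, v2)
relative1, relative2 = v1 / max_v, v2 / max_v  # scaled into [0, 1]
```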
"Pageview trend" likely refers to pageview acceleration, not velocity. Your dataset actually already is a list of velocities (pageviews/day). Pageviews are non-decreasing values, so pageview velocity can never be negative. The following describes how to calculate pageview acceleration, which may be negative.
PV_acceleration(t1,t2) = (PV_velocity{t2} - PV_velocity{t1}) / (t2 - t1)
("PV" == "Pageview")
Explanation:
Acceleration is simply change in velocity divided by change in time. Since your dataset is a list of page view velocities, you can plug them directly into the formula:
PV_acceleration("2/1/2010", "2/3/2010") = (60 - 100) / ("2/3/2010" - "2/1/2010")
= -40 / 2
= -20 pageviews per day per day
Note the data for "2/2/2010" was not used. An alternate method is to calculate three PV_accelerations (using a date range that goes back only a single day) and averaging them. There is not enough data in your example to do this for three days, but here is how to do it for the last two days:
PV_acceleration("2/2/2010", "2/3/2010") = (60 - 80) / ("2/3/2010" - "2/2/2010")
= -20 / 1
= -20 pageviews per day per day
PV_acceleration("2/1/2010", "2/2/2010") = (80 - 100) / ("2/2/2010" - "2/1/2010")
= -20 / 1
= -20 pageviews per day per day
PV_acceleration_average("2/1/2010", "2/3/2010") = (-20 + -20) / 2
= -20 pageviews per day per day
This alternate method gives the same answer as the endpoint method for article 1 because the acceleration did not change between the two days. Note, though, that with evenly spaced daily data the one-day accelerations telescope, so their average always equals the endpoint calculation; the two approaches only diverge with irregular spacing or if you weight the days differently.
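A minimal Python sketch of both methods:

```python
def pv_acceleration(v1, v2, days_between):
    # Change in pageview velocity over time (pageviews/day^2)
    return (v2 - v1) / days_between

article1 = [100, 80, 60]  # daily velocities, oldest first

# Endpoint method (skips the middle day):
a_endpoint = pv_acceleration(article1[0], article1[-1], len(article1) - 1)

# Day-by-day accelerations, then averaged:
daily = [pv_acceleration(a, b, 1) for a, b in zip(article1, article1[1:])]
a_average = sum(daily) / len(daily)
```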
Here is a link to an article about the 'trending' algorithms used by Reddit, StumbleUpon, Delicious and Hacker News, among others:
http://www.seomoz.org/blog/reddit-stumbleupon-delicious-and-hacker-news-algorithms-exposed
