Given the following dataset for a single article on my site:
Article 1
2/1/2010 100
2/2/2010 80
2/3/2010 60
Article 2
2/1/2010 20000
2/2/2010 25000
2/3/2010 23000
where column 1 is the date and column 2 is the number of pageviews for an article. What is a basic velocity calculation that can be done to determine if this article is trending upwards or downwards for the most recent 3 days?
Caveats: each article only knows its own pageview totals, not the site-wide totals. Ideally the result would be a number between 0 and 1. Any pointers to what this class of algorithms is called?
thanks!
update: Your data actually already is a list of velocities (pageviews/day). The following answer simply shows how to find the average velocity over the past three days. See my other answer for how to calculate pageview acceleration, which is the real statistic you are probably looking for.
Velocity is simply the change in a value (delta pageviews) over time:
For article 1 on 2/3/2010:
delta pageviews = 100 + 80 + 60
= 240 pageviews
delta time = 3 days
pageview velocity (over last three days) = [delta pageviews] / [delta time]
= 240 / 3
= 80 pageviews/day
For article 2 on 2/3/2010:
delta pageviews = 20000 + 25000 + 23000
= 68000 pageviews
delta time = 3 days
pageview velocity (over last three days) = [delta pageviews] / [delta time]
= 68,000 / 3
= 22,666 + 2/3 pageviews/day
Now that we know the maximum velocity, we can scale all the velocities to get relative velocities between 0 and 1 (or between 0% and 100%):
relative pageview velocity of article 1 = velocity / MAX_VELOCITY
= 80 / (22,666 + 2/3)
~ 0.0035294118
~ 0.35294118%
relative pageview velocity of article 2 = velocity / MAX_VELOCITY
= (22,666 + 2/3)/(22,666 + 2/3)
= 1
= 100%
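For reference, here is a minimal Python sketch of the same averaging and scaling; the function and variable names are just for illustration:

def average_velocity(daily_pageviews):
    # Each daily figure is already a velocity (pageviews/day); average over the window.
    return sum(daily_pageviews) / len(daily_pageviews)

article1 = [100, 80, 60]
article2 = [20000, 25000, 23000]

v1 = average_velocity(article1)   # 80.0 pageviews/day
v2 = average_velocity(article2)   # ~22666.67 pageviews/day

max_velocity = max(v1, v2)
relative1 = v1 / max_velocity     # ~0.0035 (about 0.35%)
relative2 = v2 / max_velocity     # 1.0 (100%)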
"Pageview trend" likely refers to pageview acceleration, not velocity. Your dataset actually already is a list of velocities (pageviews/day). Pageviews are non-decreasing values, so pageview velocity can never be negative. The following describes how to calculate pageview acceleration, which may be negative.
PV_acceleration(t1,t2) = (PV_velocity{t2} - PV_velocity{t1}) / (t2 - t1)
("PV" == "Pageview")
Explanation:
Acceleration is simply change in velocity divided by change in time. Since your dataset is a list of page view velocities, you can plug them directly into the formula:
PV_acceleration("2/1/2010", "2/3/2010") = (60 - 100) / ("2/3/2010" - "2/1/2010")
= -40 / 2
= -20 pageviews per day per day
Note the data for "2/2/2010" was not used. An alternate method is to calculate three PV_accelerations (each over a date range that goes back only a single day) and average them. There is not enough data in your example to do this for three days, but here is how to do it for the last two days:
PV_acceleration("2/3/2010", "2/2/2010") = (60 - 80) / ("2/3/2010" - "2/2/2010")
= -20 / 1
= -20 pageviews per day per day
PV_acceleration("2/2/2010", "2/1/2010") = (80 - 100) / ("2/2/2010" - "2/1/2010")
= -20 / 1
= -20 pageviews per day per day
PV_acceleration_average("2/1/2010", "2/3/2010") = (-20 + -20) / 2
= -20 pageviews per day per day
This alternate method did not make a difference for the article 1 data because the page view acceleration did not change between the two days, but it will make a difference for article 2.
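Here is a short Python sketch of both methods, assuming the same three data points; the helper name is illustrative only:

def pv_acceleration(v1, v2, days_between):
    # Change in pageview velocity divided by change in time.
    return (v2 - v1) / days_between

# Endpoint method: only the first and last days are used.
endpoint = pv_acceleration(100, 60, 2)   # -20.0 pageviews/day/day

# Day-by-day method: one-day accelerations, then their average.
a1 = pv_acceleration(100, 80, 1)         # -20.0
a2 = pv_acceleration(80, 60, 1)          # -20.0
average = (a1 + a2) / 2                  # -20.0 pageviews/day/day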
Just a link to an article about the 'trending' algorithms that Reddit, StumbleUpon, and Hacker News (among others) use.
http://www.seomoz.org/blog/reddit-stumbleupon-delicious-and-hacker-news-algorithms-exposed
My math is not so good, but can you help me with this?
Problem statement:
Suppose I have 4 books with the following weights:
Book1, 0.5KG
Book2, 0.8KG
Book3, 1KG
Book4, 0.3KG
I have a base price (shipping cost) based on weight, which is 30 Rs per 0.5KG.
Now when I select "book 1", the shipping cost will be 30 Rs, but how can I get the shipping cost for book 2, book 3, and book 4?
It's not really related to any programming or algorithm, but anyway:
if
30Rs -> 0.5KG
x -> 0.8Kg
then simply for Book2
x = (30Rs*0.8KG)/0.5KG = 48Rs
similarly for book3 and book4:
book3 = (30Rs*1KG)/0.5KG = 60Rs
book4 = (30Rs*0.3KG)/0.5KG = 18Rs
Another way to solve it: if every 30 Rs corresponds to 0.5KG, then dividing each side by 5 gives 6 Rs per 0.1KG.
Book2 is 0.8KG, which is 8 times 0.1KG, so it must cost 8 times 6 Rs: 8 * 6 = 48 Rs. Similarly for Book3 and Book4:
Book3 = 10 * 6 = 60 Rs
Book4 = 3 * 6 = 18 Rs
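In code, the proportional rule is a one-liner; a minimal Python sketch (the function name and defaults are just for illustration):

def shipping_cost(weight_kg, base_price=30, base_weight=0.5):
    # Price scales linearly with weight: 30 Rs per 0.5 KG.
    return base_price * weight_kg / base_weight

for w in (0.5, 0.8, 1.0, 0.3):
    print(w, shipping_cost(w))   # 30.0, 48.0, 60.0, 18.0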
If the pricing is in brackets, that is the cost for 0 - 0.5kg is 30RS, and 0.5 - 1kg is 60RS, then you have to do as follows:
First find how many brackets you have:
weight / bracketSize
For your books, this will be:
Book1: 0.5/0.5 // 1
Book2: 0.8/0.5 // 1.6
Book3: 1/0.5 // 2
Book4: 0.3/0.5 // 0.6
Then, you need to round that value up to the nearest whole number. How you do this will depend on what language you're using, but it's often called Ceiling or ceil:
Book1: Ceiling(1) // 1
Book2: Ceiling(1.6) // 2
Book3: Ceiling(2) // 2
Book4: Ceiling(0.6) // 1
Then multiply by price to get your answer.
Book1: 1 * 30 // 30
Book2: 2 * 30 // 60
Book3: 2 * 30 // 60
Book4: 1 * 30 // 30
In one line:
result = Ceiling(weight / bracketSize) * pricePerBracket
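A quick Python sketch of the bracketed version, under the same assumptions (30 per 0.5 KG bracket); the function name is illustrative:

import math

def bracket_shipping_cost(weight_kg, bracket_size=0.5, price_per_bracket=30):
    # Round the number of brackets up, then multiply by the bracket price.
    return math.ceil(weight_kg / bracket_size) * price_per_bracket

for w in (0.5, 0.8, 1.0, 0.3):
    print(w, bracket_shipping_cost(w))   # 30, 60, 60, 30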
I'm in need of some kind of algorithm that I sadly can't figure out on my own.
My biggest problem is that I have no good way to describe the problem... :/
I will try like this:
Imagine you have a racing game where everyone can try to be the fastest on a track or map. Every map is worth 100 points in total. If someone finishes a map in some amount of time, he gets a record in a database. If the player is the first and only player to finish this map, he earns all 100 points for the map.
Now, that's easy ;) but...
Now another player finishes the map. Let's imagine the first player finishes in 50 seconds and the second player finishes in 55 seconds, so a bit slower. I now need a calculation depending on both records in the database. Each of the two players now earns a share of the 100 points, the faster player a bit more than the slower player. If they finished in the exact same time, they would each get 50 of the 100 points, but as the first one is slightly faster, he should now earn something around 53 points and the slower player just 47.
I started to calculate this like this:
The sum of both records is 105 seconds; the faster player took 50/105 of this, so he earns 100-(50/105*100) points and the slower player 100-(55/105*100) points. The key is that the points distributed among the players must always sum to 100 in total. This works for 2 players, but it breaks for 3 or more.
For example:
Player 1 : 20 seconds
Player 2 : 20 seconds
Player 3 : 25 seconds
Calculation would be:
Player 1: 100-(20/65*100) = 69 points
Player 2: 100-(20/65*100) = 69 points
Player 3: 100-(25/65*100) = 61 points
This would no longer add up to 100 points in total.
Fair would be something around these values:
Player 1 & 2 (same time) = 35 points
Player 3 = 30 points
My problem is I can't figure out an algorithm which solves this.
And I need the same algorithm to work for any number of players. Can someone help with an idea? I don't need a complete, finished algorithm; maybe just a hint as to which step my idea goes wrong. Maybe the sum of all times is already a bad starting point.
Thx in advance :)
We can give each player points proportional to the reciprocal of their time.
One player with t seconds gets 100 × (1/t) / (1/t) = 100 points.
Of the two players, the one with 50 seconds gets 100 × (1/50) / (1/50 + 1/55) ≈ 52.4, and the one with 55 gets 100 × (1/55) / (1/50 + 1/55) ≈ 47.6.
Of the three players, the ones with 20 seconds get 100 × (1/20) / (1/20 + 1/20 + 1/25) ≈ 35.7, and the one with 25 seconds gets 100 × (1/25) / (1/20 + 1/20 + 1/25) ≈ 28.6.
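A minimal Python sketch of this reciprocal weighting (the function name is just for illustration):

def points_by_reciprocal(times, total=100):
    # Each player's share is proportional to 1/time, so faster players earn more.
    weights = [1 / t for t in times]
    scale = total / sum(weights)
    return [w * scale for w in weights]

print(points_by_reciprocal([50, 55]))       # [~52.38, ~47.62]
print(points_by_reciprocal([20, 20, 25]))   # [~35.71, ~35.71, ~28.57]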
Simple observation: Let the sum of times for all players be S. A person with lower time t would have a higher value of S-t. So you can reward points proportional to S-t for each player.
Formula:
Let the times for N players be a, b, c, ..., m, n. The total sum is S = a+b+c+...+m+n. Then the score for a given player would be
score = [S - (player's time)] / [(N-1)*S] * 100
You can easily see that, using this formula, the sum of the scores of all players will always be 100.
Example 1:
S = 50 + 55 = 105, N-1 = 2-1 = 1
Player 1 : 50 seconds => score = ((105-50)/[1*105])*100 = 52.38
Player 2 : 55 seconds => score = ((105-55)/[1*105])*100 = 47.62
Similarly, for your second example,
S = 20 + 20 + 25 = 65
N - 1 = 3 - 1 = 2
For Player 1, (S-t) = 65-20 = 45
Player 1's score => (45/(2*65))*100 = 34.6
Player 2 => same as Player 1
For Player 3, (S-t) = 65-25 = 40
Player 3's score => (40/(2*65))*100 = 30.8
This method only divides at the final step (the intermediate values are just sums and differences), so floating point error does not accumulate during the calculation.
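A small Python sketch of this formula (illustrative names; it assumes at least two players, since N - 1 appears in the denominator):

def points_by_remaining_time(times, total=100):
    # score_i = (S - t_i) / ((N - 1) * S) * total, which always sums to `total`.
    s = sum(times)
    n = len(times)
    return [total * (s - t) / ((n - 1) * s) for t in times]

print(points_by_remaining_time([50, 55]))       # [~52.38, ~47.62]
print(points_by_remaining_time([20, 20, 25]))   # [~34.62, ~34.62, ~30.77]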
I'm using Stata to estimate Value-at-Risk (VaR) with the historical simulation method. Basically, I will create a rolling window of 100 observations to estimate VaR for the next 250 days (repeated 250 times). Hence, as far as I know, Stata's rolling command for time series would be useful in this case. Here is the process:
Input: 350 values
1. Sort the first 100 values in ascending order (by magnitude).
2. Take the 5th smallest value in each window.
3. Repeat 250 times.
Output: a list of these 5th-smallest values (250 in total).
Sounds simple, but I cannot get it right. This was my attempt:
program his,rclass
sort lnreturn
return scalar actual=lnreturn in 5
end
tsset stt
time variable: stt, 1 to 350
delta: 1 unit
rolling actual=r(actual), window(100) saving(C:\result100.dta, replace) : his
(running his on estimation sample)
And the result is:
Start end actual
1 100 -.047856
2 101 -.047856
3 102 -.047856
4 103 -.047856
.... ..... ......
251 350 -.047856
What I want is 250 different 5th values in the "actual" column, not the same value repeated like that.
If I understand this correctly, you want the 5th percentile of values in a window of 100. That should yield to summarize, detail or centile. I see no need to write a program.
Your bug is that your program his calculates the same thing each time it is called. There is no communication about windows other than what is explicit in your code. It is like saying
move here: now add 2 + 2
move there: now add 2 + 2
move to New York: now add 2 + 2
The result is invariant to your supposed position.
Note that I doubt that
return scalar actual=lnreturn in 5
really is your code. lnreturn[5] should work.
UPDATE: You don't even need rolling here. Looping over the data is easy enough. The data in this example are clearly fake.
clear
* sandpit
set obs 500
set seed 2803
gen y = ceil(exp(rnormal(3,2)))
l y in 1/5
* initialise
gen p5 = .
* rolling windows of length 100: 1..100, 2..101, ...
quietly forval j = 1/401 {
local J = `j' + 99
su y in `j'/`J', detail
replace p5 = r(p5) in `j'
}
* check first calculation
su y in 1/100, detail
l in 1/5
The problem I'm trying to solve: calculate the current average velocity of some data series where the data points are unevenly spaced. For example, calculating the current speed of an upload, where the 'amount uploaded' signals arrive unevenly:
t = 0, sent = 0
t = 5, sent = 10
t = 6, sent = 12
t = 9, sent = 20
(last - first) / (time delta between first and last)
And that would be exactly the average velocity.
Unless you forgot to tell us some details, you do not need the data points in the middle.
You can calculate the average per time unit by taking the delta of the new values and the previous values.
And if you want the average over multiple points, you can calculate the averages between several points, and then take the average of those averages.
For example:
Current average:
t34 = 9 - 6 = 3
sent34 = 20 - 12 = 8
average34 = 8 / 3 = 2.67
Average of last two time slots:
t23 = 6 - 5 = 1
sent23 = 12 - 10 = 2
average23 = 2 / 1 = 2
average234 = (2 + 2.67) / 2 = 2.33
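A short Python sketch of the overall (last - first) / (time delta) average and the latest-interval rate, using the example data; the function names are only for illustration:

def current_speed(samples):
    # samples: list of (time, total_sent) pairs; use the last two points for the latest rate.
    (t1, s1), (t2, s2) = samples[-2], samples[-1]
    return (s2 - s1) / (t2 - t1)

def average_speed(samples):
    # Overall average: total amount sent divided by total elapsed time.
    (t0, s0), (tn, sn) = samples[0], samples[-1]
    return (sn - s0) / (tn - t0)

data = [(0, 0), (5, 10), (6, 12), (9, 20)]
print(current_speed(data))   # 8 / 3 ≈ 2.67
print(average_speed(data))   # 20 / 9 ≈ 2.22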
Just rescale latest results
For your example:
t = 0, sent = 0
t = 5, sent = 10
t = 6, sent = 12
t = 9, sent = 20
CurrentSpeed = (20 -12) / (9 - 6) = 8/3 = 2.666666
You may use a different rescale interval size to smooth out changes in velocity (e.g., when the connection is "lost" and then "restored").
The standard way of calculating a velocity from noisy data is to apply a Kalman filter.
Given a set of tasks:
T1(20,100) T2(30,250) T3(100,400) (execution time, deadline = period)
Now I want to constrain the deadlines as Di = f * Pi, where Di is the new deadline for the ith task, Pi is the original period of the ith task, and f is the factor I want to figure out. What is the smallest value of f such that the tasks will continue to meet their deadlines using a rate monotonic scheduler?
This schedule will repeat (synchronize) every 2000 time units. During this period:
T1 must run 20 times, requiring 400 time units.
T2 must run 8 times, requiring 240 time units.
T3 must run 5 times, requiring 500 time units.
Total is 1140 time units per 2000 time unit interval.
f = 1140 / 2000 = 0.57
This assumes long-running tasks can be interrupted and resumed, to allow shorter-running tasks to run in between. Otherwise there will be no way for T1 to meet its deadline once T3 has started.
The updated deadlines are:
T1(20,57)
T2(30,142.5)
T3(100,228)
Treated as new periods, these repeat every 1140 time units (the least common multiple of 57, 142.5, and 228), and require all 1140 of those time units to complete.
A small simplification: when calculating the factor, the period length cancels out. This means you don't really need to calculate the repeat period to get the factor:
Period = 2000
Required time = (Period / 100) * 20 + (Period / 250) * 30 + (Period / 400) * 100
f = Required time / Period = 20 / 100 + 30 / 250 + 100 / 400 = 0.57
f = Sum(Duration[i] / Period[i])
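In Python, this works out to a couple of lines (a sketch under the assumptions above):

tasks = [(20, 100), (30, 250), (100, 400)]   # (execution time, period)

# f is the total utilisation: sum of execution time / period over all tasks.
f = sum(c / p for c, p in tasks)             # 0.57

new_deadlines = [f * p for _, p in tasks]    # [57.0, 142.5, 228.0]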
To calculate the period, you could do this:
Period(T1,T2) = lcm(100, 250) = 500
Period(T1,T2,T3) = lcm(500, 400) = 2000
where lcm(x,y) is the Least Common Multiple.
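For completeness, a small Python sketch of the repeat period and the demand over it (math.lcm needs Python 3.9+):

from math import lcm

tasks = [(20, 100), (30, 250), (100, 400)]   # (execution time, period)

hyperperiod = lcm(*(p for _, p in tasks))    # lcm(100, 250, 400) = 2000

# Total demand over one repeat period: jobs per period times execution time.
demand = sum((hyperperiod // p) * c for c, p in tasks)   # 400 + 240 + 500 = 1140

f = demand / hyperperiod                     # 0.57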