I am working on the task at http://poj.org/problem?id=1884. There is just one part of it I don't understand:
Adjacent stations on each route are far enough apart to allow a train to accelerate to its maximum velocity before beginning to decelerate.
Does it mean that the train accelerates, then travels at a constant speed before it starts to decelerate, or that it keeps accelerating and, as soon as it reaches maximum velocity, immediately starts to decelerate?
The first one, yes: the train accelerates and then travels at constant speed before it decelerates.
Basically the train accelerates as much as it can, then maintains maximum speed for the journey, then brakes as it approaches the station.
What they meant was: if the stations were too close together, a train would have to start braking before reaching its peak speed in order to stop at the next station, but you don't have that problem here.
From the text:

It remains at that maximum velocity until it begins to decelerate (at the same constant rate) as it approaches the next station.
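To make the resulting computation concrete, here is a small sketch in Python (the names and sample numbers are just illustrative, not from the problem statement) of the travel time between two stations under this reading: accelerate, cruise at maximum velocity, then brake at the same rate.

    def travel_time(distance, vmax, accel):
        """Time to cover `distance` between stations, accelerating at `accel`
        up to `vmax`, cruising, then braking at the same constant rate.
        The statement guarantees `distance` is long enough to reach `vmax`."""
        ramp_time = vmax / accel                 # time spent speeding up (and braking)
        ramp_dist = vmax * vmax / (2 * accel)    # distance covered during each ramp
        cruise_dist = distance - 2 * ramp_dist   # distance covered at full speed
        return 2 * ramp_time + cruise_dist / vmax

    # Hypothetical numbers: stations 2000 m apart, 30 m/s top speed, 1 m/s^2
    print(travel_time(2000, 30, 1))   # 2*30 s ramping + 1100/30 s cruising, about 96.7 s

The guarantee in the statement simply means `cruise_dist` is never negative, so you never need a "triangular" speed profile where braking starts before top speed is reached.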
I am obviously familiar with the texts explaining that it is an average bound, etc., but I am still wondering why the word "amortized" was put there.
Why is "amortized" used in describing algorithm analysis?
Because the computer scientists who thought up the idea were using a financial analogy.
You amortise a significant expenditure (like building a new house) by paying for it over time (perhaps with a mortgage, which has the same root).
Similarly, in amortised analysis of algorithms you pay for a huge and uncommon occurrence (copying an entire vector when it gets full) by spreading its cost over subsequent operations (or over previous operations, in the banker's model).
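A quick way to see the analogy in action is the classic doubling array: a single append occasionally triggers a large copy, but spread over all appends the copying cost stays constant. A minimal sketch (a toy class that only counts copies rather than storing data):

    class DoublingArray:
        """Toy growable array that counts how many element copies regrowth
        would cause, to illustrate amortised O(1) appends."""
        def __init__(self):
            self.capacity = 1
            self.size = 0
            self.copies = 0               # total elements moved during regrowth

        def append(self, value):
            if self.size == self.capacity:
                self.copies += self.size  # the "huge and uncommon" expenditure
                self.capacity *= 2
            self.size += 1

    arr = DoublingArray()
    for _ in range(1_000_000):
        arr.append(0)
    print(arr.copies / arr.size)          # roughly 1: copying cost amortised per append

Each doubling copies everything built so far, but the copy sizes form a geometric series, so the whole sequence of n appends costs O(n) overall, i.e. O(1) amortised per append.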
By now, especially after this post and other similar internet resources, I guess most people have figured out how to easily win at Gabriele Cirulli's game 2048: even manually, by observing simple rules, reaching 2048 is not a problem.
However, losing at this game seems far more challenging than winning! As much as I try, the lowest maximum tile I have ended a game with so far is 16. It seems to me that losing depends on chance much more than winning does. Is there any strategy that can guarantee losing with no tile larger than 8?
(Of course, some of the hints suggested here might help, such as calculating all possible moves for n steps ahead and choosing the combination of moves that maximises the probability of getting the tiles stuck and ending the game. But is there a more principled way to achieve that?)
In the luckiest case you would alternate 2s and 4s; alternating 2s, 4s and 8s should be easier. In fact, I have just managed it with five 8s, seven 4s and four 2s.
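As a sketch of the search idea hinted at in the question (the function names and heuristic here are hypothetical; the move logic is the standard 2048 merge rule, and the heuristic is simply "leave as few empty cells as possible"):

    def slide_row_left(row):
        """Compress and merge one row to the left (standard 2048 merge rule)."""
        tiles = [t for t in row if t]               # drop empty cells
        merged, i = [], 0
        while i < len(tiles):
            if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
                merged.append(tiles[i] * 2)         # equal neighbours merge once
                i += 2
            else:
                merged.append(tiles[i])
                i += 1
        return merged + [0] * (len(row) - len(merged))

    def move(board, direction):
        """Return the board (list of lists) after a 'left', 'right', 'up' or 'down' move."""
        if direction in ('up', 'down'):
            board = [list(col) for col in zip(*board)]   # transpose rows and columns
        if direction in ('right', 'down'):
            board = [row[::-1] for row in board]
        board = [slide_row_left(row) for row in board]
        if direction in ('right', 'down'):
            board = [row[::-1] for row in board]
        if direction in ('up', 'down'):
            board = [list(col) for col in zip(*board)]
        return board

    def greedy_losing_move(board):
        """Pick the legal move that leaves the fewest empty cells,
        i.e. the move most likely to clog the board quickly."""
        best, best_empty = None, None
        for d in ('left', 'right', 'up', 'down'):
            nxt = move(board, d)
            if nxt == board:
                continue                                 # move changes nothing: not legal
            empty = sum(row.count(0) for row in nxt)
            if best_empty is None or empty < best_empty:
                best, best_empty = d, empty
        return best

A depth-1 greedy like this cannot guarantee anything (the random tile spawns still matter), but it captures the principle: always pick the move that clogs the board fastest, and deepen the lookahead if you want to account for the spawn probabilities.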
I developed an algorithm to detect events in the time domain, and I want to know how well the algorithm performs.
The problem is the time duration of the data.
Each file has data with a time duration of hundreds of minutes and I have dozens of files.
Instead of calculating the specificity and the sensitivity of the algorithm over the entire data set, I was thinking of choosing random samples.
My question is:
What is the correct approach to have a valid statistical analysis?
Update:
I am studying sounds in the time domain. Usually the signal has a low-amplitude profile, just noise. Sometimes a sound is generated and there is an increase in the signal amplitude.
The goal of the algorithm is to detect this increase in signal amplitude. Unfortunately, the generation of a sound can be considered random. It is possible for the low-amplitude profile to last for minutes or even hours without a single sound being generated. On the other hand, there can be a sequence of sounds lasting several minutes, with only a few seconds between sound n and sound n+1.
I am concerned with the quality of the detection, i.e. sensitivity and specificity: I want to know whether each generated sound is detected or not, and whether the algorithm reports a "detected" sound when no sound was actually generated.
I do not have any prior information, just what the algorithm gives me.
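One common way to make sensitivity and specificity well defined for continuous recordings is to discretise each sampled segment into fixed-length windows and compare ground-truth event intervals with detected intervals window by window. A rough sketch of that bookkeeping (the function name, window length and example intervals are all made up):

    import numpy as np

    def windowed_confusion(truth_events, detected_events, duration, window=1.0):
        """Discretise a recording of `duration` seconds into fixed windows and
        mark each window positive if any event overlaps it.
        truth_events / detected_events: lists of (start, end) times in seconds.
        Returns (tp, fp, tn, fn) counted over the windows."""
        n = int(np.ceil(duration / window))
        truth = np.zeros(n, dtype=bool)
        detect = np.zeros(n, dtype=bool)
        for start, end in truth_events:
            truth[int(start // window): int(end // window) + 1] = True
        for start, end in detected_events:
            detect[int(start // window): int(end // window) + 1] = True
        tp = np.sum(truth & detect)
        fp = np.sum(~truth & detect)
        tn = np.sum(~truth & ~detect)
        fn = np.sum(truth & ~detect)
        return tp, fp, tn, fn

    # One hypothetical 600-second sampled segment:
    tp, fp, tn, fn = windowed_confusion(
        truth_events=[(12.0, 14.5), (300.2, 301.0)],
        detected_events=[(12.3, 14.0), (450.0, 451.0)],
        duration=600.0,
    )
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

You would then pool these counts over all randomly chosen segments; how many segments to sample for the estimates to be reliable is the statistical question being asked here.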
Imagine that you have ropes which are 5 meters long, and you want to cut each rope into pieces of certain lengths (30 cm, 73 cm), each a certain number of times. I want to write a program that minimizes the total length of the excess rope and tells you how to cut every rope. But I don't know where to start or which algorithm to use. Can you give me some references? Thank you in advance.
What you are looking for is the so-called cutting stock problem.
Start by looking at this Wikipedia article and following its suggested readings. I remember we had this as part of some course back at university (although I can't remember which one), so you could also have a look at Coursera.
It seems like homework, but I can still point you in the right direction. What you have on hand is an example of dynamic programming. From what I understand of your question, you have a special case of the ever-popular knapsack problem, which is in essence an optimization problem of using the space on hand most efficiently, thus reducing waste. Tweak it a bit to your own needs and you should be able to get a solution for your problem.
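As a tiny illustration of that dynamic-programming angle for a single rope (a hypothetical helper that ignores the piece demands across multiple ropes, which is what makes the full cutting stock problem hard):

    def min_waste_per_rope(rope_len, piece_lens):
        """Unbounded-knapsack sketch: find the largest total length that can be
        cut from one rope using any number of each piece size; the rest is waste.
        rope_len and piece_lens are in the same unit (e.g. centimetres)."""
        # reachable[l] is True if exactly l units can be covered by some combination of pieces
        reachable = [False] * (rope_len + 1)
        reachable[0] = True
        for l in range(1, rope_len + 1):
            for p in piece_lens:
                if p <= l and reachable[l - p]:
                    reachable[l] = True
                    break
        best = max(l for l in range(rope_len + 1) if reachable[l])
        return rope_len - best

    # One 500 cm rope, pieces of 30 cm and 73 cm:
    print(min_waste_per_rope(500, [30, 73]))   # waste left by the best cutting pattern (2 cm)

For the full problem you would repeat something like this per rope while tracking how many pieces of each length are still needed, or look at the column-generation approach usually associated with cutting stock.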
The above image represents an article's page views over time. I'm looking for a decent, not too complex, physics- or statistics-based calculation that could tell me (based on the history of the page views) what the current trend of the page views is for the past n days (represented by the blue box).
So basically: over the past 5 days, is this link trending unusually higher than it normally does, and if so, by what degree/magnitude?
Ideally the accepted answer would name a class of algorithm that applies to this problem, along with an example of it applied to the data in the chart above.
thanks!
One approach could be to perform a least-squares fit of the points within the blue box. The trend could then be measured by the difference between the points and the least-squares approximation.
It sounds like you want to compare a short term (5-day) moving average to a longer-term moving average (e.g., something like 90 days).
As a refinement, you might want to do a least-squares linear regression over the longer term, and then compare the shorter term average to the projection you get from that.
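A rough sketch of that refinement (the function name, window lengths and the fake data are all illustrative):

    import numpy as np

    def trend_score(views, short=5, long=90):
        """Compare the mean of the last `short` days with what a least-squares
        linear fit over the previous `long` days would predict for them.
        `views` is a 1-D array of daily page-view counts, oldest first.
        Returns (observed short-term mean) / (predicted mean); values well
        above 1 suggest the page is trending unusually high."""
        views = np.asarray(views, dtype=float)
        recent = views[-short:]
        history = views[-(long + short):-short]
        days = np.arange(len(history))
        slope, intercept = np.polyfit(days, history, 1)       # least-squares line
        future_days = np.arange(len(history), len(history) + short)
        predicted = slope * future_days + intercept
        return recent.mean() / max(predicted.mean(), 1e-9)

    # Hypothetical data: 115 ordinary days followed by a 5-day spike.
    rng = np.random.default_rng(0)
    baseline = rng.poisson(100, 115)
    spike = rng.poisson(250, 5)
    print(trend_score(np.concatenate([baseline, spike])))     # well above 1

A ratio (or a z-score of the recent points against the residuals of the fit) well above 1 would be your "trending unusually higher" signal, and its size gives the degree/magnitude asked for.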