Backtesting: Which is better? Sliding Window or Expanding Window? - arima

I know that there are two ways to do backtesting, sliding window and expanding window.
In practice, which method is better? What are the pros and cons of each method?
My guess is that if the time series pattern depends more on recent events, then the sliding method is better.
Sliding window, as in the figure below.
Expanding window, as in the figure below.
Source: https://www.kaggle.com/cworsnup/backtesting-cross-validation-for-timeseries/notebook

Which is better? The short answer: it depends on your time series data. Sliding windows and expanding windows both have their use cases.
When backtesting high-frequency data such as daily or hourly time series, the sliding-window approach is usually better. Conversely, if your historical points are limited, as with weekly, monthly, or quarterly series, the expanding window performs better.
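The two schemes differ only in where the training window starts: expanding keeps the beginning of the series, sliding drops old points to keep a fixed length. A minimal sketch in plain Python (the function name and parameters are made up for illustration) that generates train/test index ranges for both:

```python
def backtest_splits(n_obs, initial_train, test_size, window="expanding"):
    """Yield (train_indices, test_indices) pairs for time-series backtesting.

    window="expanding": training set grows from the start of the series.
    window="sliding":   training set keeps a fixed length and moves forward.
    """
    start = 0
    train_end = initial_train
    while train_end + test_size <= n_obs:
        if window == "sliding":
            start = train_end - initial_train  # fixed-length window
        yield (list(range(start, train_end)),
               list(range(train_end, train_end + test_size)))
        train_end += test_size

# Example: 10 observations, initial training window of 4, test blocks of 2
expanding = list(backtest_splits(10, 4, 2, window="expanding"))
sliding = list(backtest_splits(10, 4, 2, window="sliding"))
```

With an expanding window the final split trains on observations 0-7; with a sliding window it trains on 4-7, always four points, which is why sliding suits long high-frequency series and expanding suits short ones.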
Reference: Omphalos

Related

Pharo: How to increase MouseMoveEvent Frequency?

In the Pharo book there is an example for a Paint Canvas.
The problem is that the frequency at which mouse move events are passed to the handler is rather low, so you cannot draw continuous paths if you move the mouse too quickly.
Is there some way to increase the update frequency for a morph? In Squeak, there is a SketchMorphEditor which does not have that problem, but I have not figured out why yet.
I am using Pharo 5.0.
As far as I know there is no way to increase the sampling rate. Even if it could be done, it would be a very bad idea for several reasons.
First, linear interpolation yields fairly good results (which can be improved with techniques like anti-aliasing, if necessary):
Second, we cannot rely on the sampling rate being the same on every machine, so the results would not be consistent. And third, since I plan to use a gesture recognizer, algorithms like the $1 Recognizer do not rely on sampling rates and work surprisingly well.
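The linear-interpolation fix mentioned above is straightforward: insert intermediate points between consecutive mouse samples so the drawn path has no gaps. A language-neutral sketch (Python stand-in; the function name and the `max_gap` threshold are made up):

```python
import math

def interpolate_path(points, max_gap=4.0):
    """Insert evenly spaced points wherever consecutive samples are far apart."""
    if not points:
        return []
    result = [points[0]]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dist = math.hypot(x1 - x0, y1 - y0)
        steps = max(1, math.ceil(dist / max_gap))  # enough steps to keep gaps small
        for i in range(1, steps + 1):
            t = i / steps
            result.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return result

# Two samples 10 px apart with max_gap=4 -> two intermediate points inserted
path = interpolate_path([(0.0, 0.0), (10.0, 0.0)], max_gap=4.0)
```

Each pair of consecutive output points is then at most `max_gap` apart, so drawing short segments (or stamping the brush) at every point yields a continuous stroke regardless of how sparse the mouse samples were.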

Adding time gap and partially concurrent transitions in Sequential Transition JavaFX

I would like to repeat the same (or a similar) JavaFX animation pattern: show a set of buttons, erase them from the screen after a certain duration, and show them again after a certain duration. After some brief research I learned about SequentialTransition, which encapsulates other transition objects and performs them sequentially (as the name suggests); a good, easy example is at https://docs.oracle.com/javafx/2/api/javafx/animation/SequentialTransition.html. There are two things I need to figure out when using this tool, though:
I do not want all encapsulated transitions to happen sequentially: I want a few buttons to appear and disappear together, which means a few FadeTransition objects that must run concurrently rather than one after another.
I want to add time gaps between transitions (for instance, wait 5 seconds before the buttons start fading, then wait 3 seconds before they reappear, and so on). What is a conventional way to add time gaps within a SequentialTransition (or any JavaFX transition, for that matter, since Thread.sleep() blocks the event thread and thus is not suitable)?
Any pointers regarding the two issues would be appreciated.
Regarding the mixing of transitions: a ParallelTransition can be nested inside a SequentialTransition, or you can start parallel transitions right after the sequential transition finishes using an onFinished event handler. For the time gaps, use a PauseTransition.

performance of layered canvases vs manual drawImage()

I've written a small graphics engine for my game that has multiple canvases in a tree(these basically represent layers.) Whenever something in a layer changes, the engine marks the affected layers as "soiled" and in the render code the lowest affected layer is copied to its parent via drawImage(), which is then copied to its parent and so on up to the root layer(the onscreen canvas.) This can result in multiple drawImage() calls per frame but also prevents rerendering anything below the affected layer. However, in frames where nothing changes no rendering or drawImage() calls take place, and in frames where only foreground objects move, rendering and drawImage() calls are minimal.
I'd like to compare this to using multiple onscreen canvases as layers, as described in this article:
http://www.ibm.com/developerworks/library/wa-canvashtml5layering/
In the onscreen canvas approach, we handle rendering on a per-layer basis and let the browser handle displaying the layers on screen properly. From the research I've done and everything I've read, this seems to be generally accepted as likely more efficient than handling it manually with drawImage(). So my question is, can the browser determine what needs to be re-rendered more efficiently than I can, despite my insider knowledge of exactly what has changed each frame?
I already know the answer to this question is "Do it both ways and benchmark." But in order to get accurate data I need real-world application, and that is months away. By then if I have an acceptable approach I will have bigger fish to fry. So I'm hoping someone has been down this road and can provide some insight into this.
The browser cannot determine anything when it comes to the canvas element and its rendering, because canvas is a passive element: everything in it is rendered by user code via JavaScript. The only thing the browser does is pipe what's on the canvas to the display (and, more annoyingly, clear it from time to time when its bitmap needs to be re-allocated).
There is unfortunately no golden rule for the best optimization, as this varies from case to case. Many techniques could be mentioned, but they are merely tools; you still have to figure out the right tool, or the right combination of tools, for your specific case. Layering may be good in one case and bring nothing to another.
Optimization in general is an in-depth analysis and breakdown of patterns specific to the scenario, which are then isolated and optimized. The process is often experiment, benchmark, re-adjust, repeated over and over. Experience reduces this process to a minimum, but even with experience the specifics come in such a variety of combinations that they still require fine-tuning from case to case (given they are not identical).
Even if you find a good recipe for your current project, there is no guarantee it will work optimally for your next one. That is one reason no one can give an exact answer to this question.
However, when it comes to canvas, what you want is a minimum of clear operations and a minimum of area to redraw (whether via drawImage or shapes). The point of layers is to group elements together in service of that goal.
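The dirty-flag scheme the question describes (soil a layer and its ancestors on change, then composite only soiled layers upward) can be sketched independently of the canvas API. This is a Python stand-in, not the asker's actual engine; the composite counter stands in for the per-frame drawImage() calls:

```python
class Layer:
    """A node in the layer tree; drawing into a layer soils it and its ancestors."""
    def __init__(self, name, parent=None):
        self.name, self.parent, self.soiled = name, parent, False

    def mark_soiled(self):
        node = self
        while node and not node.soiled:  # stop early once ancestors are already dirty
            node.soiled = True
            node = node.parent

def render(layers):
    """Composite only soiled layers into their parents; return the composite count."""
    composites = 0
    for layer in layers:                 # assume child-before-parent order
        if layer.soiled and layer.parent:
            composites += 1              # stands in for parent.drawImage(layer)
            layer.soiled = False
    if layers and layers[-1].soiled:     # root draws straight to screen
        layers[-1].soiled = False
    return composites

root = Layer("screen")
bg = Layer("background", root)
fg = Layer("foreground", root)
order = [bg, fg, root]

fg.mark_soiled()
first_frame = render(order)    # only the foreground is recomposited
second_frame = render(order)   # nothing changed, so no work at all
```

This shows the property the asker relies on: a foreground-only change costs one composite, and an idle frame costs zero, whereas with stacked onscreen canvases the browser decides the compositing cost itself.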

Time remaining from NSProgressIndicator

I am developing an application in Cocoa. I need to calculate the time remaining from an NSProgressIndicator. Is that possible?
Thanks in advance.
This is a "how long is a piece of string" question. Without knowing more about what your app is doing, there is no way we can answer it sufficiently.
You are responsible for updating the progress indicator to give the user an idea of how far through a task you are. How you obtain that information (if it is obtainable) will determine how you update the progress indicator.
A progress indicator has no concept of time. If it isn't indeterminate, it has a minimum value, current value, and maximum value. That's pretty much it.
Tracking the current value, the time you're taking, and your estimate for how long you have yet to take is your job.
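Since the estimate is your job, the usual approach is to record when the task started and extrapolate linearly from the fraction complete. A minimal sketch (a hypothetical helper in Python, not a Cocoa API) using the same minimum/current/maximum values the indicator holds:

```python
def estimated_time_remaining(start_time, now, current, minimum, maximum):
    """Linear extrapolation: elapsed / fraction_done minus elapsed so far."""
    done = (current - minimum) / (maximum - minimum)
    if done <= 0:
        return None  # no progress yet, so no basis for an estimate
    elapsed = now - start_time
    return elapsed * (1 - done) / done

# 30 seconds elapsed and 25% done -> about 90 seconds remaining
eta = estimated_time_remaining(0.0, 30.0, 25, 0, 100)
```

In practice you would smooth this (e.g. average the rate over a recent window), since raw linear extrapolation jumps around when progress is uneven.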

Any name for this concept?

Say, we have a program that gets user input or any other unpredictable events at arbitrary moments of time.
For each kind of event the program should perform some computation or access a resource, and this is time-consuming enough to matter. The program should output a result as fast as possible. If new events arrive, it may be acceptable to drop previous computations and take up new ones.
To complicate it further, some computations/resource access might be interdependent, i.e. produce data that can be used in other computations.
What's important we know the pattern in which these events usually occur. For example: their relative frequency with respect to each other, or a common order and time intervals in which they happen.
The task is to make an algorithm which deals with the problem in the most statistically efficient way. Approaches yielding sub-optimal solutions can be more than sufficient.
Is there a concept which embraces designing such algorithms?
Example:
A tabbed internet browser.
When told to load different web pages in several tabs, it should decide whether to load the page in the active tab with higher priority, whether to render just the visible part of the page or pre-render the full page, and in the latter case what to do first: pre-render the whole page for the active tab, or render the other tabs instead, and so on.
(I know nothing about how browsers actually work, but assuming it is this way won't hurt)
I think scheduling algorithms deal with this kind of scenario.
What you're describing is a prioritizing application scheduler. You would need to be more specific to determine which algorithm would be best, but here's a list that you might find useful.
Some keywords to search for: scheduling with preemption, prefetching, double buffering.
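The "scheduling with preemption" idea can be sketched with a priority queue, where a newly arrived event can also drop pending work it makes obsolete, matching the "drop previous computations" requirement in the question. The task names and priorities below are invented for illustration:

```python
import heapq

class PreemptiveScheduler:
    """Keeps pending tasks in a min-heap keyed by priority (lower runs sooner)."""
    def __init__(self):
        self._heap = []
        self._counter = 0   # tie-breaker preserves arrival order at equal priority

    def submit(self, priority, task, supersedes=None):
        """Queue `task`; optionally drop pending tasks it makes obsolete."""
        if supersedes:
            self._heap = [e for e in self._heap if e[2] not in supersedes]
            heapq.heapify(self._heap)
        heapq.heappush(self._heap, (priority, self._counter, task))
        self._counter += 1

    def run(self):
        """Drain the queue in priority order and return the execution order."""
        order = []
        while self._heap:
            _, _, task = heapq.heappop(self._heap)
            order.append(task)
        return order

s = PreemptiveScheduler()
s.submit(2, "prerender-tab-2")
s.submit(2, "prerender-tab-3")
s.submit(0, "render-active-tab-visible")      # arrives later but runs first
s.submit(1, "render-active-tab-rest",
         supersedes={"prerender-tab-2"})      # tab 2 closed, so drop its work
run_order = s.run()
```

The statistical part of the question, exploiting known event frequencies, would then go into how the priorities are assigned, e.g. giving historically frequent or time-critical event kinds lower priority numbers.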
I don't know a lot about it, but this sounds like something the reactor pattern may be used for.