What is the use of an overlapping window in Spark? - spark-streaming

To my knowledge, if the window duration and the slide duration are equal we have a non-overlapping window, called a tumbling window; if the slide duration is less than the window duration, the windows overlap (a sliding window). What are the pros and cons of each?
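For illustration, here is a minimal PySpark Structured Streaming sketch of the two kinds of window the question describes; the rate source, the column name, and the 10-minute/5-minute durations are made up for the example.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, window

spark = SparkSession.builder.appName("window-demo").getOrCreate()

# Built-in test source: emits rows with a "timestamp" column.
events = (spark.readStream
               .format("rate")
               .option("rowsPerSecond", 10)
               .load())

# Tumbling window: no slide duration given (it defaults to the window
# duration), so every event belongs to exactly one 10-minute window.
tumbling = events.groupBy(window(col("timestamp"), "10 minutes")).count()

# Sliding (overlapping) window: slide < window, so every event is counted
# in windowDuration / slideDuration = 2 overlapping windows.
sliding = events.groupBy(window(col("timestamp"), "10 minutes", "5 minutes")).count()
```

Roughly speaking, a tumbling window is cheaper because each event contributes to only one window, while an overlapping window updates results more frequently and smooths them across window boundaries at the cost of each event being aggregated several times.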

Related

Camera and screen sync

Problem Description:
We have a camera that is sending video of a live sports game at 30 frames per second.
On the other side we have a screen that immediately displays every frame as it arrives.
Assumptions
* Frames will arrive in order.
1. What will be the experience for a person watching the screen?
2. What can we do in order to improve it?
Your playback will have a highly variable framerate, which will cause visible artifacts during any smooth movement ...
To remedy this you need to implement an image FIFO that covers more time than your worst delay difference (ideally at least 2x more). So if you have a 300 ms - 100 ms delay difference and 30 fps, then the minimal FIFO size is:
n = 2 * (300-100) * 0.001 * 30 = 12 images
Now the playback should work like this:
init playback
Simply start collecting images into the FIFO until the FIFO is half full (i.e. it holds enough images to cover the biggest delay difference).
playback
Any incoming image is inserted into the FIFO at the time it is received (unless the FIFO is full, in which case you wait until there is room for the new image, or you skip the frame). Meanwhile, in some thread or timer running in parallel, you fetch an image from the FIFO every 1/30 of a second and render it (if the FIFO is empty you reuse the last image, and you can even go back to step #1 again).
playback stop
Once the FIFO has been empty for longer than some threshold (no new frames are incoming), you stop the playback.
The FIFO size reserve and the point at which to start playback depend on the timing properties of the image source (so that it neither overruns nor underruns the FIFO)...
If you need to implement your own FIFO class, then a cyclic buffer of constant size is your friend (so you do not need to copy all the stored images on FIFO in/out operations).
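A minimal Python sketch of such a constant-size cyclic FIFO, sized with the worked example above (12 slots); in a real player the stored items would be decoded frames, the push calls would come from the receive thread, and the pop calls from the render timer.

```python
class FrameFifo:
    """Constant-size cyclic buffer for frames (thread synchronization omitted for brevity)."""

    def __init__(self, capacity=12):           # 12 = the worked example above
        self.buf = [None] * capacity            # preallocated slots, never reallocated
        self.capacity = capacity
        self.head = 0                           # index of the oldest stored frame
        self.count = 0                          # number of frames currently stored

    def push(self, frame):
        """Called when a frame arrives; returns False if full (caller waits or skips the frame)."""
        if self.count == self.capacity:
            return False
        self.buf[(self.head + self.count) % self.capacity] = frame
        self.count += 1
        return True

    def pop(self):
        """Called every 1/30 s by the render side; returns None if empty
        (caller keeps showing the last rendered frame)."""
        if self.count == 0:
            return None
        frame = self.buf[self.head]
        self.buf[self.head] = None              # drop the reference to the stored image
        self.head = (self.head + 1) % self.capacity
        self.count -= 1
        return frame
```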

Can't change animation frame number in Unity3D

I am trying to create a button animation in Unity.
When I create the clip,
the frame number stays at 60 by default.
Then I change the frame number, but after moving the mouse pointer it goes back to 60.
I tried again and again by deleting the clip and recreating it,
but no effect;
still the same.
For better understanding:
1. When I create the clip
2. Changing the frame number from 60 to 0
3. After moving the mouse pointer it goes back to 60
You're trying to change the total frame number.
Yes, it's like that in Unity 5.6 and later versions, so it can be confusing.
If you look above your Samples 60 bar, you can see a place where 0 is typed:
that's your current frame,
and you can make your desired animation by changing the current frame.
That number is the Sampling Rate of the animation, i.e. how many frames of that animation clip are "executed" in a second.
60 means the animation runs at 60 fps, or 1 frame every 16.6 ms, so the general formula is:
Sample rate = number of frames / second
Hence, you can't set that value to 0; an animation that runs at 0 fps would just be a still frame.
To get a specific frame of the animation, you need to move the red vertical line or click on a specific time on the timeline bar.

What is the significance of rasterize paint extending over multiple frames

I have started looking into the client side performance of my app using Chrome's Timeline tools. However, whilst I have found many articles on how to use them, information on how to interpret the results is more sparse and often vague.
Currently I am looking at scroll performance and attempting to hit 60FPS.
This screenshot shows the results of my most recent timeline recording.
As can be seen, most frames are over the 60 FPS limit and several are over the 30 FPS limit.
If I zoom in on one particular frame - the one with a duration of 67.076 ms - I can see a few things:
* The duration of the frame is 67 ms, but the aggregated time is 204 ms.
* 201 ms of this time is spent painting, BUT the two paint events in this frame are of duration 1.327 ms and 0.106 ms.
* The total duration for the JS event, update layer tree and paint events is only 2.4 ms.
* There is a long green hollow bar (Rasterize Paint) which lasts the duration of the frame and in fact starts before and continues after it.
I have a few questions on this:
* The aggregated time is far longer than the frame time - is it correct to assume that these are parallel processes?
* The paint time for the frame (204 ms) far exceeds the time for the two paint events (1.433 ms) - is this because it includes the rasterize paint events?
* Why does the rasterize paint event span multiple frames?
* Where would one start optimizing this?
Finally, can someone point me to some good resources on understanding this?
This is a somewhat unfortunate result of the way the 'classic' waterfall Timeline view coalesces multiple events. If you expand that long "Rasterize Paint" event, you'll see a bunch of individual events, which are going to be somewhat shorter. You may really want to switch to Flame Chart mode for troubleshooting rendering performance, where the rasterize events are shown on the appropriate threads.

How to set what window size is preferred when OS X "zooms" it?

In OS X, window "zoom" (the green window button / title-bar double-click since Yosemite) is supposed to expand the window to its preferred size (larger than the content, but not the maximum).
Content in my window varies, so the ideal size is not known at compile time, but I can compute it at run time.
I would like the zoom to adjust the window size to its preferred size, but I don't want to constrain the window size otherwise (i.e. the user should still be free to resize it to be much larger or smaller than the ideal).
What is the right way to tell OS X what window (or window content's) size it should use when zooming?
When the user zooms or unzooms the window, the window will send its delegate a windowWillUseStandardFrame:defaultFrame: message.
The first argument is the window being zoomed; the second is the default standard frame, which is the size of the screen. You return the preferred (“standard”) frame.
If the window's frame is already equal to the standard frame you return, then the window will unzoom to the user's preferred size (as they expressed by manually resizing it). Otherwise, the window will zoom to the standard frame.
A corollary to this is that if the standard frame changes between zooms, the window will zoom the first time and will zoom again (to the new standard frame) the second time. This is the behavior you might expect if the size of the content changed between zooms.
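As a rough illustration (my own sketch, not from the answer), a PyObjC delegate that returns a computed preferred frame might look like the following; the 800 × 600 ideal size is a hypothetical stand-in for whatever you compute from the content at run time.

```python
from Foundation import NSObject, NSMakeRect

class ZoomDelegate(NSObject):
    def windowWillUseStandardFrame_defaultFrame_(self, window, defaultFrame):
        # Hypothetical ideal size; in a real app you would compute this
        # from the window's current content.
        ideal_width, ideal_height = 800.0, 600.0
        frame = window.frame()
        # Keep the window's top edge in place while resizing to the ideal size.
        new_y = frame.origin.y + frame.size.height - ideal_height
        return NSMakeRect(frame.origin.x, new_y, ideal_width, ideal_height)

# During window setup (keep your own strong reference to the delegate):
# window.setDelegate_(ZoomDelegate.alloc().init())
```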

What is the algorithm for selecting the best scene of a video?

When we upload a video to YouTube or other video sharing sites, the site automatically selects the best or the most representative scene from the video to show as the icon of the video. How is that done?
I want to know which data mining or other algorithms to study to extract the most relevant scene from a video. Any pointers to literature or implementations would be very useful.
I strongly suspect that the "algorithm" is roughly (in pseudo-code):
Random(0, clip.Length)
My guess:
1. i = 1
2. Compare frame i with frame i-1 (using e.g. the sum of squared differences in pixel colour intensities).
3. Is the difference > preset_threshold?
   If yes: a sequence of below-threshold frames has just ended. Is this the longest sequence yet?
   If yes: best = start of this sequence.
4. i++
5. If i < length_of_clip: Goto 2.
6. Choose frame best.
The idea is: Find the longest "scene" (series of frames whose transitions are below some arbitrary threshold), and show the first frame in that series.
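Here is a hedged Python/OpenCV sketch of that guess (not YouTube's actual algorithm); the function name, file name, and threshold are illustrative and the threshold would need tuning per video.

```python
import cv2
import numpy as np

def pick_thumbnail_frame(path, threshold=1e6):
    """Return the index of the first frame of the longest low-difference run."""
    cap = cv2.VideoCapture(path)
    prev = None
    run_start = 0                 # first frame index of the current "scene"
    best_start, best_len = 0, 0
    i = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev is not None:
            diff = float(np.sum((gray - prev) ** 2))   # sum of squared differences
            if diff > threshold:                       # transition: the current run ends here
                if i - run_start > best_len:
                    best_start, best_len = run_start, i - run_start
                run_start = i
        prev = gray
        i += 1
    if i - run_start > best_len:                       # close the final run
        best_start = run_start
    cap.release()
    return best_start

# Usage (hypothetical file name):
# print(pick_thumbnail_frame("clip.mp4"))
```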
A simple solution is to extract some frames of a video and display them randomly. By tracking the users' click-through rates, YouTube already knows how to rank those frames.
