FFMPEG force_keyframes multiple / several times - ffmpeg

I am looking for the correct way to use the ffmpeg argument force_key_frames at multiple times in a transcode job. Right now I can use force_key_frames 0:00:22 to force a keyframe at 22 seconds, but I need both more granularity (forcing at the frame level) and the ability to pick several time points in the job at which keyframes are forced.
Maybe one at 3 minutes 32 seconds 15 frames and another at 8 minutes 16 seconds 11 frames, and so on.
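For what it's worth, force_key_frames accepts a comma-separated list of times, and frame-accurate positions can be written as fractional seconds (frame number divided by the frame rate); ffmpeg then places a keyframe on the first frame at or after each listed time. A minimal sketch, assuming a 25 fps source (so 15 frames = 0.600 s and 11 frames = 0.440 s) and placeholder file names:
ffmpeg -i input.mp4 -c:v libx264 -c:a copy -force_key_frames "0:03:32.600,0:08:16.440" output.mp4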

Related

NiFi MergeRecords leaving out one file

I'm using NiFi to take in some user data and combine all the JSONs into one record. The MergeRecord processor is working just like I need, except it always leaves out one record (usually the same one every time). The processor is set to run every 60 seconds. I can't understand why, because there are only 56 records to merge. I've included images below for any help y'all may have.
Firstly, you have 56 FlowFiles; that does not necessarily mean 56 Records unless you have 1 Record per FlowFile.
You are using MergeRecord which counts Records, not files.
Your current config is set to Min 50 - Max 1000 Records
If you have 56 files with 1 Record in each, then merging 50 files is enough to meet the Minimum condition and release the bin.
You also say Merge is set to run every 60 seconds, and perhaps this is not doing what you think it is. In almost all cases, Merge should be left to the default 0 sec schedule.
NiFi has no idea what "all" means; it takes an input and works on it - it does not know if or when the next input will come.
If every FlowFile is 1 Record, and it is categorically always 56 and that will never change, then your setting could be Min 56 - Max 56, and that will always merge the 56 Records together.
However, that is very inflexible: if the count suddenly changed to 57, you would need to modify the flow.
Instead, you could set the Min-Max to very high numbers, say 10,000-20,000 and then set a Max Bin Age to 60 seconds (and the processor scheduling back to 0 sec). This would have the effect of merging every Record that enters the processor until A) 10-20k Records have been merged, or B) 60 seconds expire.
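As a rough sketch of that last suggestion (the property names are from the MergeRecord processor; the exact numbers are only an example):
MergeRecord properties:
  Minimum Number of Records: 10000
  Maximum Number of Records: 20000
  Max Bin Age: 60 sec
Scheduling tab:
  Run Schedule: 0 sec
With that in place, a bin is released as soon as it hits the record limits or, more likely in your case, when it reaches 60 seconds of age.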
Example scenarios:
A) All 56 arrive within the first 2 seconds of the flow starting
All 56 are merged into 1 file 60 seconds after the first file arrives
B) 53 arrive within the first 60 seconds, 3 arrive in the second 60 seconds
The first 53 are merged into 1 file 60 seconds after the first file arrives; the last 3 are merged into another file 60 seconds after the first of those 3 arrives
C) 10,000 arrive in the first 5 seconds
All 10k will merge immediately into 1 file; they will not wait for 60 seconds

Apache Storm UI window

In the Apache Storm UI, Window specifies the past period of time for which the statistics apply, e.g. 10 min, 3 h, or 1 day. When a topology is actually running, is the number of tuples emitted/transferred computed over this window time? What confuses me is that the UI shows 10-minute statistics before the topology has actually been running for 10 minutes, which doesn't seem to make sense.
For example: emitted = 1764260 tuples, so would the rate of tuple emission be 1764260 / 600 ≈ 2940 tuples/sec?
It does not display the average, it displays the total number of tuples emitted in the last period of time (10 min, 3h or 1 day).
Therefore, if you started the application 2 minutes ago, it will display all tuples emitted in the last two minutes, and you'll see that the number keeps increasing until you reach 10 minutes.
After 10 minutes, it will only show the number of tuples emitted in the last 10 minutes, and not an average of the tuples emitted. So if, for example, you started the application 30 minutes ago, it will display the number of tuples emitted between minutes 20 to 30.
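To put numbers on it with the example from the question: if the 10-minute window shows emitted = 1764260, that is the total count for the last 600 seconds; the average rate over that window would be 1764260 / 600 ≈ 2940 tuples/sec, but the UI never displays that average, only the running total for the window.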

Intel VTune Sampling after certain period of time

I am new to VTune and was playing around with it. One thing I was not able to figure out is how to get multiple samples of the events, one every 20 seconds, and save them to a text file.
For example, run an application under VTune and get back the general exploration results every 20 seconds for 2 minutes, which means I should have 6 samples of the events at the end.
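One way to approximate this with the VTune command-line interface would be to run a short collection plus a text report in a loop. This is only a sketch: the analysis-type name (general-exploration on older versions, uarch-exploration on newer ones) and the exact option spellings vary between VTune releases, and <PID> and the result/output names are placeholders:
for i in 1 2 3 4 5 6; do
  vtune -collect uarch-exploration -target-pid <PID> -duration 20 -result-dir r_$i
  vtune -report summary -result-dir r_$i -report-output sample_$i.txt -format text
done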

change max output of Milliseconds from 1000 to 2000 in HTML

I am wondering how to change the time format of natural time from 24 hours 60 minutes 60 seconds 1000 milliseconds to 30 hours 30 minutes 30 seconds 3200 milliseconds in HTML
Ok, makes slightly more sense now. Firstly, you want this to be done in JavaScript; HTML is markup. If you are saying you wish to change the time format, then I can give you examples of changing the format (i.e. dd-mm-yyyy to mm-dd-yyyy, and many, many other variations). Check this out: there you will find lots of examples. If you want to turn a day into 30 hours, however, then this is a proper customisation you have to do yourself: basically a timer that adds a new hour every 30 minutes, and a minute every 30 seconds. Not sure if that's worth the effort for amusement purposes only? You might want to impress your friends with other things, i.e. how quickly you can down a pint!

Log data reduction for variable bandwidth data link

I have an embedded system which generates samples (16-bit numbers) at 1 millisecond intervals. The variable uplink bandwidth can at best transfer a sample every 5 ms, so I am looking for ways to adaptively reduce the data rate while minimizing the loss of important information -- in this case the minimum and maximum values in a time interval.
A scheme which I think should work involves sparse coding and a variation of lossy compression. Like this:
The system will internally store the min and max values during a 10ms interval.
The system will internally queue a limited number (say 50) of these data pairs.
No loss of min or max values is allowed but the time interval in which they occur may vary.
When the queue gets full, neighboring data pairs will be combined starting at the end of the queue so that the converted min/max pairs now represent 20ms intervals.
The scheme should be iterative so that further interval combining to 40ms, 80ms etc is done when necessary.
The scheme should be linearly weighted across the length of the queue so that there is no combining for the newest data and maximum necessary combining of the oldest data.
For example with a queue of length 6, successive data reduction should cause the data pairs to cover these intervals:
initial: 10 10 10 10 10 10 (60ms, queue full)
70ms: 10 10 10 10 10 20
80ms: 10 10 10 10 20 20
90ms: 10 10 20 20 20 20
100ms: 10 10 20 20 20 40
110ms: 10 10 20 20 40 40
120ms: 10 20 20 20 40 40
130ms: 10 20 20 40 40 40
140ms: 10 20 20 40 40 80
New samples are added on the left, data is read out from the right.
This idea obviously falls into the categories of lossy-compression and sparse-coding.
I assume this is a problem that must occur often in data logging applications with limited uplink bandwidth, so some "standard" solution might have emerged.
I have deliberately simplified and left out other issues such as time stamping.
Questions:
Are there already algorithms which do this kind of data logging? I am not looking for the standard, lossy picture or video compression algos but something more specific to data logging as described above.
What would be the most appropriate implementation for the queue? Linked list? Tree?
The term you are looking for is "lossy compression" (See: http://en.wikipedia.org/wiki/Lossy_compression ). The optimal compression method depends on various aspects such as the distribution of your data.
As I understand it, you want to transmit the min() and max() of all samples in a time period,
e.g. transmit min/max every 10 ms while taking samples every 1 ms?
If you do not need the individual samples, you can simply update min/max after each sample:
# getSample() and send() are placeholders for the platform-specific sample input and uplink output
TYPE_MAX, TYPE_MIN = 0xFFFF, 0      # adjust to the actual range of the 16-bit sample type
i = 0
min_v, max_v = TYPE_MAX, TYPE_MIN   # the first sample always overwrites the initial values
while True:
    sample = getSample()
    if sample < min_v:
        min_v = sample
    if sample > max_v:
        max_v = sample
    i += 1
    if i % 10 == 0:                 # one min/max pair per 10 samples, i.e. per 10 ms
        send(min_v, max_v)
        # if each period should be handled separately:
        # min_v, max_v = TYPE_MAX, TYPE_MIN
You can also save bandwidth by sending data only on changes (this depends on the sample data: if the values don't change very quickly you will save a lot).
Define a combination cost function that matches your needs, e.g. (len(i) + len(i+1)) / i^2, then iterate the array to find the "cheapest" adjacent pair to combine into one entry.
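As an illustration only (not a standard algorithm), here is a minimal Python sketch of the queue with that kind of cost function; the class name MinMaxQueue, the entry layout and the exact cost formula (combined interval length divided by the square of the entry's age position) are just assumptions for this example:

class MinMaxQueue:
    def __init__(self, capacity=50):
        self.capacity = capacity
        self.entries = []                      # each entry: [min, max, interval_ms]; index 0 = newest

    def push(self, lo, hi, interval_ms=10):
        if len(self.entries) >= self.capacity:
            self._combine_cheapest_pair()      # make room without dropping any min/max value
        self.entries.insert(0, [lo, hi, interval_ms])

    def _combine_cheapest_pair(self):
        def cost(i):
            # cost of merging entry i with its older neighbour i+1; dividing by (i+1)^2
            # makes old entries (high index, read-out end) much cheaper to merge
            return (self.entries[i][2] + self.entries[i + 1][2]) / (i + 1) ** 2
        i = min(range(len(self.entries) - 1), key=cost)
        a, b = self.entries[i], self.entries[i + 1]
        self.entries[i:i + 2] = [[min(a[0], b[0]), max(a[1], b[1]), a[2] + b[2]]]

    def pop_oldest(self):
        # the uplink reads from the right: oldest, most heavily combined data first
        return self.entries.pop() if self.entries else None

Regarding the data-structure question: a plain list (array) used this way costs O(n) per insert or combine, which is negligible for a queue of around 50 entries; a linked list or tree only starts to pay off for much larger queues.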
