How is it possible to display the time taken to convert a video using FFmpeg?
I am using the command below, and the time returned by -benchmark is not exactly the time taken to complete the conversion. I am confused how such a feature can be missing from such a nice tool.
ffmpeg -cpuflags -sse-sse2-sse3-sse3slow-sse4.1-sse4.2+mmx -i E:/MSC/test.flv ^
-benchmark E:/MSC/testmmx.mp4
If you can use Bash, it has the time command:
time: time [-p] pipeline
Report time consumed by pipeline's execution.
Execute PIPELINE and print a summary of the real time, user CPU time,
and system CPU time spent executing PIPELINE when it terminates.
Example
$ time ffmpeg -i infile outfile
real 0m3.682s
user 0m0.015s
sys 0m0.000s
I offer Cygwin for Windows, which includes Bash and time
bitbucket.org/svnpenn/a/downloads
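If you only need the elapsed seconds (for example, to log them from a script), a minimal Bash sketch along these lines should also work; the paths are just the ones from the question:
#!/bin/bash
# SECONDS is a Bash builtin that counts seconds since the shell started;
# resetting it here lets us read the elapsed wall-clock time afterwards.
SECONDS=0
ffmpeg -i E:/MSC/test.flv E:/MSC/testmmx.mp4
echo "Conversion took $SECONDS seconds"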
Related
Note: I've seen https://stackoverflow.com/a/64315882/21728 and understand that time is not necessarily that precise. However, I'm seeing a 4× difference between the reported time and the time it actually took, and I'd like to understand what's causing it on macOS – that's the point of this question.
I'm trying to compare two ways to run a binary and they report very similar time info:
$ time ../../../node_modules/.bin/quicktype --version
quicktype version 15.0.214
Visit quicktype.io for more info.
../../../node_modules/.bin/quicktype --version 0.46s user 0.06s system 110% cpu 0.474 total
$ time $(yarn bin quicktype) --version
quicktype version 15.0.214
Visit quicktype.io for more info.
$(yarn bin quicktype) --version 0.44s user 0.06s system 110% cpu 0.449 total
However, the latter feels much slower. So I've added timestamps before and after:
$ date +"%T.%3N" && time $(yarn bin quicktype) --version && date +"%T.%3N"
15:11:09.667
quicktype version 15.0.214
Visit quicktype.io for more info.
$(yarn bin quicktype) --version 0.49s user 0.06s system 108% cpu 0.513 total
15:11:11.400
Indeed, the difference between 15:11:09.667 and 15:11:11.400 is almost two seconds but time is reporting about 0.5 second. What explains this rather vast difference?
I was using time incorrectly.
First, time is different from /usr/bin/time on my Mac:
time is a shell built-in (I use Zsh)
/usr/bin/time is BSD time
This gives the expected results:
$ /usr/bin/time bash -c '../../../node_modules/.bin/quicktype --version'
quicktype version 15.0.214
Visit quicktype.io for more info.
0.49 real 0.47 user 0.06 sys
$ /usr/bin/time bash -c '$(yarn bin quicktype) --version'
quicktype version 15.0.214
Visit quicktype.io for more info.
2.02 real 1.92 user 0.27 sys
Generally speaking, for an external command cmd, it is the case that shell expansions are done by the shell before cmd is started. This includes command substitution cmd $(other_cmd), globbing cmd *.txt, variable expansion cmd $FOO, and so forth. So cmd is executed with the result of the expansion as its arguments, and never sees the original command line typed by the user.
Thus if time were an ordinary external command, as it is if you use /usr/bin/time, then the command yarn bin quicktype would be run before time even starts, and it would be just as if you had run time .../quicktype --version. It only measures the time taken to execute .../quicktype --version, and it would not (and could not) account for the time taken for the shell to generate that command line by running yarn.
Now the situation here is not quite that simple because time is actually a builtin command in many shells, and not an external command. So it doesn't necessarily have to follow the above behavior. However, in my tests, zsh's builtin time behaves the same, and does not count the time taken to run the substituted command yarn. On the other hand, in bash, the time builtin does include it.
As you saw, you can circumvent the issue by timing the running of a new shell which does both the command substitution (running yarn) and the resulting ../quicktype command itself. Then you will definitely include the time taken by both steps.
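You can reproduce the effect without yarn at all. A minimal sketch, using sleep as a stand-in for the slow yarn bin step (assuming zsh, as in the question):
# The substitution runs first and takes about 1 second, but the zsh builtin
# time only reports the near-zero time spent running the resulting true command.
time $(sleep 1; echo true)
# Timing a child shell that performs the substitution itself counts both steps,
# so this reports roughly 1 second of real time.
/usr/bin/time zsh -c '$(sleep 1; echo true)'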
I've been trying to use perf to profile my running process, but I cannot make sense of some of the numbers output by perf. Here is the command I used and the output I got:
$ sudo perf stat -x, -v -e branch-misses,cpu-cycles,cache-misses sleep 1
Using CPUID GenuineIntel-6-55-4
branch-misses: 7751 444665 444665
cpu-cycles: 1212296 444665 444665
cache-misses: 4902 444665 444665
7751,,branch-misses,444665,100.00,,
1212296,,cpu-cycles,444665,100.00,,
4902,,cache-misses,444665,100.00,,
May I know what the number "444665" represents?
The -x output format of perf stat is described in the man page of perf-stat, in the CSV FORMAT section. Here is a fragment of that man page without the optional columns:
CSV FORMAT
With -x, perf stat is able to output a not-quite-CSV format output. Commas in the output are not put into "". To make it easy to parse it is recommended to use a different character like -x \;
The fields are in this order:
· counter value
· unit of the counter value or empty
· event name
· run time of counter
· percentage of measurement time the counter was running
Additional metrics may be printed with all earlier fields being
empty.
So you have the counter value, an empty unit field, the event name, the run time of the counter, and the percentage of the measurement time the counter was active (compared to the program's total running time).
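If you want to pull those fields out programmatically, a small sketch like this should work (field positions as listed above; note that perf stat writes its statistics to stderr):
sudo perf stat -x , -e branch-misses,cpu-cycles,cache-misses sleep 1 2>&1 |
  awk -F , '{ printf "%-14s value=%-10s runtime_ns=%-8s running=%s%%\n", $3, $1, $4, $5 }'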
By comparing the output of these two commands (as recommended by Peter Cordes in a comment):
perf stat awk 'BEGIN{for(i=0;i<10000000;i++){}}'
perf stat -x \; awk 'BEGIN{for(i=0;i<10000000;i++){}}'
I think that the run time is the total time, in nanoseconds, during which this counter was active. When you run perf stat with a non-conflicting set of events, and there are enough hardware counters to count all of the requested events, the run time will be close to the total time the profiled program spent running on the CPU. (Example of a too-large event set: perf stat -x , -e cycles,instructions,branches,branch-misses,cache-misses,cache-references,mem-loads,mem-stores awk 'BEGIN{for(i=0;i<10000000;i++){}}' - the run time will differ between events, because they were dynamically multiplexed during program execution; and sleep 1 is too short for multiplexing to kick in.)
For sleep 1 there is only a very small amount of code that runs on the CPU: just the libc startup code and the nanosleep syscall that sleeps for 1 second (check strace sleep 1). So in your output, 444665 is in nanoseconds, i.e. roughly 444 microseconds (0.444 milliseconds, or 0.000444 seconds) of libc startup for the sleep 1 process.
If you want to measure whole-system activity for one second, try adding the -a option of perf stat (profile all processes), optionally with -A to separate the events per CPU core (or with -I 100 for periodic printing):
perf stat -a sleep 1
perf stat -Aa sleep 1
perf stat -a -x , sleep 1
perf stat -Aa -x , sleep 1
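For example, a machine-readable, system-wide count printed every 100 ms might look like this (a sketch; sleep 1 just keeps perf running for one second):
sudo perf stat -a -x , -I 100 -e branch-misses,cpu-cycles,cache-misses sleep 1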
I'm running an ffmpeg command like this:
ffmpeg -loglevel quiet -report -timelimit 15 -timeout 10 -protocol_whitelist file,http,https,tcp,tls,crypto -i ${inputFile} -vframes 1 ${outputFile} -y
This is running in an AWS Lambda function. My Lambda timeout is at 30 seconds. For some reason I am getting "Task timed out" messages still. I should note I log before and after the command, so I know it's timing out during this task.
Update
In terms of the entire lambda execution I do the following:
Invoke a lambda to get an access token. This lambda makes one API request. It has a timeout of 5 seconds. The max time was 660 ms for one request.
Make another API request to verify data. The max time was 1.6 seconds.
Run FFMPEG
timelimit is supposed to "Exit after ffmpeg has been running for duration seconds in CPU user time." Theoretically, then, this shouldn't run more than 15 seconds, plus maybe 2-3 more for the other requests.
timeout is probably superfluous here. There were a lot of definitions for it in the manual, but I think that was mainly waiting on input? Either way, I'd think timelimit would cover my bases.
Update 2
I checked my debug log and saw this:
Reading option '-timelimit' ... matched as option 'timelimit' (set max runtime in seconds) with argument '15'.
Reading option '-timeout' ... matched as AVOption 'timeout' with argument '10'.
Seems both options are supported by my build
Update 3
I have updated my code with a lot of logging. I definitely see the FFMPEG command as the last thing that executes before stalling out for the 30-second timeout.
Update 4
I can reproduce the behavior by pointing at a track instead of the full manifest. I have set the command to this:
ffmpeg -loglevel debug -timelimit 5 -timeout 5 -i 'https://streamprod-eastus-streamprodeastus-usea.streaming.media.azure.net/0c495135-95fa-48ec-a258-4ba40262e1be/23ab167b-9fec-439e-b447-d355ff5705df.ism/QualityLevels(200000)/Manifest(video,format=m3u8-aapl)' -vframes 1 temp.jpg -y
A few things here:
I typically point at the actual manifest (not the track), and things usually run much faster
I have lowered the timelimit and timeout to 5. Despite this, when I run a timer, the command runs for ~15 seconds every time. It outputs a bunch of errors, likely due to this being a track rather than the full manifest, and then spits out the desired image.
The full output is at https://gist.github.com/DaveStein/b3803f925d64dd96cd45ae9db5e5a4d0
timelimit is supposed to Exit after FFmpeg has been running for duration seconds in CPU user time.
This is true but you can't use this timing metric to determine when FFmpeg should be forcefully exited during its operation (See here).
Best to watch the process from outside and force kill it by sending a TERM or KILL signal.
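Hand-rolled in Bash, that watchdog could look roughly like this (a sketch, not production code; the variable names are the ones from the question's command):
ffmpeg -loglevel quiet -i "$inputFile" -vframes 1 "$outputFile" -y &
ffmpeg_pid=$!
# Send TERM after 14 seconds, then KILL one second later if ffmpeg is still around.
( sleep 14; kill -TERM "$ffmpeg_pid" 2>/dev/null; sleep 1; kill -KILL "$ffmpeg_pid" 2>/dev/null ) &
watchdog_pid=$!
wait "$ffmpeg_pid"
kill "$watchdog_pid" 2>/dev/null  # clean up the watchdog if ffmpeg finished on its own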
I'd recommend the timeout command that's part of the GNU coreutils.
Here's an example
timeout -k 1s 14s <FFMPEG COMMAND LINE>
This lets the FFmpeg command run for up to 14 seconds, at which point it is sent SIGTERM; if it hasn't quit one second after that, it is forcefully killed with SIGKILL.
You can check the man pages for the timeout command.
You can try these things:
Increase the timeout limit of your lambda function.
Increase the memory allocation of your lambda function; more memory also speeds up your lambda function.
If you still get a timeout, then check the RequestTimeout and ConnectionTimeout settings of your lambda function.
Is there a way to determine (in bash) how much time is remaining on a process that is running for a specified time?
For example, some time after executing
caffeinate -s -t 8000 &
is there a command or technique for determining when my system will be allowed to sleep?
Bash won't know that caffeinate has a timer attached to it; for all it knows, -t refers to the number of times you'll place an Amazon order of Red Bull before the process exits five minutes later.
If you know this, however, you can detect when the command was started and do the math yourself.
$ sleep 45 &
[1] 16065
$ ps -o cmd,etime
CMD ELAPSED
sleep 45 00:03
ps -o cmd,etime 00:00
/bin/bash 6-21:11:11
On OS X, this will be ps -o command,etime; see the FreeBSD ps man page or Linux ps docs for details and other switches/options.
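If you know the -t value caffeinate was started with, you can subtract the elapsed time yourself. A rough sketch, assuming macOS ps, a single caffeinate process, and that it has been running for less than a day (etime switches to a dd-hh:mm:ss format after that):
total=8000   # the -t value caffeinate was started with
elapsed=$(ps -o etime= -p "$(pgrep -n caffeinate)" |
  awk -F: '{ s = 0; for (i = 1; i <= NF; i++) s = s * 60 + $i; print s }')
echo "roughly $(( total - elapsed )) seconds until the system may sleep"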
I have a shell script which uses a couple of system commands (grep, ps, etc.). I need to find the CPU utilization for each command used inside the script. I am using AIX UNIX version 5.1. Please help.
I have already tried the topas, vmstat, and iostat commands, but they display the overall CPU utilization of processes.
Use the command below:
ps -aef | grep "process_name"
There will be a column 'C' in the output, which displays the CPU utilization for that process.
Thanks,
Gopal
I'm not sure if it's available on AIX, but on Linux the time command is what you would use:
time wc /etc/hosts
9 26 235 /etc/hosts
real 0m0.075s
user 0m0.002s
sys 0m0.004s
sys is the amount of time spent in system calls; user is the CPU time the process used outside of system calls.
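If time is available on your AIX box (ksh and bash both provide a time keyword), one low-tech approach is to wrap each command in the script individually so that each one reports its own user/sys split. A sketch with stand-in commands:
#!/bin/ksh
# Each wrapped command prints its own real/user/sys breakdown to stderr.
time grep root /etc/passwd > /dev/null
time ps -ef > /dev/null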