Am I missing a timeout param in FFmpeg?

I'm running an ffmpeg command like this:
ffmpeg -loglevel quiet -report -timelimit 15 -timeout 10 -protocol_whitelist file,http,https,tcp,tls,crypto -i ${inputFile} -vframes 1 ${outputFile} -y
This is running in an AWS Lambda function. My Lambda timeout is set to 30 seconds. For some reason I am still getting "Task timed out" messages. I should note I log before and after the command, so I know it's timing out during this task.
Update
In terms of the entire lambda execution I do the following:
Invoke a lambda to get an access token. This lambda makes one API request. It has a timeout of 5 seconds. The max time was 660 ms for one request.
Make another API request to verify data. The max time was 1.6 seconds.
Run FFMPEG
timelimit is supposed to "Exit after ffmpeg has been running for duration seconds in CPU user time." Theoretically this shouldn't run more than 15 seconds then, plus maybe 2-3 more for the requests before it.
timeout is probably superfluous here. There were a lot of definitions for it in the manual, but I think it was mainly about waiting on input? Either way, I'd think timelimit would cover my bases.
Update 2
I checked my debug log and saw this:
Reading option '-timelimit' ... matched as option 'timelimit' (set max runtime in seconds) with argument '15'.
Reading option '-timeout' ... matched as AVOption 'timeout' with argument '10'.
It seems both options are supported by my build.
Update 3
I have updated my code with a lot of logs. I definitively see the FFmpeg command as the last thing that executes before stalling out for the 30-second timeout.
Update 4
I can reproduce the behavior by pointing at a track instead of the full manifest. I have set the command to this:
ffmpeg -loglevel debug -timelimit 5 -timeout 5 -i 'https://streamprod-eastus-streamprodeastus-usea.streaming.media.azure.net/0c495135-95fa-48ec-a258-4ba40262e1be/23ab167b-9fec-439e-b447-d355ff5705df.ism/QualityLevels(200000)/Manifest(video,format=m3u8-aapl)' -vframes 1 temp.jpg -y
A few things here:
I typically point at the actual manifest (not the track), and things usually run much faster
I have lowered the timelimit and timeout to 5. Despite this, when I run a timer, the command runs for ~15 seconds every time. It outputs a bunch of errors, likely due to this being a track rather than the full manifest, and then spits out the desired image.
The full output is at https://gist.github.com/DaveStein/b3803f925d64dd96cd45ae9db5e5a4d0

timelimit is supposed to "Exit after FFmpeg has been running for duration seconds in CPU user time."
This is true but you can't use this timing metric to determine when FFmpeg should be forcefully exited during its operation (See here).
Best to watch the process from outside and force kill it by sending a TERM or KILL signal.
I'd recommend the timeout command that's part of GNU coreutils.
Here's an example:
timeout -k 1s 14s <FFMPEG COMMAND LINE>
This lets the FFmpeg command run for up to 14 seconds; timeout then sends it a SIGTERM, and forces a SIGKILL one second later in case it didn't quit on the SIGTERM.
You can check the man pages for the timeout command.
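Applied to the command from the question, a minimal sketch might look like the following (assuming the Lambda shells out through a wrapper script; the input and output paths are placeholders):
#!/bin/bash
# Wrap the question's ffmpeg call in GNU timeout: SIGTERM after 14 seconds of
# wall-clock time, SIGKILL one second later if ffmpeg ignores the SIGTERM.
# inputFile and outputFile are placeholder values for illustration.
inputFile="input.m3u8"
outputFile="thumb.jpg"
timeout -k 1s 14s \
  ffmpeg -loglevel quiet -timelimit 15 \
         -protocol_whitelist file,http,https,tcp,tls,crypto \
         -i "$inputFile" -vframes 1 "$outputFile" -y
echo "ffmpeg exit status: $?"   # 124 means timeout had to stop it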

You can try these things:
Increase the timeout limit of your Lambda function (a CLI sketch follows this list).
Increase the memory allocation of your Lambda function. It speeds up your Lambda function.
If you still get timeouts, check the RequestTimeout and ConnectionTimeout of your Lambda function.
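If you go the configuration route, both limits can be raised from the AWS CLI; here is a sketch (the function name is a placeholder and the values are just examples):
# Raise the Lambda timeout to 60 seconds and the memory to 1024 MB
# (more memory also means proportionally more CPU for ffmpeg).
aws lambda update-function-configuration \
    --function-name my-ffmpeg-function \
    --timeout 60 \
    --memory-size 1024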

Related

Extracting frame every 30 seconds using FFmpeg is slow

We are extracting a frame every 30 seconds from an mp4 file with the following FFmpeg command:
ffmpeg -vsync 0 -i file.mp4 -vf fps=1,select='not(mod(t,30))' -frame_pts 1 temp\file.%d.jpg
We are using the vsync and frame_pts options since we need the frame's position (in seconds) in the output file names. It is running slow. It takes around 90 seconds to extract ~235 images from a ~118 minute (259 MB) mp4 file. CPU usage while running this command goes to 100%.
We are using this command in a multi-threaded Java application. During a load test, we found that with 10 threads running this command simultaneously, the time to complete the command is 5-15 minutes.
Is there any way to improve the performance of this FFmpeg command?
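One option worth benchmarking (not from this thread, just a sketch) is to do one fast input seek per timestamp instead of decoding the whole file through a select filter; putting -ss before -i seeks on the input, and the timestamp itself can be used in the file name:
#!/bin/bash
# Extract one frame every 30 seconds by seeking before decode.
# DURATION (~118 minutes) is a placeholder; normally you'd query it with ffprobe.
DURATION=7080
mkdir -p temp
for (( t = 0; t < DURATION; t += 30 )); do
    ffmpeg -loglevel error -ss "$t" -i file.mp4 -frames:v 1 "temp/file.$t.jpg" -y
done
Whether this beats the single-pass command depends on the file's keyframe interval, so measure both.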

How to check a live stream is still alive using the "ffprobe" command?

I want to schedule a job script to check whether a live stream is still alive using the "ffprobe" command, so that I can change the database state for those streams that are already dead.
I tried the command:
ffprobe -v quiet -print_format json -show_streams rtmp://xxxx
but when the stream is not available, the command will hang.
I tried adding the -timeout argument, but it still doesn't work properly.
The timeout option seems to be for listen mode. I would try running ffprobe under the timeout command, or use something similar in the framework/language you're using.
So for example:
timeout 10s ffprobe -v quiet ...
And maybe even use --kill-after. Then look at the exit code to determine what happened.
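For example, a small sketch of the exit-code check (the stream URL is a placeholder; GNU timeout exits with status 124 when it had to kill the command):
#!/bin/bash
STREAM_URL="rtmp://example.com/live/stream"   # placeholder

# Give ffprobe 10 seconds to answer before assuming the stream is dead.
timeout 10s ffprobe -v quiet -print_format json -show_streams "$STREAM_URL" > /dev/null 2>&1
status=$?

if [ "$status" -eq 0 ]; then
    echo "stream is alive"
elif [ "$status" -eq 124 ]; then
    echo "ffprobe hung for 10s: mark stream as dead"
else
    echo "ffprobe failed (exit $status): stream unavailable"
fi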

Ansible ad-hoc command background not working

It is my understanding that running ansible with -B will put the process in the background and I will get the console back. I don't know if I am using it wrong, or it is not working as expected. What I expect is to have the sleep command initiate on all three computers and then the prompt will be available to me to run another command. What happens is that I do not get access to the console until the command completes (in this case 2 minutes).
Is something wrong, am I misunderstanding what -B does, or am I doing it wrong?
With polling:
Without polling:
There are two parameters to configure async tasks in Ansible: async and poll.
async in playbooks (-B in ad-hoc) – total number of seconds you allow the task to run.
poll in playbooks (-P in ad-hoc) – period in seconds how often you want check for result.
So if you just need a fire-and-forget ad-hoc command, use -B 3600 -P 0: allow up to an hour of execution and don't care about the result.
By default -P is 15, so Ansible doesn't exit but checks your job every 15 seconds.
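As an illustration (the host pattern all and the sleep command are just placeholders for your real targets and task):
# Fire and forget: allow up to an hour, never poll for the result.
ansible all -B 3600 -P 0 -a "sleep 120"

# Background the task on the remote hosts but keep checking every 5 seconds until it finishes.
ansible all -B 3600 -P 5 -a "sleep 120"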

Is there a way to create a bash script that will only run for X hours?

Is there a way to create a bash script that will only run for X hours? I'm currently setting up a cron job to initiate a script every night. This script essentially runs until a certain condition is met, exporting its status to a holding variable to keep track of 'where it is' after each iteration. The intention is to start up the process every night, run for a few hours, and then stop, holding the status until the process starts up the next night.
Short of somehow collecting the start time, and checking it against the current time in each iteration of the loop, is there an easier way to do this? Bash scripting is not my forte (I know enough to get things done and be dangerous) and I have not done something like this before. Any help would be appreciated. Thanks.
Use GNU Coreutils
GNU coreutils contains an actual timeout binary, usually invoked like this:
# timeout after 5 seconds when sleeping for 30
/usr/bin/timeout 5s /bin/sleep 30
In your case, you'd want to specify hours instead of seconds, so to timeout in 2 hours use something like 2h instead of 5s. See timeout(1) or info coreutils 'timeout invocation' for additional options.
Hacks and Workarounds
Native timeouts or the GNU timeout command are really the best options. However, see the following for some ideas if you decide to roll your own:
How do I run a command, and have it abort (timeout) after N seconds?
The TMOUT variable using read and process or command substitution.
Do it as you described - it is the cleanest way.
But if for some strange reason you want to kill the process after a set time, you can use the following:
./long_runner &
(sleep 5; kill $!; wait; exit 0) &
This will kill long_runner after 5 seconds.
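To make the "do it as you described" option concrete, here is a minimal sketch using bash's built-in SECONDS counter (the loop body is a placeholder for your real work and status bookkeeping):
#!/bin/bash
# Stop after 2 hours of wall-clock time; SECONDS counts seconds since the script started.
MAX_RUNTIME=$(( 2 * 60 * 60 ))

while true; do
    # ... do one iteration of the real work and record its status here ...
    sleep 1                      # placeholder so the sketch runs as-is

    if (( SECONDS >= MAX_RUNTIME )); then
        echo "Time budget used up; exiting until the next cron run."
        break
    fi
done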
By using the SIGALRM facility you can rig a signal to be sent after a certain time, but traditionally, this was not easily accessible from shell scripts (people would write small custom C or Perl programs for this). These days, GNU coreutils ships with a timeout command which does this by wrapping your command:
timeout 4h yourprogram

Need Help with Unix bash "timer" for mp3 URL Download Script

I've been using the following Unix bash script:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
curl "$mp3" > ~/Desktop/URLs/$n.mp3
((n++))
done < ~/Desktop/URLs.txt
to download and rename a bunch of mp3 files from URLs listed in "URLs.txt". It works well (thanks to StackOverflow users), but due to a suspected server quantity/time download limit, it's only allowing me to access a range of 40-50 files from my URL list.
Is there a way to work around this by adding a "timer" inside the while loop so it downloads 1 file per "X" seconds?
I found another related question, here:
How to include a timer in Bash Scripting?
but I'm not sure where to add the "sleep [number of seconds]"... or even if "sleep" is really what I need for my script...?
Any help enormously appreciated — as always.
Dave
curl has some pretty awesome command-line options (documentation), for example, --limit-rate will limit the amount of bandwidth that curl uses, which might completely solve your problem.
For example, replace the curl line with:
curl --limit-rate 200K "$mp3" > ~/Desktop/URLs/$n.mp3
would limit the transfers to an average of 200K per second, which would download a typical 5MB MP3 file in 25 seconds, and you could experiment with different values until you found the maximum speed that worked.
You could also try a combination of --retry and --retry-delay so that when and if a download fails, curl waits and then tries again after a certain amount of time.
For example, replace the curl line with:
curl --retry 30 "$mp3" > ~/Desktop/URLs/$n.mp3
This will transfer the file. If the transfer fails, it will wait a second and try again. If it fails again, it will wait two seconds. If it fails again, it will wait four seconds, and so on, doubling the waiting time until it succeeds. The "30" means it will retry up to 30 times, and it will never wait more than 10 minutes. You can learn this all at the documentation link I gave you.
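If you'd rather wait a fixed amount of time between attempts instead of the doubling backoff, --retry-delay overrides it; a small sketch (the delay value is just an example):
curl --retry 30 --retry-delay 10 "$mp3" > ~/Desktop/URLs/$n.mp3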
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read mp3; do
curl "$mp3" > ~/Desktop/URLs/$n.mp3 &
((n++))
if ! ((n % 4)); then
wait
sleep 5
fi
done < ~/Desktop/URLs.txt
This will spawn at most 4 instances of curl and then wait for them to complete before it spawns 4 more.
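If you literally want one download every X seconds, as the question asks, the simplest variant is a sleep after each sequential download; a sketch with a placeholder delay of 10 seconds:
#!/bin/bash
mkdir -p ~/Desktop/URLs
n=1
while read -r mp3; do
    curl "$mp3" > ~/Desktop/URLs/$n.mp3
    ((n++))
    sleep 10   # pause between downloads; tune to whatever the server tolerates
done < ~/Desktop/URLs.txt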
A timer?
Like your crontab?
man cron
You know what they let you download, so just count the disk usage of the files that you did get.
That is the transfer you are allowed. You need that, and you will need the PID of your script.
ps aux | grep "$progname" | awk '{print $2}'
or something like that. The secret sauce here is that you can suspend with
kill -SIGSTOP PID
and resume with
kill -SIGCONT PID
So the general method would be:
URLs in an array or queue or whatever bash lets you have.
Process a URL.
Increment the transfer counter.
When the transfer counter gets close to the limit:
kill -SIGSTOP MYPID
You are suspended.
In your crontab, foreground your script after a minute/hour/day, whatever.
Continue processing.
Repeat until done.
Just don't log out or you'll need to do the whole thing over again, although if you used Perl it would be trivial.
Disclaimer: I am not sure if this is an exercise in bash or whatnot; I confess freely that I see the answer in Perl, which is always my choice outside of a REPL. Code in Bash long enough, or heaven forbid Zsh (my shell), and you will see why Perl was so popular. Ah, memories...
Disclaimer 2: Untested, drive-by, garbage methodology here, only made possible because you've an idea what that transfer limit might be. Obviously, if you have ssh, use ssh -D PORT you@host and pull the mp3s out of the proxy half the time.
In my own defense, if you slow-pull the URLs with sleep you'll be connected for a while. Perhaps "they" might notice that. Suspend and resume and you should only be connected while grabbing tracks, and gone otherwise.
Not so much an answer as an optimization. If you can consistently get the first few URLs, but it times out on the later ones, perhaps you could trim your URL file as the mp3s were successfully received?
That is, as 1.mp3 is successfully downloaded, strip it from the list:
tail url.txt -n +2 > url2.txt; mv -f url2.txt url.txt
Then the next time the script runs, it'll begin from 2.mp3.
If that works, you might just set up a cron job to periodically execute the script over and over, taking bites at a time.
It just occurred to me that you're programmatically numbering the mp3s, and curl might clobber some of them on restart, since every time it runs it'll start counting at 1.mp3 again. Something to be aware of, if you go this route.
