Option to generate a .m4s file every second - ffmpeg

I am trying to stream a live recording from a camera (web cam / IP cam) to my web application. The streaming technique I use is MPEG-DASH, which uses a manifest in MPD format. To generate the MPD manifest from the webcam, I use the FFmpeg tool on the shell command line:
ffmpeg -re -y -f dshow -i video="Logitech HD Webcam C525" -c:v libx264 -c:a libfdk_aac -f dash "manifest.mpd"
This command generates a video chunk in .m4s format every 5-8 seconds.
Question is: which FFmpeg option can I use to generate a .m4s file every second instead of every 5-8 seconds? I suppose it has something to do with the segment duration?

Adding -seg_duration 1 -ldash 1 -streaming 1 to your command should help.
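For reference, a minimal sketch of the full command with those options applied (the -g 30 keyframe interval is an assumption for a 30 fps camera; the dash muxer can only cut segments at keyframes, so you need at least one keyframe per second to get 1-second segments):
ffmpeg -re -y -f dshow -i video="Logitech HD Webcam C525" -c:v libx264 -g 30 -c:a libfdk_aac -seg_duration 1 -ldash 1 -streaming 1 -f dash manifest.mpd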

Related

Using ffmpeg, jpg to mp4 to mpegts, play with HLS M3U8, only first TS file plays - why?

Before posting I searched and found similar questions on Stack Overflow (some are listed below), but none helped me towards a solution, hence this post. The duration each image is shown within the movie file differs from many of the posts I have seen so far.
A camera captures 1 image every 30 seconds. I need to stream them, preferably via HLS, so I wrap 2 images in an MP4. I then convert the MP4 to MPEG-TS. Each MP4 and TS file plays fine individually (each contains two images, each image transitions after 30 seconds, each movie file is 1 minute long).
When I reference the two TS files in an M3U8 playlist, only the first TS file gets played. Can anyone advise why it stops and how I can get it to play all the TS files I expect to create, not just the first one? Besides my ffmpeg commands, I also include my VLC log file (though I expect to stream to Firefox/Chrome clients). I am using ffmpeg 4.2.2-static installed on an AWS EC2 instance with AMI2 Linux.
I have four JPGs named image11.jpg, image12.jpg, image21.jpg, image22.jpg. The images look near identical, as only the timestamp in the top left changes.
The following command creates 1.mp4 using image11.jpg and image12.jpg, with each image displayed for 30 seconds, for a total duration of 1 minute. It plays as expected.
ffmpeg -y -framerate 1/30 -f image2 -i image1%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 1.mp4
I then convert 1.mp4 to an mpegts file, creating 1.ts. It plays like expected.
ffmpeg -y -i 1.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 1.ts
I repeat the above steps for image21.jpg and image22.jpg, creating 2.mp4 and 2.ts:
ffmpeg -y -framerate 1/30 -f image2 -i image2%1d.jpg -c:v libx264 -vf "fps=1,format=yuvj420p" 2.mp4
ffmpeg -y -i 2.mp4 -c:v libx264 -vbsf h264_mp4toannexb -flags -global_header -f mpegts 2.ts
Thus now I have 1.mp4, 1.ts, 2.mp4, 2.ts and all four play individually just fine.
Using ffprobe I can confirm their duration is 60 seconds, for example:
ffprobe -i 1.ts -v quiet -show_entries format=duration -hide_banner -print_format json
My m3u8 playlist follows:
#EXTM3U
#EXT-X-VERSION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MEDIA-SEQUENCE:1
#EXT-X-TARGETDURATION:60.000
#EXTINF:60.0000,
1.ts
#EXTINF:60.000,
2.ts
#EXT-X-ENDLIST
Can anyone advise where I am going wrong?
VLC Error Log (though I expect to play via web browser)
I have researched the process using these (and other pages) as a guide:
How to create a video from images with ffmpeg
convert from jpg to mp4 by ffmpeg
ffmpeg examples page
FFMPEG An Intermediate Guide/image sequence
How to use FFmpeg to convert images to video
Take a look at the start_pts/start_time values in the ffprobe -show_streams output; my guess is that they all start at zero (or near zero), which will cause playback to fail after your first segment.
You can still produce the segments independently, but you will want to use something like -output_ts_offset to set the timestamps correctly for subsequent segments.
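A minimal sketch of what that could look like for the second segment (untested; the 60-second offset assumes the first segment runs exactly 60 seconds, and -bsf:v is the current spelling of the deprecated -vbsf):
ffmpeg -y -i 2.mp4 -c:v libx264 -bsf:v h264_mp4toannexb -flags -global_header -output_ts_offset 60 -f mpegts 2.ts
You can then verify the shifted timestamps with ffprobe -v quiet -show_streams 2.ts and check the start_pts/start_time fields.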
The following solution works well for me. I have tested it uninterrupted for more than two hours and believe it ticks all my boxes. (Edited because I forgot the all-important -re flag.)
ffmpeg will loop continuously, reading test.jpg and streaming it to my RTMP server. When my camera posts an image every 30 seconds, I copy the new image on top of the existing test.jpg, which in effect changes what is streamed out.
Note the command below is all one line; I have added line breaks to assist reading. The order of the parameters is important - the -loop and -fflags +genpts options, for example, must appear before the -i parameter.
ffmpeg
-re
-loop 1
-fflags +genpts
-framerate 1/30
-i test.jpg
-c:v libx264
-vf fps=25
-pix_fmt yuvj420p
-crf 30
-f fifo -attempt_recovery 1 -recovery_wait_time 1
-f flv rtmp://localhost:5555/video/test
Some arguments explained:
-re means read the input at its native frame rate, i.e. stream in real time.
-loop 1 (1 turns the loop on, 0 off).
-fflags +genpts is something I only half understand. PTS (presentation timestamps) tell the player when each frame should be shown, and without this flag the PTS is reset to zero with every new image. Using this argument means I avoid EXT-X-DISCONTINUITY when a new image is served.
-framerate 1/30 means one frame every 30 seconds.
-i test.jpg is my image 'placeholder'. As new images are received via a separate script, it overwrites this image. When combined with -loop, it means the ffmpeg output will pick up the new image.
-c:v libx264 selects H.264 video encoding for the output.
-vf fps=25 - removing this, or using a different value, resulted in my output stream not being 30 seconds.
-pix_fmt yuvj420p (sometimes I have seen yuv420p referenced, but that did not work in my environment). I believe JPGs can carry different colour ranges, and this switch ensures I can process a wider choice.
-crf 30 sets the constant rate factor; note that lower CRF values mean higher quality and less compression, so 30 trades quality for bandwidth - adjust it to what your client needs.
-f fifo -attempt_recovery 1 -recovery_wait_time 1 -f flv rtmp://localhost:5555/video/test is part of the magic to go with -loop. I believe it keeps the connection with my stream server open, reducing the risk of DISCONTINUITY in the playlist.
I hope this helps someone going forward.
The following links helped nudge me forward, and I share them as they might help others improve upon my solution:
Creating a video from a single image for a specific duration in ffmpeg
How can I loop one frame with ffmpeg? All the other frames should point to the first with no changes, maybe like a recusion
Display images on video at specific framerate with loop using FFmpeg
Loop image ffmpeg HLS
https://trac.ffmpeg.org/wiki/Slideshow
https://superuser.com/questions/1699893/generate-ts-stream-from-image-file
https://ffmpeg.org/ffmpeg-formats.html#Examples-3
https://trac.ffmpeg.org/wiki/StreamingGuide

Is it possible to stream MJPEG content over MPEG-DASH?

I am trying to re-stream an MJPEG stream over dash using ffmpeg.
I have an ESP32 camera module that outputs an MJPEG livestream at 192.168.2.128:81/stream (Arduino code here).
I can open this stream directly in the browser and see the video in real time, but the camera only allows a single client at a time, while I need a multi-client solution.
What doesn't work
A solution I am currently trying to implement is to use a separate server (a Raspberry Pi) running Apache and ffmpeg to re-stream the MJPEG content using DASH:
ffmpeg -re -i http://192.168.2.128:81/stream -strict -2 -an -c:v copy -b:v 2000k -f dash -window_size 4 -extra_window_size 0 -min_seg_duration 2000000 -remove_at_exit 1 /var/www/html/out.mpd
I get no errors when executing this command on the server.
I then use this ffmpeg-dash.html to display the video in the browser.
This unfortunately fails; in Firefox the console reports the error:
[72][Stream] No streams to play.
followed by:
Cannot play media. No decoders for requested formats: video/mp4;codecs="mp4v.6c";width="640";height="480"
What does work
What puzzles me is that the above command works fine if I replace the MJPEG livestream URL with a sample .mkv file; if I use
ffmpeg -re -i /var/www/html/video.mkv -strict -2 -an -c:v copy -b:v 2000k -f dash -window_size 4 -extra_window_size 0 -min_seg_duration 2000000 -remove_at_exit 1 /var/www/html/out.mpd
I can view the livestreamed sample video (video.mkv) without problems using the previously mentioned ffmpeg-dash.html file.
Furthermore, it seems that ffmpeg can read the MJPEG livestream without problems, since
ffmpeg -t 10 -i http://192.168.2.128:81/stream -filter:v fps=15 -c:v flv test.flv
returns a 10-second clip of the livestream successfully.
So it seems to me that the problem lies in how I combine the two. What am I missing? Is it even possible to stream MJPEG content over MPEG-DASH?
(I am new to this; sorry in advance for my ignorance.)
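One hedged observation: the codec string in the error, mp4v.6c, corresponds to MJPEG copied into the MP4 container, which browsers generally cannot decode, so a sketch of a possible fix (an assumption, not tested against this setup) is to re-encode to H.264 instead of using -c:v copy; -seg_duration 2 is the current replacement for the deprecated -min_seg_duration 2000000:
ffmpeg -re -i http://192.168.2.128:81/stream -an -c:v libx264 -pix_fmt yuv420p -b:v 2000k -f dash -window_size 4 -extra_window_size 0 -seg_duration 2 -remove_at_exit 1 /var/www/html/out.mpd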

multiple input files with complex operations in ffmpeg

I have just started using ffmpeg for one of my projects and have very limited knowledge of it.
I need help with the problem below. Thanks in advance.
I have two files:
Audio File
Video File
I want to generate a single file after performing the operations below:
trim the audio file to custom start and stop points,
merge the audio and video file into a single file (the video file is the same length),
apply a speed filter to the generated file.
I am able to achieve this output, but only with three separate ffmpeg commands, which takes a lot of time. I want to achieve all three tasks in a single ffmpeg command.
Thanks.
Use the setpts and atempo (or rubberband) filters. This example will double the speed:
ffmpeg -i video.mp4 -ss 3 -t 10 -i audio.mp3 -filter_complex "[0:v]setpts=0.5*PTS[v];[1:a]atempo=2[a]" -map "[v]" -map "[a]" -shortest output.mp4
-ss 3 will skip the first 3 seconds of audio.mp3.
-t 10 will limit audio.mp3 duration to 10 seconds.
-shortest will make output.mp4 duration the same as the shortest output stream duration.
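A hedged variant for other speed factors (a sketch with the same assumed filenames): setpts is multiplied by the inverse of the speed factor, and since atempo in older ffmpeg builds only accepts values between 0.5 and 2.0, larger changes are chained. For quadruple speed:
ffmpeg -i video.mp4 -ss 3 -t 10 -i audio.mp3 -filter_complex "[0:v]setpts=0.25*PTS[v];[1:a]atempo=2,atempo=2[a]" -map "[v]" -map "[a]" -shortest output.mp4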

ffmpeg command to combine audio and images

I'm trying to achieve something which I earlier thought would be a simple task.
Is there an ffmpeg command that can do the following:
convert an audio.wav file to a video,
add some 100 pics (img%d.png) to the video so they "automatically" stretch to fill the entire length of the video.
I don't want to set the frame rate manually because it either makes the audio run ahead or lag behind.
I also don't want the following to happen, which happened when I used "loop_input":
A short video of the images got created, which played fast and then repeated itself for the entire duration of the audio.
Please let me know the command.
I've currently tried the following, but these are not giving me the desired results:
This one works, but the video plays fast and the audio is not played in full:
ffmpeg -i img%d.png -i audio.wav -acodec copy output.mpg
This one makes a short video which repeats for the full audio duration:
ffmpeg -loop_input -shortest -i img%d.png -i audio.wav -acodec copy output.mpg
This one nearly works, but "-r 4" makes the video go slow and the audio run ahead. If I use "-r 5" then the audio goes slow and the video runs ahead:
ffmpeg -r 4 -i img%d.png -i audio.wav -acodec copy -r 30 output.mpg
Measure the duration of the audio track and then use -t $audio_duration.
This argument, along with -loop 1, will stop the MP4 at a time that matches the audio.
You might also try a two-pass technique, including -vcodec libx264, as it works well for producing MP4.
And think about the following, adjusted for your inputs and record rates:
-b:v 200k -bt 50k
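A minimal sketch of that approach (shell syntax, filenames, and the 25 fps output rate are assumptions; ffprobe measures the audio duration):
audio_duration=$(ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 audio.wav)
ffmpeg -loop 1 -framerate 25 -i img%d.png -i audio.wav -c:v libx264 -b:v 200k -c:a aac -t "$audio_duration" output.mp4
Note that -loop 1 repeats the image sequence; to stretch the 100 images exactly once across the audio instead, you could drop -loop and set the input -framerate to the number of images divided by the audio duration.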

Add multiple audio files to video at specific points using FFMPEG

I am trying to create a video out of a sequence of images and various audio files using FFmpeg. While it is no problem to create a video containing the sequence of images with the following command:
ffmpeg -f image2 -i image%d.jpg video.mpg
I haven't found a way yet to add audio files to the generated video at specific points.
Is it possible to do something like:
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 AT 10s -i audio2.mp3 AT 15s video.mpg
Any help is much appreciated!
EDIT:
The solution in my case was to use sox, as suggested by blahdiblah in the answer below. You first have to create an empty audio file as a starting point, like this:
sox -n -r 44100 -c 2 silence.wav trim 0.0 20.0
This generates a 20-second silent WAV file. After that you can mix the silent file with other audio files:
sox -m silence.wav "|sox sound1.mp3 -p pad 0" "|sox sound2.mp3 -p pad 2" out.wav
The final audio file has a duration of 20 seconds and plays sound1.mp3 right at the beginning and sound2.mp3 after 2 seconds.
To combine the sequence of images with the audio file, we can use FFmpeg:
ffmpeg -i video_%05d.png -i out.wav -r 25 out.mp4
See this question on adding a single audio input with some offset. The -itsoffset bug mentioned there is still open, but see the users' comments for some cases in which it does work.
If it works in your case, that would be ideal:
ffmpeg -i in%d.jpg -itsoffset 10 -i audio1.mp3 -itsoffset 15 -i audio2.mp3 out.mpg
If not, you should be able to combine all the audio files with sox, overlaying or inserting silence to produce the correct offsets and then use that as input to FFmpeg. Not as convenient, but guaranteed to work.
One approach I can think of is to create the audio file for the whole duration of the video first, and then mux the audio with the video file.
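For completeness, a hedged single-command sketch using ffmpeg's adelay and amix filters (the 10 s and 15 s offsets are given in milliseconds, once per audio channel; filenames follow the question's examples):
ffmpeg -f image2 -i image%d.jpg -i audio1.mp3 -i audio2.mp3 -filter_complex "[1:a]adelay=10000|10000[a1];[2:a]adelay=15000|15000[a2];[a1][a2]amix=inputs=2[a]" -map 0:v -map "[a]" video.mpg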
