I'm having trouble with ffmpeg.
I receive an RTSP stream from a grabbing device (camera) and stream it out to RTMP (YouTube Live).
I also want to keep a copy of the stream on my computer, so I write to a local file at the same time.
I use this command:
ffmpeg -y -i 'RTSP_SOURCE' -c:v copy -c:a libvo_aacenc -map 0:v -bsf:v dump_extra -fflags +genpts -flags +global_header -movflags +faststart -map_metadata 0 -metadata title= -f tee -filter_complex aevalsrc=0 '[f=mp4]/tmp/backup.mp4|[f=mpegts]/tmp/backup.ts|[f=flv]rtmp://a.rtmp.youtube.com/live2/STREAM_ID'
The problem is that when there is a disconnection, ffmpeg exits and stops recording.
Is there any flag or option to tell ffmpeg to keep recording to the local files even when there is no internet connection?
Thank you very much for your help =)
You can try:
ffmpeg -f tee "[onfail=ignore] ...
More details are available in the documentation for the tee muxer.
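Applied to the command from the question, that could look something like the following (an untested sketch based on the original command; slave options go inside the brackets, and onfail=ignore is set only on the RTMP output so the local files keep recording if the upload drops):
ffmpeg -y -i 'RTSP_SOURCE' -c:v copy -c:a libvo_aacenc -map 0:v -flags +global_header -f tee '[f=mp4]/tmp/backup.mp4|[f=mpegts]/tmp/backup.ts|[f=flv:onfail=ignore]rtmp://a.rtmp.youtube.com/live2/STREAM_ID'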
I am trying to add audio (repeated until the end of the video) to a .webm file, but I am getting an error.
The command I am using is:
ffmpeg -i 1.webm -stream_loop -1 -i 1.mp3 -c copy -shortest -map 0:v:0 -map 1:a:0 output.webm
The error I am getting is:
Only VP8 or VP9 or AV1 video and Vorbis or Opus audio and WebVTT subtitles are supported for WebM.
Could not write header for output file #0 (incorrect codec parameters ?): Invalid argument
I have checked other posts before writing this post but those solutions did not work for me.
Is there any way to make it work?
The WebM Container does not support the old MP3 audio codec.
Use Opus instead. It needs less than half the bitrate for the same quality. Here I chose a bitrate of 96 kbit/s, which should be roughly equivalent to 200 kbit/s in MP3; adjust that parameter as needed. -mapping_family 0 is required for ffmpeg to use most Opus optimizations; the default of -1 deactivates most of them. Use -mapping_family 1 if the input source has more than 2 channels.
ffmpeg -i 1.webm -stream_loop -1 -i 1.mp3 -vcodec copy -acodec libopus -mapping_family 0 -b:a 96k -shortest -map 0:v:0 -map 1:a:0 output.webm
If you really want to keep the old MP3 audio, you can also just use the .mkv container instead. MKV supports nearly everything.
ffmpeg -i 1.webm -stream_loop -1 -i 1.mp3 -c copy -shortest -map 0:v:0 -map 1:a:0 output.mkv
I'm not familiar with ffmpeg, but I came across this script that takes in a file and creates an output with eac3 audio.
#!/bin/sh
echo "Dolby Digital Plus Muxer"
echo "Developed by #kdcloudy, not affiliated with Dolby Laboratories"
echo "Enter the file name to be converted: "
read filepath
if [ ! -f "$filepath" ]
then
exit 1
fi
ffmpeg -i "$filepath" -vn ddp.eac3
ffmpeg -i "$filepath" -i ddp.eac3 -vcodec copy -c:a eac3 -map 0:s -map 0:v:0 -map 1:a:0 output.mp4
rm ddp.eac3
I'd like to know what to modify in this code to ensure all the subtitles are copied from the original file and all the available audio tracks are converted to eac3 and added to the output.mp4 file.
For copying the subtitles I tried -map but couldn't get it to work. Thanks for the help!
You only need one ffmpeg command:
ffmpeg -i input.mkv -map 0 -c:v copy -c:a eac3 -c:s copy output.mkv
-map 0 Selects all streams. Otherwise only one stream per stream type will be selected. See FFmpeg Wiki: Map for more info on -map.
-c:v copy Stream copy all video.
-c:a eac3 Encodes all audio to E-AC-3.
-c:s copy Stream copy all subtitles.
For compatibility this assumes that the input and output are both Matroska (.mkv).
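If you specifically want the output.mp4 from the question, MP4 can't store most subtitle formats directly, so a variant along these lines would convert text-based subtitles to mov_text (a sketch; image-based subtitles such as PGS can't be converted this way):
ffmpeg -i input.mkv -map 0 -c:v copy -c:a eac3 -c:s mov_text output.mp4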
That script is not great. Here's a cleaner, simpler version (not that I think a script is necessary for this):
#!/bin/bash
# Usage: ./eac3 input.mkv output.mkv
ffmpeg -i "$1" -map 0 -c:v copy -c:a eac3 -c:s copy "$2"
If you want to convert a whole directory see How do you convert an entire directory with ffmpeg?
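For example, a minimal shell loop over a directory could look like this (a sketch; it assumes the .mkv files are in the current directory and that a converted/ subdirectory already exists):
for f in *.mkv; do ffmpeg -i "$f" -map 0 -c:v copy -c:a eac3 -c:s copy "converted/$f"; done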
I'm trying to change the encoder / writing application tag with ffmpeg's -metadata option, and for whatever reason it reads the input but doesn't actually write anything out.
I have tried -map_metadata, -metadata:s:v:0, -metadata writing_application, and basically every suggestion from Stack Overflow and Stack Exchange threads, but none of them write to the file at all. These are the commands I have tried:
ffmpeg -i x.mp4 -s 1920x1080 -r 59.94 -c:v h264_nvenc -b:v 6000k -vf yadif=1 -preset fast -fflags +bitexact -flags:v +bitexact -flags:a +bitexact -ac 2 x.mp4
ffmpeg -i x.mp4 -c:v copy -c:a copy -metadata Encoder="TeXT Encoder" -fflags +bitexact -flags:v +bitexact -flags:a +bitexact test.mp4
ffmpeg -i x.mp4 -vcodec copy -acodec copy -map_metadata out.mp4
ffmpeg -i x.mp4 -vcodec copy -acodec copy -metadata encoder="Encoder" -metadata comment="XX" testmeta.mp4
ffmpeg -i x.ts -c:v copy -c:a copy -metadata:s:v:0 h264 ISFT='TeXT' x.mp4
ffmpeg -i x.mp4 -i FFMETADATAFILE -map_metadata 1 -codec copy testcopy.mp4
ffmpeg -i x.ts -f ffmetadata FFMETADATAFILE
I tried extracting the metadata and writing it back with the FFMETADATAFILE, but it doesn't show up. I also tried forcing ffmpeg to write without any metadata and then writing it back, but that doesn't work either. I was wondering if I could write my own encoder that writes a specific encoder name, the way HandBrake/Lavf writes the encoding application into the metadata of the video file, or just use ffmpeg to modify the metadata natively.
To set the writing application (as reported by MediaInfo) or encoder (as reported by ffmpeg) for MP4s, use:
ffmpeg -i input {-encoding parameters} -metadata:g encoding_tool=myapp out.mp4
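For example, applied to the x.mp4 from the question with stream copy (a sketch; the tag value is just a placeholder):
ffmpeg -i x.mp4 -c copy -metadata:g encoding_tool="TeXT Encoder" out.mp4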
ffmpeg -f avfoundation -i "1:0" -vf "crop=1920:1080:0:0" -pix_fmt yuv420p -y -r 30 -c:a aac -b:a 128k -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY
Hello guys, the above command works pretty well. It records the audio/video of the computer. But what I want to do is pipe in a repeating video or image (png/jpeg/gif), so that there is no live video feed from the computer, just that image on the stream together with the audio.
How would you go about doing this?
Also, if you know of any programming interfaces that can do the same thing, please give suggestions, because I would rather not use a CLI.
I think you should be able to achieve this by using -loop and some -map options. I can't test with avfoundation myself, but something like this works for me:
ffmpeg -loop 1 -i image.png -i file_to_take_audio_from.mp4 -vf "scale=1920:1080:0:0" -pix_fmt yuv420p -r 30 -c:a aac -b:a 128k -map 0:v -map 1:a output.mp4
Replace -i file_to_take_audio_from.mp4 with -f avfoundation -i "1:0" and output.mp4 with -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY.
Also, you might be able to skip -vf if the image already has the correct resolution.
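Putting those substitutions together, the full command could look something like this (untested, since it assumes the avfoundation device "1:0" and the RTMP placeholders from the question):
ffmpeg -loop 1 -i image.png -f avfoundation -i "1:0" -vf "scale=1920:1080" -pix_fmt yuv420p -r 30 -c:a aac -b:a 128k -map 0:v -map 1:a -f flv rtmp://RTMP_SERVER:RTMP_PORT/STREAM_KEY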
Hope that helps!
Use none or no value at all (:0) for the video device index and provide a secondary input:
ffmpeg -f avfoundation -i :0 -i image.png ...
There's a -loop option for images such as animated GIFs and -stream_loop for input streams.
You can use the FFmpeg APIs directly instead of the CLI.
I have written a small piece of code to re-stream a camera's RTSP stream to an Nginx streaming server using FFmpeg.
Everything works fine; I re-stream the RTSP to the Nginx streaming server using the following FFmpeg command:
ffmpeg -rtsp_transport tcp -i 'rtsp://212.78.10.88:554/stream' -f lavfi -i aevalsrc=0 -vcodec copy -acodec aac -map 0:0 -map 1:0 -shortest -strict experimental -f flv rtmp://localhost:1935/live/stream
My basic problem is H.265 and H.265+: FFmpeg fails to re-stream H.265 streams. I tried different command parameters but had no luck.
Does anybody know how to re-stream H.265 and H.265+ with FFmpeg?
Finally, I solved the issue of re-streaming the H.265 stream.
Since FLV/RTMP does not carry HEVC, the video has to be transcoded to H.264; I just added the following arguments to the command:
-c:v libx264 -pix_fmt yuv420p
The final command is:
ffmpeg -rtsp_transport tcp -i 'rtsp://212.78.10.88:554/stream' -f lavfi -i aevalsrc=0 -vcodec copy -acodec aac -map 0:0 -map 1:0 -c:v libx264 -pix_fmt yuv420p -shortest -strict experimental -f flv rtmp://localhost:1935/live/stream