Set correct start time of ts-file using ffmpeg

I am splitting a video into multiple 10-second .ts parts (MPEG-TS format) using ffmpeg on Windows.
To create the 2nd part (which starts 10 seconds into the video and ends 20 seconds in):
ffmpeg -i sample.avi -ss 00:00:10 -to 00:00:20 -vcodec libx264 -acodec aac -vf scale=426:-1 out1.ts
But when I check the file using ffprobe it says:
Duration: 00:00:10.02, start: 1.458667, bitrate: 359 kb/s
So the duration is OK, but the start time is incorrect. Is there any way I can use ffmpeg to correct it to 00:00:20?
The best solution would of course be to set the correct start time in my first command, where I cut out the 10-second part, but I would also be OK with running a 2nd command afterwards to fix the time.
Is this possible? I can't find any documentation, and all the examples I found are not for my exact problem and don't seem to work when I play around with them.
Full output from ffprobe:
ffprobe.exe out1.ts
ffprobe version git-2020-02-06-343ccfc Copyright (c) 2007-2020 the FFmpeg developers
built with gcc 9.2.1 (GCC) 20200122
configuration: --enable-gpl --enable-version3 --enable-sdl2 --enable-fontconfig --enable-gnutls --enable-iconv --enable-libass --enable-libdav1d --enable-libbluray --enable-libfreetype --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libtheora --enable-libtwolame --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxml2 --enable-libzimg --enable-lzma --enable-zlib --enable-gmp --enable-libvidstab --enable-libvorbis --enable-libvo-amrwbenc --enable-libmysofa --enable-libspeex --enable-libxvid --enable-libaom --enable-libmfx --enable-ffnvcodec --enable-cuvid --enable-d3d11va --enable-nvenc --enable-nvdec --enable-dxva2 --enable-avisynth --enable-libopenmpt --enable-amf
libavutil 56. 39.100 / 56. 39.100
libavcodec 58. 68.100 / 58. 68.100
libavformat 58. 38.100 / 58. 38.100
libavdevice 58. 9.103 / 58. 9.103
libavfilter 7. 74.100 / 7. 74.100
libswscale 5. 6.100 / 5. 6.100
libswresample 3. 6.100 / 3. 6.100
libpostproc 55. 6.100 / 55. 6.100
Input #0, mpegts, from 'out1.ts':
Duration: 00:00:10.02, start: 1.458667, bitrate: 359 kb/s
Program 1
Metadata:
service_name : Service01
service_provider: FFmpeg
Stream #0:0[0x100]: Video: h264 (High) ([27][0][0][0] / 0x001B), yuv420p(progressive), 426x260 [SAR 780:781 DAR 18:11], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1[0x101]: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 131 kb/s

You can add the -muxdelay 10 argument.
Update: as jarno commented, add: -muxdelay 10 -muxpreload 10
I found the solution here.
Using the following command:
ffmpeg -y -i sample.avi -ss 00:00:10 -to 00:00:20 -vcodec libx264 -acodec aac -vf scale=426:-1 -muxdelay 10 out1.ts
I am getting the following result from ffprobe:
Input #0, mpegts, from 'out1.ts':
Duration: 00:00:10.05, start: 20.020222, bitrate: 141 kb/s
I used the following commands for testing:
Generating an AVI sample file with synthetic video and synthetic audio (in uncompressed raw format):
ffmpeg -y -f lavfi -i testsrc=size=192x108:rate=30 -f lavfi -i sine=frequency=500 -c:v rawvideo -pix_fmt bgr24 -c:a pcm_s16le -ar 22050 -t 30 sample.avi
Executing the command from your question (with -muxdelay 10 added, and -y for overwriting the output), then ffprobe:
ffmpeg -y -i sample.avi -ss 00:00:10 -to 00:00:20 -vcodec libx264 -acodec aac -vf scale=426:-1 -muxdelay 10 out1.ts
ffprobe out1.ts
Update:
As jarno commented, adding -muxpreload 10 is also necessary.
For a cleaner solution, add: -muxdelay 10 -muxpreload 10
Command example:
ffmpeg -y -i sample.avi -ss 00:00:10 -to 00:00:20 -vcodec libx264 -acodec aac -vf scale=426:-1 -muxdelay 10 -muxpreload 10 out1.ts
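To double-check the result, ffprobe can print just the container start time and duration; a quick sketch using standard ffprobe options (out1.ts as above):
ffprobe -v error -show_entries format=start_time,duration -of default=noprint_wrappers=1 out1.ts
On the fixed file this should report a start_time of roughly 20.02, matching the summary line above.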

Related

FFMPEG blending screen two libvpx-vp9 webm yuva420p video files comes out looking incorrect

I'm trying to screen-blend two libvpx-vp9 webm files so that the blend comes out looking correct in FFmpeg. The example below takes two RGBA PNG input files, loops each for a couple of seconds into a libvpx-vp9 webm file with the pixel format yuva420p, then tries to blend them. I then output frames of these to visualise how they look here in this Stack Overflow post.
I have these two input RGBA PNGs (circle and Pikachu).
I create two libvpx-vp9 webm files from them like this:-
ffmpeg -loop 1 -i circle_50_rgba.png -c:v libvpx-vp9 -t 2 -pix_fmt yuva420p circle_libvpx-vp9_yuva420p.webm
ffmpeg -loop 1 -i pikachu_rgba.png -c:v libvpx-vp9 -t 2 -pix_fmt yuva420p pikachu_libvpx-vp9_yuva420p.webm
I then try to blend these two libvpx-vp9 webm files like this:-
ffmpeg -y -c:v libvpx-vp9 -i circle_libvpx-vp9_yuva420p.webm -c:v libvpx-vp9 -i pikachu_libvpx-vp9_yuva420p.webm -filter_complex "[1:v][0:v]blend=all_mode=screen" pikachu_reverse_all_mode_screened_onto_circle_both_yuva420p.webm
and extract a frame from that like this
ffmpeg -c:v libvpx-vp9 -i pikachu_reverse_all_mode_screened_onto_circle_both_yuva420p.webm -frames:v 1 pikachu_reverse_all_mode_screened_onto_circle_from_yuva420p.png
Which looks like this:-
If I do this without all_mode, like this
ffmpeg -y -c:v libvpx-vp9 -i circle_libvpx-vp9_yuva420p.webm -c:v libvpx-vp9 -i pikachu_libvpx-vp9_yuva420p.webm -filter_complex "[1:v][0:v]blend=screen" pikachu_reverse_screened_onto_circle_both_yuva420p.webm
and then extract the png so we can visualise it, like this:-
ffmpeg -c:v libvpx-vp9 -i pikachu_reverse_screened_onto_circle_both_yuva420p.webm -frames:v 1 pikachu_reverse_screened_onto_circle_from_yuva420p.png
it gives this output:-
which is also incorrect because the white part of the circle should be completely white in the screen blend. We shouldn't see a faint yellow outline of Pikachu inside the white part.
It should look like this:-
Here is the full log:-
ffmpeg -y -c:v libvpx-vp9 -i circle_libvpx-vp9_yuva420p.webm -c:v libvpx-vp9 -i pikachu_libvpx-vp9_yuva420p.webm -filter_complex "[1:v][0:v]blend=screen" pikachu_reverse_screened_onto_circle_both_yuva420p.webm
ffmpeg version 4.2.7-0ubuntu0.1 Copyright (c) 2000-2022 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[libvpx-vp9 @ 0x55d5b1f34680] v1.8.2
Last message repeated 1 times
Input #0, matroska,webm, from 'circle_libvpx-vp9_yuva420p.webm':
Metadata:
ENCODER : Lavf58.29.100
Duration: 00:00:02.00, start: 0.000000, bitrate: 19 kb/s
Stream #0:0: Video: vp9 (Profile 0), yuva420p(tv), 50x50, SAR 1:1 DAR 1:1, 25 fps, 25 tbr, 1k tbn, 1k tbc (default)
Metadata:
alpha_mode : 1
ENCODER : Lavc58.54.100 libvpx-vp9
DURATION : 00:00:02.000000000
[libvpx-vp9 @ 0x55d5b1f854c0] v1.8.2
Last message repeated 1 times
Input #1, matroska,webm, from 'pikachu_libvpx-vp9_yuva420p.webm':
Metadata:
ENCODER : Lavf58.29.100
Duration: 00:00:02.00, start: 0.000000, bitrate: 29 kb/s
Stream #1:0: Video: vp9 (Profile 0), yuva420p(tv), 50x50, SAR 1:1 DAR 1:1, 25 fps, 25 tbr, 1k tbn, 1k tbc (default)
Metadata:
alpha_mode : 1
ENCODER : Lavc58.54.100 libvpx-vp9
DURATION : 00:00:02.000000000
[libvpx-vp9 @ 0x55d5b1f38940] v1.8.2
[libvpx-vp9 @ 0x55d5b1f49440] v1.8.2
Stream mapping:
Stream #0:0 (libvpx-vp9) -> blend:bottom
Stream #1:0 (libvpx-vp9) -> blend:top
blend -> Stream #0:0 (libvpx-vp9)
Press [q] to stop, [?] for help
[libvpx-vp9 @ 0x55d5b1f49440] v1.8.2
[libvpx-vp9 @ 0x55d5b1f38940] v1.8.2
[libvpx-vp9 @ 0x55d5b1f80c40] v1.8.2
Output #0, webm, to 'pikachu_reverse_screened_onto_circle_both_yuva420p.webm':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Video: vp9 (libvpx-vp9), yuva420p, 50x50 [SAR 1:1 DAR 1:1], q=-1--1, 200 kb/s, 25 fps, 1k tbn, 25 tbc (default)
Metadata:
encoder : Lavc58.54.100 libvpx-vp9
Side data:
cpb: bitrate max/min/avg: 0/0/0 buffer size: 0 vbv_delay: -1
frame= 50 fps=0.0 q=0.0 Lsize= 7kB time=00:00:01.96 bitrate= 29.3kbits/s speed=33.2x
video:4kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 96.711426%
I also tried doing a conversion to rgba, like this:-
ffmpeg -y -c:v libvpx-vp9 -i circle_libvpx-vp9_yuva420p.webm -c:v libvpx-vp9 -i pikachu_libvpx-vp9_yuva420p.webm -filter_complex "[0:v]format=pix_fmts=rgba[zero];[1:v]format=pix_fmts=rgba[one];[one][zero]blend=screen" pikachu_reverse_screened_all_mode_onto_circle_after_rgba_conversion_webm.webm
However, the result of this also comes out with yellow inside the white circle, where it should be white.
I was wondering what I need to do so that the blend of these two libvpx-vp9 webm video files looks correct, like it does above.
Note: I need to retain the alpha channels, because sometimes assets have transparent alpha channels. In the examples above the assets happen to have opaque alpha channels.
The secret is to convert to gbrp and also use all_mode: gbrp is planar RGB, so the blend operates on the R, G, and B planes instead of on luma/chroma, and all_mode applies the screen mode to every plane rather than only the first. Like this:-
ffmpeg -y -c:v libvpx-vp9 -i circle_libvpx-vp9_yuva420p.webm -c:v libvpx-vp9 -i pikachu_libvpx-vp9_yuva420p.webm -filter_complex "[0:v]format=pix_fmts=gbrp[zero];[1:v]format=pix_fmts=gbrp[one];[one][zero]blend=all_mode=screen" pikachu_reverse_screened_all_mode_onto_circle_after_gbrp_conversion_webm.webm
If you then extract a frame from that like this:-
ffmpeg -c:v libvpx-vp9 -i pikachu_reverse_screened_all_mode_onto_circle_after_gbrp_conversion_webm.webm -frames:v 1 pikachu_reverse_screened_all_mode_onto_circle_after_gbrp_conversion_png.png
you'll get it looking like this:-
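One caveat: gbrp has no alpha plane, so this conversion drops the alpha channels the question says must be retained. An untested variation that may keep them, assuming the blend filter in your build accepts the planar-RGB-with-alpha format gbrap:
ffmpeg -y -c:v libvpx-vp9 -i circle_libvpx-vp9_yuva420p.webm -c:v libvpx-vp9 -i pikachu_libvpx-vp9_yuva420p.webm -filter_complex "[0:v]format=pix_fmts=gbrap[zero];[1:v]format=pix_fmts=gbrap[one];[one][zero]blend=all_mode=screen" -c:v libvpx-vp9 -pix_fmt yuva420p pikachu_screened_onto_circle_gbrap.webm
Here the output filename is a placeholder, and -pix_fmt yuva420p asks libvpx-vp9 to encode the alpha back into the webm.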

ffmpeg command to copy video config from ffprobe

What's the command to convert an MP4 into an output format matching a video with this ffprobe output:
ffprobe version N-82151-g1e660fe Copyright (c) 2007-2016 the FFmpeg developers
built with gcc 5.4.0 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-libebur128 --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-decklink --enable-zlib
libavutil 55. 35.100 / 55. 35.100
libavcodec 57. 65.100 / 57. 65.100
libavformat 57. 57.100 / 57. 57.100
libavdevice 57. 2.100 / 57. 2.100
libavfilter 6. 66.100 / 6. 66.100
libswscale 4. 3.100 / 4. 3.100
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Input #0, avi, from '.\sample.mp4.hd.mojo':
Metadata:
encoder : Lavf57.57.100
Duration: 00:37:28.85, start: 0.000000, bitrate: 10461 kb/s
Stream #0:0: Video: mjpeg (MJPG / 0x47504A4D), yuvj420p(pc, bt470bg/unknown/unknown), 960x540 [SAR 1:1 DAR 16:9], 9745 kb/s, 20 fps, 20 tbr, 20 tbn, 20 tbc
Stream #0:1: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 22050 Hz, 2 channels, s16, 705 kb/s
I've tried:
ffmpeg -i "input.mp4" -c:v mjpeg -c:a pcm_s16le -an output.mp4
The output, however, does not play in the custom player.
Update:
I found a file which seemed to contain some config:
[high]
label=High quality
labelHelp=Converts to high quality
outSuffix=hd
codecParam=-vcodec mjpeg -vf scale=min'(960,iw)':-1 -acodec pcm_s16le -ar 22050 -ac 2 -r 20 -q:v 2
[medium]
label=Medium quality
labelHelp=Converts to medium quality
outSuffix=mid
codecParam=-vcodec mjpeg -vf scale=min'(960,iw)':-1 -acodec pcm_s16le -ar 22050 -ac 2 -r 20 -q:v 5
[low]
label=Low quality
labelHelp=Converts to low quality
outSuffix=low
codecParam=-vcodec mjpeg -vf scale=min'(960,iw)':-1 -acodec pcm_s16le -ar 22050 -ac 2 -r 20 -q:v 8
[main]
label=Convert to NComputing MOJO
labelHelp=Transcodes original file format to the NComputing MOJO format
outSuffix=
codecParam=
Finally, I got it to work.
To make a MOJO video file for NComputing devices, here's the ffmpeg command:
ffmpeg -i "input.mp4" -vcodec mjpeg -vf scale=min'(960,iw)':-1 -acodec pcm_s16le -ar 22050 -ac 2 -r 20 -q:v 8 -f avi output.mojo
That is the low-quality variant; for other quality levels, refer to the mojo.col file.
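For reference, the [high] profile in that config would presumably map to the same command with -q:v 2; the .hd.mojo suffix is inferred from outSuffix=hd and the sample.mp4.hd.mojo filename in the ffprobe output above:
ffmpeg -i "input.mp4" -vcodec mjpeg -vf scale=min'(960,iw)':-1 -acodec pcm_s16le -ar 22050 -ac 2 -r 20 -q:v 2 -f avi output.hd.mojo
The [medium] profile differs only in -q:v 5 and the .mid suffix.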

how to stream live m3u8 file using ffmpeg to youtube rtmp

I want to restream a live m3u8 file to YouTube. I used the following command:
ffmpeg -re -i "http://mypanel.tv:8080/live/****/slyv0955k9/14131.m3u8" -c:v copy -c:a aac -ar 44100 -ab 128k -ac 2 -strict -2 -flags +global_header -bsf:a aac_adtstoasc -bufsize 3000k -f flv "rtmp://live-dfw.twitch.tv/app/{live_231566994_FS4BN0qoJMeXEuWklm6j0l1ODQj9u6}"
and in return I get this from my Linux server:
[root@server ~]# ffmpeg -re -i http://mypanel.tv:8080/live/****/slyv0955k9/14131.m3u8
-c:v copy -c:a aac -ar 44100 -ab 128k -ac 2 -strict -2 -flags +global_header -bsf:a aac_adtstoasc -bufsize 3000k -f flv "<rtmp://live-dfw.twitch.tv/app/{live_23156556994_FS4BN0qoJMeXEuWklm6j0l1ODQj9u6}>"ffmpeg version 2.6.8 Copyright (c) 2000-2016 the FFmpeg developers
built with gcc 4.8.5 (GCC) 20150623 (Red Hat 4.8.5-4)
configuration: --prefix=/usr --bindir=/usr/bin --datadir=/usr/share/ffmpeg --incdir=/usr/include/ffmpeg --libdir=/usr/lib64 --mandir=/usr/share/man --arch=x86_64 --optflags='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --enable-bzlib --disable-crystalhd --enable-gnutls --enable-ladspa --enable-libass --enable-libcdio --enable-libdc1394 --enable-libfaac --enable-nonfree --enable-libfdk-aac --enable-nonfree --disable-indev=jack --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-openal --enable-libopenjpeg --enable-libopus --enable-libpulse --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libvorbis --enable-libv4l2 --enable-libx264 --enable-libx265 --enable-libxvid --enable-x11grab --enable-avfilter --enable-avresample --enable-postproc --enable-pthreads --disable-static --enable-shared --enable-gpl --disable-debug --disable-stripping --shlibdir=/usr/lib64 --enable-runtime-cpudetect
libavutil 54. 20.100 / 54. 20.100
libavcodec 56. 26.100 / 56. 26.100
libavformat 56. 25.101 / 56. 25.101
libavdevice 56. 4.100 / 56. 4.100
libavfilter 5. 11.102 / 5. 11.102
libavresample 2. 1. 0 / 2. 1. 0
libswscale 3. 1.101 / 3. 1.101
libswresample 1. 1.100 / 1. 1.100
libpostproc 53. 3.100 / 53. 3.100
[h264 @ 0x1029ba0] non-existing SPS 0 referenced in buffering period
Last message repeated 1 times
[h264 @ 0x1073680] non-existing SPS 0 referenced in buffering period
Input #0, hls,applehttp, from 'http://mypanel.tv:8080/live/***/slyv0955k9/14131.m3u8':
Duration: N/A, start: 39062.400000, bitrate: N/A
Program 0
Metadata:
variant_bitrate : 0
Stream #0:0: Video: h264 (Main) ([27][0][0][0] / 0x001B), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 90k tbn, 50 tbc
Stream #0:1: Audio: aac (LC) ([15][0][0][0] / 0x000F), 48000 Hz, stereo, fltp, 133 kb/s
At least one output file must be specified
So can anyone help me with this? Please note I'm not an expert in Linux, so please give me specific commands to restream a live m3u8 file.
Try this:
#! /bin/bash
PRESET="ultrafast" # ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo
SOURCE="http://sample.vodobox.net/skate_phantom_flex_4k/skate_phantom_flex_4k.m3u8"
YOUTUBE_URL="rtmp://a.rtmp.youtube.com/live2"
KEY="xxxx-xxxx-xxxx-xxxx" # Your youtube key. (https://www.youtube.com/live_dashboard > encoder config > name/key)
ffmpeg \
-re -i "$SOURCE" -vcodec libx264 -preset $PRESET -maxrate 3000k -b:v 2500k \
-bufsize 600k -pix_fmt yuv420p -g 60 -c:a aac -b:a 160k -ac 2 \
-ar 44100 -f flv -s 1280x720 "$YOUTUBE_URL/$KEY"
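For what it's worth, the "At least one output file must be specified" error in the question's log is what ffmpeg prints when no output follows the options: the command was apparently entered with a line break after the .m3u8 URL, so the shell ran ffmpeg with only the input half. Keeping everything on one line, or ending each continued line with a backslash as in the script above, avoids it. A possible one-line stream-copy version of the original command, pointed at the YouTube ingest URL (stream key placeholder assumed):
ffmpeg -re -i "http://mypanel.tv:8080/live/****/slyv0955k9/14131.m3u8" -c:v copy -c:a aac -b:a 128k -ar 44100 -ac 2 -flags +global_header -bufsize 3000k -f flv "rtmp://a.rtmp.youtube.com/live2/xxxx-xxxx-xxxx-xxxx"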

FFmpeg error with multiple outputs

I'm trying to stream from a webcam as the data input with FFmpeg, but I also need to record a video in addition to the stream, both with the same command, for a few minutes.
(Run separately, the recording command works perfectly.)
FFmpeg code:
ffmpeg -f dshow -i video="Integrated Webcam" -t 300 -c:v libx264 -segment_atclocktime 1 -segment_format mp4 '/meu_video.mp4' | -s 640x360 -ac 2 -f flv -vcodec libx264 -profile:v baseline -maxrate 600000 -bufsize 600000 -r 25 -ar 44100 -c:a libfaac -b:a 128k "http://localhost:3030"
There are two errors: one when I try to join the two commands using | or \, and another when I run only the streaming part on its own to test.
Log multiple outputs:
ffmpeg version 3.3.3 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 7.1.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 58.100 / 55. 58.100
libavcodec 57. 89.100 / 57. 89.100
libavformat 57. 71.100 / 57. 71.100
libavdevice 57. 6.100 / 57. 6.100
libavfilter 6. 82.100 / 6. 82.100
libswscale 4. 6.100 / 4. 6.100
libswresample 2. 7.100 / 2. 7.100
libpostproc 54. 5.100 / 54. 5.100
Input #0, dshow, from 'video=Integrated Webcam':
Duration: N/A, start: 264374.193000, bitrate: N/A
Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 30 fps, 30 tbr, 10000k tbn, 10000k tbc
http://localhost:3030/: Unknown error
Edit 3: I ran the command using -report and generated the report, but it's too big to paste into the question.
https://www.dropbox.com/s/2xsuzq5fx464o4w/ffmpeg-20171109-145406.log?dl=0
You don't need a separator.
ffmpeg -f dshow -rtbufsize 32M -i video="Integrated Webcam" -t 300 -c:v libx264 -segment_atclocktime 1 \
-segment_format mp4 '/meu_video_%d.mp4' -s 640x360 -f flv \
-vcodec libx264 -profile:v baseline -maxrate 600000 -bufsize 600000 -r 25 "http://localhost:3030"
(I've removed the audio options, since you don't have any audio inputs.)
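For reference, the general pattern: ffmpeg parses options left to right, and each output URL consumes the output options written before it, so a second output is simply appended after the first. A minimal sketch with placeholder filenames:
ffmpeg -f dshow -i video="Integrated Webcam" \
-t 300 -c:v libx264 recording.mp4 \
-t 300 -c:v libx264 -f flv "http://localhost:3030"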

ffmpeg overlay image and lower transparency

I have this ffmpeg command that I use to create a video from a photo, an animated GIF border overlay, and an audio track.
ffmpeg -framerate 15 -loop 1 -i photo.jpg -ignore_loop 0 -i overlay.gif -filter_complex "scale=(iw*sar)*max(600/(iw*sar)\,750/ih):ih*max(600/(iw*sar)\,750/ih), crop=600:750, overlay" -i audio.wav -c:v libx264 -c:a aac -b:a 192k -shortest output.mp4
What I want is to lower the opacity of the overlay image.
I have checked a lot of threads, but I can't figure out how to combine something like this with my existing filters.
-filter_complex "blend=all_mode='overlay':all_opacity=0.7"
Any ideas?
Here's the full ffmpeg output of one of my tests:
ffmpeg -framerate 15 -loop 1 -i photo.jpg -ignore_loop 0 -i overlay.gif -filter_complex "scale=(iw*sar)*max(600/(iw*sar)\,750/ih):ih*max(600/(iw*sar)\,750/ih), crop=600:750, blend=all_mode='overlay':all_opacity=0.7" -i audio.wav -c:v libx264 -c:a aac -b:a 192k -shortest output.mp4
ffmpeg version N-83507-g8fa18e0 Copyright (c) 2000-2017 the FFmpeg developers
built with gcc 5.4.0 (GCC)
configuration: --enable-gpl --enable-version3 --enable-cuda --enable-cuvid --enable-d3d11va --enable-dxva2 --enable-libmfx --enable-nvenc --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenh264 --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-libzimg --enable-lzma --enable-zlib
libavutil 55. 47.100 / 55. 47.100
libavcodec 57. 80.100 / 57. 80.100
libavformat 57. 66.102 / 57. 66.102
libavdevice 57. 2.100 / 57. 2.100
libavfilter 6. 73.100 / 6. 73.100
libswscale 4. 3.101 / 4. 3.101
libswresample 2. 4.100 / 2. 4.100
libpostproc 54. 2.100 / 54. 2.100
Input #0, image2, from 'photo.jpg':
Duration: 00:00:00.07, start: 0.000000, bitrate: 15374 kb/s
Stream #0:0: Video: mjpeg, yuvj420p(pc, bt470bg/unknown/unknown), 400x600 [SAR 72:72 DAR 2:3], 15 fps, 15 tbr, 15 tbn, 15 tbc
Input #1, gif, from 'overlay.gif':
Duration: N/A, bitrate: N/A
Stream #1:0: Video: gif, bgra, 600x750, 5.42 fps, 5 tbr, 100 tbn, 100 tbc
Guessed Channel Layout for Input Stream #2.0 : mono
Input #2, wav, from 'audio.wav':
Duration: 00:00:23.00, bitrate: 705 kb/s
Stream #2:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, mono, s16, 705 kb/s
[swscaler @ 00000000023300a0] deprecated pixel format used, make sure you did set range correctly
[swscaler @ 000000000234d1e0] deprecated pixel format used, make sure you did set range correctly
[Parsed_blend_2 @ 00000000022fd0c0] First input link top parameters (size 600x750, SAR 1:1) do not match the corresponding second input link bottom parameters (600x750, SAR 0:1)
[Parsed_blend_2 @ 00000000022fd0c0] Failed to configure output pad on Parsed_blend_2
Error configuring complex filters.
Invalid argument
Use the colorchannelmixer filter.
ffmpeg -framerate 15 -loop 1 -i photo.jpg \
-ignore_loop 0 -i overlay.gif \
-i audio.wav \
-filter_complex "[0]scale=(iw*sar)*max(600/(iw*sar)\,750/ih):ih*max(600/(iw*sar)\,750/ih),crop=600:750[b];[1]format=argb,colorchannelmixer=aa=0.5[ol];[b][ol]overlay" \
-c:v libx264 -c:a aac -b:a 192k -shortest output.mp4
The 0.5 sets the overlay to 50% opacity.
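To match the 0.7 opacity requested in the question, only the mixer coefficient should need to change, leaving the rest of the command as-is:
[1]format=argb,colorchannelmixer=aa=0.7[ol]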
