Blackmagic DeckLink Quad 2 and Multiple Streams with FFmpeg

I am trying to stream video from 4 or more feeds of a DeckLink Quad 2 to a local display, using FFmpeg as my transcoder. I can play two different videos fine (I only have two sources I can use simultaneously at my desk), but I struggle to combine them into a single video when both are on the DeckLink. The code I have for a single stream, run as a .bat file, is below:
ffplay -video_size 1280x720 -framerate 60 -pixel_format uyvy422 -f dshow -i video="Decklink Video Capture"
pause
Reading most forums, it would seem that sticking them together with a complex filter should work, like so:
ffmpeg -video_size 1280x720 -pixel_format uyvy422 -framerate 60 -vsync drop -f dshow -rtbufsize 150M -i video="Decklink Video Capture (5)" -i video="Decklink Video Capture" -i video="Decklink Video Capture (5)" -i video="Decklink Video Capture" -an -filter_complex "[0:v][1:v]hstack[t]; [2:v][3:v]hstack[b]; [t][b]vstack" -c:v libx264 -preset ultrafast -f mpegts pipe: | ffplay pipe: -vf scale=1280:720
pause
And with two videos not both from the DeckLink (i.e. DeckLink and file), it does work! But with both coming from the DeckLink I get the following in the console:
Input #0, dshow, from 'video=Decklink Video Capture (5)':
Duration: N/A, start: 71582788.354257, bitrate: N/A
Stream #0:0: Video: rawvideo (HDYC / 0x43594448), uyvy422(tv), 1280x720, 60 fps, 60 tbr, 10000k tbn, 10000k tbc
video=Decklink Video Capture: No such file or directory
pipe:: Invalid data found when processing input
And that stream works when run on its own too. So my optimistic guess is just that I'm using the wrong naming scheme; my only other idea is that I can't read two streams from the DeckLink card simultaneously (though I feel like I've read that I can). One more wrinkle: one of my streams does not run with the frame rate set to 60 fps; I have to set it to 59.94 fps, otherwise I get a black screen.
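If it is just the naming scheme, dshow can enumerate the exact device names it sees; this is a generic dshow sanity check, not DeckLink-specific:
ffmpeg -list_devices true -f dshow -i dummy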
Would I need to split these into multiple processes to run each stream simultaneously, save them to a temporary file or a pipeline, and then combine them in another process to display? I am concerned about the latency that approach would introduce, though. Thank you in advance!
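For reference, here is a rough sketch of that multi-process idea, with each capture published as MPEG-TS over local UDP and a third process doing the stacking; the ports, bitrate, and intermediate codec are illustrative, and the extra encode/decode hop will add some latency:
ffmpeg -f dshow -video_size 1280x720 -framerate 60 -pixel_format uyvy422 -rtbufsize 150M -i video="Decklink Video Capture (5)" -c:v mpeg2video -b:v 20M -f mpegts udp://127.0.0.1:5001
ffmpeg -f dshow -video_size 1280x720 -framerate 59.94 -pixel_format uyvy422 -rtbufsize 150M -i video="Decklink Video Capture" -c:v mpeg2video -b:v 20M -f mpegts udp://127.0.0.1:5002
ffmpeg -i udp://127.0.0.1:5001 -i udp://127.0.0.1:5002 -filter_complex "[0:v][1:v]hstack" -c:v libx264 -preset ultrafast -f mpegts pipe: | ffplay pipe: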

You have not enabled USB Debugging on your mobile.
Enable Developer Mode and USB Debugging, then run the command below:
adb shell screenrecord --output-format=h264 - | ffplay -
Wait 10 or 15 seconds and you should see your screen on your PC.

Related

How to stream the desktop using FFmpeg, and set the output to http://127.0.0.1:8080

I am trying to use FFmpeg on Windows to stream my entire desktop through my localhost address, 127.0.0.1:8080, so that it is accessible from another computer on the same network, using VLC by opening the network URL, or embedding it in a source video file, for example.
I tried this command:
ffmpeg -f gdigrab -framerate 6 -i desktop output.mp4
but this records the entire desktop (which is what I want) and stores it in the output.mp4 file. I tried changing it to:
ffmpeg -f gdigrab -framerate 6 -i desktop http://127.0.0.1:8080
but I get this error:
[gdigrab @ 0000023b7ee4e540] Capturing whole desktop as 1920x1080x32 at (0,0)
[gdigrab @ 0000023b7ee4e540] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, gdigrab, from 'desktop':
Duration: N/A, start: 1625841636.774340, bitrate: 398133 kb/s
Stream #0:0: Video: bmp, bgra, 1920x1080, 398133 kb/s, 6 fps, 1000k tbr, 1000k tbn
[NULL @ 0000023b7ee506c0] Unable to find a suitable output format for 'http://127.0.0.1:8080'
http://127.0.0.1:8080: Invalid argument
But I want to set the output to http://127.0.0.1:8080. How should I do that?
Update:
I found this command:
ffmpeg -f gdigrab -framerate 30 -i desktop -vcodec mpeg4 -q 12 -f mpegts http://127.0.0.1:8080
It seems to stream, but I am not able to open it in either VLC or Media Player.
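A detail worth noting: without the http protocol's listen option, ffmpeg acts as an HTTP client trying to push the stream to 127.0.0.1:8080, not as a server that players can connect to. A hedged sketch of the server variant (this assumes an ffmpeg build whose http protocol supports -listen):
ffmpeg -f gdigrab -framerate 30 -i desktop -vcodec mpeg4 -q 12 -f mpegts -listen 1 http://127.0.0.1:8080
and then on the playing side: ffplay http://127.0.0.1:8080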
Instead, I used HLS (HTTP Live Streaming) with ffmpeg, recording the screen and storing the .ts and .m3u8 files in a folder on the local machine.
I then self-hosted the application (specifying the root directory) using NancyServer, pointing to the .m3u8 file.
Each time the local machine starts streaming, the folder is cleared.
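A minimal sketch of the capture half of that setup; the segment length, list size, and file names here are illustrative:
ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -preset veryfast -pix_fmt yuv420p -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments stream/playlist.m3u8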
Adapted from this helpful post, I was able to share the desktop of my Win10 server machine with my Win10 client machine.
Win10 machine stream/server:
ffmpeg -f gdigrab -framerate 60 -i desktop -vcodec mpeg4 -q 12 -f mpegts udp://20.20.5.5:6666
Win10 machine play/client:
ffplay -f mpegts udp://127.0.0.1:6666
My Win10 streaming/server machine's IP address is 20.20.5.111, while the Win10 receiving/playing/client machine is 20.20.5.5.
As mentioned in another post, using localhost/127.0.0.1 on the client was the way to get it to play the video.

avconv / ffmpeg webcam capture while using minimum CPU processing

I have a question about avconv (or ffmpeg) usage.
My goal is to capture video from a webcam and save it to a file.
Also, I don't want to use too much CPU processing. (I don't want avconv to scale or re-encode the stream.)
So I was thinking of taking the compressed MJPEG video stream from the webcam and saving it directly to a file.
My webcam is a Microsoft LifeCam HD 3000 and its capabilities are:
ffmpeg -f v4l2 -list_formats all -i /dev/video0
Raw: yuyv422 : YUV 4:2:2 (YUYV) : 640x480 1280x720 960x544 800x448 640x360 424x240 352x288 320x240 800x600 176x144 160x120 1280x800
Compressed: mjpeg : MJPEG : 640x480 1280x720 960x544 800x448 640x360 800x600 416x240 352x288 176x144 320x240 160x120
What would be the avconv command to save the compressed stream directly, without avconv doing any scaling or re-encoding?
For now, I am using this command:
avconv -f video4linux2 -r 30 -s 320x240 -i /dev/video0 test.avi
I'm not sure that this command is CPU efficient, since I don't tell it anywhere to use the webcam's compressed MJPEG capability.
Is avconv taking care of configuring the webcam before it starts recording the file? Is it always working on the raw stream, doing scaling and encoding on it?
Thanks for your answer
Reading the actual documentation™ is the closest thing to magic you'll get in real life:
video4linux2, v4l2
input_format
Set the preferred pixel format (for raw video) or a codec name. This option allows one to select the input format, when several are available.
video_size
Set the video frame size. The argument must be a string in the form WIDTHxHEIGHT or a valid size abbreviation.
The command uses -c:v copy to copy the received encoding without touching it, therefore achieving the lowest resource use:
ffmpeg -f video4linux2 -input_format mjpeg -video_size 640x480 -i /dev/video0 -c:v copy <output>
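To confirm that the stream really was copied rather than re-encoded, ffprobe should report mjpeg for the output's video stream (using the same <output> placeholder as above):
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name <output>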

ffmpeg rtsp_transport to rtmp issues

I'm working on a project that requires taking RTSP links from YouTube and using ffmpeg to stream those videos to an RTMP server. My solution works; however, it has some issues.
I'm using these settings:
-max_delay 0 -initial_pause 0 -rtsp_transport udp -i " + inputLink + " -vcodec libx264 -acodec mp3 -ab 48k -bit_rate 450 -r 25 -s 640x480 -f flv " + stream
inputLink is replaced with the rtsp link, and stream is replaced with the rtmp server link
So this works, but here are the issues I'm having:
At the beginning of each video there is a big lag spike with lots of dropped frames, and then the video resyncs and plays normally.
Some videos crash ffmpeg with a "Conversion failed" message and many frames dropped during the conversion/stream.
At the end of each video it starts lagging and dropping frames right near the end; in other words, no video ends normally, every one ends by lagging out and dropping frames.
I've been struggling for a long time just to get this working, and now that I finally have, I just need to perfect it by taking care of those issues. If anyone has useful information about the rtsp_transport protocol and how to make it stream with no issues, I would greatly appreciate it. Thanks!
You got some settings wrong.
-bit_rate 450: you asked for 450 bits per second; it's no wonder it drops a lot of frames! It should be 450k.
If you want a 450 kbps stream then use -ab 48k -vb 402k, where 402 = 450 - 48.
The flv format only supports certain audio sample rates. You also need to use -ar with one of the following values: 44100, 22050 or 11025.
ffmpeg -i rtsp://... -c:v libx264 -c:a mp3 -ab 48k -ar 44100 -vb 402k -r 25 -s 640x480 -f flv test.flv
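On the lag spikes, one commonly suggested tweak (not guaranteed to help with every source) is to request TCP instead of UDP for the RTSP transport, which avoids packet loss during the initial buffering. A hedged variant of the command above that also streams straight to the RTMP server (the rtmp URL is a placeholder):
ffmpeg -rtsp_transport tcp -i rtsp://... -c:v libx264 -b:v 402k -c:a mp3 -b:a 48k -ar 44100 -r 25 -s 640x480 -f flv rtmp://server/app/streamkey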

How can I place a still image before the first frame of a video?

When I encode videos with FFmpeg, I would like to put a jpg image before the very first video frame, because when I embed the video on a webpage with the HTML5 "video" tag, it shows the very first picture as a splash image. Alternatively, I want to encode an image into a one-frame video and concatenate it with my encoded video. I don't want to use the "poster" attribute of the HTML5 "video" element.
You can use the concat filter to do that. The exact command depends on how long you want your splash screen to be. I am pretty sure you don't want a 1-frame splash screen, which lasts about 1/25 to 1/30 of a second, depending on the video ;)
The Answer
First, you need to get the frame rate of the video. Try ffmpeg -i INPUT and find the tbr value. E.g.
$ ffmpeg -i a.mkv
ffmpeg version N-62860-g9173602 Copyright (c) 2000-2014 the FFmpeg developers
built on Apr 30 2014 21:42:15 with gcc 4.8 (Ubuntu 4.8.2-19ubuntu1)
[...]
Input #0, matroska,webm, from 'a.mkv':
Metadata:
ENCODER : Lavf55.37.101
Duration: 00:00:10.08, start: 0.080000, bitrate: 23 kb/s
Stream #0:0: Video: h264 (High 4:4:4 Predictive), yuv444p, 320x240 [SAR 1:1 DAR 4:3], 25 fps, 25 tbr, 1k tbn, 50 tbc (default)
At least one output file must be specified
In the above example, it shows 25 tbr. Remember this number.
Second, you need to concatenate the image with the video. Try this command:
ffmpeg -loop 1 -framerate FPS -t SECONDS -i IMAGE \
-t SECONDS -f lavfi -i aevalsrc=0 \
-i INPUTVIDEO \
-filter_complex '[0:0] [1:0] [2:0] [2:1] concat=n=2:v=1:a=1' \
[OPTIONS] OUTPUT
If your video doesn't have audio, try this:
ffmpeg -loop 1 -framerate FPS -t SECONDS -i IMAGE \
-i INPUTVIDEO \
-filter_complex '[0:0] [1:0] concat=n=2:v=1:a=0' \
[OPTIONS] OUTPUT
FPS = tbr value got from step 1
SECONDS = duration you want the image to be shown.
IMAGE = the image name
INPUTVIDEO = the original video name
[OPTIONS] = optional encoding parameters (such as -vcodec libx264 or -b:a 160k)
OUTPUT = the output video file name
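For instance, a filled-in version of the audio variant, assuming the 25 tbr from step 1, a 3-second splash, and illustrative file names (the image must have the same dimensions as the video):
ffmpeg -loop 1 -framerate 25 -t 3 -i splash.jpg -t 3 -f lavfi -i aevalsrc=0 -i input.mp4 -filter_complex '[0:0] [1:0] [2:0] [2:1] concat=n=2:v=1:a=1' -c:v libx264 -crf 23 -c:a aac -b:a 128k output.mp4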
How Does This Work?
Let's split the command line I used:
-loop 1 -framerate FPS -t SECONDS -i IMAGE: this basically means: open the image, and loop over it to make a video of SECONDS seconds at FPS frames per second. The reason it needs the same FPS as the input video is that the concat filter we will use later has a restriction on it.
-t SECONDS -f lavfi -i aevalsrc=0: this means: generate silence for SECONDS (0 means silence). You need silence to fill up the time for the splash image. This isn't needed if the original video doesn't have audio.
-i INPUTVIDEO: open the video itself.
-filter_complex '[0:0] [1:0] [2:0] [2:1] concat=n=2:v=1:a=1': this is the best part. You take file 0 stream 0 (the image-video), file 1 stream 0 (the silent audio), and file 2 streams 0 and 1 (the real input video and audio), and concatenate them together. The options n, v, and a mean that there are 2 segments, 1 output video, and 1 output audio.
[OPTIONS] OUTPUT: this just means to encode the video to the output file name. If you are using HTML5 streaming, you'd probably want to use -c:v libx264 -crf 23 -c:a libfdk_aac (or -c:a libfaac) -b:a 128k for H.264 video and AAC audio.
Further information
You can check out the documentation for the image2 demuxer which is the core of the magic behind -loop 1.
Documentation for concat filter is also helpful.
Another good source of information is the FFmpeg wiki on concatenation.
The answer above works for me, but in my case it took too much time to execute (perhaps because it re-encodes the entire video). I found another solution that's much faster. The basic idea is:
Create a "video" that only has the image.
Concatenate the above video with the original one, without re-encoding.
Create a video that only has the image:
ffmpeg -loop 1 -framerate 30 -i image.jpg -c:v libx264 -t 3 -pix_fmt yuv420p image.mp4
Note the -framerate 30 option. It has to be the same as the main video's. Also, the image should have the same dimensions as the main video. The -t 3 specifies the length of the video in seconds.
Convert the videos to MPEG-2 transport stream
According to the official ffmpeg documentation, only certain formats can be concatenated using the concat protocol, and this includes MPEG-2 transport streams. Since we have 2 MP4 videos, they can be losslessly converted to MPEG-2 TS:
ffmpeg -i image.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts image.ts
and for the main video:
ffmpeg -i video.mp4 -c copy -bsf:v h264_mp4toannexb -f mpegts video.ts
Concatenate the MPEG-2 TS files
Now use the following command to concatenate the above intermediate files:
ffmpeg -i "concat:image.ts|video.ts" -c copy -bsf:a aac_adtstoasc output.mp4
Although there are 4 commands to run, combined they're still much faster than re-encoding the entire video.
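Since the concat protocol does no re-encoding, both .ts files must share the same codec parameters. A quick way to compare them before concatenating (run the same command against video.ts as well):
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,width,height,r_frame_rate image.ts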
My solution. It puts an image with a duration of 5 seconds before the video, while also fitting the video into a 1280x720 frame. The image should have a 16:9 aspect ratio.
ffmpeg -i video.mp4 -i image.png -filter_complex \
"color=c=black:size=1280x720 [base]; \
[base][1:v] overlay=x=0:y=0:enable='between(t,0,5)' [head]; \
[0:v] setpts=PTS+5/TB, scale=1280:720:force_original_aspect_ratio=decrease, pad=1280:720:-1:-1:color=black [body]; \
[head][body] overlay=x=0:y=0:shortest=1:enable='gt(t,5)' [v]; \
[0:a] asetpts=PTS+5/TB [a]" \
-map "[v]" -map "[a]" -preset veryfast output.mp4

FFmpeg: how to save an input camera stream into a file with the SAME codec format?

I have a camera-like device that produces a video stream and passes it into my Windows-based machine via a USB port.
Using the command:
ffmpeg -y -f vfwcap -i list
I see that (as expected) FFmpeg finds the input stream as stream #0.
Using the command:
ffmpeg -y -f vfwcap -r 25 -i 0 c:\out.mp4
I can successfully save the input stream into the file.
From the log I see:
Stream #0:0: Video: rawvideo (UYVY / 0x59565955), uyvy422, 240x320, 25 tbr, 1k tbn, 25 tbc
No pixel format specified, yuv422p for H.264 encoding chosen.
So, my input format is transcoded to yuv422p.
My question:
How can I get FFmpeg to save my input video stream into out.mp4 WITHOUT transcoding - that is, copying the input stream to the output file as closely as possible, in the same format?
How can I cause ffmpeg to save my input video stream into out.mp4 WITHOUT transcoding
You cannot. You can stream copy the rawvideo from vfwcap, but the MP4 container format does not support rawvideo. You have several options:
Use a different output container format.
Stream copy to rawvideo then encode.
Use a lossless encoder (and optionally re-encode it after capturing).
Use a different output container format
This meets your requirement of saving your input without re-encoding.
ffmpeg -f vfwcap -i 0 -codec:v copy rawvideo.nut
rawvideo creates huge file sizes.
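To put a number on that: the 240x320 uyvy422 stream from the log above is 240 x 320 x 2 bytes per frame, so at 25 fps that is roughly 3.8 MB/s, or around 230 MB per minute of capture.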
Stream copy to rawvideo then encode
This is the same as above, but the rawvideo is then encoded to a more common format.
ffmpeg -f vfwcap -i 0 -codec:v copy rawvideo.nut
ffmpeg -i rawvideo.nut -codec:v libx264 -crf 23 -preset medium -pix_fmt yuv420p -movflags +faststart output.mp4
See the FFmpeg and x264 Encoding Guide for more information about -crf, -preset, and additional detailed information on creating H.264 video.
-pix_fmt yuv420p will use a pixel format that is compatible with dumb players like QuickTime. Refer to colorspace and chroma subsampling for more info.
-movflags +faststart relocates the moov atom which allows the video to begin playback before it is completely downloaded by the client. Useful if you are hosting the video and users will view it in their browser.
Use a lossless encoder
Using huffyuv:
ffmpeg -f vfwcap -i 0 -codec:v huffyuv lossless.mkv
Using lossless H.264:
ffmpeg -f vfwcap -i 0 -codec:v libx264 -qp 0 lossless.mp4
Lossless files can be huge, but not as big as rawvideo.
Re-encoding the lossless output is the same as re-encoding the rawvideo.
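For example, the same second pass as above, just pointed at the lossless file instead:
ffmpeg -i lossless.mkv -codec:v libx264 -crf 23 -preset medium -pix_fmt yuv420p -movflags +faststart output.mp4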
