Xvfb + ffmpeg recording producing only red rectangles

I'm trying to record a process I've automated with xdotool. It appears to run correctly, but all I see in the recording are solid red rectangles. The rectangles look like the right size and position for the windows I expect xdotool to navigate through, but I'm not getting a real picture.
Here are my Xvfb and ffmpeg invocations:
export DISPLAY=:99.0
Xvfb $DISPLAY -screen 0 1920x1080x16 &
ffmpeg -y -f x11grab -video_size 1920x1080 -i $DISPLAY intellij.mpg &
Here's the MediaInfo report for the screen.webm produced by ffmpeg.
General
Complete name : C:\vm-shared-folders\screen.webm
Format : WebM
Format version : Version 2
File size : 208 KiB
Writing application : Lavf58.20.100
Writing library : Lavf58.20.100
IsTruncated : Yes
Video
ID : 1
Format : VP8
Codec ID : V_VP8
Width : 1 920 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Frame rate mode : Constant
Frame rate : 30.000 FPS
Compression mode : Lossy
Writing library : Lavc58.35.100 libvpx
Default : Yes
Forced : No

These two commands gave me good output. I found them by tinkering until it worked, but the change that likely matters is the screen depth: 1920x1080x24 instead of 1920x1080x16. x11grab appears to mishandle the 16-bit visual Xvfb was serving, while a 24-bit screen gives it a pixel format it grabs correctly; the larger -probesize simply lets ffmpeg analyze more input before guessing stream parameters.
Xvfb $DISPLAY -screen 0 1920x1080x24 &
ffmpeg -y -probesize 200M -f x11grab -video_size 1920x1080 -i "$DISPLAY" out.webm &
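For reference, here is a minimal end-to-end sketch of the working setup with a graceful shutdown added (the IsTruncated : Yes in the MediaInfo report above suggests ffmpeg was killed before it could finalize the file). The display number, resolution, and output name are carried over from the question; the shutdown handling is my assumption, not part of the original answer:
#!/bin/sh
# Headless recording sketch: 24-bit Xvfb screen, x11grab capture, clean stop.
export DISPLAY=:99.0
Xvfb "$DISPLAY" -screen 0 1920x1080x24 &    # 24-bit depth; 16-bit broke x11grab here
XVFB_PID=$!
sleep 1                                     # give the X server a moment to come up
ffmpeg -y -probesize 200M -f x11grab -video_size 1920x1080 -i "$DISPLAY" out.webm &
FFMPEG_PID=$!
# ... run the xdotool automation here ...
kill -INT "$FFMPEG_PID"                     # SIGINT = ffmpeg's clean shutdown; writes the trailer
wait "$FFMPEG_PID"
kill "$XVFB_PID"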

Related

Screen-recording a headless browser using Xvfb with ffmpeg or JMF jar (Java) shows distorted video in a container if the resolution is greater than 1024x768

I get proper video output when recording at a screen resolution of 1024x768 or less, but whenever I increase the resolution (e.g. 1600x900 or 1920x1080, anything much above 1024x768), the video comes out distorted.
Distorted frame (from the 1600x900 video): https://i.stack.imgur.com/iajzC.png
1600x900 video information: https://i.stack.imgur.com/yutDq.png
Proper frame (from the 1024x768 video): https://i.stack.imgur.com/NcUt1.png
1024x768 video information: https://i.stack.imgur.com/TeaW7.png
I am using Xvfb and either ffmpeg or the JMF jar to record a headless browser inside a Docker container. I get proper output when I record an actual display (monitor); I only hit this issue when recording the display inside Docker (specifically a headless browser) via x11grab.
To start Xvfb and the screen recording:
Xvfb :5 -screen 0 1600x900x16 &
ffmpeg -nostdin -hide_banner -nostats -loglevel panic -video_size 1600x900 -framerate 30 -f x11grab -i :5 output.mp4 &
If I replace 1600x900 with 1024x768 or less, it produces a proper video without any distortion.
Am I missing anything? Please help, and thanks for your time.
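No answer is recorded here, but note that this question also runs Xvfb at 16-bit depth (1600x900x16), the same setting that produced the red rectangles in the question above. As a hedged suggestion, not confirmed in the original thread: a 24-bit screen gives x11grab a pixel format it handles cleanly and is worth trying first:
Xvfb :5 -screen 0 1600x900x24 &
ffmpeg -nostdin -video_size 1600x900 -framerate 30 -f x11grab -i :5 output.mp4 &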

How to stream the desktop using FFmpeg and set the output to http://127.0.0.1:8080

I am trying to use FFmpeg on Windows to stream my entire desktop through my localhost address, 127.0.0.1:8080, so that it is accessible from another computer on the same network, either with VLC (Open Network Stream) or embedded as a video source, for example.
I tried the command here:
ffmpeg -f gdigrab -framerate 6 -i desktop output.mp4
but this records the entire desktop (what I want to do) and stores it in the output.mp4 file. I tried changing it to:
ffmpeg -f gdigrab -framerate 6 -i desktop http://127.0.0.1:8080
but I get this error:
[gdigrab @ 0000023b7ee4e540] Capturing whole desktop as 1920x1080x32 at (0,0)
[gdigrab @ 0000023b7ee4e540] Stream #0: not enough frames to estimate rate; consider increasing probesize
Input #0, gdigrab, from 'desktop':
Duration: N/A, start: 1625841636.774340, bitrate: 398133 kb/s
Stream #0:0: Video: bmp, bgra, 1920x1080, 398133 kb/s, 6 fps, 1000k tbr, 1000k tbn
[NULL @ 0000023b7ee506c0] Unable to find a suitable output format for 'http://127.0.0.1:8080'
http://127.0.0.1:8080: Invalid argument
but I want to set the output to http://127.0.0.1:8080. How should I do that?
Update:
I found this command:
ffmpeg -f gdigrab -framerate 30 -i desktop -vcodec mpeg4 -q 12 -f mpegts http://127.0.0.1:8080
It seems to stream, but I am not able to open it from either VLC or Media Player.
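A hedged note, not from the original thread: by default ffmpeg's http output acts as a client, trying to push the stream to a server that must already exist at that URL, which would explain why no player can connect. ffmpeg's HTTP protocol has a listen option that makes ffmpeg itself serve the stream to a single client, along these lines:
ffmpeg -f gdigrab -framerate 30 -i desktop -vcodec mpeg4 -q 12 -f mpegts -listen 1 http://127.0.0.1:8080
A player on another machine would then open http://<server-ip>:8080; note that binding to 127.0.0.1 only accepts local connections, so 0.0.0.0 may be needed for remote clients.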
Instead, I used HLS (HTTP Live Streaming) with ffmpeg, recording the screen and storing the .ts and .m3u8 files in a folder on the local machine.
I then self-host the application (specifying that folder as the root directory) using NancyServer, pointing to the .m3u8 file.
Each time the local machine starts streaming, the folder is cleared.
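A minimal sketch of the HLS capture side, assuming libx264 is available; the segment length, playlist size, and filenames are illustrative, not from the original answer:
ffmpeg -f gdigrab -framerate 30 -i desktop -c:v libx264 -preset veryfast -f hls -hls_time 4 -hls_list_size 5 -hls_flags delete_segments stream.m3u8
This writes rolling .ts segments plus a stream.m3u8 playlist into the current folder, which a static file server such as NancyServer can then expose; delete_segments keeps the folder from growing without bound.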
Adapted from this helpful post, I was able to share the desktop of my server Win10 machine with my client Win10 machine.
Win10 machine stream/server:
ffmpeg -f gdigrab -framerate 60 -i desktop -vcodec mpeg4 -q 12 -f mpegts udp://20.20.5.5:6666
Win10 machine play/client:
ffplay -f mpegts udp://127.0.0.1:6666
My streaming/server Win10 machine's IP address is 20.20.5.111, while the receiving/playing/client Win10 machine is 20.20.5.5.
As mentioned in another post, using localhost/127.0.0.1 on the client was the way to get it to play the video.
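As a hedged aside (not part of the original answer): ffplay buffers input by default, so for desktop sharing these standard flags are often added to reduce latency:
ffplay -fflags nobuffer -flags low_delay -f mpegts udp://127.0.0.1:6666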

FFMPEG store method: Separated fields?

I am using FFmpeg with the GPU h264_nvenc codec to upscale interlaced MPEG-2 files.
h264_nvenc generates H.264 with the store method "Separated fields" (as reported by MediaInfo) instead of "Interleaved fields". Files with separated fields seem to be incompatible with tools like GVG Edius. How can I change this store method?
with ffmpeg version N-92103-gebc3d04b8d Copyright (c) 2000-2018 the FFmpeg developers
built with gcc 8.2.1 (GCC) 20180813
FFmpeg command:
ffmpeg -ss 00:14:45 -hwaccel cuda -c:v mpeg2_cuvid -i "input.mpg" -t 00:00:10 -vf "scale=if(gt(dar\,1.6)\,1920\,1460):1080:flags=lanczos:interl=1" -c:v h264_nvenc -pix_fmt nv12 -flags +ilme+ildct -b:v 16M -maxrate:v 22M -bufsize:v 8M -profile:v high -level:v 4.1 -rc:v vbr -coder:v cabac -f mp4 -y "inputUpscaled_GPU.MP4"
mediainfo testUpscale_GPU.MP4:
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L4.1
Format settings : CABAC / 1 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference frames : 1 frame
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 10 s 0 ms
Bit rate mode : Variable
Bit rate : 17.8 Mb/s
Maximum bit rate : 22.0 Mb/s
Width : 1 460 pixels
Height : 1 080 pixels
Display aspect ratio : 4:3
Frame rate mode : Constant
Frame rate : 25.000 FPS
Original frame rate : 50.000 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Interlaced
Scan type, store method : Separated fields
Scan order : Top Field First
Bits/(Pixel*Frame) : 0.451
Stream size : 21.2 MiB (99%)
Codec configuration box : avcC
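The thread records no fix, so here is a hedged workaround sketch rather than a confirmed answer. MediaInfo reports "Separated fields" for field-coded (PAFF-style) H.264, which is how NVENC stores interlaced material, while libx264's interlaced mode is MBAFF, which MediaInfo reports as "Interleaved fields". If Edius compatibility matters more than GPU speed, re-running the same filter chain through libx264 in interlaced mode should produce interleaved storage (the output name is illustrative):
ffmpeg -ss 00:14:45 -hwaccel cuda -c:v mpeg2_cuvid -i "input.mpg" -t 00:00:10 -vf "scale=if(gt(dar\,1.6)\,1920\,1460):1080:flags=lanczos:interl=1" -c:v libx264 -flags +ilme+ildct -x264opts tff=1 -b:v 16M -maxrate:v 22M -bufsize:v 8M -profile:v high -level:v 4.1 -f mp4 -y "inputUpscaled_CPU.MP4"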

ffmpeg "filtergraph join" to use copy of channels and preserve input channel configuration (format - s32_le)

The command I am using is below; with it I get an 8-channel output.wav.
ffmpeg.exe -i one.wav -i two.wav -i three.wav -i four.wav -i five.wav -i six.wav -i seven.wav -i eight.wav -filter_complex '[0:0][1:0][2:0][3:0][4:0][5:0][6:0][7:0]join=8:channel_layout=octagonal' output.wav
All the input files (one.wav, two.wav, and so on through eight.wav) are 32 kHz, s32le, one channel, but the generated output.wav is s16le, 32 kHz.
I can make the output s32le with the command below:
ffmpeg.exe -i one.wav -i two.wav -i three.wav -i four.wav -i five.wav -i six.wav -i seven.wav -i eight.wav -filter_complex '[0:0][1:0][2:0][3:0][4:0][5:0][6:0][7:0]join=8:channel_layout=octagonal' -acodec pcm_s32le output.wav
But the command above seems to do a conversion from s16le to s32le (i.e. one.wav doesn't completely match the first channel of output.wav). What I want is to copy the data directly from the input channels, since the audio format of all the input files is the same as the expected format of the output channels.
Is there a way to instruct the filtergraph to do its processing in pcm_s32le?
Here is a link to a log with loglevel set to debug:
https://pastebin.com/ms4x1fLz
MediaInfo.exe one.wav
General
Complete name : one.wav
Format : Wave
File size : 6.50 MiB
Duration : 53 s 280 ms
Overall bit rate mode : Constant
Overall bit rate : 1 024 kb/s
Audio
Format : PCM
Format settings : Little / Signed
Codec ID : 1
Duration : 53 s 280 ms
Bit rate mode : Constant
Bit rate : 1 024 kb/s
Channel(s) : 1 channel
Sampling rate : 32.0 kHz
Bit depth : 32 bits
Stream size : 6.50 MiB (100%)
I believe you came to the wrong conclusion by using Audacity to compare. There should be no s16 conversion with your command using -acodec pcm_s32le. You can check by adding -loglevel debug to your command and referring to the auto_resampler lines in the log.
The input and output should match. Using the hash muxer to verify:
ffmpeg -loglevel error -i one.wav -c:a copy -f hash -
SHA256=e56af84aea634ba4686348a90b657e1536610bf977b3604a9eb5b2901ccdeea3
ffmpeg -loglevel error -i output.wav -af "channelsplit=channel_layout=octagonal:channels=FL" -c:a pcm_s32le -f hash -
SHA256=e56af84aea634ba4686348a90b657e1536610bf977b3604a9eb5b2901ccdeea3
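If you would rather pin the sample format inside the graph than at the encoder, a hedged variant: appending ffmpeg's standard aformat filter after join forces the filtergraph to negotiate s32 at that point (the filter and its syntax are standard ffmpeg; inserting it here is my suggestion, not part of the original answer):
ffmpeg.exe -i one.wav -i two.wav -i three.wav -i four.wav -i five.wav -i six.wav -i seven.wav -i eight.wav -filter_complex '[0:0][1:0][2:0][3:0][4:0][5:0][6:0][7:0]join=8:channel_layout=octagonal,aformat=sample_fmts=s32' -acodec pcm_s32le output.wav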

avconv / ffmpeg webcam capture while using minimum CPU processing

I have a question about avconv (or ffmpeg) usage.
My goal is to capture video from a webcam and save it to a file, without using too much CPU (I don't want avconv to scale or re-encode the stream).
So I was thinking of using the webcam's compressed MJPEG video stream and saving it directly to a file.
My webcam is a Microsoft LifeCam HD 3000 and its capabilities are:
ffmpeg -f v4l2 -list_formats all -i /dev/video0
Raw: yuyv422 : YUV 4:2:2 (YUYV) : 640x480 1280x720 960x544 800x448 640x360 424x240 352x288 320x240 800x600 176x144 160x120 1280x800
Compressed: mjpeg : MJPEG : 640x480 1280x720 960x544 800x448 640x360 800x600 416x240 352x288 176x144 320x240 160x120
What would be the avconv command to save the compressed stream directly, without avconv doing any scaling or re-encoding?
For now, I am using this command:
avconv -f video4linux2 -r 30 -s 320x240 -i /dev/video0 test.avi
I'm not sure this command is CPU-efficient, since I don't tell avconv anywhere to use the webcam's compressed MJPEG capability.
Does avconv take care of configuring the webcam before it starts recording to the file? Does it always work on the raw stream, doing scaling and encoding on it?
Thanks for your answer.
Reading the actual documentation™ is the closest thing to magic you'll get in real life:
video4linux2, v4l2
input_format
Set the preferred pixel format (for raw video) or a codec name. This option allows one to select the input format, when several are available.
video_size
Set the video frame size. The argument must be a string in the form WIDTHxHEIGHT or a valid size abbreviation.
The command uses -c:v copy to copy the received encoding without touching it, thereby achieving the lowest resource use:
ffmpeg -f video4linux2 -input_format mjpeg -video_size 640x480 -i /dev/video0 -c:v copy <output>
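A hedged usage note: pair the stream copy with a container that can hold MJPEG (AVI or Matroska, for example), and request one of the sizes the camera actually advertises in its MJPEG list above, since an unlisted size would force a conversion. For instance, at this camera's largest MJPEG resolution:
ffmpeg -f video4linux2 -input_format mjpeg -video_size 1280x720 -i /dev/video0 -c:v copy webcam.avi
You can confirm that no re-encoding happened by checking that ffmpeg's "Stream mapping" section prints "Stream #0:0 -> #0:0 (copy)".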
