X server crashes when I try to change dual-display orientation - x11

Ultimately I am trying to mirror my displays, but no matter how I try, doing so crashes my system (at least the X server, which I can't seem to get running again), so I have to reboot.
I recently set up a dual-monitor system running Fedora 25. I log in to my user with the GNOME on Xorg option selected.
The output when I run xrandr is the following:
Screen 0: minimum 320 x 200, current 3840 x 1080, maximum 8192 x 8192
DP-1 connected 1920x1080+1920+0 (normal left inverted right x axis y axis) 531mm x 299mm
1920x1080 60.00*+
1280x1024 75.02 60.02
1152x864 75.00
1024x768 75.03 60.00
800x600 75.00 60.32
640x480 75.00 59.94
720x400 70.08
DP-2 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 531mm x 299mm
1920x1080 60.00*+
1280x1024 75.02 60.02
1152x864 75.00
1024x768 75.03 60.00
800x600 75.00 60.32
640x480 75.00 59.94
720x400 70.08
This all looks correct, but when I try xrandr --output DP-2 --same-as DP-1, one of my monitors goes blank and the other no longer responds to clicks or any other interaction.
The output of xrandr --output DP-2 --same-as DP-1 -v --dryrun is the following:
xrandr program version 1.5.0
crtc 1: disable
screen 0: 1920x1080 506x285 mm 96.25dpi
crtc 0: 1920x1080 60.00 +0+0 "DP-2"
crtc 1: 1920x1080 60.00 +0+0 "DP-1"
There are no config files under /etc/X11/ or /etc/X11/xorg.conf.d.
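For reference, mirroring can also be requested in a single call that pins both outputs to the same origin with an explicit mode; a minimal sketch, assuming both panels run at 1920x1080:
xrandr --output DP-2 --mode 1920x1080 --pos 0x0 --output DP-1 --mode 1920x1080 --pos 0x0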

This was an issue with most of the machines in our workgroup. When we upgraded to a newer version of Fedora, the issue was fixed.
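If upgrading isn't immediately possible, a quick way to capture what killed the server is to read the previous boot's log after the forced reboot (a sketch; assumes a persistent systemd journal, otherwise check ~/.local/share/xorg/Xorg.0.log on a rootless-Xorg setup like Fedora's):
journalctl -b -1 | grep -iE 'xorg|gdm'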

Related

Screen recording a headless browser using Xvfb with ffmpeg or the JMF jar (Java) shows distorted video in the container if the resolution is greater than 1024x768

I get proper video output if I record at a screen resolution of about 1024x768 (max) or less, but whenever I increase the resolution to something like 1600x900, 1920x1080, or anything greater than roughly 1024x768, I get distorted video.
Distorted image (frame of a video) with 1600x900 resolution: https://i.stack.imgur.com/iajzC.png
1600x900 video information: https://i.stack.imgur.com/yutDq.png
Proper image (frame of a video) with 1024x768 resolution: https://i.stack.imgur.com/NcUt1.png
1024x768 video information: https://i.stack.imgur.com/TeaW7.png
I am using Xvfb and either ffmpeg or the JMF jar to record a headless browser inside a Docker container.
I get proper output when I screen-record an actual display (monitor); I face this issue only when I record the display inside Docker (specifically a headless browser) using x11grab.
To start Xvfb and the screen recording:
Xvfb :5 -screen 0 1600x900x16 &
ffmpeg -nostdin -hide_banner -nostats -loglevel panic -video_size 1600x900 -framerate 30 -f x11grab -i :5 output.mp4 &
If I replace 1600x900 with 1024x768 or less, it produces a proper video without any distortion.
Am I missing anything?
Please help!
Thanks for your time.
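One thing worth testing (a sketch, not a confirmed fix): this kind of distortion is often a framebuffer depth/stride mismatch when grabbing a 16-bit Xvfb screen, so running the virtual display at 24-bit depth while keeping the same grab size may help:
Xvfb :5 -screen 0 1600x900x24 &
ffmpeg -nostdin -hide_banner -nostats -loglevel panic -video_size 1600x900 -framerate 30 -f x11grab -i :5 output.mp4 &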

Ffmpeg: 4K RGB->YUV realtime conversion

I'm trying to use Ffmpeg for creating a hevc realtime stream from a Decklink input. The goal is high quality HDR stream usage with 10 bits.
The Decklink SDI input is fed RGB 10 bits, which is well handled by ffmpeg with the decklink option -raw_format rgb10, which gets recognized by ffmpeg as 'gbrp10le'.
I have an Nvidia Pascal-based card, which supports yuv444 10 bit (as 'yuv444p16le'), and when using '-c:v hevc_nvenc' the auto_scaler kicks in and converts to 'yuv444p16le', which I guess is the same conversion as giving '-pix_fmt yuv444p16le'.
This is working very well in 1920x1080 resolution, but in 4096x2160 resolution ffmpeg can't keep up realtime 24 or 25 fps, and I get input buffer overruns.
The culprit seems to be the RGB->YUV conversion in ffmpeg's swscale, because:
When piping the Decklink 4K RGB input with '-c:v copy' straight to /dev/null, there are no problems with buffer overruns.
And when feeding the Decklink YUV and giving '-raw_format yuv422p10' (no YUV444 input seems to be available for Decklink in ffmpeg) I get no overruns and everything works well in 4K, even if I set '-pix_fmt yuv444p16le'.
Any ideas how I could accomplish a 4K hevc in NVENC with the 10-bit RGB signal from the Decklink? Is there a way to make NVENC accept and use the RGB data without first converting to YUV? Or is there maybe a way to convert gbrp10le->yuv444p16le with cuda or scale_npp filter? I have compiled ffmpeg with npp and cuda, but I cannot figure out if I can get it to work with RGB? Whenever I try to do '-vf "hwupload_cuda"', auto_scaler kicks in and tries to convert to yuv on the cpu, which again creates underruns.
Another thing I guess could help is if there were a way to make the swscale CPU filter (or another suitable filter, if one exists) use multiple threads. Right now it seems to use only one thread at a time, maxing out at 99% on my Ryzen 3950X (3.5 GHz, 32 threads).
Example ffmpeg output:
$ ffmpeg -loglevel verbose -f decklink -raw_format rgb10 -i "Blackmagic Card 1" -c:v hevc_nvenc -preset medium -profile:v main10 -cbr 1 -b:v 20M -f nut - > /dev/null
--
Stream #0:1: Video: r210, 1 reference frame, gbrp10le(progressive), 4096x2160, 6635520 kb/s, 25 tbr, 1000k tbn, 1000k tbc
--
[graph 0 input from stream 0:1 # 0x4166180] w:4096 h:2160 pixfmt:gbrp10le tb:1/1000000 fr:25000/1000 sar:0/1
[auto_scaler_0 # 0x4168480] w:iw h:ih flags:'bicubic' interl:0
[format # 0x4166080] auto-inserting filter 'auto_scaler_0' between the filter 'Parsed_null_0' and the filter 'format'
[auto_scaler_0 # 0x4168480] w:4096 h:2160 fmt:gbrp10le sar:0/1 -> w:4096 h:2160 fmt:yuv444p16le sar:0/1 flags:0x4
[hevc_nvenc # 0x4139640] Loaded Nvenc version 11.0
--
Stream #0:0: Video: hevc (Rext), 1 reference frame (HEVC / 0x43564548), yuv444p16le(tv, progressive), 4096x2160 (0x0), q=2-31, 2000 kb/s, 25 fps, 51200 tbn
--
[decklink # 0x40f0900] Decklink input buffer overrun!:02.52 bitrate= 30471.3kbits/s speed=0.627x
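One quick check (a sketch, not a complete solution): hevc_nvenc lists the pixel formats it accepts directly, which shows whether any RGB format could be fed to the encoder without a swscale conversion, and the GPU scaler filters document whether they expose a 'format' option for doing the conversion on the card (the filter names assume an ffmpeg build with CUDA/NPP support, as described above):
ffmpeg -h encoder=hevc_nvenc
ffmpeg -h filter=scale_npp
ffmpeg -h filter=scale_cuda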

Output image with correct aspect with ffmpeg

I have a mkv video with the following properties (obtained with mediainfo):
Width : 718 pixels
Height : 432 pixels
Display aspect ratio : 2.35:1
Original display aspect ratio : 2.35:1
I'd like to take screenshots of it at certain times:
ffmpeg -ss 4212 -i filename.mkv -frames:v 1 -q:v 2 out.jpg
This will produce a 718x432 jpg image, but the aspect ratio is wrong (the image is "squeezed" horizontally). AFAIK, the output image should be 1015*432 (with width=height * DAR). Is this calculation correct?
Is there a way to have ffmpeg output images with the correct size/AR for all videos (i.e. no "hardcoded" values)? I tried playing with the setdar/setsar filters without success.
Also, out of curiosity, trying to obtain SAR and DAR with ffmpeg produces:
Stream #0:0(eng): Video: h264 (High), yuv420p(tv, smpte170m/smpte170m/bt709, progressive),
718x432 [SAR 64:45 DAR 2872:1215], SAR 155:109 DAR 55645:23544, 24.99 fps, 24.99 tbr, 1k tbn, 49.98 tbc (default)
2872/1215 is 2.363, so a slightly different value than what mediainfo reported. Anyone knows why?
Without looking at the file, I can't diagnose the reason for the distinct readings, but the generic method to get a square-pixel result is:
ffmpeg -ss 4212 -i filename.mkv -vf scale=iw*sar:ih -frames:v 1 -q:v 2 out.jpg
According to the FFmpeg documentation:
ffmpeg -ss 4212 -i filename.mkv -vf scale='trunc(ih*dar):ih',setsar=1/1 \
-frames:v 1 -q:v 2 out.jpg
To make sure the resulting resolution is even (required by some codecs), use trunc(ih*dar/2)*2 as the width expression.
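To see exactly what a given file reports for SAR and DAR, a quick ffprobe sketch (field names as used by current ffprobe builds; substitute your own file):
ffprobe -v error -select_streams v:0 -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio -of default=noprint_wrappers=1 filename.mkv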

xrandr Size 1920x1080 not found in available modes ubuntu

I'm using a Dell XPS 15 9550 with a 4K display and Ubuntu as the OS.
I need to use Matlab, but I have (as always) a HiDPI issue. Currently I'm using the R2017a version of Matlab.
To work around this problem I'm using a little script:
Myscript.sh
#!/bin/sh
#set scaling to x1.0 to remove the zoom used in HDPI screens
gsettings set org.gnome.desktop.interface scaling-factor 1
#Used in ubuntu machines
gsettings set com.ubuntu.user-interface scale-factor "{'HDMI1': 8, 'eDP1': 8}"
#applying full HD resolution
xrandr -s 1920x1080
# call your program
/usr/local/MATLAB/R2017a/bin/matlab
#wait for the process to terminate
wait
#now coming back to the original screen resolution and scaling
# set scaling to x2.0
gsettings set org.gnome.desktop.interface scaling-factor 2
#same as before
gsettings set com.ubuntu.user-interface scale-factor "{'HDMI1': 8, 'eDP1': 16}"
#back to original resolution
xrandr -s 3840x2160
When I launch it I receive this error from console:
"Size 1920x1080 not found in available modes"
So I've done:
cvt 1920 1080 60
Output:
# 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
After that:
xrandr --newmode "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
and finally
xrandr --addmode eDP-1-1 1920x1080
I found eDP-1-1 using xrandr -q. Here is the output:
Screen 0: minimum 8 x 8, current 3840 x 2160, maximum 16384 x 16384
eDP-1-1 connected primary 3840x2160+0+0 (normal left inverted right x axis y axis) 346mm x 194mm
3840x2160 60.00*+
2048x1536 60.00
1920x1440 60.00
1856x1392 60.01
1792x1344 60.01
1920x1200 59.95
1920x1080 59.93
1600x1200 60.00
1680x1050 59.95 59.88
1600x1024 60.17
1400x1050 59.98
1280x1024 60.02
1440x900 59.89
1280x960 60.00
1360x768 59.80 59.96
1152x864 60.00
1024x768 60.04 60.00
960x720 60.00
928x696 60.05
896x672 60.01
960x600 60.00
960x540 59.99
800x600 60.00 60.32 56.25
840x525 60.01 59.88
800x512 60.17
700x525 59.98
640x512 60.02
720x450 59.89
640x480 60.00 59.94
680x384 59.80 59.96
576x432 60.06
512x384 60.00
400x300 60.32 56.34
320x240 60.05
DP-1-1 disconnected (normal left inverted right x axis y axis)
HDMI-1-1 disconnected (normal left inverted right x axis y axis)
DP-1-2 disconnected (normal left inverted right x axis y axis)
HDMI-1-2 disconnected (normal left inverted right x axis y axis)
So I think I've done everything right, but the script still doesn't work and the console still gives me the same error.
Am I doing something wrong?
I had the same problem with my 4k screen. I did all the steps you described but instead of:
#applying full HD resolution
xrandr -s 1920x1080
I used this:
xrandr --output eDP-1-1 --mode 1920x1080
and it worked flawlessly.
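Applied to the script above, the two resolution switches would become something like this (a sketch; the output name eDP-1-1 comes from the xrandr -q output in the question, so adjust it for your machine):
#applying full HD resolution (replaces: xrandr -s 1920x1080)
xrandr --output eDP-1-1 --mode 1920x1080
#back to the original resolution (replaces: xrandr -s 3840x2160)
xrandr --output eDP-1-1 --mode 3840x2160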
Also, if you only have troubles in Matlab with your 4k screen, you can consider changing the scaling within Matlab by typing the following commands in the Matlab terminal:
s = settings;s.matlab.desktop.DisplayScaleFactor
s.matlab.desktop.DisplayScaleFactor.PersonalValue = 2
This setting will only take effect after matlab is restarted.
My main source for HiDPI screens on Linux is the ArchWiki; it provides clear, detailed information.

Use ffmpeg to watermark and scale an image on video

I want to be able to watermark videos with a logo image, which contains a website url.
The videos can be of different formats and dimensions.
I'm trying to figure out a generic ffmpeg command to achieve it, so that I don't have to tweak the command depending on the video I have to process.
So far i got:
ffmpeg -i sample.mov -sameq -acodec copy -vf 'movie=logo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]' sample2.mov
This way, though, the logo will look too big or too small with videos of different sizes.
I've seen there is a scale option for avfilter, but I haven't figured out whether it's possible to resize the logo image based on the dimensions of the input video, so that I can, for example, scale the logo to 1/3 of the video width and keep the image ratio.
Any ideas? It doesn't need to be done in a single command; it could even be a script.
Thanks in advance.
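A single-command variant (a sketch, assuming a reasonably recent ffmpeg with the scale2ref filter; filenames are placeholders): it sizes the logo to 1/3 of the video width while preserving the logo's aspect ratio, then overlays it in the bottom-right corner:
ffmpeg -i input.mov -i logo.png -filter_complex "[1][0]scale2ref=w=iw/3:h=ow/mdar[logo][vid];[vid][logo]overlay=main_w-overlay_w-10:main_h-overlay_h-10" -c:a copy watermarked.mov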
In the meantime I came up with this script that does the job:
#!/bin/bash
VIDEO="$1"
LOGO="$2"
VIDEO_WATERMARKED="w_${VIDEO}"
VIDEO_WIDTH=$(ffprobe -show_streams "$VIDEO" 2>&1 | grep ^width | sed s/width=//)
echo "The video width is $VIDEO_WIDTH"
cp "$LOGO" logo.png
IMAGE_WIDTH=$((VIDEO_WIDTH/3))
echo "The image width will be $IMAGE_WIDTH"
mogrify -resize "$IMAGE_WIDTH" logo.png
echo "logo.png resized"
echo "Starting watermarking"
ffmpeg -i "$VIDEO" -sameq -acodec copy -vf 'movie=logo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]' "$VIDEO_WATERMARKED"
echo "Video watermarked"
The only thing I'm not certain about is how to keep the same video quality. I thought that "-sameq" would keep the same video quality, but the resulting video size is smaller.
I've noticed this:
INPUT
Duration: 00:01:25.53, start: 0.000000, bitrate: 307 kb/s
Stream #0:0(eng): Video: mpeg4 (Simple Profile) (mp4v / 0x7634706D),
yuv420p, 640x480 [SAR 1:1 DAR 4:3], 261 kb/s, 10 fps, 10 tbr, 3k tbn, 25 tbc
OUTPUT
encoder : Lavf53.20.0
Stream #0:0(eng): Video: h264 (avc1 / 0x31637661), yuv420p, 640x480 [SAR 1:
1 DAR 4:3], q=-1--1, 10 tbn, 10 tbc
whereas the audio information is identical.
Any advice on how to keep the original video quality?
Thanks.
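A hedged sketch for keeping quality closer to the source: -sameq never meant "same quality" (it was "same quantizer") and has been removed from newer ffmpeg builds, and without an explicit video codec the output above was simply re-encoded to H.264 with default settings. Encoding explicitly with libx264 at a low CRF keeps visual quality high; the CRF value here is just a starting point:
ffmpeg -i "$VIDEO" -acodec copy -vf 'movie=logo.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]' -c:v libx264 -crf 18 -preset slow "$VIDEO_WATERMARKED"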
Thanks for the idea, Ae.!
The same thing using PowerShell:
$ffmpeg = "..."          # folder containing ffmpeg.exe / ffprobe.exe
$videoFilename = "..."
$logoFilename = "..."
$outputFilename = "..."
$videoInfo = (& "$($ffmpeg)ffprobe.exe" -show_streams -of xml -loglevel quiet $videoFilename) | Out-String
$videoStreamInfo = Select-Xml -Content $videoInfo -XPath "/ffprobe/streams/stream[@codec_type='video' and @width and @height][1]"
$videoWidth = $videoStreamInfo.Node.width
$videoHeight = $videoStreamInfo.Node.height
# logo will be 10% of the original video width
$logoWidth = $videoWidth/10
# preparing arguments
$a = "-i", $videoFilename, "-i", $logoFilename, "-filter_complex", "[1]scale=$($logoWidth):$($logoWidth)/a [logo]; [0][logo]overlay=main_w-overlay_w-10:10", "-y", "-loglevel", "error", $outputFilename
# the logo's actual height is computed by ffmpeg's scale filter from "$($logoWidth)/a", where a is the logo's own aspect ratio (iw/ih)
# clear the error stream for clean error handling
$error.Clear()
# execute ffmpeg
(& "$($ffmpeg)ffmpeg.exe" $a)
if($error.Count -gt 0){
Write-Output "error! $error"
}
Here you can get by without the 'mogrify' tool, using only the ffmpeg distribution.
