The default resolution is 1280x720, and there is only one screen on my system (Ubuntu 20.04.1).
When I set 'fractional scaling' to 125%, I get different values for the resolution.
First, I query the screen size with the extension API 'XineramaQueryScreens'; it returns:
2048x1152
When I use the command:
$ xrandr |grep \* |awk '{print $1}'
it returns:
1280x720
I want to get the actual resolution. If I understand correctly, the actual resolution is always 1280x720, no matter how I adjust 'fractional scaling'.
Why are there different results?
'XineramaQueryScreens' returns 2048x1152, which is not 125% of the default resolution 1280x720. Why?
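For reference, both values can be cross-checked from a shell. This is only a sketch: it assumes the xdpyinfo utility is installed, and the exact output format may vary between versions.

# What Xinerama reports (the same data XineramaQueryScreens sees):
xdpyinfo -ext XINERAMA | grep 'head #'
# What RandR reports as the current mode:
xrandr | grep '\*' | awk '{print $1}'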
I would like to partially colorize the output of any given bash command in my scripts, just to make it pretty and to make important information easy to spot on the screen.
It's rather easy to echo colored text, but I have failed with commands that display system measurements.
Let's use hdparm as an example.
sudo hdparm -t /dev/sda1
will result as follows:
/dev/sda1:
Timing buffered disk reads: 1284 MB in 3.00 seconds = 427.93 MB/sec
I would like "427.93 MB/sec" to be displayed in different color, let's say yellow.
How can I do that?
You can read the command output into an array, and then echo the array items in the colors you want. For example:
readarray -d= arr < <(sudo hdparm -t /dev/sda1); echo -e "${arr[0]} \e[33m${arr[1]}\e[0m"
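Alternatively, a more general approach is to colorize whatever matches a pattern rather than splitting on a fixed delimiter. This is just a sketch: it assumes hdparm's usual output format and uses Bash's $'...' quoting for the escape codes.

YELLOW=$'\e[33m'; RESET=$'\e[0m'
# Wrap the throughput figure (e.g. "427.93 MB/sec") in yellow escape codes.
sudo hdparm -t /dev/sda1 | sed -E "s|([0-9.]+ MB/sec)|${YELLOW}\1${RESET}|"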
In Bash, I am trying to match an image to a frame in ffmpeg. I also want to exit the ffmpeg process when the match is found. Here is a simplified version of the code I currently have:
ffmpeg -hide_banner -ss 0 -to 60 \
-i "video.mp4" -i "image.jpg" -filter_complex \
"blend=difference,blackframe" -f null - </dev/null 2>log.txt &
pid=$!
trap "kill $pid 2>/dev/null" EXIT
while kill -0 $pid 2>/dev/null; do
    # (grep command to monitor log file)
    # if grep finds blackframe match, return blackframe time
    sleep 0.5   # avoid busy-waiting while ffmpeg runs
done
To my understanding, if the video actually contains a black frame, I will get a false positive. How can I effectively mitigate this?
While this is not necessary to answer the question, I would also like to exit the ffmpeg process without having to use grep to constantly monitor the log file, instead using pure ffmpeg.
Edit: I say this because, while I understand that the blend filter computes the difference, I am getting a false positive on a black frame in my video and I don't know why.
Edit: A possible solution is to not use blackframe at all, but psnr (Peak Signal to Noise Ratio). However, its normal usage is to compare two videos frame by frame, and I don't know how to use it effectively with an image as input.
Use
ffmpeg -ss 0 -t 60 -copyts -i video.mp4 -i image.jpg -filter_complex \
"[0]extractplanes=y[v];[1]extractplanes=y[i];\
[v][i]blend=difference,blackframe=0,\
metadata=select:key=lavfi.blackframe.pblack:value=100:function=equal,\
trim=duration=0.0001,metadata=print:file=-" \
-an -v 0 -vsync 0 -f null -
If a match is found, it will print to stdout a line of the form,
frame:179 pts:2316800 pts_time:6.03333
lavfi.blackframe.pblack=100
otherwise, no lines will be printed. The command exits after the first match, if one is found, or once the whole input has been processed.
Since blackframe only looks at luma, I use extractplanes both to speed up blend and to avoid any unexpected format conversions blend may request.
The blackframe threshold is set to 0, so every frame gets the blackframe metadata value tagged. False positives are not possible, since blend computes the difference: the difference between a black input frame and the reference frame is equal to the reference frame itself, unless the reference is also a black frame, in which case it is not a false positive.
The first metadata filter only passes through frames with a blackframe value of 100. The trim filter stops a second frame from passing through (unless your video's fps is greater than 10000). The second metadata filter prints the selected frame's metadata.
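To consume the result from the Bash script in the question, something like the following could extract the timestamp of the match. This is a sketch: it assumes GNU grep (for the -oP flags) and simply pipes the command's stdout through it.

match_time=$(
  ffmpeg -ss 0 -t 60 -copyts -i video.mp4 -i image.jpg -filter_complex \
"[0]extractplanes=y[v];[1]extractplanes=y[i];\
[v][i]blend=difference,blackframe=0,\
metadata=select:key=lavfi.blackframe.pblack:value=100:function=equal,\
trim=duration=0.0001,metadata=print:file=-" \
  -an -v 0 -vsync 0 -f null - \
  | grep -oP 'pts_time:\K[0-9.]+' | head -n1
)
echo "Match at: ${match_time:-none}"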
I need to split a video into many smaller videos.
I have tried PySceneDetect, and its two scene detection methods don't fit my needs.
The idea is to trigger a scene cut/break every time the volume is very low, i.e. every time the audio level drops below a given threshold. I think the overall RMS dB volume level is what I mean.
The purpose is to split an mp4 video into many short videos, each containing short dialog phrases.
So far I have a command to get the overall RMS audio volume level.
ffprobe -f lavfi -i amovie=01x01TheStrongestMan.mp4,astats=metadata=1:reset=1 -show_entries frame=pkt_pts_time:frame_tags=lavfi.astats.Overall.RMS_level,lavfi.astats.1.RMS_level,lavfi.astats.2.RMS_level -of csv=p=0
How can I get only the minimum values for the RMS level and their corresponding frames or times?
And then how can I use ffmpeg to split the video into many videos at every frame that corresponds to a minimum RMS?
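For the first part, one way to pull out the quietest frames from that ffprobe output is to sort the CSV by the RMS column. This is a sketch, adapted to show only the Overall RMS level; since the levels are negative dB values, a generic numeric sort puts the quietest frames first.

ffprobe -f lavfi -i amovie=01x01TheStrongestMan.mp4,astats=metadata=1:reset=1 \
  -show_entries frame=pkt_pts_time:frame_tags=lavfi.astats.Overall.RMS_level \
  -of csv=p=0 2>/dev/null \
  | sort -t, -k2 -g | head -n 10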
Thanks.
Use the silencedetect audio filter and feed the split points from its debug output to the segment muxer's segment_times parameter.
Here is a ready-made script:
#!/bin/bash
IN=$1
OUT=$2
# Default parameters; both can be overridden via environment variables.
true ${SD_PARAMS:="-55dB:d=0.3"};
true ${MIN_FRAGMENT_DURATION:="20"};
export MIN_FRAGMENT_DURATION
if [ -z "$OUT" ]; then
echo "Usage: split_by_silence.sh input_media.mp4 output_template_%03d.mkv"
echo "Depends on FFmpeg, Bash, Awk, Perl 5. Not tested on Mac or Windows."
echo ""
echo "Environment variables (with their current values):"
echo " SD_PARAMS=$SD_PARAMS Parameters for FFmpeg's silencedetect filter: noise tolerance and minimal silence duration"
echo " MIN_FRAGMENT_DURATION=$MIN_FRAGMENT_DURATION Minimal fragment duration"
exit 1
fi
echo "Determining split points..." >& 2
SPLITS=$(
ffmpeg -nostats -v repeat+info -i "${IN}" -af silencedetect="${SD_PARAMS}" -vn -sn -f s16le -y /dev/null \
|& grep '\[silencedetect.*silence_start:' \
| awk '{print $5}' \
| perl -ne '
our $prev;
INIT { $prev = 0.0; }
chomp;
if (($_ - $prev) >= $ENV{MIN_FRAGMENT_DURATION}) {
print "$_,";
$prev = $_;
}
' \
| sed 's!,$!!'
)
echo "Splitting points are $SPLITS"
ffmpeg -v warning -i "$IN" -c copy -map 0 -f segment -segment_times "$SPLITS" "$OUT"
You specify the input file, an output file template, the silence detection parameters, and a minimum fragment duration; the script writes multiple files.
The silence detection parameters may need to be tuned:
The SD_PARAMS environment variable contains two parameters: the noise tolerance level and the minimum silence duration. The default value is -55dB:d=0.3.
Decrease the -55dB to e.g. -70dB if faint non-silent sounds trigger splitting when they should not. Increase it to e.g. -40dB if it does not split on silence because there is some noise in it, making it not completely silent.
d=0.3 is the minimum silence duration to be considered a split point. Increase it if only long silences (e.g. a full 3 seconds) should count as real, split-worthy silence.
The other environment variable, MIN_FRAGMENT_DURATION, defines the amount of time during which silence events are ignored after each split. This sets the minimum fragment duration.
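For example, a typical invocation that tunes both knobs might look like this (the file names are placeholders):

SD_PARAMS="-40dB:d=1" MIN_FRAGMENT_DURATION=10 \
  ./split_by_silence.sh input_media.mp4 'output_%03d.mkv'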
The script would fail if no silence is detected at all.
There is a refactored version on a GitHub Gist, but one user reported a problem with it.
I'd like to use imagemagick or graphicsmagick to detect whether an image has basically no content.
Here is an example:
https://s3-us-west-2.amazonaws.com/idelog/token_page_images/120c6af0-73eb-11e4-9483-4d4827589112_embed.png
I've scoured Fred's imagemagick scripts, but I can't figure out if there is a way to do this:
http://www.fmwconcepts.com/imagemagick/
The easiest way would be to use -edge detection followed by the histogram: & text: output formats. This will generate a large list of pixel information that can be passed to another process for evaluation.
convert 120c6af0-73eb-11e4-9483-4d4827589112_embed.png \
-edge 1 histogram:text:- | cut -d ' ' -f 4 | sort | uniq -c
The above example will generate a nice report of:
50999 #000000
201 #FFFFFF
As the count of white pixels is less than 1% of the black pixels, I can say the image is empty.
This can probably be simplified by passing -fx information to the awk utility.
convert 120c6af0-73eb-11e4-9483-4d4827589112_embed.png \
-format '%[mean] %[max]' info:- | awk '{print $1/$2}'
#=> 0.00684814
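To turn that ratio into a yes/no decision, a small wrapper could threshold it. This is a sketch: the 0.01 cutoff is an assumption and should be tuned for your images.

ratio=$(convert 120c6af0-73eb-11e4-9483-4d4827589112_embed.png \
  -format '%[mean] %[max]' info:- | awk '{print $1/$2}')
# Exit status 0 from awk means the ratio is below the cutoff.
if awk -v r="$ratio" 'BEGIN { exit !(r < 0.01) }'; then
  echo "image looks empty"
else
  echo "image has content"
fi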
If you are talking about the number of opaque pixels vs. the number of transparent pixels, then the following will tell you the percentage of opaque pixels.
convert test.png -alpha extract -format "%[fx:100*mean]\n" info:
39.0626
Or if you want the percentage of transparent pixels, use
convert test.png -alpha extract -format "%[fx:100*(1-mean)]\n" info:
60.9374
To get the dimensions of a file, I can do:
$ mediainfo '--Inform=Video;%Width%x%Height%' ~/Desktop/lawandorder.mov
1920x1080
However, if I give a URL instead of a file, it returns nothing:
$ mediainfo '--Inform=Url;%Width%x%Height%' 'http://url/lawandorder.mov'
(none)
How would I correctly pass a URL to MediaInfo?
You can also use curl | head to partially download the file before running mediainfo.
Here's an example of getting the dimensions of a 12 MB file from the web, where only a small portion (less than 10 KB) from the start needs to be downloaded:
curl --silent http://www.jhepple.com/support/SampleMovies/MPEG-2.mpg \
| head --bytes 10K > temp.mpg
mediainfo '--Inform=Video;%Width%x%Height%' temp.mpg
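An equivalent approach, assuming the server honors HTTP range requests, is to have curl fetch only the first 10 KiB directly:

curl --silent --range 0-10239 http://www.jhepple.com/support/SampleMovies/MPEG-2.mpg \
  --output temp.mpg
mediainfo '--Inform=Video;%Width%x%Height%' temp.mpg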
To do this, I needed to re-compile MediaInfo from source using the '--with-libcurl' option.
$ ./CLI_Compile.sh --with-libcurl
$ cd MediaInfo/Project/GNU/CLI
$ make install
Then I used this command to get the video dimensions over HTTP:
$ mediainfo '--Inform=Video;%Width%x%Height%' 'http://url/lawandorder.mov'
Note that this took a considerable amount of time to return results. I'd recommend using ffmpeg if the file is not local.
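A sketch of that ffmpeg-based alternative, using ffprobe (part of the ffmpeg suite) to read the dimensions over HTTP; the URL is the same placeholder as above:

ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height -of csv=s=x:p=0 \
  'http://url/lawandorder.mov'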