ImageMagick convert and GNU parallel together

I would like to speed up the following command:
convert -limit memory 64 -limit map 128 -antialias -delay 1x2 final/*.png movie.mp4
I have seen other blog posts where parallel and convert were used together, so I am wondering how to make it work with the command above.

If downsizing is an option, yes, you can readily do that with GNU Parallel
parallel -j 8 convert {} -resize ... {} ::: *.png
where {} stands for the filename, and the files to be processed are listed after the :::.
-j gives the number of jobs to run in parallel.
I just created 100 PNGs of 10,000 x 8,000 pixels and resized them to 2,000 x 1,200 sequentially in 8 minutes using
#!/bin/bash
for f in *.png; do
convert "$f" -resize 2000x1200! "$f"
done
Then I processed the same original images again, but with GNU Parallel
parallel convert {} -resize 2000x1200! {} ::: *.png
and it took 3 minutes 40 seconds. Subsequently making those 100 PNGs into a movie took 52 seconds.
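For the encoding step itself, ffmpeg is generally much faster than convert. A minimal sketch, assuming your ffmpeg supports glob patterns and that the 2 fps implied by -delay 1x2 is what you want:
# Encode the PNG sequence with ffmpeg instead of convert.
# -framerate 2 matches the 2 fps implied by "-delay 1x2"; adjust to taste.
ffmpeg -framerate 2 -pattern_type glob -i 'final/*.png' \
    -c:v libx264 -pix_fmt yuv420p movie.mp4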

Related

How to use ImageMagick "convert" to create Google Earth pyramid files

I have a large image that I am using ImageMagick to convert into tiles for use in a Google Earth KML, as explained in these instructions on image pyramid construction.
The idea is to chop up the images into 4 pieces, then 16, then 64, etc.
To keep things simple, I made the image canvas 4096x4096 so that dividing it in half repeatedly will produce equal-size files. The basic command is very simple. For example:
convert large.png -crop 512x512 tiles.png
The issue is that the convert command creates file names sequentially, while Google needs a row-column format. For instance, if there were four output files, the file names should be:
tiles00.png
tiles01.png
tiles10.png
tiles11.png
I brute-forced renaming scripts for up to 64 files, but before doing the 256-file case, I'd like to know if there is a simpler way to generate the file names. I'm using Linux.
Here is one way in ImageMagick 6 using for loops.
The input lena.png image is 256x256. I chose 128x128 tiles, so there will be a total of 2 rows and 2 columns, giving four output images.
infile="lena.png"
tx=128
ty=128
ncols=`convert -ping "$infile" -format "%[fx:floor(w/$tx)]" info:`
nrows=`convert -ping "$infile" -format "%[fx:floor(h/$ty)]" info:`
for ((j=0; j<nrows; j++)); do
offy=$((j*ty))
for ((i=0; i<ncols; i++)); do
offx=$((i*tx))
convert "$infile" -crop ${tx}x${ty}+${offx}+${offy} +repage lena_tile${j}${i}.png
done
done
This produces lena_tile00.png, lena_tile01.png, lena_tile10.png and lena_tile11.png.
An alternative, more compact way is to use the -set filename option with fx calculations to name the files in the image chain.
infile="lena.png"
tx=128
ty=128
ncols=`convert -ping "$infile" -format "%[fx:floor(w/$tx)]" info:`
nrows=`convert -ping "$infile" -format "%[fx:floor(h/$ty)]" info:`
convert "$infile" -crop ${tx}x${ty} -set filename:row_col "%[fx:floor(t/$nrows)]%[fx:mod(t,$ncols)]" "lena_tile%[filename:row_col].png"
See:
https://imagemagick.org/Usage/basics/#set
https://imagemagick.org/script/fx.php
Are you trying to make your own in order to learn about the process? If not, existing tools like dzsave can build complete pyramids for you very quickly in a single command. For example:
$ vipsheader wtc.jpg
wtc.jpg: 10000x10000 uchar, 3 bands, srgb, jpegload
$ /usr/bin/time -f %M:%e vips dzsave wtc.jpg x --layout google
211224:1.52
$ ls -R x | wc
2404 2316 15186
So that's making a Google-style pyramid of about 2,400 tiles in directory x from a 10,000 x 10,000 pixel JPEG image. It takes about 1.5s and 210MB of RAM.
There's a chapter in the manual introducing dzsave:
http://libvips.github.io/libvips/API/current/Making-image-pyramids.md.html

Batch resize images when one side is too large (linux)

I know that image resizing on the command line is something ImageMagick and similar tools can do. Unfortunately, I only have very basic bash scripting abilities, so I wonder if this is even possible:
check all directories and subdirectories for all files that are images
check the width and height of each image
if either exceeds X pixels, resize the image to X while keeping the aspect ratio
replace the old file with the new file (the old file should be removed/deleted)
Thank you for any input.
The implementation might not be so trivial, even for advanced users. As a one-liner:
find ~/Downloads -type f -exec file {} \; \
| awk -F: '{if ($2 ~ /image/) print $1}' \
| while IFS= read -r file_path; do
    mogrify -resize '1024x1024>' "$file_path"
done
The find invocation does the following:
Specify a directory to scan.
Specify you need files only.
For each found item, run the file command. Example outputs per file:
/Downloads/391A6 625.png: PNG image data, 1024 x 810, 8-bit/color RGB, interlaced
/Downloads/STRUCTURED NODES IN UML 2.0 ACTIVITES.pdf: PDF document, version 1.4
Note how the file names are delimited from their info by : and how the info for the PNG contains the word image. This will also be true for other image formats.
Use awk to keep only those files whose info contains the word image. This gives us image files only. Here, -F: specifies that the field delimiter is :, so $1 contains the original file name and $2 the file info. We search for the word image in the file info and print the file name if it is present.
This one is a bit tricky. The while loop reads the output of awk line by line and invokes the mogrify command to resize each image. Here we do not pipe into xargs: if file paths contain spaces or other characters which must be escaped, we get xargs unterminated quote errors, and that is a pain to handle.
Invoke the mogrify command of ImageMagick. Unlike convert, which is also an ImageMagick command, mogrify changes files in place without creating new ones. Here, '1024x1024>' tells mogrify to resize the image to fit within 1024x1024 while keeping the aspect ratio, and the > part restricts this to images that are bigger than that, so nothing gets upscaled. The final image will have its biggest side at 1024px; the other side will be smaller, unless the original image is square.
Note that it is safe to run mogrify several times over the same file: if the file's dimensions already match the target, it will not be resized again. It will, however, change the file's modification time.
Additionally, you may need not only to resize images but to compress them as well. Please refer to my gist to see how this can be done: https://gist.github.com/oblalex/79fa3f85f05924017d25004496493adb
If your goal is just to reduce big images in size, e.g. bigger than 300K, you may:
find /path/to/dir -type f -size +300k
and, as before, combine it with mogrify -strip -interlace Plane -format jpg -quality 85 -define jpeg:extent=300KB "$FILE_PATH".
In that case new .jpg files will be created for non-JPG originals, and the originals will need to be removed. Refer to the gist to see how this can be done.
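A minimal sketch of that combination, assuming /path/to/dir contains only images and that recompressing in place is acceptable:
# Recompress every file bigger than 300K; -exec handles spaces in paths.
# Non-JPEG originals get a new .jpg sibling, which you then delete yourself.
find /path/to/dir -type f -size +300k -exec mogrify -strip -interlace Plane -format jpg -quality 85 -define jpeg:extent=300KB {} \;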
You can do that with a bash shell script looping over your directories. You must identify all the file formats you want, such as jpg, png, etc. Then, for each directory, loop over each file of the given formats and use ImageMagick to resize it.
cd
dirlist="path2/directory1 path2/directory2 ...."
for dir in $dirlist; do
cd "$dir" || continue
imglist=`ls | grep -i "\.jpg$\|\.png$"`
for img in $imglist; do
convert "$img" -resize "200x200>" "$img"
done
cd
done
See https://www.imagemagick.org/script/command-line-processing.php#geometry

Split single video input files into 30 second blocks

I am trying to split several video files (.mov) into 30 second blocks.
I do not need to specify where the 30 seconds start or finish.
EXAMPLE: A single 45-second video (VID1.mov) will be split into VID1_part1.mov (30 seconds) and VID1_part2.mov (15 seconds). Ideally, I can remove the audio too.
I made an attempt using bash (OS X), but was unsuccessful. It did not split the video into multiple parts; instead it just seemed to modify the original file (and made it 1-2 seconds long):
find . -name '*.mov' -exec ffmpeg -t 30 -i \{\} -c copy \{\} \;
You can use FFmpeg's segment muxer for this.
ffmpeg -i input -c copy -segment_time 30 -f segment input%d.mov
Depending on where the video keyframes are, each segment won't start exactly at multiples of 30 seconds. You'll have to omit -c copy (and re-encode) for that.
Also, FFmpeg does not do in-place editing. Your find command presents the input name as the output name as well; that won't work.
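A sketch of a working loop, assuming all the .mov files are under the current directory; -an drops the audio track, and each VID1.mov comes out as VID1_part0.mov, VID1_part1.mov, and so on:
# Split every .mov into ~30-second pieces with no audio (-an).
# -nostdin stops ffmpeg from eating the file list on stdin.
find . -name '*.mov' | while IFS= read -r f; do
    ffmpeg -nostdin -i "$f" -an -c:v copy -segment_time 30 -f segment "${f%.mov}_part%d.mov"
done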

How to shrink and optimize images?

I'm currently using jpegoptim on CentOS 6. It lets you set a quality and file size benchmark. However, it doesn't let you resize the images.
I have 5000 images of all file sizes and dimensions that I want to shrink to a max width and max file size.
For example, I'd like to get all images down to a maximum width of 500 pixels and 50 KB.
How can I shrink and optimize all of these images?
You can do this with ImageMagick, but it is hard to say explicitly which way to do it, as it depends on whether all the files are in the same directory and also on whether you have, or can use, GNU Parallel.
Generally, you can reduce the size of a single image to a specific width of 500 like this:
# Resize image to 500 pixels wide
convert input.jpg -resize 500x result.jpg
where input.jpg and result.jpg are permitted to be the same file. If you wanted to do the height, you would use:
# Resize image to 500 pixels high
convert input.jpg -resize x500 result.jpg
since dimensions are specified as width x height.
If you only want to reduce files that are larger than 500 pixels, and not do any up-resing (increasing resolution), you add > to the dimension:
# Resize images wider than 500 pixels down to 500 pixels wide
convert image.jpg -resize '500x>' image.jpg
If you want to reduce the file size of the result, you must use a -define to guide the JPEG encoder as follows:
# Resize image to no more than 500px wide and keep output file size below 50kB
convert image.jpg -resize '500x>' -define jpeg:extent=50KB result.jpg
So, now you need to put a loop around all your files:
#!/bin/bash
shopt -s nullglob
shopt -s nocaseglob
for f in *.jpg; do
convert "$f" -resize '500x>' -define jpeg:extent=50KB "$f"
done
If you like thrashing all your CPU cores, do that using GNU Parallel to get the job done faster.
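A sketch of the parallel version, assuming GNU Parallel is installed and the JPEGs are in the current directory:
# One convert job per CPU core; each file is rewritten in place
parallel convert {} -resize '500x>' -define jpeg:extent=50KB {} ::: *.jpg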
Note that if you have a file that is smaller than 500px wide, ImageMagick will not process it, so if it is smaller than 500 pixels wide but larger than 50kB, it will not get reduced in file size. To catch that unlikely edge case, you may need to run a find afterwards to find any files over 50kB and run them through convert, but without the -resize, something like this:
find . -type f -iname "*.jpg" -size +51200c -exec convert {} -define jpeg:extent=50KB {} \;

Batch convert DDS file to DDS without mipmaps

I have a Mount & Blade: Warband mod called 1257 AD. The mod itself is great, but all the textures have to be resaved to remove mipmaps from the dds files, to avoid glitches on GNU/Linux. Of course, I could do this manually, but it would take a lot of time (over 2000 textures). Is there any way for GIMP to just open and save the files without mipmaps?
Also, last time I wanted to do this I used a loop with ImageMagick's convert, but it kept the mipmaps. So how do I do this kind of conversion?
You should use the dds:mipmaps define if you don't want to keep the mipmaps. Setting it to zero will disable the writing of mipmaps.
convert input.dds -define dds:mipmaps=0 output.dds
You can find a list of all dds defines here: http://www.imagemagick.org/script/formats.php.
If you want to convert them in place, use ImageMagick's mogrify, which is basically convert but does things in-place.
Using mogrify has the potential to irreversibly corrupt your images, so use it wisely and avoid using long strung-out convert-like command lines (use simple commands).
find . -type f -name "*.DDS" | xargs -L1 -I{} mogrify -define dds:mipmaps=0 "{}"
If you're sure you don't have spaces in your pathnames and you want a bit of a speed increase, then just do
find . -type f -name "*.DDS" | xargs mogrify -define dds:mipmaps=0
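If your find and xargs support null-delimited file lists (GNU and BSD versions do), you can get both the safety and the batching. A sketch:
# -print0/-0 make paths with spaces or quotes safe while still passing
# many files to each mogrify invocation
find . -type f -name "*.DDS" -print0 | xargs -0 mogrify -define dds:mipmaps=0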
