ImageMagick convert cache directory - macOS

I was trying to use the convert command (OS X El Capitan) to convert a .mod video to .mp4, and it quickly filled up my disk space before I halted it with Control + C.
But even after a restart the disk space is still used up, and I don't know where the cache, i.e. the half-converted video, is or how to delete it. Can anyone help?
Thank you!

If you have set either of the two environment variables:
MAGICK_TEMPORARY_PATH
MAGICK_TMPDIR
then ImageMagick will use that directory for its temporary files. So, the first way to check is to run
env | grep -i MAGICK
and see if you have either set.
Failing that (i.e. if you have no such environment variables set), the easiest way I know to find where ImageMagick caches on disk is to turn on cache debugging and force ImageMagick to go to disk. We can turn on cache debugging with:
convert -debug cache ...
and we can force ImageMagick to go to disk by limiting the RAM it is allowed to use with:
convert -limit memory 100k ...
So, if we put that together:
convert -debug cache -limit memory 100k -size 1000x1000 xc:gray image.jpg
2016-05-19T13:25:46+01:00 0:00.000 0.000u 6.9.4 Cache convert[46510]: cache.c/SetPixelCacheExtent/3500/Cache
extend gray[0] (/var/tmp/magick-46510CYSKWOdhlrym[3], disk, 8MB)
2016-05-19T13:25:46+01:00 0:00.010 0.000u 6.9.4 Cache convert[46510]: cache.c/OpenPixelCache/3776/Cache
open gray[0] (/var/tmp/magick-46510CYSKWOdhlrym[-1], Map, 1000x1000 7.629MiB)
And, if you look carefully, you can see it is using /var/tmp on my Mac OS X system - your system may be different, but this technique should show you what it is using.
Just as a test, I can set an environment variable and check ImageMagick is using it:
# Tell IM where to cache stuff on disk
export MAGICK_TEMPORARY_PATH=/tmp/TEMPPATH
# Force an operation that will require caching
convert -debug cache -limit memory 100k -size 1000x1000 xc:gray image.jpg
2016-05-19T14:09:51+01:00 0:00.000 0.000u 6.9.4 Cache convert[46584]: cache.c/SetPixelCacheExtent/3500/Cache
extend gray[0] (/tmp/TEMPPATH/magick-46584CivsEmIPjwv2[3], disk, 8MB)
2016-05-19T14:09:51+01:00 0:00.010 0.000u 6.9.4 Cache convert[46584]: cache.c/OpenPixelCache/3776/Cache
open gray[0] (/tmp/TEMPPATH/magick-46584CivsEmIPjwv2[-1], Map, 1000x1000 7.629MiB)
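Once you know the directory, you can reclaim the space by deleting any leftover work files - they are named magick-<something>, as in the debug output above. A sketch, assuming the default /var/tmp location found above, and that no convert process is still running:
# List any leftover ImageMagick work files
ls -lh /var/tmp/magick-*
# Delete them to free the disk space
rm -f /var/tmp/magick-*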
Keywords: ImageMagick, environment variables, tmp, TEMPDIR, TEMPPATH, cache, disk

Related

Imagemagick Batch operation conditional on Filesize - How?

I'm running ImageMagick on a command-line Ubuntu terminal in Windows 10, using the built-in facility in Windows 10 - the Ubuntu app.
I am a complete Linux novice but have installed ImageMagick in the above environment.
My task - Auto remove the black(ish) border and deskew the images of thousands of scanned 35mm slides.
I can successfully run commands such as
mogrify -fuzz 35% -deskew 80% -trim +repage *.tif
The problem is:-
The border is not crisply defined, nor is it completely black, hence the -fuzz. Some images are over-trimmed at a certain fuzz, while others are not trimmed enough.
So what I want to do is to have two passes at this, with different fuzz %, for these reasons:-
1st pass with a low fuzz %. Many images will not be trimmed at all, but I have found that the ones that are susceptible to over-trimming will trim OK with a low %.
Since all the images start with an identical filesize, the ones that have trimmed OK will have a lower filesize (note these are TIFFs, not JPGs).
So what I need to do is set a file-size condition for the second pass at a higher fuzz % that IGNORES files below a certain size and performs no operation on them.
In this way, with few errors, all the images will be trimmed correctly.
So the question
- How can I adjust the command line to have 2 passes and to ignore a lower file size on the second pass?
I have a horrible feeling that the answer will be a script. I have no idea how to construct one or set up Ubuntu to run it, so if so, please can you point me to help for that also!!
In ImageMagick, you could do something like the following:
Get the input filesize.
Use convert to deskew and trim.
Then get the new filesize.
Then compare the new size to the old to compute the percent difference against some percent threshold.
If the percent difference is less than the threshold, then the processing did not trim enough.
So reprocess with a higher fuzz value, overwriting the first result; otherwise keep the first result and do not write over it.
Unix syntax.
Choose two fuzz values
Choose a percent change threshold
Create a new empty directory to hold the output (results)
cd
cd desktop/Originals
fuzz1=20
fuzz2=40
threshpct=10
list=`ls`
for img in $list; do
    # get the input filesize in bytes (sed strips the trailing B so sizes compare as numbers)
    filesize=`convert -ping $img -precision 16 -format "%b" info: | sed 's/[B]*$//'`
    echo "filesize=$filesize"
    # first pass: deskew, then trim with the low fuzz value
    convert $img -background black -deskew 40% -fuzz $fuzz1% -trim +repage ../results/$img
    newfilesize=`convert -ping ../results/$img -precision 16 -format "%b" info: | sed 's/[B]*$//'`
    # test=1 if the file shrank by less than threshpct percent, i.e. it was not trimmed enough
    test=`convert xc: -format "%[fx:100*($filesize-$newfilesize)/$filesize<$threshpct?1:0]" info:`
    echo "newfilesize=$newfilesize; test=$test;"
    # second pass with the higher fuzz value, overwriting the first result
    [ $test -eq 1 ] && convert $img -background black -deskew 40% -fuzz $fuzz2% -trim +repage ../results/$img
done
One caveat: you need to be sure you set the TIFF compression for the output the same as for the input, so that the file sizes are comparable, and that the new file does not come out larger than the old one, as can happen with JPG.
Note that the sed is used to remove the letter B (bytes) from the file size, so the sizes can be compared as numbers rather than strings. The -precision 16 forces "%b" to report in B and not KB or MB.
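If you have never run a shell script before: save the lines above into a file (the name trim2pass.sh below is just an example), create the results directory the script writes into, and run it with bash. A sketch, assuming your images live in desktop/Originals as in the script:
# Create the output directory the script expects at ../results
mkdir ~/desktop/results
# Run the script
bash trim2pass.sh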

jpegoptim doesn't compress file

I have jpegoptim version 1.2.3, and PageSpeed Insights says that I can reduce my image by 36.6 KB (95%). But running jpegoptim image.jpg --strip-all does nothing:
image.jpg 147x196 24bit JFIF [OK] 39227 --> 39227 bytes (0.00%), skipped.
jpegoptim could not compress your image losslessly; you could add --max=60 to force lossy compression.
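For example, assuming a maximum quality of 60 is acceptable for your images:
# Recompress lossily, capped at quality 60, and strip metadata
jpegoptim --max=60 --strip-all image.jpg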
A lot of online JPEG optimizers (compressjpeg.com, tinyjpg.com) use lossy compression.
I ran the image through Compress JPG and reduced it to 12K.

Determine bit depth of BMP file on OS X

How can I determine the bit depth of a BMP file on Mac OS X? In particular, I want to check whether a BMP file is a true 24-bit file, or whether it is being saved as a greyscale (i.e. 8-bit) image. I have a black-and-white image which I think I have forced to be 24-bit (using convert -type TrueColor), but ImageMagick gives conflicting results:
> identify -verbose hiBW24.bmp
...
Type: Grayscale
Base type: Grayscale
Endianess: Undefined
Colorspace: Gray
> identify -debug coder hiBW24.bmp
...
Bits per pixel: 24
A number of other command-line utilities are no help, it seems:
> file hi.bmp
hi.bmp: data
> exiv2 hiBW24.bmp
File name : hiBW24.bmp
File size : 286338 Bytes
MIME type : image/x-ms-bmp
Image size : 200 x 477
hiBW24.bmp: No Exif data found in the file
> mediainfo -f hi.bmp
...[nothing useful]
If you want a command-line utility, try sips (do not forget to read the manpage with man sips). Example:
Terminal input:
sips -g all /Users/hg/Pictures/2012/03/14/QRCodeA.bmp
The output is:
/Users/hg/Pictures/2012/03/14/QRCodeA.bmp
pixelWidth: 150
pixelHeight: 143
typeIdentifier: com.microsoft.bmp
format: bmp
formatOptions: default
dpiWidth: 96.000
dpiHeight: 96.000
samplesPerPixel: 3
bitsPerSample: 8
hasAlpha: no
space: RGB
I think the result contains the values you are after.
Another way is to open the image with Preview.app and then open the info panel.
One of the most informative programs (but not the easiest to use) is exiftool by Phil Harvey http://www.sno.phy.queensu.ca/~phil/exiftool/ , which also works very well on Mac OS X for a lot of file formats, but is maybe overkill for your purpose.
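A minimal invocation, if you want to try it (assuming exiftool is installed):
# Dump all the metadata exiftool can read from the file
exiftool hiBW24.bmp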
I did this to investigate:
# create a black-to-white gradient and save as a BMP, then `identify` it to a file `unlim`
convert -size 256x256 gradient:black-white a.bmp
identify -verbose a.bmp > unlim
# create another black-to-white gradient but force 256 colours, then `identify` to a second file `256`
convert -size 256x256 gradient:black-white -colors 256 a.bmp
identify -verbose a.bmp > 256
# Now look at difference
opendiff unlim 256
And the difference is that the -colors 256 image has a palette in the header and has Class: PseudoClass, whereas the other has Class: DirectClass.
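As a quicker check than diffing two full -verbose dumps, identify's format escapes can print just the relevant fields - a sketch, where %z is the depth and %r the class and colorspace:
identify -format "%f: %z-bit, %r\n" hiBW24.bmp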

jpg won't optimize (jpegtran, jpegoptim)

I have an image and it's a jpg.
I tried running through jpegtran with the following command:
$ jpegtran -copy none -optimize image.jpg > out.jpg
The file is written, but the image seems unmodified (no size change).
I tried jpegoptim:
$ jpegoptim image.jpg
image.jpg 4475x2984 24bit P JFIF [OK] 1679488 --> 1679488 bytes (0.00%), skipped.
I get the same results when I use --force with jpegoptim, except that it reports the image as optimized; there is still no change in file size.
Here is the image in question: http://i.imgur.com/NAuigj0.jpg
But I can't seem to get it to work with any other jpegs I have either (only tried a couple though).
Am I doing something wrong?
I downloaded your image from imgur, but the size is 189,056 bytes. Is it possible that imgur did something to your image?
Anyway, I managed to optimize it to 165,920 bytes using Leanify (I'm the author) and it's lossless.
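For reference, a typical invocation - a sketch, assuming the built binary is named leanify and is on your PATH (it modifies the file in place, so work on a copy):
leanify image.jpg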

ImageMagick crop huge image

I am trying to create tiles from a huge image, say 40000x40000.
I found a script online for ImageMagick that crops the tiles. It works fine on smaller images, say 10000x5000.
Once I go any bigger it ends up using too much memory and the computer dies.
I have added the limit options but they don't seem to take effect.
I have the monitor in there, but it does not help as the script just slows down and locks up the machine.
It seems to just gobble up about 50 GB of swap disk and then kill the machine.
I think the problem is that as it crops each tile it keeps them all in memory. What I think I need is for it to write each tile to disk as it creates it, not store them all up in memory.
Here is the script so far:
#!/bin/bash
file=$1
function tile() {
convert -monitor -limit memory 2GiB -limit map 2GiB -limit area 2GB $file -scale ${s}%x -crop 256x256 \
-set filename:tile "%[fx:page.x/256]_%[fx:page.y/256]" \
+repage +adjoin "${file%.*}_${s}_%[filename:tile].png"
}
s=100
tile
s=50
tile
After a lot more digging, and some help from the guys on the ImageMagick forum, I managed to get it working.
The trick to getting it working is the .mpc format. Since this is the native image format used by ImageMagick, it does not need to decode the whole initial image; it just cuts out the piece that it needs. This is the case with the second script I set up.
Let's say you have a 50000x50000 .tif image called myLargeImg.tif. First convert it to the native image format using the following command:
convert -monitor -limit area 2mb myLargeImg.tif myLargeImg.mpc
Then, run the bash script below to create the tiles. Create a file named tiler.sh in the same folder as the .mpc image and put the following script in it:
#!/bin/bash
src=$1
width=`identify -format %w $src`
# number of 256-pixel tiles across (assumes a square image)
limit=$((width / 256))
echo "count = $limit * $limit = $((limit * limit)) tiles"
limit=$((limit - 1))
for x in `seq 0 $limit`; do
    for y in `seq 0 $limit`; do
        tile=tile-$x-$y.png
        echo -n "$tile "
        w=$((x * 256))
        h=$((y * 256))
        # crop one 256x256 tile at offset (w,h); with an .mpc source only
        # that region is read from disk
        convert -debug cache -monitor $src -crop 256x256+$w+$h $tile
    done
done
In your console/terminal, run the command below and watch the tiles appear one at a time in your folder.
sh ./tiler.sh myLargeImg.mpc
libvips has an operator that can do exactly what you want very quickly. There's a chapter in the docs introducing dzsave and explaining how it works.
It can also do it in relatively little memory: I regularly process 200,000 x 200,000 pixel slide images using less than 1GB of memory.
See this answer, but briefly:
$ time convert -crop 512x512 +repage huge.tif x/image_out_%d.tif
real 0m5.623s
user 0m2.060s
sys 0m2.148s
$ time vips dzsave huge.tif x --depth one --tile-size 512 --overlap 0 --suffix .tif
real 0m1.643s
user 0m1.668s
sys 0m1.000s
You may try the gdal_translate utility from the GDAL project. Don't be scared off by the "geospatial" in the project name. GDAL is an advanced library for accessing and processing raster data in various formats. It is aimed at geospatial users, but it can process regular images as well, without any problems.
Here is a simple script to generate 256x256-pixel tiles from a large in.tif file of dimensions 40000x40000 pixels:
#!/bin/bash
width=40000
height=40000
y=0
while [ $y -lt $height ]
do
    x=0
    while [ $x -lt $width ]
    do
        outtif=t_${y}_$x.tif
        # cut one 256x256 window starting at pixel (x, y)
        gdal_translate -srcwin $x $y 256 256 in.tif $outtif
        let x=$x+256
    done
    let y=$y+256
done
GDAL binaries are available for download for most Unix-like systems as well as Windows.
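If you would rather not hard-code the dimensions, they can be read from the file first - a sketch that parses the "Size is W, H" line gdalinfo prints:
# Read the raster dimensions from in.tif instead of hard-coding them
read width height < <(gdalinfo in.tif | awk -F'[ ,]+' '/Size is/ {print $3, $4}')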
ImageMagick is simply not made for this kind of task. In situations like yours I recommend using the VIPS library and the associated front end Nip2.
VIPS has been designed specifically to deal with very large images.
http://www.vips.ecs.soton.ac.uk/index.php?title=VIPS
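For the tiling task from the command line, the dzsave operation shown in the previous answer is the VIPS route - repeated here as a sketch with this question's 256-pixel tiles:
vips dzsave in.tif tiles --depth one --tile-size 256 --overlap 0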
