Cygwin file not found on network Windows drive, possible issue with spaces

To try not to overcomplicate my issue: I'm using Cygwin and have two Windows computers on a network sharing data. I'm trying to add a logo to some photos and am having a tough time getting Cygwin to find the logo file in this code:
for f in *.jpg; do
convert "$f" \
-gravity southeast \
-draw 'image over 0,0 0,0 "//deepfrogphotopc/L Xotio Passport/temp/logo2.png"' \
-write //deepfrogphotopc/L Xotio Passport/temp/og-rotate-logo/"$f" \
-resize 1200x1200 \
//deepfrogphotopc/L Xotio Passport/temp/1200px/"$f"
done
I have no problem when using cd to get to this location with Cygwin:
//deepfrogphotopc/L Xotio Passport/temp/
but when using the above loop to watermark the files, Cygwin can't find the logo file, and I assume it won't write to the correct directories afterwards either, because of the same problem. I actually got it working using
../logo2.png
but I really want to use the actual network address like I'm trying to. Any help with this is very much appreciated!
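For what it's worth, one clear shell-level bug here is word splitting: the -write path and the final output path contain spaces and are unquoted, so Bash passes each of them to convert as several separate arguments. A minimal sketch with those paths double-quoted (untested against this particular share, and assuming the //server/share form resolves for convert the way it does for cd):
for f in *.jpg; do
  # Quote every argument containing spaces so the shell keeps each path intact.
  convert "$f" \
    -gravity southeast \
    -draw 'image over 0,0 0,0 "//deepfrogphotopc/L Xotio Passport/temp/logo2.png"' \
    -write "//deepfrogphotopc/L Xotio Passport/temp/og-rotate-logo/$f" \
    -resize 1200x1200 \
    "//deepfrogphotopc/L Xotio Passport/temp/1200px/$f"
done
If the logo path inside -draw still fails after that, an alternative is to place the overlay with -composite, which takes the overlay image as a normal quoted command-line argument rather than a string parsed inside the -draw expression.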

Related

imagemagick -auto-level in ffmpeg

I've been looking for a solution to perform the equivalent of magick -auto-level in ffmpeg but am unable to find anything. There are some references stating I should first manually discover the levels using other software like GIMP; however, I'm looking for an automated, simpler solution. Any ideas how to address this?
I've tried the following. The first enhanced the image, which was initially pretty dark, but the second over-exposed it, causing it to become mostly white:
convert img.jpg -auto-level img2.jpg
ffmpeg -i img.jpg -vf "normalize" -y img2.jpg
Note: I apologize that I cannot share the image, as it is restricted by a privacy policy.
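One hedged idea for taming the over-exposure: ffmpeg's normalize filter has a strength option (0.0 to 1.0) that scales the filter's effect, so lowering it blends the result back toward the original levels. A sketch, untested on the affected image since it can't be shared:
# strength below 1.0 weakens the normalization; adjust until clipping stops
ffmpeg -i img.jpg -vf "normalize=strength=0.5" -y img2.jpg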

imagemagick convert -contrast-stretch doesn't work after update to 6.9.x and up

I use convert to process my scans. I use convert as follows:
convert in.tiff +dither -colors 2 -colorspace gray -contrast-stretch 0 out.tiff
The input file is about 8.5MBytes. In V6.8.9-9 the output size is about 1.1MBytes. In 7.0.8-14 the output size stays at 8.5MBytes.
I have searched for problems with -contrast-stretch but couldn't find anything on my topic. The same problem occurs with the -threshold option: with the old version the size gets smaller; with the new version it doesn't decrease.
This is on Ubuntu 18.04 with libtiff-tools installed. The old version is on Ubuntu 16.04.
Am I missing something?
Regards
Thommy
I found a solution. With identify -verbose out.tif I saw that the depth differed: with the new version it is 8/1-bit, with the old version it is 1-bit. Searching on that topic I found that -depth 1 could be the solution, and indeed, adding -depth 1 to the command line solved my problem.
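For reference, the working command line then becomes (same options as above, with the depth forced):
convert in.tiff +dither -colors 2 -colorspace gray -contrast-stretch 0 -depth 1 out.tiff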
Still, I don't know why the old version worked without it, but I am back to what I want.
Thommy

Creating new eng.tessdata file for custom font in Tesseract giving error

Converted the PDF file into .tiff, which is pretty straightforward:
convert -depth 4 -density 300 -background white +matte eng.arial.pdf eng.arial.tiff
Then train Tesseract on the .tiff file:
tesseract eng.arial.tiff eng.arial batch.nochop makebox
Then feed the .tiff file into Tesseract:
tesseract eng.arial.tiff eng.arial.box nobatch box.train .stderr
Detect the character set used:
unicharset_extractor *.box
But I am getting this error:
unicharset_extractor:./.libs/lt-unicharset_extractor.c:233: FATAL: couldn't find unicharset_extractor.
And it is also happening for mftraining and combine_tessdata as well.
UPDATE
Ran unicharset_extractor on a single box file and it still doesn't work.
And it is not only with this command but also with mftraining, cntraining and combine_tessdata.
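The lt- prefix in the failing path is a clue: lt-unicharset_extractor is the libtool wrapper generated when the training tools are run straight out of the build tree, and the FATAL message means the wrapper cannot locate the real binary. A guess worth trying, assuming the tools were built from source with the autotools build and never installed:
# Assumption: training tools were built in the source tree but never installed,
# so the libtool wrappers (lt-*) can't find the actual executables.
cd tesseract                 # the source tree the tools were built in
sudo make training-install   # installs unicharset_extractor, mftraining, cntraining, ...
sudo ldconfig                # refresh the shared-library cache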

jpg won't optimize (jpegtran, jpegoptim)

I have an image and it's a jpg.
I tried running it through jpegtran with the following command:
$ jpegtran -copy none -optimize image.jpg > out.jpg
The file is written, but the image seems unmodified (no size change).
I tried jpegoptim:
$ jpegoptim image.jpg
image.jpg 4475x2984 24bit P JFIF [OK] 1679488 --> 1679488 bytes (0.00%), skipped.
I get the same results when I use --force with jpegoptim, except that it reports the image as optimized; there is still no change in file size.
Here is the image in question: http://i.imgur.com/NAuigj0.jpg
But I can't seem to get it to work with any other JPEGs I have either (I've only tried a couple, though).
Am I doing something wrong?
I downloaded your image from imgur, but the size is 189,056 bytes. Is it possible that imgur did something to your image?
Anyway, I managed to optimize it to 165,920 bytes using Leanify (I'm the author) and it's lossless.
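A minimal invocation, assuming a leanify binary built and on your PATH (note that it overwrites the file in place, so work on a copy):
# Lossless recompression; the original file is replaced with the smaller one
leanify image.jpg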

ImageMagick crop huge image

I am trying to create tiles from a huge image, say 40000x40000.
I found a script online for ImageMagick that crops the tiles. It works fine on small images, say 10000x5000.
Once I go any bigger, it ends up using too much memory and the computer dies.
I have added the limit options but they don't seem to take effect.
I have the monitor in there, but it does not help, as the script just slows down and locks up the machine.
It seems to just gobble up about 50 GB of swap disk and then kill the machine.
I think the problem is that as it crops each tile it keeps them all in memory. What I think I need is for it to write each tile to disk as it creates it, not store them all up in memory.
Here is the script so far:
#!/bin/bash
file=$1

# Scale the input to $s percent of its width, then cut it into 256x256 tiles.
function tile() {
    convert -monitor -limit memory 2GiB -limit map 2GiB -limit area 2GB "$file" \
        -scale ${s}%x -crop 256x256 \
        -set filename:tile "%[fx:page.x/256]_%[fx:page.y/256]" \
        +repage +adjoin "${file%.*}_${s}_%[filename:tile].png"
}

s=100
tile
s=50
tile
After a lot more digging and some help from the guys on the ImageMagick forum, I managed to get it working.
The trick to getting it working is the .mpc format. Since this is the native image format used by ImageMagick, it does not need to convert the initial image; it just cuts out the piece that it needs. This is the case with the second script I set up.
Let's say you have a 50000x50000 .tif image called myLargeImg.tif. First, convert it to the native image format using the following command:
convert -monitor -limit area 2mb myLargeImg.tif myLargeImg.mpc
Then run the bash script below, which will create the tiles. Create a file named tiler.sh in the same folder as the .mpc image and put the following in it:
#!/bin/bash
src=$1

# Number of 256px tiles along one edge (assumes a square image).
width=$(identify -format %w "$src")
limit=$((width / 256))
echo "count = $limit * $limit = $((limit * limit)) tiles"
limit=$((limit - 1))

for x in $(seq 0 $limit); do
    for y in $(seq 0 $limit); do
        tile=tile-$x-$y.png
        echo -n "$tile "
        w=$((x * 256))
        h=$((y * 256))
        # With an .mpc source, convert reads only the region it needs.
        convert -debug cache -monitor "$src" -crop 256x256+$w+$h "$tile"
    done
done
In your console/terminal, run the command below and watch the tiles appear one at a time in your folder.
sh ./tiler.sh myLargeImg.mpc
libvips has an operator that can do exactly what you want very quickly. There's a chapter in the docs introducing dzsave and explaining how it works.
It can also do it in relatively little memory: I regularly process 200,000 x 200,000 pixel slide images using less than 1GB of memory.
See this answer, but briefly:
$ time convert -crop 512x512 +repage huge.tif x/image_out_%d.tif
real 0m5.623s
user 0m2.060s
sys 0m2.148s
$ time vips dzsave huge.tif x --depth one --tile-size 512 --overlap 0 --suffix .tif
real 0m1.643s
user 0m1.668s
sys 0m1.000s
You may try the gdal_translate utility from the GDAL project. Don't be scared off by the "geospatial" in the project name: GDAL is an advanced library for accessing and processing raster data in various formats. It is aimed at geospatial users, but it can process regular images as well, without any problems.
Here is a simple script to generate 256x256-pixel tiles from a large in.tif file of dimensions 40000x40000 pixels:
#!/bin/bash
width=40000
height=40000

# Walk the image in 256-pixel steps; -srcwin extracts one tile per call.
y=0
while [ $y -lt $height ]; do
    x=0
    while [ $x -lt $width ]; do
        outtif=t_${y}_${x}.tif
        gdal_translate -srcwin $x $y 256 256 in.tif $outtif
        x=$((x + 256))
    done
    y=$((y + 256))
done
GDAL binaries are available for most Unix-like systems, and Windows builds are downloadable as well.
ImageMagick is simply not made for this kind of task. In situations like yours I recommend using the VIPS library and its associated frontend, Nip2.
VIPS has been designed specifically to deal with very large images.
http://www.vips.ecs.soton.ac.uk/index.php?title=VIPS
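For the 256x256 tiles the question asks for, a dzsave invocation along these lines should work (a sketch, assuming a reasonably recent vips binary on your PATH; the tiles land in a tiles_files/0/ directory next to a small .dzi metadata file):
# --depth one keeps a single zoom level instead of a full pyramid
vips dzsave myLargeImg.tif tiles --depth one --tile-size 256 --overlap 0 --suffix .png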
