I'm trying to concatenate multiple PDF files, which are basically the pages of a photobook containing JPG images. For the output PDF I want to set the image resolution to 300 dpi while keeping the best possible quality. The command I'm using is:
gswin64c.exe -dNOPAUSE -dBATCH ^
  -dDownsampleColorImages=true -dColorImageResolution=300 ^
  -dDownsampleGrayImages=true -dGrayImageResolution=300 ^
  -dDownsampleMonoImages=true -dMonoImageResolution=300 ^
  -sDEVICE=pdfwrite -dJPEGQ=100 -sOutputFile=out.pdf in1.pdf in2.pdf
However, it seems that -dJPEGQ=100 has no effect on the output: changing this parameter leads to the same file size, and artifacts are visible in the images for all values. Running the command with the option -dPDFSETTINGS=/printer I get better results without artifacts, but that option should also result in 300 dpi. So what is the correct command to specify the quality of the JPEG images in the output file?
The solution is to adjust the DCTEncode filter with the following command:
gswin64c.exe -sOutputFile=out.pdf -dNOPAUSE -dBATCH ^
  -sDEVICE=pdfwrite -dPDFSETTINGS=/prepress ^
  -c "<< /ColorACSImageDict << /VSamples [ 1 1 1 1 ] /HSamples [ 1 1 1 1 ] /QFactor 0.08 /Blend 1 >> /ColorImageDownsampleType /Bicubic /ColorConversionStrategy /LeaveColorUnchanged >> setdistillerparams" ^
  -f in1.pdf
which leads to a compressed file with satisfactory quality for me and can be adjusted to individual needs.
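If you also want the explicit 300 dpi downsampling from the question and more than one input file, the same distiller dictionary can be combined with those switches. An untested sketch along those lines (all options are taken from the question and the answer above):
gswin64c.exe -dNOPAUSE -dBATCH -sDEVICE=pdfwrite ^
  -dDownsampleColorImages=true -dColorImageResolution=300 ^
  -dDownsampleGrayImages=true -dGrayImageResolution=300 ^
  -dDownsampleMonoImages=true -dMonoImageResolution=300 ^
  -sOutputFile=out.pdf ^
  -c "<< /ColorACSImageDict << /VSamples [ 1 1 1 1 ] /HSamples [ 1 1 1 1 ] /QFactor 0.08 /Blend 1 >> /ColorImageDownsampleType /Bicubic /ColorConversionStrategy /LeaveColorUnchanged >> setdistillerparams" ^
  -f in1.pdf in2.pdf
Lower QFactor values mean higher quality and larger files, so adjust it to taste.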
Edit:
The .setpdfwrite argument is deprecated in recent Ghostscript releases (> 9.50), so I removed it from the answer.
I'm trying to translate a set of .tif images into one multiband .tif following this tutorial. I'm not getting any errors, but every time I run the script it generates a three-band image, even though I have many more input .tifs than that. I have tried the -r flag and it didn't change anything.
I have:
wdir="/Users/<mydir>"
creoleLA="$wdir/data/s2l2a_tiffs/T15RVP"
dhaka="$wdir/data/s2l2a_tiffs/T45QZG"
easthouston="$wdir/data/s2l2a_tiffs/T15RTN"
kolkata="$wdir/data/s2l2a_tiffs/T45QXF"
murcia="$wdir/data/s2l2a_tiffs/T30SXH"
wuhan="$wdir/data/s2l2a_tiffs/T50RKU"
#=============================================
# Main script
#=============================================
read -p 'ROI: ' roi
indir=${!roi}
outdir="$wdir/data/s2l2a_tiffs"
ls ${indir}*"_B"*.tif > "$outdir/btif_${roi}.txt"
gdalbuildvrt -separate -overwrite -input_file_list "$outdir/btif_${roi}.txt" "$outdir/S2L2A_${roi}.vrt"
gdal_translate -strict -ot uint16 "$outdir/S2L2A_${roi}.vrt" "$outdir/S2L2A_${roi}_mb.tif"
rm "$outdir/S2L2A_${roi}.vrt"
rm "$outdir/btif_${roi}.txt"
Where "$outdir/btif_${roi}.txt" is a textile of the GeoTIFF file paths like this:
/Users/<mydir>/data/s2l2a_tiffs/T15RVP_20200831T164849_B01_60m.tif
/Users/<mydir>/data/s2l2a_tiffs/T15RVP_20200831T164849_B02_60m.tif
...
I am processing Sentinel-2 imagery and using OSX 11.6.
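One way to narrow this down (just a debugging sketch, assuming GDAL's gdalinfo is on the PATH and reusing the variables from the script above; run these before the rm cleanup lines) is to check how many paths actually end up in the list file and how many bands the intermediate VRT reports before it is translated:
# count the band files that matched the glob
wc -l < "$outdir/btif_${roi}.txt"
# count the bands GDAL sees in the stacked VRT
gdalinfo "$outdir/S2L2A_${roi}.vrt" | grep -c "^Band "
If the first number is already 3, the problem is in the file listing; if only the VRT reports 3 bands, it is in the gdalbuildvrt step.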
I'm using the convert utility from ImageMagick to convert raw image bytes to a usable image format such as PNG. My raw files are generated by code, so there are no headers, just pure pixels.
In order to convert my image I'm using the command:
$ convert -depth 1 -size 576x391 -identify gray:image.raw image.png
gray:image.raw=>image.raw GRAY 576x391 576x391+0+0 1-bit Gray 28152B 0.010u 0:00.009
The width is fixed and known to me. However, I have to work out the height of the image from the file size each time, which is annoying.
Without the height specified, or with a wrong height, the utility complains:
$ convert -depth 1 -size 576 -identify gray:image.raw image.png
convert-im6.q16: must specify image size `image.raw' @ error/gray.c/ReadGRAYImage/143.
convert-im6.q16: no images defined `image.png' @ error/convert.c/ConvertImageCommand/3258.
$ convert -depth 1 -size 576x390 -identify gray:image.raw image.png
convert-im6.q16: unexpected end-of-file `image.raw': No such file or directory @ error/gray.c/ReadGRAYImage/237.
convert-im6.q16: no images defined `image.png' @ error/convert.c/ConvertImageCommand/3258.
So I wonder: is there a way to automatically detect the image height based on the file/blob size?
A couple of ideas...
You may not be aware of the NetPBM format, but it is very simple and you may be able to change the software that creates your raw images so that it directly generates PBM format images, which are readable and usable by OpenCV, Photoshop, GIMP, feh, eog and, of course, ImageMagick. It would not require any libraries or extra dependencies in your software; all you need to do is put a textual PBM header on the front, so your file looks like this:
P4
576 391
... YOUR EXISTING BINARY DATA ...
Do not forget to put newlines (i.e. linefeed character) after P4 and after 391.
You can try it for yourself and add a header onto one of your files like this and then view it with GIMP or other tool:
printf "P4\n576 391\n" > image.pbm
cat image.raw >> image.pbm
If you prefer a one-liner, just use a bash command grouping like this - which is equivalent to the 2 lines above:
{ printf "P4\n576 391\n"; cat image.raw; } > image.pbm
Be careful to have all the spaces and semi-colons exactly as I have them!
Another idea, just putting some meat on Fred's answer, is the following one-liner, which uses bash arithmetic expansion and command substitution:
convert -depth 1 -size "576x$(($(stat -c "%s" image.raw)*8/576))" gray:image.raw image.png
Note that if you are on macOS, stat is a little different, so you may prefer the slightly less efficient, but more portable:
convert -depth 1 -size "576x$(($(wc -c < image.raw)*8/576))" gray:image.raw image.png
You have to know the -depth and the width to compute the height for ImageMagick's raw format. If the depth is 1, then your image is binary (b/w), so height = 8 * file size (in bytes) / width. Here: 28152 * 8 / 576 = 391.
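Wrapped into a tiny script (just a sketch: the fixed width of 576 and the 1-bit depth are taken from the question, and the script name is made up):
#!/bin/bash
# raw2png.sh - convert a headerless 1-bit grayscale raw file to PNG,
# deriving the height from the file size as height = bytes * 8 / width
width=576
raw="$1"
height=$(( $(wc -c < "$raw") * 8 / width ))
convert -depth 1 -size "${width}x${height}" "gray:$raw" "${raw%.raw}.png"
For the 28152-byte example this computes 28152 * 8 / 576 = 391 and writes image.png.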
I am creating an image on the fly using Ghostscript. How do I write my_image.png to the /home/pi/Desktop location? I can get the image to print to the screen, but not save the output to a location.
gs -sDEVICE=pngalpha -sOutputFile=- -q -r100 -g600x100 - >my_image.png
I changed the -sOutputFile option to:
gs -sDEVICE=pngalpha -sOutputFile=/home/pi/my_image.png -q -r100 -g600x100 -
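Writing to the Desktop location mentioned in the question works the same way; for example (same flags as above, only the path changed):
gs -sDEVICE=pngalpha -sOutputFile=/home/pi/Desktop/my_image.png -q -r100 -g600x100 -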
I know that image resizing on the command line is something ImageMagick and similar tools can do. Unfortunately, I only have very basic bash scripting abilities, so I wonder if this is even possible:
check all directories and subdirectories for all files that are images
check the width and height of each image
if either of them exceeds X pixels, resize the image to X while keeping the aspect ratio
replace the old file with the new one (the old file shall be removed/deleted)
Thank you for any input.
The implementation might not be so trivial, even for advanced users. As a one-liner:
find ~/Downloads -type f -exec file {} \; \
  | awk -F: '{if ($2 ~ /image/) print $1}' \
  | while IFS= read -r file_path; do
      mogrify -resize 1024x1024\> "$file_path";
    done
The first line is an invocation of the find command, with these arguments:
Specify a directory to scan.
Specify that you need files only.
For each found item, run the file command. Example outputs per file:
/Downloads/391A6 625.png: PNG image data, 1024 x 810, 8-bit/color RGB, interlaced
/Downloads/STRUCTURED NODES IN UML 2.0 ACTIVITES.pdf: PDF document, version 1.4
Note how the file names are delimited from their info by : and how the info for the PNG contains the word image. This will also be true for other image formats.
Use awk to filter only those files which have the word image in their info; this gives us image files only. Here, -F: specifies that the delimiter is :, so $1 contains the original file name and $2 the file info. We search for the word image in the file info and print the file name if it is present.
This one is a bit tricky. The while loop reads the output of awk line by line and invokes the mogrify command to resize the images. We do not pipe into xargs here, because if file paths contain spaces or other characters that must be escaped, we would get xargs unterminated quote errors, and handling that is a pain.
Invoke the mogrify command of ImageMagick. Unlike convert, which is also an ImageMagick command, mogrify changes files in place without creating new ones. Here, 1024x1024\> tells mogrify to resize the image to fit within a maximum of 1024x1024, and the > flag means only images larger than that are touched; the aspect ratio is preserved, so the final image will have its biggest side at 1024px, and the other side will be smaller unless the original image is square (the backslash merely protects > from the shell). Pay attention to the ;, as it's needed inside loops.
Note that it's safe to run mogrify several times over the same file: if a file's dimensions already correspond to your target, it will not be resized again. However, it will still change the file's modification time.
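If the modification time matters to you, one workaround (a sketch, not part of the original recipe) is to save and restore the timestamp around the mogrify call with touch -r:
ref=$(mktemp)                              # temporary file to hold the timestamp
touch -r "$file_path" "$ref"               # copy the original modification time
mogrify -resize 1024x1024\> "$file_path";  # resize as before
touch -r "$ref" "$file_path"               # restore the modification time
rm -f "$ref"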
Additionally, you may need not only to resize images but also to compress them. Please refer to my gist to see how this can be done: https://gist.github.com/oblalex/79fa3f85f05924017d25004496493adb
If your goal is just to reduce the size of big images, e.g. those bigger than 300K, you may run:
find /path/to/dir -type f -size +300k
and, as before, combine it with mogrify -strip -interlace Plane -format jpg -quality 85 -define jpeg:extent=300KB "$FILE_PATH"
In that case, new jpg files will be created for non-jpg originals, and the originals will need to be removed. Refer to the gist to see how this can be done.
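For completeness, one possible way to wire the size filter into the same loop as above (a sketch reusing only the commands from this answer; adjust the path, the 300k threshold and the jpeg:extent value to your needs):
find /path/to/dir -type f -size +300k -exec file {} \; \
  | awk -F: '{if ($2 ~ /image/) print $1}' \
  | while IFS= read -r file_path; do
      mogrify -strip -interlace Plane -format jpg -quality 85 \
        -define jpeg:extent=300KB "$file_path";
    done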
You can do that with a bash Unix shell script looping over your directories. You must identify all the file formats you want, such as jpg and png. Then, for each directory, loop over each file of the given list of formats and use ImageMagick to resize the files.
cd
dirlist="path2/directory1 path2/directory2 ...."
for dir in $dirlist; do
    cd "$dir" || continue
    imglist=$(ls | grep -i -E '\.(jpg|png)$')
    for img in $imglist; do
        convert "$img" -resize "200x200>" "$img"
    done
    cd
done
See https://www.imagemagick.org/script/command-line-processing.php#geometry
I have a code.ps file converted to code.pdf, which I want to add at the bottom of every page of my test.pdf, i.e. shrink every page of test.pdf and add an image at the end of it.
I have written the following shell script for it, but it appends code.pdf as a new page after every page of test.pdf! Kindly help. Here is my code:
#!/bin/sh
filename=test.pdf
pages="`pdftk $filename dump_data | grep NumberOfPages | cut -d : -f2`"
numpages=`for ((a=1; a <= $pages; a++)); do echo -n "A$a B1 "; done`
pdftk A=$filename B=code.pdf cat $numpages output $filename-alternated.pdf
exit 0
A simple example of stamping an image (image.[pdf,png]) onto a multipage PDF (text.pdf), allowing for manual tweaking of the scaling and offsets using pdfjam and pdftk, could be:
# scale and offset the text part
pdfjam --scale 0.8 --frame True --offset '0cm 2.5cm' text.pdf
# scale and offset the image
pdfjam --paper 'a4paper' --scale 0.3 --offset '7cm -12cm' image.pdf
# combine both
pdftk text-pdfjam.pdf stamp image-pdfjam.pdf output combined.pdf
If you start with an image file (png, jpg) you can convert it to PDF using ImageMagick like this:
convert image.png image.pdf
Of course, the scale factors and offsets have to be adjusted to your needs. I included the --frame option to highlight the scaling of the text.pdf part. The stamp option overlays the image, whereas the background option would underlay the image.