Reduce bit-depth of PNG files from the command line - macOS

What command or series of commands could I execute from the CLI to recursively traverse a directory tree and reduce the bit-depth of all PNG files within that tree from 24bpp to 16bpp? Commands should preserve the alpha layer and should not increase the file size of the PNGs - in fact a decrease would be preferable.
I have an OS X-based system at my disposal and am familiar with the find command, so I'm really just looking for a suitable PNG utility command.

Install fink
Say "fink install imagemagick" (might be "ImageMagick")
"convert -depth 16 old/foo.png new/foo.png"
If that did what you want, wrap it in a find call and be happy. If not, say "convert -help" and RTF-ImageMagick-M. :)
Optional: "fink install pngcrush" and run that as a second pass after the convert pass.

AFAIK the only PNG format that supports the alpha layer is PNG-24; reducing the PNG to another format may require specifying a transparent color in a CLUT, which will not give you the output you want.
From the feature list on PNG's website:
8- and 16-bit-per-sample (that is, 24- and 48-bit) truecolor support
full alpha transparency in 8- and 16-bit modes, not just simple on-off transparency like GIF
... which I read to mean that anything other than PNG-24 or PNG-48 does not support full alpha transparency.
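One way to check what you actually ended up with after converting is ImageMagick's identify (the filename is just an example):
identify -format "%[channels] %z-bit\n" foo.png
which prints something like "srgba 8-bit", i.e. RGBA at 8 bits per sample.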

Related

How does a computer store images or videos as data, where are they stored, and can I perform operations on them?

As I understand it, a computer stores images as channels and pixels within those channels, and a pixel value is something like "00110101", which fills 8 bits of memory. I want to know where exactly those bits are stored in memory, and how I can perform operations on them.
Thanks!
Well, the standard book is Digital Image Processing by Gonzalez and Woods.
Another book, where you can pick up the PDF for free is Image Processing in C by Dwayne Philips - PDF here.
First, you need to get a decent C compiler and development system - personally I use Mac OSX, but I guess you would want Visual Studio free edition on Windows.
Then you need to get started with some simple reading and writing of files and memory allocation. I would go with greyscale images of the NetPBM format - probably just PGM files - described here as they are the easiest. You can download the NetPBM programs and run them in a Windows Command Prompt and see how they work and try and implement them yourself in C. You can also download ImageMagick for Windows and try converting images from colour to greyscale and resizing them like this:
convert input.png -colorspace gray result.jpg
convert input.tif -resize 400x400 result.pgm
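To get a feel for why PGM is a good starting point, you can hand-write a tiny plain-text (P2) PGM yourself and convert it, for instance to PNG (the filenames and pixel values here are just examples):
P2
3 2
255
0 128 255
255 128 0
Save that as tiny.pgm (the header is: format, width and height, maximum grey value, then one value per pixel) and run convert tiny.pgm tiny.png to see it as a 3x2 image.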
When you have got that, I would move on to colour PPM format and then maybe PNG and/or JPEG. Remember there are libraries for TIF/JPEG/PNG/BMP so don't be afraid to use them.
Finally, move on to displaying images yourself with Windows GDI etc.
Come back to StackOverflow if you get stuck - questions are free!
tl;dr: wildly different with different encodings/filesystems/OSes/drivers.
Well, that depends on the image format. BMP is one of the easier formats; details on what these files look like can be found, for instance, on Wikipedia.
And to answer "where it's stored": it lives on permanent storage (hard drive/SSD), and where exactly depends on the filesystem (FAT/NTFS/EXT, etc.).
When an image is to be displayed, it's read into memory, where it can be manipulated; through some APIs that data can then be put into a memory region specifically meant to hold what is currently shown on your screen.
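If you want to look at those on-disk bytes yourself, a hex dump of a BMP is a quick experiment (the filename is just an example):
xxd -l 54 picture.bmp
For the common 40-byte info header, the first 54 bytes are the BMP file header plus info header (starting with the magic bytes "BM"), followed, for 24-bit images, by raw pixel data.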

MiniMagick's "strip" function makes picture file size bigger

I have used MiniMagick to compress JPEG files.
With the strip function, I want to get rid of the EXIF data in the image. So I do:
image = MiniMagick::Image.open("my_picture.jpg")
image.strip
image.write("my_picture_small.jpg")
but sometimes the size of my_picture_small.jpg is bigger than that of my_picture.jpg.
However, when I don't use the strip function, like this:
image = MiniMagick::Image.open("my_picture.jpg")
# image.strip
image.write("my_picture_small.jpg")
my_picture_small.jpg's size is smaller.
This happens with some pictures that were processed in Photoshop, and only on my CentOS machine; on my MacBook it works fine. I don't know why stripping some information leads to a larger file.
Can anyone explain it?
I have found that ImageMagick will recompress the image even when called without any arguments at all, for example:
convert image.jpg new_image.jpg
new_image.jpg will differ from image.jpg to a greater or lesser extent. If image.jpg comes from a phone, a camera, or an image-processing tool, the degree of difference varies as well.
So when compressing images with MiniMagick or RMagick, which use ImageMagick as their underlying system support, simply doing convert -strip image.jpg new_image.jpg may lead to an unexpected result; avoid the MiniMagick command if there is no need to compress the file heavily.
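If the only goal is to drop the EXIF data without re-encoding the JPEG, a metadata-only tool sidesteps the recompression entirely; for example, assuming exiftool is installed:
exiftool -all= my_picture.jpg
This removes all metadata without touching the compressed image data and keeps a backup of the original as my_picture.jpg_original.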

Uncrush PNG image on Ubuntu?

The iOS IPA build process uses pngcrush to compress PNG images, but I want to uncrush such a PNG image on Ubuntu.
Can anyone give me any ideas?
The standard PNG utility pngcrush has been modified by Apple, which makes it produce technically invalid PNGs: a new chunk is inserted before the mandatory first chunk IHDR, RGB(A) order of pixel data is inverted, and RGB pixels get premultiplied with their alpha.
Hence, I'd call these PNGs "fried" rather than just "crushed".
Try my own pngdefry. The source code is written on a Mac OSX machine but it should be compilable for other OSes as well; it's pretty straightforward C code.
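Assuming the download is a single plain C source file (pngdefry.c here is just a placeholder name), building it on Ubuntu should be as simple as:
cc -O2 -o pngdefry pngdefry.c
and then running the resulting binary against the fried PNGs.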

Generate all the files (.vtt + sprite) for the Tooltip Thumbnails options of Jwplayer

What is the best way to generate the ".VTT" file and the JPG sprite attached to it for the Tooltip Thumbnails of Jwplayer (http://www.jwplayer.com/blog/building-tooltip-thumbnails-with-encodingcom/)?
I know how to make an image sprite with PHP, but I don't know how to take the screenshots of each video at a given time in seconds. I think there must be a server-side tool to do all of these tasks, but I can't find it.
Thanks
I wrote a script to do this task. Given a video file (MP4 or M4V), it generates thumbnail images, packs them into a sprite, and generates a VTT file compatible with JWPlayer tooltip thumbnails. All of the image manipulation uses tools from ffmpeg, ImageMagick, and optionally sips and optipng. The WebVTT generation part I had to write myself.
You will have to install ffmpeg and ImageMagick, at a minimum, to use this.
Github code is here: https://github.com/vlanard/videoscripts (under sprites/).
The basic gist is:
Create a bunch of thumbnails, e.g. every 45th second from a video
ffmpeg -i ../archive/myvideofile.mp4 -f image2 -bt 20M -vf fps=1/45 thumbs/myvideofile/tv%03d.png
Resize those thumbnails to be small, e.g. 100 pixels wide
sips --resampleWidth 100 thumbs/myvideofile/tv001.png thumbs/myvideofile/tv002.png thumbs/myvideofile/tv003.png
Or, if sips is not available, use the ImageMagick mogrify utility:
mogrify -geometry 100x thumbs/myvideofile/tv001.png thumbs/myvideofile/tv002.png thumbs/myvideofile/tv003.png
Get the height and width dimensions of one of the thumbnails to use as the basis of our grid coordinates, using the ImageMagick identify utility:
identify -format "%g - %f" thumbs/myvideofile/tv001.png
which returns output like:
100x55+0+0 - tv001.png
from which we parse 100 and 55 as our Width & Height, and the general geometry of each thumbnail (W, H, X, Y)
We then generate our single sprite map from the individual thumbnails, choosing a target grid size (e.g. 2x2, 8x8) to suit the number of thumbnails we generated for this video, and passing in the sprite geometry, using the ImageMagick montage utility:
montage thumbs/myvideofile/tv*.png -tile 2x2 -geometry 100x55+0+0 thumbs/myvideofile/myvideofile_sprite.png
Optionally we can run an extra compression step here to make the sprite smaller
optipng thumbs/myvideofile/myvideofile_sprite.png
We then generate a VTT file based on the number of thumbnails we created, using the interval that we used to space out the thumbnails to label each time segment, and using the known coordinates of each consecutive image within our sprite that maps to the associated segment.
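For reference, the generated VTT ends up looking roughly like this (the sprite filename, 45-second interval, and coordinates are illustrative): each cue covers one interval and points at a region of the sprite via a #xywh media fragment:
WEBVTT

00:00:00.000 --> 00:00:45.000
myvideofile_sprite.png#xywh=0,0,100,55

00:00:45.000 --> 00:01:30.000
myvideofile_sprite.png#xywh=100,0,100,55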
I've developed a Ruby gem to easily create the .VTT file and the sprite of thumbnails.
Thanks for the inspiration, #randalv!
You can take a look at it here:
https://github.com/scaryguy/jwthumbs
Usage
Instantiate your video file:
movie = Jwthumbs::Movie.new("YOUR_VIDEO.mp4")
Jwthumbs::Movie.new accepts an options hash as its second parameter. You can configure several things at the same time you instantiate your video, like this:
movie = Jwthumbs::Movie.new("YOUR_VIDEO.mp4", seconds_between: 60, sprite_name: "my_sprite_name.jpg")
or, after you have instantiated your video, you can configure things on the Jwthumbs::Movie object:
movie = Jwthumbs::Movie.new("YOUR_VIDEO.mp4")
movie.seconds_between = 60
movie.sprite_name = "my_sprite_name.jpg"
and then, to create your thumbnails and .VTT file, just run this command:
movie.create_thumbs!
I know this is already a few years old, but I had the same problem and found a command-line tool which generates sprites pretty fast and, since 1.0.6, supports WebVTT creation out of the box. The name is mt and you can check it out here.
Quoting from their documentation you can use it like this:
just run mt and provide any video file as args: mt video.avi
Some of the settings can be changed through runtime flags provided directly to mt; for more information just run mt --help
Option 1:
You can use encoding.com's API and tell them to export the VTT file too.
I recommend reading the "How can I create time synced thumbnails for use in JW player?" explanation from encoding.com's knowledge base.
Option 2:
Use movie thumbnailer (mtn), a command-line tool that runs on UNIX and Windows systems. But you will have to write a custom script to generate the corresponding VTT file.
Super fast, thanks to FFmpeg's libavcodec.
Command-line program: can be used on remote connections to co-location servers, or used in scripts.
Batch mode: recursively searches directories for movie files. Runs at lower priority (nice 10 on Linux, idle on Windows) by default; to run at normal priority use the -n option.
Thumbnails are grouped together in one JPEG file and can be saved individually too (-I option).
Works fine with Unicode filenames in both Linux and Windows (might need to change the font with -f fontfile).
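In its simplest form (assuming mtn is installed, e.g. from your distribution's packages), you just point it at a video file:
mtn /path/to/video.mp4
and it writes a JPEG contact sheet; grid size and time step are configurable through the flags listed in mtn's help output.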

Divide large image into A4-sized images

I would like to split a large PNG file into A4 pages so they can be printed out easily.
I would like to use a Linux command line script to do this:
shell> split-into-a4-sized-pages some-big.png
I assume you have ImageMagick & pdfposter installed.
A) convert your .png to .pdf (using ImageMagick)
convert input0.png input1.pdf
B) tile your image using pdfposter:
pdfposter -s4 input1.pdf out.pdf
This command enlarges the input exactly 4 times, prints on the default A4 media, and lets pdfposter determine the number of pages required.
Try using ImageMagick's -crop with your desired size.
Say you have a 640x962 image and you want to crop it into four 320x481 images. Use:
convert pexels-adonyi-gábor-1400172.jpg -crop 320x481+0+0 cropped-1.jpg
convert pexels-adonyi-gábor-1400172.jpg -crop 320x481+320+0 cropped-2.jpg
convert pexels-adonyi-gábor-1400172.jpg -crop 320x481+0+481 cropped-3.jpg
convert pexels-adonyi-gábor-1400172.jpg -crop 320x481+320+481 cropped-4.jpg
Now you'd have to find out how many pixels fit on an A4 page with your printer, and the dimensions of your image, and from there it's a very simple script.
Photo by Adonyi Gábor from Pexels.
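As a concrete sketch of that last step: A4 at 300 DPI is roughly 2480x3508 pixels, and ImageMagick's tile form of -crop (a size with no offset) cuts the whole image into pages in one go (the filenames and DPI are examples; adjust to your printer):
convert some-big.png -crop 2480x3508 +repage page_%02d.png
Each page_NN.png is then one A4-sized piece, with the edge tiles smaller wherever the image does not divide evenly.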
You can use convert of ImageMagick to scale the image; there are probably other tools in ImageMagick to clip the image if you want.
I don't know of any ready-made command-line tool to do this. Unless you use ImageMagick all the time, it may take longer to figure out the right combination of commands and options than to write a quick program.
An easy way, if you know Python at all, is to write a few-line program using PIL (Python Imaging Library). Reading an image takes one line. Extracting chunks of some width and height at a specified location and saving them as new image files is also easy. Add a couple of for loops to scan rows and columns of A4-sized chunks, and you're done.
If you don't know Python, just about all quick-to-write programming languages have a similar capability. The GD library comes to mind; it has bindings for several languages.
NetPBM's pamdice will do the splitting into multiple pages. You'll have to set the -width and -height options according to the DPI of your desired A4 images.
And you'll also have to convert the input image to netpbm format first with pngtopam:
pngtopam big.png | pamdice -outstem tile -height h -width w
That will leave you with a bunch of files called tile_x_y.ppm
Convert each one of those to PNG with pnmtopng
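Putting that together for A4 at 300 DPI (about 2480x3508 pixels; substitute your printer's resolution), one possible pipeline is:
pngtopam big.png | pamdice -outstem tile -height 3508 -width 2480
for f in tile_*.ppm; do pnmtopng "$f" > "${f%.ppm}.png"; done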
