Speeding up PostScript image printing - performance

I am developing an application that prints an image by generating PostScript output and sending it to the printer. I convert my image to JPEG, then to an ASCII85 string, append this data to the PostScript file and send it to the printer.
The output looks like:
%!
{/DeviceRGB setcolorspace
/T currentfile/ASCII85Decode filter def
/F T/DCTDecode filter def
<</ImageType 1/Width 3600/Height 2400/BitsPerComponent
8/ImageMatrix[7.809 0 0 -8.053 0 2400]/Decode
[0 1 0 1 0 1]/DataSource F>> image F closefile T closefile}
exec
s4IA0!"_al8O`[\!<E1.!+5d,s5<tI7<iNY!!#_f!%IsK!!iQ0!?(qA!!!!"!!!".!?2"B!!!!"!!!
---------------------------------------------------------------
ASCII85 data
---------------------------------------------------------------
bSKs4I~>
showpage
My goal now is to speed this up. Currently it takes about 14 seconds from sending the .ps file to the printer until the printer actually starts printing the page (for a 2 MB file).
Why is it so slow?
Maybe I can reformat the image so the printer doesn't need to perform an affine transform on the image?
Maybe I can use a better image encoding?
Any tutorials, clues or advice would be valuable.

One reason it's slow is that JPEG is an expensive compression filter. Try using Flate instead. Don't ASCII85-encode the image; send it as binary, which reduces transmission time and removes another filter. Note that JPEG is a lossy compression, so by 'converting to JPEG' you are also sacrificing quality.
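As a rough illustration of that suggestion, here is a hedged Python sketch (the helper name is mine; it assumes a LanguageLevel 3 interpreter and an 8-bit-clean path to the printer, and uses Pillow and zlib only to obtain and compress the raw RGB samples):
import zlib
from PIL import Image   # assumed available, used only to get raw RGB samples

def write_flate_ps(image_path, ps_path, x_pt, y_pt, w_pt, h_pt):
    # Hypothetical helper: place the image at (x_pt, y_pt), w_pt x h_pt points in size.
    im = Image.open(image_path).convert("RGB")
    w, h = im.size
    flate = zlib.compress(im.tobytes())          # FlateDecode accepts zlib-format streams
    with open(ps_path, "wb") as f:
        f.write(b"%!\n")
        f.write(("%g %g translate %g %g scale\n" % (x_pt, y_pt, w_pt, h_pt)).encode("ascii"))
        f.write(("{/DeviceRGB setcolorspace\n"
                 "/F currentfile/FlateDecode filter def\n"
                 "<</ImageType 1/Width %d/Height %d/BitsPerComponent 8\n"
                 "/ImageMatrix[%d 0 0 -%d 0 %d]/Decode[0 1 0 1 0 1]\n"
                 "/DataSource F>> image F closefile}\nexec\n"
                 % (w, h, w, h, h)).encode("ascii"))
        f.write(flate)                           # binary data follows, no ASCII85 step
        f.write(b"\nshowpage\n")
The structure mirrors the job in the question; the DCTDecode+ASCII85Decode pair is simply replaced by a single FlateDecode filter fed binary data, and the scaling is moved out of the ImageMatrix into a translate/scale.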
You can reduce the amount of effort the printer goes to by creating/scaling the image (before creating the PostScript) so that each image sample matches one pixel in device space. On the other hand, if you are scaling an image up, this means you will need to send more image data to the printer. But usually these days the data connection is fast.
However, this is usually hard to do, and it is often defeated by the fact that the printer may not be able to print to the edge of the media and so may scale the marking operations by a small amount so that the content fits the printable area. It's usually pretty hard to figure out whether that's going on.
Your ImageMatrix is, well, odd. It isn't a 1:1 scaling, and floating-point scale factors are really going to slow down the mapping from user space to device space. And you have a lot of samples to map.
You could also map the image samples into PostScript device space (so that the bottom left is at 0,0 instead of the top left), which would mean you wouldn't have to flip the CTM in the y axis.
But in short, trying to play with the scale factors is probably not worth it, and most printers optimise these transformations anyway.
The colour model of the printer is usually CMYK, so by sending an RGB image you are forcing the printer to do a colour conversion on every sample in the image. For your image that's more than 8.5 million conversions.
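If you do want to try offloading that conversion to the PC, a hedged sketch using Pillow (note that Pillow's built-in RGB-to-CMYK conversion is naive, with no ICC profile or black generation, so the printer's own conversion may well look better):
from PIL import Image

im = Image.open("photo.jpg").convert("CMYK")   # naive RGB -> CMYK, no colour management
samples = im.tobytes()                          # 4 bytes per pixel, in C M Y K order
# The PostScript would then use /DeviceCMYK setcolorspace and a /Decode
# array of [0 1 0 1 0 1 0 1] in the image dictionary.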

Related

Extracting a region of interest from an image file without reading the entire image

I am searching for a library (in any language) that is capable of reading a region of an image file (any format) without having to initially read that entire image file.
I have come across a few options such as vips, which indeed does not keep the entire image in memory, but it still seems to need to read the entire file to begin with.
I realize this may not be available for compressed formats such as jpegs, but in theory it sounds like bmps or tiffs should allow for this type of reading.
libvips will read just the part you need, when it can. For example, if you crop 100x100 pixels from the top-left of a large PNG, it's fast:
$ time vips crop wtc.png x.jpg 0 0 100 100
real 0m0.063s
user 0m0.041s
sys 0m0.023s
(the four numbers are left, top, width, height of the area to be cropped from wtc.png and written to x.jpg)
But a 100x100 pixel region from near the bottom is rather slow, since libvips has to read and decompress all of the pixels before the ones you want in order to reach the right point in the file:
$ time vips crop wtc.png x.jpg 0 9000 100 100
real 0m3.063s
user 0m2.884s
sys 0m0.181s
JPG and strip TIFF work in the same way, though it's less obvious since they are much faster formats.
Some formats support true random-access read. For example, tiled TIFF is fast everywhere, since libvips can use libtiff to read only the tiles it needs:
$ vips copy wtc.png wtc.tif[tile]
$ time vips crop wtc.tif x.jpg 0 0 100 100
real 0m0.033s
user 0m0.013s
sys 0m0.021s
$ time vips crop wtc.tif x.jpg 0 9000 100 100
real 0m0.037s
user 0m0.021s
sys 0m0.017s
OpenSlide, vips, tiled OpenEXR, FITS, binary PPM/PGM/PBM, HDR, RAW, Analyze, Matlab and probably some others all support true random access like this.
If you're interested in more detail, there's a chapter in the API docs describing how libvips opens a file:
http://libvips.github.io/libvips/API/current/How-it-opens-files.md.html
Here's crop plus save in Python using pyvips:
import pyvips
image = pyvips.Image.new_from_file(input_filename, access='sequential')
tile = image.crop(left, top, width, height)
tile.write_to_file(output_filename)
The access= is a flag that hints to libvips that it's OK to stream this image, in case the underlying file format does not support random access. You don't need this for formats that do support random access, like tiled TIFF.
You don't need to write to a file. For example, this will make a buffer object containing the file encoded as a JPG:
buffer = tile.write_to_buffer('.jpg', Q=85)
Or this will write directly to stdout:
target = pyvips.Target.new_from_descriptor(0)
tile.write_to_target(target, '.jpg', Q=85)
The Q=85 is an optional argument to set the JPG Q factor. You can set any of the file save options.
ITK can do it with some formats. There is a method CanStreamRead which returns true for formats which support streaming, such as MetaImageIO. An example can be found here. You can ask more detailed questions on ITK's forum.
If you have control over the file format, I would suggest you use tiled TIFF files. These are typically used for digital pathology whole-slide images, with average sizes of around 100k x 30k pixels.
LibTiff makes it easy to read the tiles corresponding to a selected ROI. Tiles can be compressed without making it less efficient to read a small region (no need to decode whole scan lines).
The BMP format (uncompressed) is simple enough that you can write the function yourself.
TIFF is a little less easy, as there are so many subformats. But the TIFF library (libtiff) supports a "tile-oriented" I/O mode. http://www.libtiff.org/libtiff.html#Tiles
I don't know of such a library solution.
Low-level file-read access is format-specific and, in particular, file mapping is OS-specific.
If you have access to the raw bytes then, assuming you know the width, height, depth, number of channels and so on, calculating file offsets is trivial, so just roll your own.
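For example, a minimal sketch for a raw, headerless, interleaved 8-bit dump (the function and layout are assumptions; a real BMP adds a header, bottom-up rows and row padding you would have to account for):
def read_roi_raw(path, img_w, img_h, channels, left, top, roi_w, roi_h):
    # Assumes row-major, interleaved, 8-bit samples with no header or padding.
    assert left + roi_w <= img_w and top + roi_h <= img_h
    bytes_per_row = img_w * channels
    rows = []
    with open(path, "rb") as f:
        for row in range(top, top + roi_h):
            f.seek(row * bytes_per_row + left * channels)   # jump straight to the ROI in this row
            rows.append(f.read(roi_w * channels))            # read only the ROI's width
    return b"".join(rows)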
If you're transferring the extracted data over a network, you might consider compressing the ROI in memory before sending it, especially if it's relatively big.

Please clarify the gif image format's intended behavior

If I have a gif89a which has multiple image blocks that are identical (and small, say 40x40 or 1600 pixels in size), should these continue to increase the final size of the gif file (assuming a sane encoder)?
I'm trying to understand how the LZW compression works. According to the W3C spec, I thought the entire data stream itself (consisting of multiple image blocks) should be compressed, and thus repeating the same image frame multiple times would incur very little overhead (just the size of the symbol for the repeated image block). This does not seem to be the case, and I've tested with several encoders (GIMP, Photoshop).
Is this to be expected with all encoders, or are these two just doing it poorly?
With gimp, my test gif was 23k in size when it had 240 identical image blocks, and 58k in size with 500 image blocks, which seems less impressive than my intuition is telling me (my intuition's pretty dumb, so I won't be shocked if/when someone tells me it's incredibly wrong).
[edit]
I think I need to expand on what I'm getting at to receive a proper answer. I want to handcraft a GIF image (and possibly write an encoder if I'm up to it) that will take advantage of some quirks to compress it better than would happen otherwise.
I would like to include multiple sub-images in the GIF that are used repeatedly in a tiling fashion. If the image is large (in this case, 1700x2200), GIF can't compress the tiles well because it doesn't see them as tiles; it rasters from the top left to the bottom right, and at most a 30-pixel horizontal slice of any given tile will be given a symbol and compressed, not the 30x35 tile itself.
The tiles themselves are just the alphabet and some punctuation in this case, from a scan of a magazine. Of course in the original scan, each "a" is slightly different than every other, which doesn't help for compression, and there's plenty of noise in the scan too, and that can't help.
As each tile will be repeated somewhere in the image anywhere from dozens to hundreds of times, and each is 30 or 40 times as large as any given slice of a tile, it looks like there are some gains to be had (supposing the gif file format can be bent towards my goals).
I've hand-created another gif in gimp, that uses 25 sub-images repeatedly (about 700 times, but I lost count). It is 90k in size unzipped, but zipping it drops it back down to 11k. This is true even though each sub-image has a different top/left coordinate (but that's only what, 4 bytes up in the header of the sub-image).
In comparison, a visually identical image with a single frame is 75k. This image gains nothing from being zipped.
There are other problems I've yet to figure out with the file (it's gif89a, and treats this as an animation even though I've set each frame to be 0ms in length, so you can't see it all immediately). I can't even begin to think how you might construct an encoder to do this... it would have to select the best-looking (or at least one of the better-looking) versions of any glyph, and then figure out the best x,y to overlay it even though it doesn't always line up very well.
Its primary use (I believe) would be for magazines scanned in as cbr/cbz ebooks.
I'm also going to embed my hand-crafted gif, it's easier to see what I'm getting at than to read my writing as I stumble over the explanation:
LZW (and GIF) compression is one-dimensional. An image is treated as a stream of symbols where any area-to-area (blocks in your terminology) symmetry is not used. An animated GIF image is just a series of images that are compressed independently and can be applied to the "main" image with various merging options. Animated GIF was more like a hack than a standard and it wasn't well thought out for efficiency in image size.
There is a good explanation for why you see smaller files after ZIP'ing your GIF with repeated blocks. ZIP files utilize several techniques, including a "repeated block" type of compression that does well when identical LZW data appears in small (<32K) blocks, or separated by small distances.
GIF-generating software can't overcome the basic limitation of how GIF images are compressed without writing a new standard. A slightly better approach is used by PNG which uses simple 2-dimensional filters to take advantage of horizontal and vertical symmetries and then compresses the result with FLATE compression. It sounds like what you're looking for is a more fractal or video approach which can have the concept of a set of compressed primitives that can be repeated at different positions in the final image. GIF and PNG cannot accomplish this.
GIF compression is stream-based. That means to maximize compression, you need to maximize the repeatability of the stream. Rather than square tiles, I'd use narrow strips to minimize the amount of data that passes before it starts repeating, then keep the repeats within the same stream.
The LZW code size is capped at 12 bits, which means the compression table fills up relatively quickly. A typical encoder will output a clear code when this happens so that the compression can start over, giving good adaptability to fresh content. If you do your own custom encoder you can skip the clear code and keep reusing the existing table for higher compression results.
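For reference, here is a simplified sketch of the table-handling side of a GIF-style LZW encoder (it produces a list of integer codes and skips the variable-width bit packing and sub-block framing, so treat it as an illustration rather than a working GIF writer):
CLEAR, END = 256, 257            # GIF's reserved codes for 8-bit data

def lzw_codes(data, max_code_bits=12, clear_when_full=True):
    def fresh_table():
        return {bytes([i]): i for i in range(256)}
    table, next_code = fresh_table(), 258
    w, out = b"", [CLEAR]
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc
            continue
        out.append(table[w])
        if next_code < (1 << max_code_bits):     # table not yet full: keep growing it
            table[wc] = next_code
            next_code += 1
        elif clear_when_full:                     # typical encoder: clear and start over
            out.append(CLEAR)
            table, next_code = fresh_table(), 258
        # with clear_when_full=False the full table is simply reused
        w = bytes([b])
    if w:
        out.append(table[w])
    out.append(END)
    return out
Passing clear_when_full=False corresponds to the deferred-clear idea described above; decoders stay in sync because they stop adding table entries at the same point the encoder does.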
The GIF spec does not specify the behavior when a delay time of 0 is given, so you're at the mercy of the decoder implementation. For consistent results you should use a delay of 1 and accept that the entire image won't show up immediately.

Expanding 8-bit color to 24-bit color

Setup
I have a couple hundred Sparkfun LED pixels (similar to https://www.sparkfun.com/products/11020) connected to an Arduino Uno and want to control the pixels from a PC using the built-in Serial-over-USB connection of the Arduino.
The pixels are individually addressable; each has 24 bits for the color (RGB). Since I want to be able to change the color of each pixel very quickly, the transmission of the data from the PC to the Arduino has to be very efficient (the onward transmission of data from the Arduino to the pixels is already very fast).
Problem
I've tried simply sending the desired RGB values directly as-is to the Arduino, but this leads to a visible delay when I want to, for example, turn on all LEDs at the same time. My straightforward idea to minimize the amount of data is to reduce the available colors from 24-bit to 8-bit, which is more than enough for my application.
If I do this, I have to expand the 8-bit values from the PC to 24-bit values on the Arduino to set the actual color on the pixels. The obvious solution here would be a palette that holds all available 8-bit values and the corresponding 24-bit colors. I would like to have a solution without a palette though, mostly for memory space reasons.
Question
What is an efficient way to expand an 8-bit color to a 24-bit one, preferably one that preserves the color information accurately? Are there standard algorithms for this task?
Possible solution
I was considering a format with 2 bits for each R and B and 3 bits for G. These values would be packed into a single byte that would be transmitted to the Arduino and then be unpacked using bit-shifting and interpolated using the map() function (http://arduino.cc/en/Reference/Map).
Any thoughts on that solution? What would be a better way to do this?
R2B2G3 would give you very few colors (2+2+3 uses only seven bits, so there's actually one more bit left). I don't know if it would be enough for your application. You can use a dithering technique to make 8-bit images look a little better.
Alternatively, if you have any preferred set of colors, you can store a known palette on your device and never send it over the wire. You can also store multiple palettes for different situations and specify which one to use with a small integer index.
On top of that it's possible to implement some simple compression algorithm like RLE or LZW and decompress after receiving.
And there are some very fast compression libraries with small footprint you can use: Snappy, miniLZO.
Regarding your question “What would be a better way to do this?”, one of the first things to do (if not yet done) is increase the serial data rate. An Arduino Forum suggests using 115200 bps as a standard rate, and trying 230400 bps. At those rates you would need to write the receiving software so it quickly transfers data from the relatively small receive buffer into a larger buffer, instead of trying to work on the data out of the small receive buffer.
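On the PC side, with pyserial that might look roughly like this (the port name, baud rate and one-byte-per-pixel frame layout are assumptions):
import serial                                      # pyserial, assumed installed

NUM_PIXELS = 200                                   # hypothetical strip size
frame = bytes([0x00] * NUM_PIXELS)                 # e.g. one packed colour byte per pixel
ser = serial.Serial("/dev/ttyACM0", 230400, timeout=1)   # higher baud rate on the PC side
ser.write(frame)                                   # send the whole frame in one write
ser.flush()
The Arduino sketch would need a matching Serial.begin(230400) and a receive loop that drains the small hardware receive buffer quickly, as described above.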
A second possibility is to put activation times into your data packets. Suppose F1, F2, F3... are a series of frames you will display on the LED array. Send those frames from the PC ahead of time, or during idle or wait times, and let the Arduino buffer them until they are scheduled to appear. When the activation time arrives for a given frame, have the Arduino turn it on. If you know in advance the frames but not the activation times, send and buffer the frames and send just activation codes at appropriate times.
Third, you can have multiple palettes and dynamic palettes that change on the fly and can use pixel addresses or pixel lists as well as pixel maps. That is, you might use different protocols at different times. Protocol 3 might download a whole palette, 4 might change an element of a palette, 5 might send a 24-bit value v, a time t, a count n, and a list of n pixels to be set to v at time t, 6 might send a bit map of pixel settings, and so forth. Bit maps can be simple 1-bit-per-pixel maps indicating on or off, or can be k-bits-per-pixel maps, where a k-bit entry could specify a palette number or a frame number for a pixel. This is all a bit vague because there are so many possibilities; but in short, define protocols that work well with whatever you are displaying.
Fourth, given the ATmega328P's small (2KB) RAM but larger (32KB) flash memory, consider hard-coding several palettes, frames, and macros into the program. By macros, I mean routines that generate graphic elements like arcs, lines, open or filled rectangles. Any display element that is known in advance is a candidate for flash instead of RAM storage.
Your (2, 3, 2) bit idea is used "in the wild." It should be extremely simple to try out. The quality will be pretty low, but try it out and see if it meets your needs.
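A minimal sketch of the (2, 3, 2) pack/expand round trip, shown here in Python for brevity (on the Arduino the same shifts are a few lines of C; the exact bit layout, with bit 2 left spare, is an assumption):
def pack232(r, g, b):
    # keep the top 2/3/2 bits: bits 7-6 = R, 5-3 = G, 1-0 = B (bit 2 unused)
    return ((r >> 6) << 6) | ((g >> 5) << 3) | (b >> 6)

def expand232(p):
    r, g, b = (p >> 6) & 0x3, (p >> 3) & 0x7, p & 0x3
    # scale back to the full 0-255 range so 0 stays 0 and full scale becomes 255
    return r * 255 // 3, g * 255 // 7, b * 255 // 3

print(expand232(pack232(200, 100, 50)))   # -> (255, 109, 0): coarse, but cheap to decode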
It seems unlikely that any other solution could save much memory compared to a 256-color lookup table, if the lookup table stays constant over time. I think anything successful would have to exploit a pattern in the kind of images you are sending to the pixels.
Any way you look at it, what you're really going for is image compression. So, I would recommend looking at the likes of PNG and JPG compression, to see if they're fast enough for your application.
If not, then you might consider rolling your own. There's only so far you can go with per-pixel compression; size-wise, your (2,3,2) idea is about as good as you can expect to get. You could try a quadtree-type format instead: take the average of a 4-pixel block, transmit a compressed (lossy) representation of the differences, then apply the same operation to the half-resolution image of averages...
As others point out, dithering will make your images look better at (2,3,2). Perhaps the easiest way to dither for your application is to choose a different (random or quasi-random) fixed quantization threshold offset for each color of each pixel. Both the PC and the Arduino would have a copy of this threshold table; the distribution of thresholds would prevent posterization, and the Arduino-side table would help maintain accuracy.

avoid massive memory usage in openlayers with image overlay

I am building a map system that requires a large image (native 13K pixels wide by 20K pixels tall) to be overlaid onto an area of the US covering about 20 kilometers or so. I have the file size of the image in JPG format down to 23 MB and it loads onto the map fairly quickly. I can zoom in and out and it looks great. It's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory! I am using the Memory Restart extension on Firefox, and without the image overlay the memory usage is about 360 MB to 400 MB, which seems to be about the norm for regular usage, browsing other websites etc. But when I add the image layer, the memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew
The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as FireFox) that wants to actually render the image has to uncompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260M pixels. Figure at least 3 bytes of color info per pixel, that's 780 MB. It might be using 4 bytes, to have each pixel aligned at a word boundary, which would be 1040 MB.
As for how to fix it, well, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in some format that uses indexed colors, and then FireFox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.
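If you go the tiling route, one hedged option (assuming libvips/pyvips is available) is to pre-generate a tile pyramid and point the map client at it; dzsave writes DeepZoom-style tiles by default and also offers zoomify/google layouts:
import pyvips

image = pyvips.Image.new_from_file("overlay.jpg", access="sequential")
# writes overlay.dzi plus overlay_files/<level>/<x>_<y>.jpg
image.dzsave("overlay", tile_size=256, suffix=".jpg[Q=85]")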

Reducing the file size of a very large images, without changing the image dimensions

Consider an application handling uploading of potentially very large PNG files.
All uploaded files must be stored to disk for later retrieval. However, the PNG files can be up to 30 MB in size, while disk storage limitations give a maximum per-file size of 1 MB.
The problem is to take an input PNG of file size up to 30 MB and produce an output PNG of file size below 1 MB.
This operation will obviously be lossy - and reduction in image quality, colors, etc is not a problem. However, one thing that must not be changed is the image dimension. Hence, an input file of dimension 800x600 must produce an output file of dimension 800x600.
The requirements outlined above are strict and cannot be changed.
Using ImageMagick (or some other open source tool) how would you go about reducing the file size of input PNG-files of size ~30 MB to a maximum of 1 MB per file, without changing image dimensions?
PNG is not a lossy image format, so you would likely need to convert the image into another format-- most likely JPEG. JPEG has a settable "quality" factor-- you could simply keep reducing the quality factor until you got an image that was small enough. All of this can be done without changing the image resolution.
Obviously, depending on the image, the loss of visual quality may be substantial. JPEG does best for "true life" images, such as pictures from cameras. It does not do as well for logos, screen shots, or other images with "sharp" transitions from light to dark. (PNG, on the other hand, has the opposite behavior-- it's best for logos, etc.)
However, at 800x600, it likely will be very easy to get a JPEG down under 1MB. (I would be very surprised to see a 30MB file at those smallish dimensions.) In fact, even uncompressed, the image would only be around 1.4MB:
800 pixels * 600 pixels * 3 bytes per pixel = 1,440,000 bytes = 1.4 MB
Therefore, you only need a 1.4:1 compression ratio to get the image down to 1MB. Depending on the type of image, the PNG compression may very well provide that level of compression. If not, JPEG almost certainly could-- JPEG compression ratios on the order of 10:1 are not uncommon. Again, the quality / size of the output will depend on the type of image.
Finally, while I have not used ImageMagick in a little while, I'm almost certain there are options to re-compress an image using a specific quality factor. Read through the docs, and start experimenting!
EDIT: Looks like it should, indeed, be pretty easy with ImageMagick. From the docs:
$magick> convert input.png -quality 75 output.jpg
Just keep playing with the quality value until you get a suitable output.
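If you'd rather do that loop programmatically, a small Pillow sketch (the filenames and the 1 MB cutoff are from the question; the quality steps are arbitrary):
import io
from PIL import Image

im = Image.open("input.png").convert("RGB")
buf = io.BytesIO()
for q in range(85, 10, -5):                      # step the JPEG quality down
    buf = io.BytesIO()
    im.save(buf, format="JPEG", quality=q)
    if buf.tell() <= 1_000_000:                  # stop at the first result under 1 MB
        break
with open("output.jpg", "wb") as f:
    f.write(buf.getvalue())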
Your example is troublesome because a 30MB image at 800x600 resolution is storing 500 bits per pixel. Clearly wildly unrealistic. Please give us real numbers.
Meanwhile, the "cheap and cheerful" approach I would try would be as follows: scale the image down by a factor of 6, then scale it back up by a factor of 6, then run it through PNG compression. If you get lucky, you'll reduce image size by a factor of 36. If you get unlucky the savings will be more like 6.
pngtopnm big.png | pnmscale -reduce 6 | pnmscale 6 | pnmtopng > big-reduced.png
If that's not enough you can toss a ppmquant in the middle (on the small image) to reduce the number of colors. (The examples are netpbm/pbmplus, which I have always found easier to understand than ImageMagick.)
To know whether such a solution is reasonable, we have to know the true numbers of your problem.
Also, if you are really going to throw away the information permanently, you are almost certainly better off using JPEG compression, which is designed to lose information reasonably gracefully. Is there some reason JPEG is not appropriate for your application?
Since the size of an image file is directly related to the image dimensions and the number of colours, you seem to have only one choice: reduce the number of colours.
And ~30MB down to 1MB is a very large reduction.
It would be difficult to achieve this ratio with a conversion to monochrome.
It depends a lot on what you want at the end. I often like to reduce the number of colors while preserving the size. In many cases the reduced colors do not matter. Here is an example of reducing the colors to 254:
convert -colors 254 in.png out.png
You can try the pngquant utility. It is very simple to install and to use. And it can compress your PNGs a lot without visible quality loss.
Once you install it try something like this:
pngquant yourfile.png
pngquant --quality=0-70 yourfile.png
For my demo image (generated by ImageMagick) the first command reduces 350 KB to 110 KB, and the second one reduces it to 65 KB.
Step 1: Decrease the image to 1/16 of its original size.
Step 2: Decrease the amount of colors.
Step 3: Increase the size of the image back to its original size.
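A hedged Pillow sketch of those three steps (the scale factor and colour count are arbitrary; dividing each dimension by 4 gives 1/16 of the pixels):
from PIL import Image

im = Image.open("in.png")
w, h = im.size
small = im.resize((max(1, w // 4), max(1, h // 4)), Image.LANCZOS)   # step 1: 1/16 of the pixels
quant = small.quantize(colors=64)                                     # step 2: fewer colours
out = quant.convert("RGB").resize((w, h), Image.NEAREST)              # step 3: back to the original size
out.save("out.png", optimize=True)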
I know you want to preserve the pixel size, but can you reduce the pixel size and adjust the DPI stored with the image so that the display size is preserved? It depends on what client you'll be using to view the images, but most should observe it. If you are using the images on the web, then you can just set the pixel size of the <img> tag.
It depends on the type of image: is it a real-life picture or a computer-generated image?
For real-life images PNG will do very little (it might not compress at all); use JPG for those. If the image has a limited number of different colors (it can have a 24-bit image depth, but the number of unique colors is low), PNG can compress quite nicely.
PNG is basically an implementation of zip for images, so if a lot of pixels are the same you can get a rather nice compression ratio. If you need lossless compression, don't do any resizing.
Use optipng; it reduces the size without loss:
http://optipng.sourceforge.net/
Try ImageOptim (https://imageoptim.com/mac); it is free and open source.
If you want to modify the image size on Ubuntu, you can try GIMP.
I have tried a couple of image-editing apps on Ubuntu and this seemed to be the best among them.
Installation:
Open a terminal
Type: sudo apt install gimp-plugin-registry
Give the admin password. You'll need a network connection for this.
Once installed, open the image with the GIMP image editor. Then go to: File > Export As > click the 'Export' button.
You will get a small window; check the box "Show preview in image window". Once you check this option, you will see the current size of the file along with the quality level.
Adjust the quality level to increase/decrease the file size.
Once you are done adjusting, click the 'Export' button to save the file.
Right-click on the image. Select "Open with Paint". Click "Resize". Click "Pixels" and change the horizontal value to 250 or 200.
That's all there is to it. It is the fastest way for those who are using Windows XP or Windows 7.
