DeepZoom white images - ImageMagick, vips, Cocoa

I have a little Cocoa OS X app that uses vips dzsave and ImageMagick to create DeepZoom tiles from a big PSB file.
The problem is that it works fine only up to some undefined size. I can correctly process files of about 60,000 px x 50,000 px (27 GB), but with bigger files the app generates a tile pyramid made of white images.
No data is written...
I have to handle images around 170,000 px x 170,000 px, between 60 and 80 GB.
I have tried setting environment variables to increase the ImageMagick cache limits, but with no results...
Does anyone have any ideas about the white output?

I'm the vips maintainer. Try at the command-line, something like:
vips dzsave huge.psb output_name --tile-size 256 --overlap 0 --vips-progress --vips-leak
and see what happens. If you run "top" at the same time you can watch memory use.
vips uses libMagick to load the psb files and my guess would be that this is hitting a memory limit somewhere inside ImageMagick.
Do you have to use PSB? If you can use a format that vips can process directly it should work much better. BigTIFF or OpenSlide (if these are slide images) are both good choices. I regularly process 200,000 x 200,000 images with dzsave on a very modest laptop.
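As a rough illustration, assuming the PSB has first been converted to a tiled BigTIFF (for example with ImageMagick), the equivalent dzsave call from Python via pyvips might look something like this; the file names are placeholders:
import pyvips

# "huge.tif" stands in for the converted BigTIFF
image = pyvips.Image.new_from_file("huge.tif", access="sequential")
image.dzsave("output_name", tile_size=256, overlap=0)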

Related

Python graphviz taking a huge amount of time to render the PDF

I have a large graph with many nodes and edges. The problem I am facing with the Graphviz python package is that rendering the file takes a lot of time.
There are other alternatives mentioned here and here. But the problem I am facing is that all of them work with the dot file, and these methods generate image files that do not look good; I mean, the formatting intended is not quite visible.
I want a PDF file to be generated. The large image files being generated are crashing my Linux system: the default image viewer cannot handle them, and Firefox, though it can open them, takes a tremendous amount of time before a portion of the image becomes visible.
Can anyone help me generate, quickly, a PDF file that can be viewed in the usual PDF viewers, or an image that can easily be viewed in the usual image viewers?
I want the graphs generated to look something like this, this, and this. [These are the graphs rendered to PDF by Python for a subgraph of the input.]
For the entire graph, the situation of the dot file is like this, and the command:
$sfdp -x -Goverlap=scale -Tpng syscall > data.png
sfdp: graph is too large for cairo-renderer bitmaps. Scaling by 0.487931 to fit
tcmalloc: large alloc 3142361088 bytes == 0x558a701ce000 # 0x7f45c7679001 0x7f45c39101fa 0x7f45c39102ad 0x7f45c4a9b6df 0x7f45c4f92261 0x7f45c740f468 0x7f45c7411d53 0x558a6ee01092 0x7f45c6dc4c87 0x558a6ee0112a
It returns the following data.png file, which I cannot view correctly in any image viewer on my Linux system. It also does not have the same format (the look of the graph, I mean) as the one generated by the Graphviz render.
And for this dot file, even sfdp is taking considerable time...
I am unsure why your device takes so much time, other than it's running around in circles like a headless chicken before falling over.
Your error feedback should give you a clue by reporting the file is too large for an image:
sfdp: graph is too large for cairo-renderer bitmaps. Scaling by 0.531958 to fit
sfdp: failure to create cairo surface: out of memory
Here it is as SVG. Note the page size is roughly 600 inches square, which is roughly 61,598 pixels x 51,767 pixels = roughly 3 GB as a bitmap (your error says 3,142,361,088 bytes cannot be Memory ALLOCated).
A large file by any standard, but as SVG it's only 1.63 MB:
sfdp -Goverlap=scale -x -Tsvg syscall -o data.svg
File: data.svg
File Size: 1.63 MB (1,707,939 Bytes)
Number of Pages: 1
Page Size: 641.64 x 539.24 in
You can open the SVG in a browser and print to PDF. HOWEVER, even at 10% scale on A0 landscape that requires 2 PAGES and you can't see the lettering, so at full scale it would be more than 100 of those poster pages.
Add this to your input file: graph [nslimit=2 nslimit1=2 maxiter=5000] (values somewhat arbitrary)
And use this command line: dot -v -Tsvg ... (if SVG works, then try PDF)
I think dot has the best chance of producing a graph you will like
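If you are building the graph from the Python graphviz package rather than editing the dot file by hand, a rough sketch of the same settings would be the following (the graph name and edges are placeholders, and the attribute values are as arbitrary as above):
import graphviz

g = graphviz.Digraph("syscall_graph", engine="dot")
g.attr(nslimit="2", nslimit1="2", maxiter="5000")  # graph-level attributes
g.edge("read", "sys_read")    # placeholder edges
g.edge("write", "sys_write")
g.render("syscall_graph", format="svg", cleanup=True)  # if SVG works, try format="pdf"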

Handle 150GB .jp2 image

I downloaded a 150GB satellite .jp2 image, of which I only want a small section at a time.
How would I go about tiling the image in manageable chunks? Just extracting a part of the image would also be enough.
As I'm only somewhat familiar with Python, I looked at the Pillow and OpenCV libraries, but without success as the image resolution exceeds their limits.
I also looked into Openslide for Python, but couldn't get rid of an error (Could not find module 'libopenslide-0.dll').
libvips can process huge images efficiently.
For example, with this 2.8 GB test image:
$ vipsheader 9235.jp2
9235.jp2: 107568x79650 uchar, 4 bands, srgb, jp2kload
$ ls -l 9235.jp2
-rw-r--r-- 1 john john 2881486848 Mar 1 22:37 9235.jp2
I see:
$ /usr/bin/time -f %M:%e \
vips crop 9235.jp2 x.jpg 10000 10000 1000 1000
190848:0.45
So it takes a 1,000 x 1,000 pixel chunk out of a 110,000 x 80,000 pixel jp2 image in 0.5 s and needs under 200 MB of memory.
There are bindings for python, ruby, node, etc., so you don't have to use the CLI.
In python you could write:
import pyvips
image = pyvips.Image.new_from_file("9235.jp2")
tile = image.crop(10000, 10000, 1000, 1000)
tile.write_to_file("x.jpg")
It does depend a bit on the jp2 image you are reading. Some are untiled (!!) and it can be very slow to read out a section.
There are windows binaries as well, check the "download" page.
If you're on Linux, vipsdisp can view huge images like this very quickly.
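For the "tiling in manageable chunks" part of the question, the same image could be cut into a tile pyramid with dzsave. A rough pyvips sketch (the output name and tile size here are arbitrary choices):
import pyvips

image = pyvips.Image.new_from_file("9235.jp2", access="sequential")
image.dzsave("9235_tiles", tile_size=512, overlap=0, suffix=".jpg")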
The Grok JPEG 2000 toolkit is able to decompress regions of extremely large images, such as the 150 GB image you linked to.
Sample command:
grk_decompress -i FOO.jp2 -o FOO.tif -d 10000,10000,15000,15000 -v
to decompress the region (10000,10000,15000,15000) to TIFF format.

Tile Images for Zoom: How do I get to zoom level 10 without starting with an image that is 262,144?

I am trying to create a zoomable image for web display and have come across multiple sources to get this to work: Leaflet, OpenLayers, etc. I have seen and followed some good tutorials, like Pedro's.
However, I am at a loss for understanding the best practices for creating my images in the first place. It seems that to achieve a zoom level of 10+ I need a really large image to begin with. Trying to do this in Adobe Illustrator or Photoshop seems like a bad idea: Illustrator only goes up to 16,383 x 16,383 pixels and Photoshop will go to 262,144, but that is too much of a strain on my CPU.
As of right now I am using a tile slicing plug-in for photoshop and it is a slow process.
My questions are: Is the best way to get higher zoom levels by starting with a huge image? Or is there a way to slice an image, and then slice the slices of the image?
If I need to start with a humongous image is there a way to up-scale my image outside of a program like photoshop?
If I can slice slices what is the best method?
Thank you so much for your help and time, it is much appreciated!
-earl-
Yes, you need to start with a huge image. If your source is a vector drawing, you could save as something like PDF or SVG and do the high-resolution rendering in another program. See below for an example.
gdal2tiles is a nice thing and can do many projections, but it's slow for simple raster tile pyramids and needs a lot of memory. dzsave is faster and more efficient with RAM. On this laptop with a 25k x 25k RGB JPG file I see:
$ time gdal2tiles.py -p raster ../wac_nearside.jpg x
Generating Base Tiles:
0...10...20...30...40...50...60...70...80...90...100 - done.
Generating Overview Tiles:
0...10...20...30...40...50...60...70...80...90...100 - done.
real 3m51.728s
user 3m48.548s
sys 0m2.992s
peak memory 400 MB
But with dzsave I see:
$ time vips dzsave ../wac_nearside.jpg y --suffix .png
real 0m36.097s
user 1m39.900s
sys 0m6.960s
peak memory 100 MB
It would be faster still, but almost all the time is being spent in PNG write. dzsave will also do the centring for you automatically, so there's no need for the extra gdal_translate step.
As well as JPG files, vips can load PDF and SVG with a scale factor. For example:
$ time vips dzsave ../ISO_12233-reschart.pdf[dpi=5000] y --suffix .png
real 3m11.029s
user 8m58.520s
sys 0m35.504s
peak memory 850 MB
This will render the ISO calibration chart at 5,000 DPI, producing an image of 78,740 x 47,244 pixels. vips memory use scales with image width, so you'd need about 1.5 GB of RAM for a 10,000 DPI render.
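From Python, a rough pyvips equivalent of that high-DPI PDF render would be something like the following (the file names are placeholders):
import pyvips

chart = pyvips.Image.new_from_file("ISO_12233-reschart.pdf", dpi=5000, access="sequential")
chart.dzsave("y", suffix=".png")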

avoid massive memory usage in openlayers with image overlay

I am building a map system that requires a large image (native 13K pixels wide by 20K pixels tall) to be overlaid onto an area of the US covering about 20 kilometers or so. I have the file size of the image in JPG format down to 23 MB and it loads onto the map fairly quickly. I can zoom in and out and it looks great. It's even located exactly where I need it to be (geographically). However, that 23 MB file is causing Firefox to consume an additional 1 GB of memory! I am using the Memory Restart extension on Firefox, and without the image overlay the memory usage is about 360 MB to 400 MB, which seems to be about the norm for regular usage, browsing other websites, etc. But when I add the image layer, the memory usage jumps to 1.4 GB. I'm at a complete loss to explain WHY that is and how to fix it. Any ideas would be greatly appreciated.
Andrew
The file only takes up 23 MB as a JPEG. However, the JPEG format is compressed, and any program (such as Firefox) that wants to actually render the image has to uncompress it and store every pixel in memory. You have 13k by 20k pixels, which makes 260M pixels. Figure at least 3 bytes of color info per pixel, that's 780 MB. It might be using 4 bytes, to have each pixel aligned at a word boundary, which would be 1040 MB.
As for how to fix it, well, I don't know if you can, except by reducing the image size. If the image contains only a small number of colors (for instance, a simple diagram drawn in a few primary colors), you might be able to save it in some format that uses indexed colors, and then FireFox might be able to render it using less memory per pixel. It all depends on the rendering code.
Depending on what you're doing, perhaps you could set things up so that the whole image is at lower resolution, then when the user zooms in they get a higher-resolution image that covers less area.
Edit: to clarify that last bit: right now you have the entire photograph at full resolution, which is simple but needs a lot of memory. An alternative would be to have the entire photograph at reduced resolution (maximum expected screen resolution), which would take less memory; then when the user zooms in, you have the image at full resolution, but not the entire image - just the part that's been zoomed in (which likewise needs less memory).
I can think of two approaches: break up the big image into "tiles" and load the ones you need (not sure how well that would work), or use something like ImageMagick to construct the smaller image on-the-fly. You'd probably want to use caching if you do it that way, and you might need to code up a little "please wait" message to show while it's being constructed, since it could take several seconds to process such a large image.
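As one illustration of the tiling idea, here is a hedged pyvips sketch (one option among many, not something the original setup necessarily includes) that pre-cuts the overlay into a tile hierarchy a web map client can fetch tile by tile; the file names and layout choice are assumptions:
import pyvips

overlay = pyvips.Image.new_from_file("overlay.jpg", access="sequential")
# the "google" layout writes a z/y/x tile tree that map clients can load lazily
overlay.dzsave("overlay_tiles", layout="google", tile_size=256, suffix=".jpg")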

Reducing the file size of very large images, without changing the image dimensions

Consider an application handling uploading of potentially very large PNG files.
All uploaded files must be stored to disk for later retrieval. However, the PNG files can be up to 30 MB in size, but disk storage limitations gives a maximum per file size of 1 MB.
The problem is to take an input PNG of file size up to 30 MB and produce an output PNG of file size below 1 MB.
This operation will obviously be lossy - and reduction in image quality, colors, etc is not a problem. However, one thing that must not be changed is the image dimension. Hence, an input file of dimension 800x600 must produce an output file of dimension 800x600.
The requirements outlined above are strict and cannot be changed.
Using ImageMagick (or some other open source tool) how would you go about reducing the file size of input PNG-files of size ~30 MB to a maximum of 1 MB per file, without changing image dimensions?
PNG is not a lossy image format, so you would likely need to convert the image into another format-- most likely JPEG. JPEG has a settable "quality" factor-- you could simply keep reducing the quality factor until you got an image that was small enough. All of this can be done without changing the image resolution.
Obviously, depending on the image, the loss of visual quality may be substantial. JPEG does best for "true life" images, such as pictures from cameras. It does not do as well for logos, screen shots, or other images with "sharp" transitions from light to dark. (PNG, on the other hand, has the opposite behavior-- it's best for logos, etc.)
However, at 800x600, it likely will be very easy to get a JPEG down under 1MB. (I would be very surprised to see a 30MB file at those smallish dimensions.) In fact, even uncompressed, the image would only be around 1.4MB:
800 pixels * 600 pixels * 3 Bytes / color = 1,440,000 Bytes = 1.4MB
Therefore, you only need a 1.4:1 compression ratio to get the image down to 1MB. Depending on the type of image, the PNG compression may very well provide that level of compression. If not, JPEG almost certainly could-- JPEG compression ratios on the order of 10:1 are not uncommon. Again, the quality / size of the output will depend on the type of image.
Finally, while I have not used ImageMagick in a little while, I'm almost certain there are options to re-compress an image using a specific quality factor. Read through the docs, and start experimenting!
EDIT: Looks like it should, indeed, be pretty easy with ImageMagick. From the docs:
$magick> convert input.png -quality 75 output.jpg
Just keep playing with the quality value until you get a suitable output.
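If you would rather script that loop than tune the quality by hand, here is a hedged sketch using Pillow instead of ImageMagick (the file names and the 1 MB threshold come from the question; the quality steps are arbitrary):
import io
from PIL import Image

img = Image.open("input.png").convert("RGB")
for quality in range(95, 10, -5):
    buf = io.BytesIO()
    img.save(buf, "JPEG", quality=quality)
    if buf.tell() <= 1_000_000:          # stop once the output is under ~1 MB
        with open("output.jpg", "wb") as f:
            f.write(buf.getvalue())
        break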
Your example is troublesome because a 30MB image at 800x600 resolution is storing 500 bits per pixel. Clearly wildly unrealistic. Please give us real numbers.
Meanwhile, the "cheap and cheerful" approach I would try would be as follows: scale the image down by a factor of 6, then scale it back up by a factor of 6, then run it through PNG compression. If you get lucky, you'll reduce image size by a factor of 36. If you get unlucky the savings will be more like 6.
pngtopnm big.png | pnmscale -reduce 6 | pnmscale 6 | pnmtopng > smaller.png
If that's not enough you can toss a ppmquant in the middle (on the small image) to reduce the number of colors. (The examples are netpbm/pbmplus, which I have always found easier to understand than ImageMagick.)
To know whether such a solution is reasonable, we have to know the true numbers of your problem.
Also, if you are really going to throw away the information permanently, you are almost certainly better off using JPEG compression, which is designed to lose information reasonably gracefully. Is there some reason JPEG is not appropriate for your application?
Since the size of an image file is directly related to the image dimensions and the number of colours, you seem to have only one choice: reduce the number of colours.
And ~30MB down to 1MB is a very large reduction.
It would be difficult to achieve this ratio with a conversion to monochrome.
It depends a lot on what you want at the end. I often like to reduce the number of colors while preserving the size. In many cases the reduced colors do not matter. Here is an example of reducing the colors to 254:
convert -colors 254 in.png out.png
You can try the pngquant utility. It is very simple to install and to use. And it can compress your PNGs a lot without visible quality loss.
Once you install it try something like this:
pngquant yourfile.png
pngquant --quality=0-70 yourfile.png
For my demo image (generated by ImageMagick) the first command reduces 350 KB to 110 KB, and the second one reduces it to 65 KB.
Step 1: Decrease the image to 1/16 of its original size.
Step 2: Decrease the amount of colors.
Step 3: Increase the size of the image back to its original size.
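A hedged Pillow sketch of those three steps (here 1/16 is read as 1/16 of the pixel area, i.e. 1/4 in each dimension, and 64 colours is an arbitrary choice):
from PIL import Image

img = Image.open("input.png")
w, h = img.size
small = img.resize((w // 4, h // 4))           # step 1: shrink
small = small.quantize(colors=64)              # step 2: reduce the colours
big = small.resize((w, h), Image.NEAREST)      # step 3: back to the original size
big.save("output.png", optimize=True)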
I know you want to preserve the pixel size, but can you reduce the pixel size and adjust the DPI stored with the image so that the display size is preserved? It depends on what client you'll be using to view the images, but most should observe it. If you are using the images on the web, then you can just set the pixel size of the <img> tag.
It depends on the type of image: is it a real-life picture or a computer-generated image?
For real-life images PNG will do very little, it might not even compress at all; use JPEG for those images. If the image has a limited number of different colors (it can have a 24-bit image depth but the number of unique colors will be low), PNG can compress quite nicely.
PNG is basically an implementation of zip for images, so if a lot of pixels are the same you can get a rather nice compression ratio. If you need lossless compression, don't do resizing.
Use optipng; it reduces the file size without loss:
http://optipng.sourceforge.net/
Try ImageOptim (https://imageoptim.com/mac); it is free and open source.
If you want to reduce the image file size on Ubuntu, you can try GIMP.
I have tried a couple of image editing apps on Ubuntu and this seemed to be the best among them.
Installation:
Open a terminal
Type: sudo apt install gimp-plugin-registry
Enter the admin password. You'll need a network connection for this.
Once installed, open the image with the GIMP image editor, then go to File > Export As and click the 'Export' button.
You will get a small window; tick the "Show preview in image window" box. Once you check this option, you will see the current size of the file along with the quality level.
Adjust the quality level to increase/decrease the file size.
Once you are done adjusting, click the 'Export' button to save the file.
Right-click on the image, select Open with Paint, click Resize, select Pixels, and change the horizontal value to 250 or 200.
That's all there is to it. It is the fastest way for those who are using Windows XP or Windows 7.
