Using PDF files as images

In the past I have used PDF images of vector files in an NSImage, the advantage being that I can scale them without losing quality. I know that people usually use JPG and PNG files; why is this? Do PDF files significantly reduce performance, or is there some other reason?
Thank you in advance,
Ben

It depends on what's in your PDF file. If there's enough going on in it, then yeah, a raster image may be faster. The trade-off is, of course, scalability—you end up needing to create 1x and 2x variants for every destination size, or create an icon family (if appropriate), instead of just using one image for everything.
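For example (a sketch of the trade-off, not part of the original answer; it assumes PyMuPDF is installed and the file names are hypothetical), you could pre-render 1x and 2x raster variants from a single vector PDF:

```python
import fitz  # PyMuPDF

doc = fitz.open("icon.pdf")
page = doc[0]

# Render the same vector artwork at 1x and 2x: the variants a
# raster-only pipeline would otherwise have to author by hand.
for scale in (1, 2):
    pix = page.get_pixmap(matrix=fitz.Matrix(scale, scale), alpha=True)
    pix.save(f"icon@{scale}x.png")
```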
But I think most people create raster resources because that's the sort of tool they're used to: Photoshop, Pixelmator, or Acorn. Not many people use vector editors or write their art in PostScript. (And the field of vector editors available on the Mac is pretty weak.)
My recommendation for a few years now has been an app called Opacity. It's vector-focused, but can export raster images in multiple sizes, PDFs, and even source code.

I use PDF files too, for precisely the same reason: they scale automatically. Apple do the same (look inside the Xcode.app bundle; you won't find much other than .pdf files).
There is no reason to use .jpg or .png files at all.

Related

Best image file format for book pages

I wanted to scan book pages and combine the images into a PDF "ebook" (just for me), but the file sizes get really huge. Even .jpg resulted in a PDF file over 60 MB in size.
Do you have any idea how I can compress it further, i.e., which file format I could choose for this specific purpose? (The book contains pictures and written text.)
Thank you for your help.
I tried saving it as .jpg and other file formats like .png, but couldn't get the file small enough to be easily handled without losing too much resolution.
Images are expensive things.
Ignoring compression, you're looking at 3 bytes per pixel of data. (For scale: a 300 dpi scan of an A4 page is roughly 2500 x 3500 pixels, so about 26 MB uncompressed in full colour.)
If you want to keep the images, you could reduce this by turning them into greyscale. That reduces it to 1 byte per pixel (again ignoring compression).
Or you could turn them into pure black and white, which would be 1 bit per pixel.
Or, alternatively, you could use OCR to translate your images into actual text, which is a much more efficient way of storing books.
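As a rough sketch of those options (assuming Pillow is installed; the file layout is hypothetical), you could batch-convert the scans to 1-bit black and white and bundle them into a single PDF:

```python
from pathlib import Path
from PIL import Image

# Collect the scanned pages (hypothetical file layout).
pages = sorted(Path("scans").glob("page_*.png"))

# Convert each scan to 1-bit black and white: 1 bit per pixel
# instead of 3 bytes per pixel for full colour.
bw = [Image.open(p).convert("1") for p in pages]

# Bundle all pages into one PDF.
bw[0].save("ebook.pdf", save_all=True, append_images=bw[1:])
```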

How to optimize images for SEO & Google's Pagespeed & Improve web-saving

With pretty much every PageSpeed test I do for my websites, I get the comment "Optimize images by lossless compressing image X", and fixing it often increases my page rank a lot.
I already save EVERY image with 'Save for Web' in Photoshop, but I was wondering how I could "optimize images by compressing lossless" even more. As far as I know, I'm already doing everything I can.
Really wondering..
Off-topic, but I noticed that Google's PageSpeed uses a Retina device to check, since all my Retina images got loaded instead of the regular ones. Since these are larger than the display area, I got a 1/100 score on the mobile segment. Haha.
This was a real issue with many of my sites; however, I use the free version of Kraken to losslessly compress all of my images, and this passes the Google test, thus boosting rankings!
https://kraken.io/web-interface
I must have used this for well over 10,000 images already!
The images you create in programs like Photoshop and Illustrator look amazing, but the file sizes are often very large. This is because the images are kept in a format that makes them easier to manipulate in different ways. If you put these files on your website, it would be very slow to load. Optimizing your images for the web means saving or exporting them in a web-friendly format, depending on what the image contains.
How does it work?
There are two forms of compression we need to understand: lossy and lossless.
Images saved in a lossy format will look slightly different from the original image when uncompressed, but the difference is usually only visible on very close inspection. Lossy compression is good for the web because the images take up little space while remaining close enough to the original.
Images saved in a lossless format retain all the information needed to reproduce the original image exactly. For this reason, these images carry a lot more data and in return have a much larger file size.
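To make the difference concrete, here is a small sketch (assuming Pillow; the input file name is hypothetical) that saves the same picture both ways and compares the sizes:

```python
import os
from PIL import Image

img = Image.open("photo.png").convert("RGB")

# Lossless: PNG keeps every pixel exactly, at the cost of size.
img.save("out_lossless.png", optimize=True)

# Lossy: JPEG discards detail the eye is unlikely to notice.
img.save("out_lossy.jpg", quality=80, optimize=True)

for name in ("out_lossless.png", "out_lossy.jpg"):
    print(name, os.path.getsize(name) // 1024, "KB")
```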
We can also optimize images for the web by saving them at the appropriate dimensions. Resizing the image on the webpage itself using CSS may seem helpful, but the browser will still download the entire original file, then resize it and display it.
Can you imagine taking a poster-sized image and using it as a thumbnail? The little 20px by 20px image would take as long to load as the original poster, when we could have been loading a 20px image the whole time.
How to Optimize Images?
In simple terms, optimizing an image means removing the unnecessary data saved within it to reduce the file size, based on where the image is used on your website. Optimizing images for the web can reduce your total page load size by up to 80%.
Full optimization of images can be quite an art to perfect as there are such a wide variety of images you might be dealing with. Here are the most common ways to optimize your images for the web.
Reduce the white space around images – some developers use whitespace for padding which is a big no-no. Crop your images to remove any whitespace around the image and use CSS to provide padding.
Use proper file formats. If you have icons, bullets, or any graphics that don't have too many colors, use a format such as GIF and save the file with a lower number of colors. If you have more detailed graphics, use the JPG file format and reduce the quality.
Save your images in the proper dimensions. If you are having to use HTML or CSS to resize your images, stop right there. Save the image in the desired size to reduce the file size.
To resize your images you will have to use some form of program. For basic compression, you can use a simple editing program such as GIMP. For more advanced optimization you will have to save specific files in Photoshop, Illustrator, or Fireworks.
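As a sketch of the "save at the proper dimensions" advice (again assuming Pillow; the file names and widths are just illustrative), you can pre-generate the sizes the page actually uses instead of letting the browser downscale a full-size original:

```python
from PIL import Image

img = Image.open("hero_original.jpg")

# Pre-generate each display size used on the page, rather than
# shipping one oversized file and resizing it in CSS.
for width in (1200, 600, 300):
    height = round(img.height * width / img.width)
    resized = img.resize((width, height), Image.LANCZOS)
    resized.save(f"hero_{width}w.jpg", quality=80, optimize=True)
```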

Are there any benefits to using bitmaps?

I'm porting some CF 2.0 VB.NET apps to a newer version of a handset that has twice the screen resolution, so I have to double the dimensions of everything, otherwise it all gets squished up into the top left-hand corner of the screen.
One screen had a bitmap which was 250 KB in size, and after I doubled the dimensions it naturally blew out to 1 MB. This isn't great on a handheld, so I fired up IrfanView and converted it to a .GIF. The .GIF was only 60 KB in size, with no discernible change in the quality of the image.
To me it seems a no-brainer: convert all bitmaps to GIF (or JPG) and get the same results for a fraction of the disk space (and probably quicker form-loading times).
But does anyone know of a situation where you would use a bitmap in preference to a GIF/JPEG? I cannot find any.
I really can't think of any realistic example where you would prefer a bitmap to a GIF. GIF's compression is lossless, so you lose no pixel information when storing images (though note that GIF is limited to a 256-colour palette, so converting a full-colour bitmap can still discard colour detail). After reading the file in your app, you will have the same image data as if you had read a bitmap. And, like you said, the file will be smaller and thus will probably be read from disk faster.
JPEG is different because it's a lossy format, meaning you will lose information when storing images in it. You will need to decide if the loss of information is meaningful in your app.
Bitmaps would be preferable if and only if reading files from disk were faster than decompressing the file in memory.
And to be precise, you would prefer bitmaps when storing images in main memory, so you can work easily on the data in your code; that is actually what you most likely already have once you have loaded a file using an image library.
To cut a long story short, a BMP is stored as a series of pixels along with their colour. This is useful if you want to do such things as pattern recognition, movement detection, and the like.
Bitmaps are typically used for their convenience - you can knock one up in paint without having specialist graphics software.
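As a quick sketch of the conversion described above (assuming Pillow; the file names are hypothetical), you can do the same BMP-to-GIF conversion in code and compare sizes:

```python
import os
from PIL import Image

src = "screen_background.bmp"

# GIF is palette-based (at most 256 colours), so Pillow quantizes
# the bitmap during conversion; the compression itself is lossless.
Image.open(src).save("screen_background.gif")

for name in (src, "screen_background.gif"):
    print(name, os.path.getsize(name) // 1024, "KB")
```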

Quality for images in LaTeX documents

What are some of the points that I need to follow if I want to have good-quality images in a LaTeX document? These images are mostly screenshots of a software application or flow charts.
Below are two such images.
Flow Chart
Screenshot
Thanx
For diagrams, the rule is to use vector formats as much as you can: PDF, EPS, or native LaTeX packages. With vector graphics the picture does not lose resolution and can be scaled freely. For a flow chart, I would either export it from the drawing application as a PDF or use PGF/TikZ to produce it from LaTeX (see also the examples). If your drawing application does not have PDF export, consider using one that does, e.g., UMLet.
If you can't use vector graphics (e.g., because it is a screenshot), make sure you use high-enough resolution to begin with. If it is an academic paper, the publisher usually has guidelines for this.
If you use pdflatex you can use PNG images, and in those cases you definitely should use PNG over JPEG. PNG compression is not lossy, so you get the best quality at the expense of file size.
The second important point is to create the images with sufficient resolution: for printing it should be about 300-600 dpi; higher is better, but the file size of the images and of the resulting document will increase. For documents that will only be viewed on a screen you can use a lower resolution; about 72-100 dpi should be enough.
For diagrams you should create vector graphics (eps or pdf) if possible, that way you do not lose any quality.
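A minimal sketch of how both kinds of images end up in the document (the file names are hypothetical):

```latex
\documentclass{article}
\usepackage{graphicx}

\begin{document}

% Vector diagram: scales freely with no loss of quality.
\begin{figure}
  \centering
  \includegraphics[width=0.8\textwidth]{flowchart.pdf}
  \caption{Flow chart exported as PDF (vector).}
\end{figure}

% Screenshot: a raster PNG captured at high resolution.
\begin{figure}
  \centering
  \includegraphics[width=0.8\textwidth]{screenshot.png}
  \caption{Screenshot included as PNG (lossless raster).}
\end{figure}

\end{document}
```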
For screenshots, there is not much to do, but for flow charts, I'd suggest to create them in PDF format (vectorized) and to compile your LaTeX source with pdflatex.
For the flow chart I'd suggest TikZ; then your chart is typeset directly in TeX. Here's an example: http://www.texample.net/tikz/examples/simple-flow-chart/
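In that spirit, a minimal TikZ sketch (the node names and styles are just illustrative):

```latex
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{shapes.geometric, arrows.meta, positioning}

\begin{document}
\begin{tikzpicture}[
    step/.style={rectangle, rounded corners, draw, minimum width=2.4cm},
    choice/.style={diamond, aspect=2, draw},
    flow/.style={-Latex}]
  % Nodes of a toy flow chart.
  \node[step] (start) {Start};
  \node[choice, below=1cm of start] (ok) {OK?};
  \node[step, below=1cm of ok] (done) {Done};
  \node[step, right=1.5cm of ok] (fix) {Fix it};
  % Edges, typeset directly in TeX.
  \draw[flow] (start) -- (ok);
  \draw[flow] (ok) -- node[right] {yes} (done);
  \draw[flow] (ok) -- node[above] {no} (fix);
  \draw[flow] (fix) |- (start);
\end{tikzpicture}
\end{document}
```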
Screenshots are pretty much a lost cause. I've had a good experience saving them as PDF and then embedding them, but you want to make sure you're on a high-res capture to begin with.
Charts are very easy. Most graphics programs (e.g., Visio, OmniGraffle) will let you save them as EPS or PDF, and scaling works fairly well.

Self-describing file format for gigapixel images?

In medical imaging, there appear to be two ways of storing huge gigapixel images:
Use lots of JPEG images (either packed into files or individually) and cook up some bizarre index format to describe what goes where. Tack on some metadata in some other format.
Use TIFF's tile and multi-image support to cleanly store the images as a single file, and provide downsampled versions for zooming speed. Then abuse various TIFF tags to store metadata in non-standard ways. Also, store tiles with overlapping boundaries that must be individually translated later.
In both cases, the reader must understand the format well enough to understand how to draw things and read the metadata.
Is there a better way to store these images? Is TIFF (or BigTIFF) still the right format for this? Does XMP solve the problem of metadata?
The main issues are:
Storing images in a way that allows for rapid random access (tiling)
Storing downsampled images for rapid zooming (pyramid)
Handling cases where tiles are overlapping or sparse (scanners often work by moving a camera over a slide in 2D and capturing only where there is something to image)
Storing important metadata, including associated images like a slide's label and thumbnail
Support for lossy storage
What kind of (hopefully non-proprietary) formats do people use to store large aerial photographs or maps? These images have similar properties.
It seems like starting with TIFF or BigTIFF and defining a useful subset of tags + XMP metadata might be the way to go. FITS is no good since it is basically for lossless data and doesn't have a very appropriate metadata mechanism.
The problem with TIFF is that it just allows too much flexibility, but a subset of TIFF should be acceptable.
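As a rough illustration of the tiled-pyramid idea (a sketch assuming pyvips is installed; the file names are hypothetical):

```python
import pyvips

img = pyvips.Image.new_from_file("gigapixel_scan.png")

# Write a tiled, pyramidal TIFF: 256x256 tiles for rapid random
# access, progressively downsampled levels for fast zooming, and
# JPEG compression inside the tiles for lossy storage.
img.tiffsave(
    "scan_pyramid.tif",
    tile=True,
    tile_width=256,
    tile_height=256,
    pyramid=True,
    compression="jpeg",
    Q=85,
    bigtiff=True,  # BigTIFF for outputs over 4 GB
)
```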
The solution may very well be http://ome-xml.org/ and http://ome-xml.org/wiki/OmeTiff.
It looks like DICOM now has support:
ftp://medical.nema.org/MEDICAL/Dicom/Final/sup145_ft.pdf
You probably want FITS.
Arbitrary size
1-3 dimensional data
Extensive header
Widely used in astronomy and endorsed by NASA and the IAU
I'm a pathologist (and hobbyist programmer), so virtual slides and digital pathology are a huge interest of mine. You may be interested in the OpenSlide project. They have characterized a number of the proprietary formats from the large vendors (Aperio, BioImagene, etc.). Most seem to consist of pyramidally zoomed images (scanned at different microscope objectives, of course): large TIFF files containing multiple tiled TIFFs or compressed (JPEG or JPEG2000) images.
The industry standard is DICOM Sup 145; adoption by vendors has been sluggish, but inventing yet another format would probably not be helpful.
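For reading the vendor formats, a small sketch with the OpenSlide Python bindings (the file name and coordinates are hypothetical):

```python
import openslide

slide = openslide.OpenSlide("biopsy.svs")

# The pyramid: each level is a progressively downsampled image.
print(slide.level_count, slide.level_dimensions, slide.level_downsamples)

# Random access: read a 512x512 region at level 0; the location
# is given in level-0 pixels. Returns an RGBA PIL image.
region = slide.read_region((20000, 14000), 0, (512, 512))

# Vendor metadata and associated images (e.g. the slide label).
print(dict(slide.properties).get("openslide.vendor"))
label = slide.associated_images.get("label")

slide.close()
```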
PNG might work for you. It can handle large images, metadata, and the PNG format can have some interlacing, so you can get up to (down to?) an n/8 x n/8 downsampled image pretty easily.
I'm not sure if PNG can do rapid random access. It is chunked, but that might not be enough.
You could represent sparse data with the transparency channel.
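A toy sketch of that sparse-data idea with Pillow (the tile files and positions are made up): paste the captured tiles onto a transparent canvas and let the alpha channel mark the regions the scanner skipped:

```python
from PIL import Image

# Transparent canvas; alpha 0 marks un-scanned regions.
canvas = Image.new("RGBA", (4096, 4096), (0, 0, 0, 0))

# Hypothetical captured tiles with their slide positions.
tiles = [("tile_a.png", (0, 0)), ("tile_b.png", (1024, 256))]

for path, (x, y) in tiles:
    tile = Image.open(path).convert("RGBA")
    canvas.paste(tile, (x, y))

canvas.save("sparse_mosaic.png")
```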
JPEG2000 might be worth a look; there have been some interesting efforts from national libraries in this space.
