I have successfully been using CGImageSource to read an image in (mostly JPEG) and CGImageDestination to write it back out. It works, but my image comes out a lot smaller than it was before (a 3.9 MB image becomes a 2.1 MB image).
I have been playing around with kCGImageDestinationLossyCompressionQuality and while it does affect the size of the file, I don't understand the scale it uses.
E.g. that same 3.9 MB file will change its size to:
1.9 MB with a compression quality of 0.7
2.4 MB with a compression quality of 0.8
3.0 MB with a compression quality of 0.9
7.4 MB with a compression quality of 1.0
I tried everything (bisecting the quality value down to six decimal places) to find the sweet spot that gets back to that magic 3.9 MB, but it jumps from 3.3 MB to 7.4 MB with seemingly no way to make it land in between those two numbers.
Is there any other Objective-C library I can use to modify EXIF data that leaves the compression (and thus the file size) alone?
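For reference, the round trip is essentially this (a trimmed-down sketch of the approach, not my exact code):

#include <stdbool.h>
#include <ImageIO/ImageIO.h>

// Read an image and write it back out as JPEG with an explicit quality.
// Whatever quality is passed, CGImageDestinationAddImage re-encodes the
// pixels, so the output is never the original byte stream.
static bool rewriteJPEG(CFURLRef srcURL, CFURLRef dstURL, float quality)
{
    CGImageSourceRef source = CGImageSourceCreateWithURL(srcURL, NULL);
    if (!source) return false;
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);

    CFNumberRef q = CFNumberCreate(NULL, kCFNumberFloatType, &quality);
    const void *keys[]   = { kCGImageDestinationLossyCompressionQuality };
    const void *values[] = { q };
    CFDictionaryRef options = CFDictionaryCreate(NULL, keys, values, 1,
        &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);

    CGImageDestinationRef dest = CGImageDestinationCreateWithURL(
        dstURL, CFSTR("public.jpeg") /* kUTTypeJPEG */, 1, NULL);
    CGImageDestinationAddImage(dest, image, options);
    bool ok = CGImageDestinationFinalize(dest);

    CFRelease(dest); CFRelease(options); CFRelease(q);
    CGImageRelease(image); CFRelease(source);
    return ok;
}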
Using CGImageDestination you are creating a new image, and it is very unlikely that the size of the newly created image will match the original (it depends on the compression level).
If you just need to modify the metadata in the original picture, without changing the image data, you should use another library. I know two that can be used in a Cocoa app:
libexif: a C library. Supports writing JPEGs. LGPL license.
exiv2: C++. Supports writing to a lot of formats, but it needs a paid license if you are using it in a commercial app.
Another option is to use exiftool. It's a Perl script that has become the de facto standard for changing metadata information. You could include it in the Resources folder of your app and invoke it using NSTask to change the metadata of the pictures. Quite easy to do, and by far the best tool of the three. (Mac only; not sure if you're targeting iPhone or Mac.)
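For example, an invocation along these lines (untested; check the exiftool documentation for the exact tags you need, -Artist here is just an example) rewrites only the metadata segments and leaves the compressed image data untouched, so the file size is essentially unchanged:

exiftool -Artist="Jane Doe" -overwrite_original photo.jpg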
I use Ghostscript to convert text to outlines with the following command:

gswin32c.exe -sDEVICE=pdfwrite -sOutputFile=output.pdf -dQUIET -dNOPAUSE -dBATCH -dNoOutputFonts -f test_new.pdf

It works, but I get a very small output file (it goes from 2.5 MB to 70 KB), and the pictures in the PDF come out blurred.
Adding -dPDFSETTINGS=/default has the same result.
It's better with -dPDFSETTINGS=/printer or -dPDFSETTINGS=/prepress, but 300 dpi is not enough for me (or for my boss).
Is there any way to keep the original resolution of the pictures? Or how can I set a higher dpi for images in the output PDF?
The test file is here.
Thanks in advance.
The answer to your question is 'yes' (but see later). Don't use PDFSETTINGS; that sets lots of things all in one go. If you want control then you need to specify each setting individually.
Rather than use this shotgun approach you need to read the documentation, decide which controls affect areas you want to change, and alter those controls only.
However, image downsampling is not your problem. If you don't use -dPDFSETTINGS then the PDF file written by Ghostscript contains an image at exactly the same resolution as the image in the original file.
Your problem is that the image is being written with JPEG compression, and JPEG is a lossy compression, so you are losing fidelity. Note that in the original file the image is written uncompressed, which is why it's so large.
It looks like the original image was a JPEG, and the free PDF editor you are using realised that, so it saved the image uncompressed (I may be giving it too much credit here; it may save all images uncompressed). Applying JPEG to an image which has already been quantised simply amplifies the artefacts.
Instead you need to specify that you want images compressed with Flate, which is a lossless compression. The documentation for the pdfwrite controls can be found here; you need to change AutoFilterColorImages and ColorImageFilter.
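In concrete terms that means something like this (untested here, and there are matching Gray controls if you also have greyscale images):

gswin32c.exe -sDEVICE=pdfwrite -sOutputFile=output.pdf -dNOPAUSE -dBATCH -dNoOutputFonts -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -f test_new.pdf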
Note that by not applying JPEG quantisation (a second time) and DCT encoding, the compression is less than in your first experiment. For me the output file comes in at just over 600 KB (leaving the font in place, and the text as text, would make it a couple of KB smaller). However, the image is identical, as expected.
Since you are clearly using Ghostscript in a commercial environment, can I just point you at the licence and ask you to check that your usage is compatible with the AGPL, bearing in mind that this covers software as a service usage as well.
As you know, a computer stores images as channels, and pixels in those channels. Pixel values are bit patterns like "00110101" which fill 8 bits of memory. I want to know where exactly those bits are stored in memory, and how I can perform operations on them.
Thanks!
Well, the standard book is Digital Image Processing by Gonzalez and Woods.
Another book, for which you can pick up the PDF for free, is Image Processing in C by Dwayne Phillips - PDF here.
First, you need to get a decent C compiler and development system - personally I use Mac OSX, but I guess you would want Visual Studio free edition on Windows.
Then you need to get started with some simple reading and writing of files and memory allocation. I would go with greyscale images of the NetPBM format - probably just PGM files - described here as they are the easiest. You can download the NetPBM programs and run them in a Windows Command Prompt and see how they work and try and implement them yourself in C. You can also download ImageMagick for Windows and try converting images from colour to greyscale and resizing them like this:
convert input.png -colorspace gray result.jpg
convert input.tif -resize 400x400 result.pgm
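When you get to implementing the formats yourself, the PGM header is only a few lines of C to parse. A minimal sketch (it assumes a well-formed binary P5 file with maxval <= 255 and no '#' comment lines in the header):

#include <stdio.h>
#include <stdlib.h>

/* Read a binary PGM (P5) into a malloc'd buffer of width*height bytes,
   one byte per grey pixel. Returns NULL on any failure. */
unsigned char *read_pgm(const char *path, int *width, int *height)
{
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    int maxval;
    if (fscanf(f, "P5 %d %d %d", width, height, &maxval) != 3 || maxval > 255) {
        fclose(f);
        return NULL;
    }
    fgetc(f);  /* consume the single whitespace byte after maxval */

    size_t n = (size_t)*width * (size_t)*height;
    unsigned char *pixels = malloc(n);
    if (pixels && fread(pixels, 1, n, f) != n) {
        free(pixels);
        pixels = NULL;
    }
    fclose(f);
    return pixels;
}

Once it's in memory, the operations are just byte arithmetic - inverting the image, for example, is pixels[i] = 255 - pixels[i] over the whole buffer.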
When you have got that, I would move on to colour PPM format and then maybe PNG and/or JPEG. Remember there are libraries for TIF/JPEG/PNG/BMP so don't be afraid to use them.
Finally, move on to displaying images yourself with Windows GDI etc.
Come back to StackOverflow if you get stuck - questions are free!
tl;dr: wildly different with different encodings/filesystems/OSes/drivers.
Well, that depends on the image format. BMP is one of the easier formats; details of what these files look like can be found, for instance, on Wikipedia.
And to answer "where it's stored": it is stored on permanent storage (hard drive/SSD); where exactly depends on the filesystem (FAT/NTFS/ext, etc.).
When an image is to be displayed, it's read into memory, where it can be manipulated, and through some APIs this data can be put into a memory region specifically meant for displaying the current images on your screen.
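To make that concrete: once such a file is read into memory, its layout is exactly what the format specifies. For BMP, the headers at the start of the file map directly onto a packed C struct (a sketch; the field names are mine, the layout is the standard BITMAPFILEHEADER + BITMAPINFOHEADER):

#include <stdint.h>

/* The first 54 bytes of a typical BMP file (packed, little-endian).
   The pixel rows start at 'offset', usually stored bottom-up and
   padded to 4-byte boundaries. */
#pragma pack(push, 1)
typedef struct {
    uint16_t magic;            /* "BM" */
    uint32_t fileSize;         /* whole file, in bytes */
    uint16_t reserved1, reserved2;
    uint32_t offset;           /* where the pixel data starts */
    uint32_t headerSize;       /* 40 for BITMAPINFOHEADER */
    int32_t  width, height;
    uint16_t planes;           /* always 1 */
    uint16_t bitsPerPixel;     /* e.g. 24 = one byte each for B, G, R */
    uint32_t compression;      /* 0 = uncompressed */
    uint32_t imageSize;
    int32_t  xPixelsPerMeter, yPixelsPerMeter;
    uint32_t colorsUsed, colorsImportant;
} BMPHeader;
#pragma pack(pop)

So in a 24-bit uncompressed BMP, the bytes at offset + 3*(y*width + x) (plus any row padding) are the blue, green and red channels of pixel (x, y), counted from the bottom row.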
I am looking for a lightweight image processing tool that will resize images to JPEG with YCbCr 4:4:4, that is, no chroma subsampling. I am using this to generate square thumbnails.
I need 4:4:4 because I am not sure about the quality of 4:2:2 or 4:1:1; they will have a greater amount of artifacts, which will affect the quality of my thumbnails.
It will be run from a web server (ASP.Net MVC 3). Command-line tool, standalone application and libraries are all acceptable since it will run in separate processes anyway.
Anything out there except ImageMagick? I think it is too bulky.
Thanks a lot for answering.
Windows Imaging Component allows you to choose your own JPEG chroma subsampling option. See Encoder Options: JPEG Codec-specific options: JpegYCrCbSubsampling.
This is a very heavy-handed approach with a fairly steep learning curve, so you may want to look for other options before delving into WIC.
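If you do decide to go the WIC route anyway, a bare-bones re-encode in C looks roughly like this (a sketch: every hr should really be checked, and the Release calls are omitted):

#define COBJMACROS
#include <windows.h>
#include <wincodec.h>

#pragma comment(lib, "ole32.lib")
#pragma comment(lib, "oleaut32.lib")
#pragma comment(lib, "uuid.lib")

/* Re-encode in.jpg as out.jpg with 4:4:4 chroma (no subsampling). */
int main(void)
{
    CoInitialize(NULL);

    IWICImagingFactory *factory = NULL;
    CoCreateInstance(&CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER,
                     &IID_IWICImagingFactory, (void **)&factory);

    /* Decode the source frame. */
    IWICBitmapDecoder *decoder = NULL;
    IWICBitmapFrameDecode *src = NULL;
    IWICImagingFactory_CreateDecoderFromFilename(factory, L"in.jpg", NULL,
        GENERIC_READ, WICDecodeMetadataCacheOnDemand, &decoder);
    IWICBitmapDecoder_GetFrame(decoder, 0, &src);

    /* Set up a JPEG encoder over a file stream. */
    IWICStream *stream = NULL;
    IWICImagingFactory_CreateStream(factory, &stream);
    IWICStream_InitializeFromFilename(stream, L"out.jpg", GENERIC_WRITE);

    IWICBitmapEncoder *encoder = NULL;
    IWICImagingFactory_CreateEncoder(factory, &GUID_ContainerFormatJpeg,
                                     NULL, &encoder);
    IWICBitmapEncoder_Initialize(encoder, (IStream *)stream,
                                 WICBitmapEncoderNoCache);

    IWICBitmapFrameEncode *frame = NULL;
    IPropertyBag2 *props = NULL;
    IWICBitmapEncoder_CreateNewFrame(encoder, &frame, &props);

    /* The codec-specific option named above: force 4:4:4. */
    PROPBAG2 option = {0};
    option.pstrName = (LPOLESTR)L"JpegYCrCbSubsampling";
    VARIANT value;
    VariantInit(&value);
    value.vt = VT_UI1;
    value.bVal = WICJpegYCrCbSubsampling444;
    IPropertyBag2_Write(props, 1, &option, &value);

    IWICBitmapFrameEncode_Initialize(frame, props);
    IWICBitmapFrameEncode_WriteSource(frame, (IWICBitmapSource *)src, NULL);
    IWICBitmapFrameEncode_Commit(frame);
    IWICBitmapEncoder_Commit(encoder);

    CoUninitialize();
    return 0;
}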
IrfanView has an option to remove color subsampling: in the JPEG save dialog, check "Disable color subsampling". This should store the JPEG with all channels at full resolution.
However, looking at its command-line options manual, I don't find any mention of this setting, so it may not be possible to automate it or use it programmatically.
I am a bit confused about what the best approach is to resize a JPEG file on disk and save the resized JPEG as a new file to disk (on Mac OS X with Cocoa). There are a number of threads about resizing, but I am wondering what approach to use. Do I need to use Core Graphics for this or is this framework "too much" for a simple operation as a resize? Any pointers are welcome as I am a bit lost.
Core Graphics isn't “too much”; it's the right way to do it.
There is a Cocoa solution:
Create an image of the desired size (the destination image).
Lock focus on it.
Draw the source image into it.
Unlock focus on it.
Export it to desired file format.
Write that data somewhere.
But that destroys metadata.
The Core Graphics solution is not a whole lot different:
Use an image source to load the image and its metadata.
Create a bitmap context of the desired size with the source image's color space. (The hard part here is making sure that the destination context matches the source image as closely as possible while still being in one of the supported pixel formats.)
Draw the source image into it.
Capture the contents of the context.
Use an image destination to write the image and metadata to a file.
And the Core Graphics solution ensures that as little information as possible is lost along the way. (You may want to adjust the DPI metadata, if present.)
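A rough sketch of those steps as plain C against ImageIO/Core Graphics (minimal error handling, and it assumes an RGB source image):

#include <stdbool.h>
#include <CoreGraphics/CoreGraphics.h>
#include <ImageIO/ImageIO.h>

static bool resizeWithMetadata(CFURLRef srcURL, CFURLRef dstURL,
                               size_t newWidth, size_t newHeight)
{
    /* Image source: load the image and its metadata. */
    CGImageSourceRef source = CGImageSourceCreateWithURL(srcURL, NULL);
    if (!source) return false;
    CGImageRef image = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFDictionaryRef metadata = CGImageSourceCopyPropertiesAtIndex(source, 0, NULL);

    /* Bitmap context of the desired size, reusing the source color space. */
    CGContextRef ctx = CGBitmapContextCreate(NULL, newWidth, newHeight, 8, 0,
        CGImageGetColorSpace(image), kCGImageAlphaNoneSkipLast);

    /* Draw the source image into it, scaled to fill. */
    CGContextSetInterpolationQuality(ctx, kCGInterpolationHigh);
    CGContextDrawImage(ctx, CGRectMake(0, 0, newWidth, newHeight), image);

    /* Capture the contents of the context. */
    CGImageRef resized = CGBitmapContextCreateImage(ctx);

    /* Image destination: write image and metadata ("public.jpeg" = JPEG). */
    CGImageDestinationRef dest = CGImageDestinationCreateWithURL(
        dstURL, CFSTR("public.jpeg"), 1, NULL);
    CGImageDestinationAddImage(dest, resized, metadata);
    bool ok = CGImageDestinationFinalize(dest);

    CFRelease(dest); CGImageRelease(resized); CGContextRelease(ctx);
    if (metadata) CFRelease(metadata);
    CGImageRelease(image); CFRelease(source);
    return ok;
}

Passing the source properties dictionary as the destination options is what carries the EXIF data across; it is also where you would adjust the DPI keys if you want them to reflect the new pixel size.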
Install and use ImageMagick.
I'm trying to load some images using Bitmap.getBitmapResource(), but it takes about 2 or 3 seconds per image to load. I'm testing on the Storm, specifically. The odd thing is, when I install OS 5.0, the loading goes in a snap, no delay at all.
Should I be looking at the format used? Or where the files are stored? I've tried both 24- and 8-bit PNGs, with transparency. The files are stored in a subdirectory in the COD, so getBitmapResource is passed a path, like "images/img1.png" instead of just "img1.png".
Is any of this making things slower?
If you're looking for the most efficient format for storing image data within your application binary, the recommendation is PNG with the 565 colorspace. The BlackBerry Theme Studio toolkit has the ability to load any PNG and export it in this format. It's the best one because it's what the BlackBerry uses internally.
Try using EncodedImage; see Is it better to use Bitmap or EncodedImage in BlackBerry?
If you need the Bitmap class, also try BMP (don't forget to turn off the "convert image files to .png" option in the BB project settings).