I am looking for a lightweight image processing tool that will resize images and save them as JPEG with YCbCr 4:4:4, that is, no chroma subsampling. I am using this to generate square thumbnails.
I need 4:4:4 because I am not sure about the quality of 4:2:2 or 4:1:1; they introduce more artifacts, which would affect the quality of my thumbnails.
It will be run from a web server (ASP.NET MVC 3). Command-line tools, standalone applications and libraries are all acceptable, since it will run in a separate process anyway.
Anything out there except ImageMagick? I think it is too bulky.
Thanks a lot for answering.
Windows Imaging Component allows you to choose the JPEG chroma subsampling option yourself. See Encoder Options: JPEG Codec specific options: JpegYCrCbSubsampling.
This is a rather heavy-handed approach with a fairly steep learning curve, so you may want to look for other options before delving into WIC.
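For reference, here is a minimal C++ sketch of setting that option (error handling omitted; pPropertyBag and pFrame are assumed to come from the usual IWICBitmapEncoder::CreateNewFrame call):

// Force 4:4:4 chroma subsampling on a WIC JPEG encoder frame.
// pPropertyBag is the IPropertyBag2* filled in by IWICBitmapEncoder::CreateNewFrame.
PROPBAG2 option = { 0 };
option.pstrName = const_cast<LPOLESTR>(L"JpegYCrCbSubsampling");
VARIANT value;
VariantInit(&value);
value.vt = VT_UI1;
value.bVal = WICJpegYCrCbSubsampling444;  // no chroma subsampling
HRESULT hr = pPropertyBag->Write(1, &option, &value);
// ...then pFrame->Initialize(pPropertyBag), write the pixels, and Commit() as usual.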
IrfanView has an option to disable colour subsampling: in the JPEG save dialog, check "Disable color subsampling". This stores the JPEG with all channels at full resolution.
However, its command-line manual makes no mention of this option, so it may not be possible to automate it or use it programmatically.
I use Ghostscript to convert text to outlines with the following command:
gswin32c.exe -sDEVICE=pdfwrite -sOutputFile=output.pdf -dQUIET -dNOPAUSE -dBATCH -dNoOutputFonts -f test_new.pdf
It works, but I get a very small output file (from 2.5 MB down to 70 KB), and I find that the images in the PDF have become blurred.
Adding -dPDFSETTINGS=/default gives the same result.
Using -dPDFSETTINGS=/printer or -dPDFSETTINGS=/prepress is better, but 300 dpi is not enough for me (or for my boss).
Is there any way to keep the original resolution of the images? Or how can I set a higher dpi for the images in the output PDF?
The test file is here.
Thanks in advance.
The answer to your question is 'yes' (but see later). Don't use PDFSETTINGS; that sets lots of things all in one go. If you want control then you need to specify each setting individually.
Rather than use this shotgun approach you need to read the documentation, decide which controls affect areas you want to change, and alter those controls only.
However, image downsampling is not your problem. If you don't use -dPDFSETTINGS, the PDF file written by Ghostscript contains the image at exactly the same resolution as the image in the original file.
Your problem is that the image is being written with JPEG compression, and JPEG is a lossy compression, so you are losing fidelity. Note that in the original file the image is stored uncompressed, which is why it's so large.
It looks like the original image was a JPEG, and the free PDF editor you are using realised that, so it saved the image uncompressed (I may be giving it too much credit here; it may save all images uncompressed). Applying JPEG compression to an image which has already been quantised simply amplifies the artefacts.
Instead you need to specify that you want images compressed with Flate, which is a lossless compression. The documentation for the pdfwrite controls can be found here; you need to change AutoFilterColorImages and ColorImageFilter.
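With the command line from the question, that would look something like this (a sketch; your other options are unchanged):
gswin32c.exe -sDEVICE=pdfwrite -sOutputFile=output.pdf -dQUIET -dNOPAUSE -dBATCH -dNoOutputFonts -dAutoFilterColorImages=false -dColorImageFilter=/FlateEncode -f test_new.pdf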
Note that by not applying JPEG quantisation (a second time) and DCT encoding, the compression is lower than in your first experiment. For me the output file comes in at just over 600 KB (leaving the font in place, and the text as text, would make it a couple of KB smaller). However, the image is identical, as expected.
Since you are clearly using Ghostscript in a commercial environment, may I just point you at the licence and ask you to check that your usage is compatible with the AGPL, bearing in mind that this covers software-as-a-service usage as well.
Whenever I run a PageSpeed test, it suggests possible image optimizations in bytes and percentages, like:
Compressing and resizing https://example.com/…ts/AMP.jpg?6750368613317441460 could save 530KiB (91% reduction).
Compressing https://example.com/…AMP.png?12287830358450898504 could save 4.4KiB (31% reduction).
I am using ImageMagick to compress the images.
I have tried convert AMP.gif_or_png -strip [-alpha Remove] OUTPUT.png for PNG images and
convert INPUT.jpg -sampling-factor 4:2:0 -strip [-quality 85] [-interlace JPEG] [-colorspace RGB] OUTPUT.jpg
for JPEG images, but none of the above commands gives me the reduction suggested by Google PageSpeed.
So let me know if I am missing any parameters or have passed the wrong values.
A pack of compressed contents is available on the Google PageSpeed page, but I want to compress the images myself, using ImageMagick or some other tool.
If you are looking for a commercial tool, JPEGmini can be used. You can also use imagemin if you are going to use the Grunt task runner, or the command-line tools it wraps, such as jpegtran and optipng, which are open source as well.
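Typical invocations of those two look like this (illustrative; tune the options to your content):
jpegtran -copy none -optimize -progressive input.jpg > output.jpg
optipng -o5 input.png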
It may be that no tool is available to do your task dynamically, and you may need to do some calculations yourself. If you are working in a programming language, there are many built-in classes and libraries available for compressing images - in Java, for example, imgscalr, Thumbnailator or ImageWriteParam - or you can go with MATLAB as well.
Compressing and resizing https://example.com/…ts/AMP.jpg?6750368613317441460 could save 530KiB (91% reduction).
530 KiB is quite a large reduction. Verify that the image dimensions match how the image is displayed: if you have a 400x200 image and you show it at 200x100, then serving it at the displayed resolution (or resolutions) could be what PageSpeed is after.
For PNG images, colour reduction is often possible: if you have a 12-colour image (e.g. a schematic), storing it in 24-bit, 8-bit or 4-bit format makes a significant difference, while changing nothing in what people see once it's displayed. Good call removing the unneeded alpha channel, though.
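With ImageMagick, for instance, you could force a small palette like this (illustrative; use the colour count your image actually needs):
convert input.png -colors 12 PNG8:output.png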
The rest you can do with tools like optipng, pngcrush or advpng. The JPEG quality parameter, or suitable tools (tinyjpg, or google 'JPEG compression optimizer'), can be used to improve JPEG size. Some tools are capable of selectively encoding different areas of the image, or of rewriting a PNG palette to better leverage zlib's compression features.
Another possibility with JPEG is the progressive format, which allows a rough image to be displayed quickly and then refined iteratively. It uses more bandwidth overall but gives more apparent speed (browser support is also weaker; check it out).
It is not a given that any of this can be done with ImageMagick - after all, ImageMagick's job is image manipulation, not file-size optimization. It may well be that its file-compression functions are not as complete or as advanced as those of dedicated tools.
You can also download a compression pack from that page, with your images and code optimized to their liking... it is pretty much the best image compression available.
I would recommend thumbor.org. It's an open-source imaging service which you can simply start as a Docker container on Amazon Elastic Beanstalk. It has some pretty nice features, like smart cropping and face detection.
To start it, just create an Elastic Beanstalk environment with Docker as the predefined configuration. Then provide a JSON file with the following content in the application version tab.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "apsl/thumbor"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ]
}
You can then configure thumbor with Elastic Beanstalk environment variables. To optimize JPEGs you should add the jpegtran optimizer.
OPTIMIZERS=['thumbor.optimizers.jpegtran']
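Once it is running, you request resized images through thumbor's URL syntax, something like this (the hostname is illustrative, and this assumes the default "unsafe" mode rather than signed URLs):
http://your-env.elasticbeanstalk.com/unsafe/300x300/smart/example.com/photo.jpg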
We use it at Storyblok.com to optimize the images and Google Pagespeed is happy with the result: https://www.storyblok.com/docs/Guides/how-to-resize-images
As you know, a computer stores images as channels, with pixels in those channels. Pixel values are like "00110101", which fills 8 bits of memory. I want to know where those bits are truly stored in memory, and how I can perform operations on them.
Thanks!
Well, the standard book is Digital Image Processing by Gonzalez and Woods.
Another book, where you can pick up the PDF for free, is Image Processing in C by Dwayne Phillips - PDF here.
First, you need to get a decent C compiler and development system - personally I use Mac OS X, but I guess you would want the free edition of Visual Studio on Windows.
Then you need to get started with some simple reading and writing of files and memory allocation. I would go with greyscale images in the NetPBM format - probably just PGM files, described here, as they are the easiest. You can download the NetPBM programs, run them in a Windows Command Prompt to see how they work, and then try to implement them yourself in C. You can also download ImageMagick for Windows and try converting images from colour to greyscale and resizing them, like this:
convert input.png -colorspace gray result.jpg
convert input.tif -resize 400x400 result.pgm
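To give a feel for how directly you can work with the pixel bits, here is a minimal, illustrative C sketch that reads a binary PGM (P5) file into memory, inverts every pixel, and writes the result back out (no comment handling or error recovery; assumes an 8-bit maxval):

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    if (argc != 3) { fprintf(stderr, "usage: %s in.pgm out.pgm\n", argv[0]); return 1; }

    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror("fopen"); return 1; }

    /* Parse the P5 header: magic number, width, height, maxval */
    int w, h, maxval;
    if (fscanf(in, "P5 %d %d %d", &w, &h, &maxval) != 3 || maxval > 255) {
        fprintf(stderr, "not a simple 8-bit P5 PGM\n"); return 1;
    }
    fgetc(in);  /* consume the single whitespace byte after maxval */

    /* The image is now just w*h bytes, one per pixel, row by row */
    unsigned char *pixels = (unsigned char *)malloc((size_t)w * h);
    fread(pixels, 1, (size_t)w * h, in);
    fclose(in);

    /* Operate on the bits in memory: invert every pixel */
    for (long i = 0; i < (long)w * h; i++)
        pixels[i] = (unsigned char)(maxval - pixels[i]);

    FILE *out = fopen(argv[2], "wb");
    fprintf(out, "P5\n%d %d\n%d\n", w, h, maxval);
    fwrite(pixels, 1, (size_t)w * h, out);
    fclose(out);
    free(pixels);
    return 0;
}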
When you have got that, I would move on to the colour PPM format, and then maybe PNG and/or JPEG. Remember there are libraries for TIFF/JPEG/PNG/BMP, so don't be afraid to use them.
Finally, move on to displaying images yourself with Windows GDI etc.
Come back to StackOverflow if you get stuck - questions are free!
tl;dr: it varies wildly with different encodings, filesystems, OSes and drivers.
Well, that depends on the image format. BMP is one of the easier formats; details on what these files look like can be found, for instance, on the wiki.
And to answer "where it's stored": it is stored on permanent storage (hard drive/SSD); where exactly depends on the filesystem (FAT/NTFS/ext, etc.).
When an image is to be displayed, it's read into memory, where it can be manipulated, and through some APIs this data can be put into a memory region specifically meant for displaying the current images on your screen.
We are using a DirectShow interface to capture images from a video stream. These images are presented in a fixed-size window.
Once we have captured an image, we store it as a bitmap. Downstream, we have the ability to add annotations to the image, for example letters in a fixed-size font.
In one of our desktop environments, the annotation has started appearing at half the size it normally appears at. This implies that the image we are merging the text onto has dimensions that are perhaps twice as large.
The system this happens on is a shared resource, as in: some unknown individual has installed software on the system that differs from our baseline.
We have two approaches: the first is to reimage the system to get our default text-size behaviour back; the second is to figure out how DirectShow manages image dimensions so that we can set the scaling on the image correctly.
A survey of the DirectShow literature indicates that the latter is not a trivial task. The original work was done by another team that did not document what they did. Can anybody point us in the direction of the DirectShow object we need to deal with to properly size the sampled image?
DirectShow - as a framework - does not deal with resolutions directly. Your video source (such as capture hardware) is capable of providing the video feed in certain resolutions, which you can possibly change. You normally use IAMStreamConfig, as described in Configure the Video Output Format, to choose the capture resolution.
Sometimes you cannot affect the capture resolution and you need to resample the image from whatever dimensions you captured it at. There is no stock filter for this; however, Media Foundation provides a suitable Video Resizer DSP which does most of the task. Unfortunately it does not fit the DirectShow pipeline smoothly, so you need fitting and/or custom filters for resizing.
When filters connect in DirectShow, they agree on an AM_MEDIA_TYPE. There you will find a VIDEOINFOHEADER with a BITMAPINFOHEADER, and this header has biWidth and biHeight fields.
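For illustration, a minimal sketch of reading those fields in C++ (assuming pPin is the IPin* of an already connected pin, and the FreeMediaType helper from the DirectShow base classes is available):

// Inspect the media type negotiated on a connected pin
AM_MEDIA_TYPE mt;
if (SUCCEEDED(pPin->ConnectionMediaType(&mt)))
{
    if (mt.formattype == FORMAT_VideoInfo && mt.cbFormat >= sizeof(VIDEOINFOHEADER))
    {
        VIDEOINFOHEADER *pVih = (VIDEOINFOHEADER *)mt.pbFormat;
        LONG width  = pVih->bmiHeader.biWidth;
        LONG height = pVih->bmiHeader.biHeight;  // can be negative for top-down bitmaps
        // ...compare these against what the annotation code expects...
    }
    FreeMediaType(mt);  // release the format block
}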
Try to build the FilterGraph manually (with GraphEdit or GraphStudioNext) and inspect these fields.
I have successfully been using CGImageSource to read an image in (mostly JPEG) and CGImageDestination to write it back out. It works, but my image comes out a lot smaller than it was before (a 3.9 MB image becomes a 2.1 MB image).
I have been playing around with kCGImageDestinationLossyCompressionQuality and while it does affect the size of the file, I don't understand the scale it uses.
E.g. that same 3.9 MB file changes size to:
1.9 MB with a compression quality of 0.7
2.4 MB with a compression quality of 0.8
3.0 MB with a compression quality of 0.9
7.4 MB with a compression quality of 1.0
I tried everything (going to 6 decimal places using dichotomy) to find the sweet spot that would get back to that magic 3.9 MB, but it jumps from 3.3 MB to 7.4 MB with seemingly no way to get it to stay in between those two numbers.
Is there any other Objective-C library I can use to modify EXIF data that leaves the compression (and thus the file size) alone?
Using CGImageDestination you are creating a new image, and it is very unlikely that the size of the newly created image will match the original (it depends on the compression level).
If you just need to modify the metadata in the original picture, without changing the image information, you should use another library. I know two of them which can be used in a Cocoa app:
libexif: a C library. Supports writing JPEGs. LGPL license.
exiv2: a C++ library. Supports writing to a lot of formats, but it needs a paid license if you are using it in a commercial app.
Another option is to use exiftool. It's a Perl script that has become the de facto standard for changing metadata. You could include it in the Resources folder of your app and invoke it using NSTask to change the metadata of the pictures. Quite easy to do, and by far the best tool of the three. (Mac only; not sure if you're targeting iPhone or Mac.)
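Since exiftool rewrites only the metadata and leaves the compressed image data untouched, the file size stays essentially the same. An illustrative invocation (the tag and value are just placeholders):
exiftool -Artist="Jane Doe" -overwrite_original photo.jpg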