I'm having problems opening certain JPEG files (ones from Facebook and Instagram, and some Samsung phones) in Photoshop. I've read that running mogrify -comment test insert-image-here.jpg will "handle" the file so that it opens in Photoshop, and surprisingly, it works very well.
However, I recently "mogrified" an image with that same command, only to have the file size drop by 0.71 MB, which was alarming, as I don't want to recompress my JPEGs. I then mogrified it ten more times but saw no obvious visual loss. I also ran a small 170 KB image through the command 20 times; the file size initially decreased, then increased on every subsequent iteration. I compared the files by flipping between them quickly, but couldn't see any quality loss.
What is mogrify doing that decreases the file size, seemingly without reducing quality? Is there a quality option I can add so that mogrify doesn't change the file size?
There is a similar question from another user, Why does the size of my image decrease when I add a comment to an image?, but I cannot discern any quality loss whatsoever, and running my image through at 95% or 70% quality 20 times should be immediately noticeable.
Here is the link to the image that I am using as a test: http://ocicat.wildrain.tripod.com/sitebuildercontent/sitebuilderpictures/aragon.jpg
Edit:
I ran two more images through the mogrify command 1000 times (one of white noise, which I didn't include). I still don't see any quality loss. Is JPEG compression really that unnoticeable (or maybe my eyes are failing me)? Interestingly, the final file size and the original file size of these two images are the same.
Zero iterations:
1000 iterations:
I do not understand why you would have trouble opening a JPEG from those sources, though some viewers do not handle CMYK JPEG files properly; I would be surprised if Photoshop had that problem.

If you use ImageMagick to add a comment, it will decompress your file and recompress it. ImageMagick will use the -quality value from the input file if it can find one and make the output the same. If it cannot find a quality value, it will compress at 92. That could cause a decrease in file size if the input was at quality 100 but was recompressed at 92; the next time you do the same thing, it will continue to use 92. There might still be some loss of effective quality, because JPEG is lossy, but at 92 it probably will not be visually noticeable.

You could try convert in place of mogrify and see if that behaves any differently. Also, there is no need to add a comment: just reading the input and saving it again will decompress and recompress it in ImageMagick. See http://www.imagemagick.org/script/command-line-options.php#quality
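As a sketch (filenames are placeholders), you can ask ImageMagick what quality it detected in the input and pin the output to that value, so a re-save does not silently fall back to 92:

identify -format "%Q\n" input.jpg    # %Q reports the detected JPEG quality
convert input.jpg -quality "$(identify -format '%Q' input.jpg)" output.jpg    # re-save at that same quality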
Your image is sRGB and has a quality of 75 according to ImageMagick:
identify -verbose Aragon.jpg
Format: JPEG (Joint Photographic Experts Group JFIF format)
Mime type: image/jpeg
Class: DirectClass
Geometry: 9600x14400+0+0
Resolution: 2400x2400
Print size: 4x6
Units: PixelsPerInch
Type: TrueColor
Endianess: Undefined
Colorspace: sRGB
Depth: 8-bit
...
Compression: JPEG
Quality: 75
Orientation: Undefined
Properties:
date:create: 2017-08-18T21:26:37-07:00
date:modify: 2017-08-18T21:26:37-07:00
jpeg:colorspace: 2
jpeg:sampling-factor: 2x2,1x1,1x1
signature: bad15aa674dc45312d47627b620c895ee76b1fa4457b55bf1acca85883de5963
Artifacts:
filename: aragon.jpg
verbose: true
So the CMYK issue is not present.
Is this file from before or after you processed it with ImageMagick mogrify?
If this file was originally quality 75 and was recompressed at 92 or some higher value than 75, then it might increase in file size.
If you do not want the file size to decrease then recompress at 75 or higher. 100 would give you the least compression, but may increase the file size.
Other factors could be a change in -sampling-factor for the JPEG. See http://www.imagemagick.org/script/command-line-options.php#sampling-factor. There could also be a difference in the compression tables. ImageMagick uses libjpeg to read and decompress JPEGs; the original JPEG may have been compressed using other tools.
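For instance, a sketch that pins both the quality and the sampling factor (using the 2x2,1x1,1x1 value that identify reported above) so neither can drift on re-save:

convert input.jpg -sampling-factor 2x2,1x1,1x1 -quality 75 output.jpg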
Another factor might be the introduction or removal of a color profile. Imagemagick should not be changing that automatically.
My best suggestion is to check the quality of the input and what quality is assigned to the output. Also check the input colorspace. Use identify -verbose yourimage to see what may have changed.
Unfortunately, I do not know exactly what is happening. I can only tell you some of the factors that may be involved.
I used ImageMagick 6.9.9.7 Q16 on Mac OSX to convert your file:
convert aragon.jpg aragon2.jpg
The input has Filesize: 5.94157MiB; the output has Filesize: 5.23264MiB. Both files have the same quality, 75, so there is a slight change in file size from the decompression and recompression alone, probably from actual loss of quality (JPEG compression is lossy), or perhaps from a change in the compression tables used. Doing it once more yields Filesize: 5.23289MiB, a very slight increase. Doing
convert aragon.jpg -quality 100 aragon4.jpg
yields Filesize: 13.968MiB: we asked for a higher compression quality than the input had, so the file size increases dramatically even though the compression is lossy.
Related
I wonder why the JPEG compression in recent MATLAB and Octave versions has become so aggressive that it causes noticeable compression artifacts:
Octave 3 jpeg image with size of 41 KB with no artifacts:
MATLAB 9 jpeg image with size of 26 KB with artifacts:
Octave 5 jpeg image with size of 23 KB with artifacts:
Here is the code to plot:
description = strcat('version-', num2str(version));  % find out MATLAB/Octave version
x = 1:2;    % simple variable
figure;     % open a figure
plot(x, x);
title(description);
print(strcat('test_jpeg_size_', description, '.jpg'), '-djpeg');  % write file
Do you know a way to tell MATLAB and Octave to apply weaker JPEG compression? I cannot find anything like this at https://de.mathworks.com/help/matlab/ref/print.html.
I know that I could plot PNG files and use ImageMagick to convert them to JPEG at a given quality, but that would be a workaround requiring additional tools. Or I could use PNG files in the first place, but the real images (unlike the simple one here) gain no compression advantage from PNG, and I would have to change a lot of other stuff.
This used to be documented*, I was surprised to not find it in the documentation pages. I tested it with the latest version of MATLAB (R2019b) and it still works:
The -djpeg option can take a quality value between 0 and 100, inclusive. The device option becomes -djpeg100 or -djpeg80, or whatever value you want to use.
print(strcat('test_jpeg_size_', description, '.jpg'), '-djpeg100');
* Or at least I remember it being documented... The online documentation goes back to R13 (MATLAB 6.5), and it's not described in that version of the documentation nor in a few random versions in between that and the current version.
However, I strongly recommend that you use PNG for line drawings. JPEG is not intended for line drawings, and makes a mess of them (even at highest quality setting). PNG will produce better quality with a much smaller file size.
Here I printed a graph with -djpeg100 and -dpng, then cut out a small portion of the two files and show them side by side. JPEG, even at 100 quality, makes a mess of the lines:
Note that, in spite of not having any data loss, the PNG file is about 10 times smaller than the JPEG100 file.
You can go for
f = getframe(gcf);                            % capture the current figure as an image
imwrite(f.cdata, 'Fig1.jpg', 'Quality', 95)   % 'Quality': 0-100, higher means less compression
where imwrite takes the following options
Compression (compression scheme)
Quality (quality of JPEG-compressed file from 0 to 100)
See the doc of imwrite.
I have this image (a photo taken by me on an SGS 9 plus): Uncompressed JPG image. Its dimensions are 4032x3024 and it weighs around 3 MB. I compressed it with TinyJPG Compressor and the result weighed 1.3 MB. For PNG images I used Online-Convert, and there the WebP output was much smaller even than PNGs compressed with TinyPNG. I expected something similar here, especially since I read the article JPG to WebP – Comparing Compression Sizes, where WebP comes out much smaller than compressed JPG.
But when I convert my JPG to WebP in various online image conversion tools, I get files of 1.5-2 MB, so the result is bigger than my compressed JPG. Am I missing something? Shouldn't WebP be much smaller than a compressed JPG? Thank you in advance for every answer.
These are lossy codecs, so their file size mostly depends on the quality setting used. Comparing file sizes from various tools says nothing unless you ensure the images are at the same quality (otherwise they're incomparable).
There are a couple of possibilities:
JPEG may compress better than WebP. WebP has problems with blurring out details, low-resolution color, and using less than the full 8 bits of the color space. At the higher end of the quality range, a well-optimized JPEG can be similar to or better than WebP.
However, most of the file size difference between modern lossy codecs is due to differences in quality. The typical difference between JPEG and WebP at the same quality is 15%-25%, but the file sizes produced by each codec can easily differ by 10× between a low-quality and a high-quality image. So most of the time when you see a huge difference in file sizes, it's probably because different tools have chosen different quality settings (and/or recompression has lost fine details in the image, which also greatly affects file size). Even a visual difference too small for the human eye to notice can cause a noticeable difference in file size.
My experience is that lossy WebP is superior below quality 70 (in libjpeg terms) and JPEG is often better than WebP at quality 90 and above. In between these qualities it doesn't seem to matter much.
I believe WebP qualities are inflated by about 7 points, i.e., to match JPEG quality 85 one needs to use WebP quality 92 (when using the cwebp tool). I didn't measure this rigorously; it is based on rather ad hoc experiments and some butteraugli runs.
Lossy WebP has difficulty compressing complex textures densely, such as the leaves of trees, whereas JPEG's difficulties are with thin lines against flat backgrounds, like a telephone line hanging against the sky, or computer graphics.
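If you want to test this yourself, here is a rough sketch, assuming the cjpeg and cwebp command-line encoders are installed and that input.ppm and input.png are the same source image in the formats each tool accepts:

cjpeg -quality 85 input.ppm > out.jpg    # JPEG at quality 85
cwebp -q 92 input.png -o out.webp        # WebP at the roughly matching quality 92
ls -l out.jpg out.webp                   # compare the resulting file sizes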
When you work with a JPEG image's properties (resolution, sampling, etc.) and export the final product, are you ALWAYS double dipping into 'jpegification'?
From my understanding when you load a JPEG image into an image manipulation tool (GIMP, Photoshop, ImageMagick, etc.) it goes like so:
Import JPEG
Decode JPEG into easier workable format (Bitmap)
Manipulate the pixels
Export back into JPEG (REDOING JPEG QUANTIZATION AGAIN, even if you copy the original JPEG parameters it's a double dip)
Am I correct in this?
Thanks!
Any areas of the image that have changed would have to be quantized again anyway.
In theory, an application could keep the quantized values lying around then use them again. However,
That would require 3 times as much memory. The quantized values require 16 bits to store (+8 bits for the pixel value).
If you changed the sampling or quantization tables, the quantized values would have to be recalculated.
There would be very few cases where it would make sense to hang on to the quantized DCT values.
I think it depends on what you do after reading the image, but you can check for yourself, for any particular operation, whether it has re-quantised the data by using this feature of ImageMagick:
identify -format "%#\n" image.jpg
bb1f099c2e597fdd2e7ab3d273e52ffde7229b9061154c970d23b171df3aca89
which calculates the checksum (or signature as IM calls it) of the pixels - disregarding the header information.
So, if I create a file of random noise, like this
convert -size 1000x1000 xc:gray +noise gaussian image.jpg
and get the checksum of the data, like this
identify -format "%#\n" image.jpg
84474ba583dbc224d9c1f3e9d27517e11448fcdc167d8d6a1a9340472d40a714
I can then use jhead to change the comment in the header, like this
jhead -cl "Comment" image.jpg
Modified: image.jpg
and yet the checksum remains unchanged so I would say jhead has NOT re-quantised the data.
I guess my point is that your statement that images are ALWAYS re-quantised is not 100% accurate: it depends on what you actually do to the image. Further, I am showing a way you can readily check for yourself whether any processing has actually caused requantisation. HTH!!!
I'm making a website with a single image as a background (with different backgrounds for subpages). So far I have established that the image should be about 1920x1080, possibly with a 1.77:1 aspect ratio, and a JPG for PCs. Now I want to reduce the image file size without losing quality.
1) First, my problem. I have encountered the most bizarre thing in Photoshop. When I load a 4272x2848 image that weighs 521 KB into Photoshop and save it without changing anything, its size increases to... 1.52 MB? After I cut the dimensions down to ~1920x1080, the size is still ~800 KB. Also, the image has 96 DPI before loading, but afterwards it changes to 72 DPI. (What sorcery is this?)
2) What is an acceptable image file size with that resolution?
3) Should I use Save for Web? From what I have experimented, it either increases the size or reduces the quality.
4) I found this image size reducer website: https://kraken.io/web-interface It reduces the size and I think the image quality does not change.
5) http://www.filedropper.com/pancakes - the image from question #1. (The image will probably be changed in the near future so this one is more of a case study).
Thanks!
JPEG being lossy, every time you load and then save, the JPEG compression algorithm is applied again. I believe the default for Photoshop is High quality, which is an 8 on their dialog. So if you have an original JPEG that was saved at low or medium quality (say a 4-6 on the Photoshop dialog), open it in Photoshop, and go with the default High/8 quality on save, the JPEG algorithm is applied to the perceptual image, meaning you have saved a lower-quality perceptual image with a higher-quality algorithm's amount of data.
This is a major reason I've moved away from JPEG. If JPEG is required, I always try to start with a RAW, BMP, TIFF, or PNG image and save off a JPEG version from that; if I need to make any changes, I go back to the full "original" [lossless] format, make the changes, then save the JPEG again. I try never to edit an image that is already saved as JPEG, because you're always going to lose a small amount of quality (mostly the JPEG algorithm is good enough that the loss isn't perceptible, but the file size can change nonetheless).
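As a command-line sketch of that workflow (ImageMagick assumed; filenames are placeholders), all edits happen in the lossless master and the JPEG is only ever a final export:

convert master.tif -quality 90 deliverable.jpg    # JPEG is write-only output, never edited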
Consider an application handling uploading of potentially very large PNG files.
All uploaded files must be stored to disk for later retrieval. However, the PNG files can be up to 30 MB in size, but disk storage limitations gives a maximum per file size of 1 MB.
The problem is to take an input PNG of file size up to 30 MB and produce an output PNG of file size below 1 MB.
This operation will obviously be lossy - and reduction in image quality, colors, etc is not a problem. However, one thing that must not be changed is the image dimension. Hence, an input file of dimension 800x600 must produce an output file of dimension 800x600.
These requirements are strict and cannot be changed.
Using ImageMagick (or some other open source tool) how would you go about reducing the file size of input PNG-files of size ~30 MB to a maximum of 1 MB per file, without changing image dimensions?
PNG is not a lossy image format, so you would likely need to convert the image into another format-- most likely JPEG. JPEG has a settable "quality" factor-- you could simply keep reducing the quality factor until you got an image that was small enough. All of this can be done without changing the image resolution.
Obviously, depending on the image, the loss of visual quality may be substantial. JPEG does best for "true life" images, such as pictures from cameras. It does not do as well for logos, screen shots, or other images with "sharp" transitions from light to dark. (PNG, on the other hand, has the opposite behavior-- it's best for logos, etc.)
However, at 800x600, it likely will be very easy to get a JPEG down under 1MB. (I would be very surprised to see a 30MB file at those smallish dimensions.) In fact, even uncompressed, the image would only be around 1.4MB:
800 pixels × 600 pixels × 3 bytes per pixel (one byte per color channel) = 1,440,000 bytes ≈ 1.4 MB
Therefore, you only need a 1.4:1 compression ratio to get the image down to 1MB. Depending on the type of image, the PNG compression may very well provide that level of compression. If not, JPEG almost certainly could-- JPEG compression ratios on the order of 10:1 are not uncommon. Again, the quality / size of the output will depend on the type of image.
Finally, while I have not used ImageMagick in a little while, I'm almost certain there are options to re-compress an image using a specific quality factor. Read through the docs, and start experimenting!
EDIT: Looks like it should, indeed, be pretty easy with ImageMagick. From the docs:
convert input.png -quality 75 output.jpg
Just keep playing with the quality value until you get a suitable output.
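If you want to automate that, here is a rough sketch assuming ImageMagick and a POSIX shell (filenames and the step size of 5 are arbitrary choices):

q=95
convert input.png -quality "$q" output.jpg
# step the quality down until the file is under 1 MB (or q bottoms out)
while [ "$(wc -c < output.jpg)" -gt 1048576 ] && [ "$q" -gt 10 ]; do
  q=$((q - 5))
  convert input.png -quality "$q" output.jpg
done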
Your example is troublesome because a 30MB image at 800x600 resolution is storing 500 bits per pixel. Clearly wildly unrealistic. Please give us real numbers.
Meanwhile, the "cheap and cheerful" approach I would try would be as follows: scale the image down by a factor of 6, then scale it back up by a factor of 6, then run it through PNG compression. If you get lucky, you'll reduce image size by a factor of 36. If you get unlucky the savings will be more like 6.
pngtopnm big.png | pnmscale -reduce 6 | pnmscale 6 | pnmtopng > big-reduced.png
If that's not enough you can toss a ppmquant in the middle (on the small image) to reduce the number of colors. (The examples are netpbm/pbmplus, which I have always found easier to understand than ImageMagick.)
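For example, a sketch of the same pipeline with the quantization step added (netpbm assumed; 256 colors is an arbitrary choice, and the output name avoids overwriting the input mid-pipeline):

pngtopnm big.png | pnmscale -reduce 6 | ppmquant 256 | pnmscale 6 | pnmtopng > big-reduced.png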
To know whether such a solution is reasonable, we have to know the true numbers of your problem.
Also, if you are really going to throw away the information permanently, you are almost certainly better off using JPEG compression, which is designed to lose information reasonably gracefully. Is there some reason JPEG is not appropriate for your application?
Since the size of an image file is directly related to the image dimensions and the number of colours, you seem to have only one choice: reduce the number of colours.
And ~30MB down to 1MB is a very large reduction.
It would be difficult to achieve this ratio even with a conversion to monochrome.
It depends a lot on what you want at the end. I often like to reduce the number of colors while preserving the dimensions; in many, many cases the reduced colors do not matter. Here is an example of reducing the colors to 254:
convert in.png -colors 254 out.png
You can try the pngquant utility. It is very simple to install and to use. And it can compress your PNGs a lot without visible quality loss.
Once you install it try something like this:
pngquant yourfile.png
pngquant --quality=0-70 yourfile.png
For my demo image (generated by imagemagick) the first command reduces 350KB to 110KB, and the second one reduces it to 65KB.
Step 1: Decrease the image to 1/16 of its original size.
Step 2: Decrease the amount of colors.
Step 3: Increase the size of the image back to its original size.
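A minimal one-command sketch of those three steps with ImageMagick (the 25%/400% pair shrinks each dimension to a quarter, i.e. 1/16 of the pixels; 64 colors is an arbitrary choice):

convert input.png -resize 25% -colors 64 -resize 400% output.png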
I know you want to preserve the pixel dimensions, but can you reduce them and adjust the DPI stored with the image so that the display size is preserved? It depends on what client you'll be using to view the images, but most should honor it. If you are using the images on the web, you can just set the pixel size in the <img> tag.
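A sketch of that idea with ImageMagick, assuming the original is 96 DPI: halving the pixel dimensions at half the DPI keeps the same physical size.

convert input.png -resize 50% -units PixelsPerInch -density 48 output.png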
It depends on the type of image: is it a real-life picture or a computer-generated image?
For real-life images PNG will do very little; it might not compress at all. Use JPG for those. If the image has a limited number of different colors (it can have 24-bit depth, but the number of unique colors is low), PNG can compress quite nicely.
PNG is basically an implementation of zip-style (deflate) compression for images, so if a lot of pixels are the same you can get a rather nice compression ratio. If you need lossless compression, don't do any resizing.
Use optipng; it reduces the size without loss.
http://optipng.sourceforge.net/
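For example (a sketch; -o7 is optipng's slowest, most thorough optimization level):

optipng -o7 yourfile.png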
Try ImageOptim (https://imageoptim.com/mac). It is free and open source.
If you want to modify the image size in Ubuntu, you can try GIMP.
I have tried a couple of image editing apps in Ubuntu and this seemed to be the best among them.
Installation:
Open terminal
Type: sudo apt install gimp-plugin-registry
Enter your admin password. You'll need a network connection for this.
Once installed, open the image with the GIMP image editor. Then go to: File > Export As > click the 'Export' button
You will get a small window; check the box "Show preview in image window". Once you check this option, you will see the current file size along with the quality level.
Adjust the quality level to increase/decrease the file size.
Once you have finished adjusting, click the 'Export' button to save the file.
Right-click on the image, select Open with Paint, click Resize, select Pixels, and change the horizontal value to 250 or 200.
That's all there is to it. It is the fastest way for those using Windows XP or Windows 7.