PIL output picture bigger than input

I am currently working on an automatic script for changing the contrast of all the pictures inside a folder, using PIL (Python). The problem is that each output picture is bigger than the input one... Here is my script:
from PIL import Image, ImageEnhance
import piexif

path = "C:/User/pictures/"
new_path = "C:/User/pictures_out/"  # destination folder (placeholder)
factor = 1.0  # contrast factor; 1.0 leaves contrast unchanged
all_files = ["picture1.jpg", "picture2.jpg", "picture3.jpg"]

for im_path in all_files:
    im = Image.open(path + im_path)
    # load EXIF data
    exif_dict = piexif.load(im.info['exif'])
    exif_bytes = piexif.dump(exif_dict)
    dpi = im.info["dpi"]
    # image contrast enhancer
    contraster = ImageEnhance.Contrast(im)
    im_output = contraster.enhance(factor)
    im_output.save(new_path + im_path, format="JPEG", quality=100,
                   dpi=dpi, exif=exif_bytes, subsampling=0)
For example, my incoming JPEG picture was 8.08 MB, and the new one is 15.8 MB, even though I chose a 0% change of contrast (factor = 1.0)...
Thanks for answering, have a nice weekend.

You have specified quality=100, against the recommendations of the library authors. From the Pillow documentation on JPEG saving:
The image quality, on a scale from 0 (worst) to 95 (best). The default is 75. Values above 95 should be avoided; 100 disables portions of the JPEG compression algorithm, and results in large files with hardly any gain in image quality.
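In practice, that means either dropping the quality argument (the default is 75) or capping it at 95. A minimal sketch, reusing the variables from the question above:

im_output.save(new_path + im_path, format="JPEG", quality=95,  # 95 is the documented maximum useful value
               dpi=dpi, exif=exif_bytes, subsampling=0)

Note that subsampling=0 also inflates the file, since it disables chroma subsampling; drop it too if you don't specifically need it.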

Related

OpenCV imwrite gives washed-out result for jpeg images

I am using OpenCV 3.0 and whenever I read an image and write it back the result is a washed-out image.
code:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.jpg",img);
Does anyone know what's causing this?
[Original and result images omitted; the result appears washed out.]
You can try to increase the compression quality parameter, as shown in the OpenCV documentation for cv::imwrite:
cv::Mat img = cv::imread("dir/frogImage.jpg", -1);
std::vector<int> compression_params;
compression_params.push_back(cv::IMWRITE_JPEG_QUALITY); // CV_IMWRITE_JPEG_QUALITY in the old C API
compression_params.push_back(100);
cv::imwrite("dir/result.jpg", img, compression_params);
Without specifying the compression quality manually, a quality of 95 will be applied.
But: 1. you don't know what JPEG compression quality your original image had (so you might even increase the file size), and 2. it will (AFAIK) still introduce additional minor artifacts, because JPEG is, after all, a lossy compression method.
UPDATE: your problem seems to be caused not by compression artifacts but by an image with an Adobe RGB (1998) color profile. OpenCV interprets the color values as they are, but it should instead map them into the standard sRGB color space. Browsers and some image viewers apply the color profile correctly, while others don't (e.g. IrfanView). I used GIMP to verify: on opening, GIMP lets you decide how to interpret the color values in the profile, producing either your desired image or the "washed out" one.
OpenCV simply doesn't care about such things, since it's not a photo-editing library: the color profile is handled neither on reading nor on writing.
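If you need files that look right even in viewers that ignore embedded profiles, one option is to convert them to sRGB up front. A minimal sketch using Pillow's ImageCms module rather than OpenCV (file names are placeholders):

import io
from PIL import Image, ImageCms

im = Image.open("frogImage.jpg")
icc = im.info.get("icc_profile")  # embedded profile, e.g. Adobe RGB (1998)
if icc:
    src_profile = ImageCms.ImageCmsProfile(io.BytesIO(icc))
    srgb_profile = ImageCms.createProfile("sRGB")
    im = ImageCms.profileToProfile(im, src_profile, srgb_profile)
im.save("frogImage_srgb.jpg", quality=95)

After this conversion the pixel values themselves are sRGB, so it no longer matters whether the consumer honours color profiles.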
This is because you are saving the image as JPG. When doing this, OpenCV compresses the image.
Try saving it as PNG or BMP instead and there will be no difference.
However, the IMPORTANT QUESTION: I am loading the image as JPG and saving it as JPG. So how is there a difference?!
Yes, this is because there are many non-identical compression/decompression implementations for JPG.
If you want to get into the details, see this question:
Reading jpg file in OpenCV vs C# Bitmap
EDIT:
You can see what I mean exactly here:
// Encode the same lossless BMP to JPEG with OpenCV and with MS Paint,
// then re-read both round-tripped images and compare them.
auto bmp(cv::imread("c:/Testing/stack.bmp"));
cv::imwrite("c:/Testing/stack_OpenCV.jpg", bmp);
auto jpg_opencv(cv::imread("c:/Testing/stack_OpenCV.jpg"));
auto jpg_mspaint(cv::imread("c:/Testing/stack_mspaint.jpg"));
cv::imwrite("c:/Testing/stack_mspaint_opencv.jpg", jpg_mspaint);
jpg_mspaint = cv::imread("c:/Testing/stack_mspaint_opencv.jpg");
cv::Mat jpg_diff;
cv::absdiff(jpg_mspaint, jpg_opencv, jpg_diff);  // per-pixel absolute difference
std::cout << cv::mean(jpg_diff);                 // mean difference per channel
The Result:
[0.576938, 0.466718, 0.495106, 0]
As @Micha commented:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.bmp",img);
I was always annoyed when mspaint.exe did the same to JPEG images, especially screenshots... it ruined them every time.

PyGame Poor Image Quality & Gradient Banding

I noticed that when displaying JPG or PNG images they look a lot like a GIF file, in that there are limited colors and "banding".
You can see the original and a screenshot attached; it's kinda hard to tell scaled down, but you can see it.
Actually, here's a better example. See the banding around the circle?
Here is my code:
# pygame code to render an image
import pygame, os
import time

image = 'gradient-test.png'  # located in same folder as this file; resized in Photoshop
pygame.init()
SCREEN = pygame.display.set_mode((1366, 768))
pygame.mouse.set_pos((1366, 768))
picture = pygame.image.load(image)
SCREEN.blit(picture, (0, 0))
pygame.display.update()
time.sleep(5)
It seems like a problem with the pixel format of your surface. You can add the following lines to your script to see if there's a difference between the pixel format of your image surface and your screen surface:
print('picture', picture.get_bitsize())
print('screen', SCREEN.get_bitsize())
It's good practice to always convert the pixel format of any new surface to the pixel format of your screen surface by calling convert():
convert()
change the pixel format of an image
convert(Surface) -> Surface
Creates a new copy of the Surface with the pixel format changed ...
If no arguments are passed the new Surface will have the same pixel format as the display Surface. This is always the fastest format for blitting. It is a good idea to convert all Surfaces before they are blitted many times.
It's simple:
picture = pygame.image.load(image).convert() # added convert() call
Also, you can try to set the color depth of your screen manually, like:
SCREEN = pygame.display.set_mode((1366, 768),0, 32) # use 32-bit color depth

SSRS can't properly render *some* images within PDF

I have a report that renders images (jpg) that have been collected from various sources. This works fine within the report viewer, and when exporting via Excel.
However, when exporting to PDF, about 5% of the images are rendered incorrectly, with the original on the left and what is rendered on the right (comparison images not reproduced here).
I find that if I open one of these images in MS Paint and just click save, the image is rendered correctly on the next report run.
Are there any rules as to what image properties/formats are valid for SSRS to render the image correctly within a PDF? Essentially, I'd like a way to find the images that will render incorrectly before the report is run, and fix them beforehand...
Current Workaround
I never got SSRS to display the problem images as they were; however, determining before the report ran which images would fall into the non-displayable set, so that they could be converted automatically to a supported format, was also a solution.
In my case, all images were supplied by users uploading to a website, so I was able to identify and convert images as they arrived. For all existing images, I ran a script that identified the problem images and converted them.
Identifying problem images
From the thousands of images I had, I was able to determine that the images that wouldn't render correctly had the following properties:
the image had a CMYK colorspace, or
the image had extended color profiles, or
both of the above
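Those properties are easy to test for programmatically. A rough detection sketch using Pillow (my choice here for illustration; the original workflow used .NET and ImageMagick):

from PIL import Image

def is_problem_image(path):
    # Flags images matching the problem set above:
    # CMYK colorspace and/or an embedded color profile.
    with Image.open(path) as im:
        return im.mode == "CMYK" or "icc_profile" in im.info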
Converting an image
I was originally using the standard .NET GDI+ API (System.Drawing) to manipulate images, but it is prone to crashes (OutOfMemoryException) when dealing with images that carry extra data. As such, I switched to ImageMagick, where for each of the identified images I:
stripped the color profiles, and
converted the image to RGB
Note that the conversion to RGB from CMYK without stripping the color profiles was not enough to get all images to render properly.
I ended up just doing those two steps on every image byte stream I received from users (without first identifying the problem) before saving the uploaded image to disk. After that, I never had the rendering problem again.
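For illustration, the same strip-and-convert step sketched with Pillow instead of ImageMagick (note that Pillow's CMYK-to-RGB conversion is not color-managed, so results may differ slightly from ImageMagick's):

from PIL import Image

def to_plain_rgb(src_path, dst_path):
    with Image.open(src_path) as im:
        rgb = im.convert("RGB")            # CMYK (or anything else) -> RGB
        rgb.info.pop("icc_profile", None)  # strip the embedded color profile
        rgb.save(dst_path, "JPEG", quality=95)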
Because of the way the output looks, I would say those JPEG images have a CMYK colorspace, but SSRS assumes they use RGB and sets the wrong colorspace in the PDF.
If you can post a JPEG image and a sample PDF I can give you more details.
I've had exactly the same problem with an image rendering correctly on screen but appearing like the one in the question when I exported the report to PDF. Here's how I solved it.
The Problem
The first clue was this article I came across on MSDN. It seems that regardless of the original image density, the PDF renderer in SSRS resizes all images to 96 DPI. If the original size of the image is larger than the size of the page (or container), then you will get this problem.
The Solution
The solution is to resize the source image so that it will fit on your page. This requires a little calculation depending on your page size and margin settings.
In my case, I'm using A4 paper size, which is 21cm by 29.7cm. However, my left margin is 1.5cm, and my right margin is 0.5cm, for a total inner width of 19cm. I allow an extra 0.5 cm as a margin of error, so I use an inner width of 18.5cm.
21 cm - 1.5 cm - 0.5 cm - 0.5 cm = 18.5 cm
As noted before, the resolution generated by the PDF renderer is 96 DPI (dots per inch). For those of us not in the United States or Republic of Liberia, that's 37.79 DPC (dots per centimetre). So, to get our width:
18.5 cm * 37.79 dpc = 699 pixels
Your result may be different depending on (1) the paper size you are using, and (2) the left and right margins.
As the page is taller than it is wide, we need only resize the width while keeping the image proportional. If you're using a paper size which is wider than it is tall, you'd constrain the height instead.
So now open the source image in Paint (or your image editor of choice), and proportionally resize the image to the desired width (or length) in pixels, save it, import it into your container, and size the image visually with respect to the container. It should look the same on screen, and now render correctly to PDF.
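The calculation and resize can also be scripted. A sketch with Pillow, assuming the A4 figures above (file names are placeholders):

from PIL import Image

PAGE_W_CM, LEFT_CM, RIGHT_CM, SAFETY_CM = 21.0, 1.5, 0.5, 0.5
DPC = 96 / 2.54  # the PDF renderer's 96 DPI expressed as dots per centimetre

inner_cm = PAGE_W_CM - LEFT_CM - RIGHT_CM - SAFETY_CM  # 18.5 cm
target_w = int(inner_cm * DPC)                         # ~699 pixels

im = Image.open("report_image.jpg")
if im.width > target_w:
    target_h = round(im.height * target_w / im.width)  # keep proportions
    im.resize((target_w, target_h), Image.LANCZOS).save("report_image_resized.jpg", quality=95)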
This is an issue reported to Microsoft Connect.
From SSRS 2008 How to get the best image quality possible?:
The image behavior you see in PDF is a result of some image conversions that the PDF renderer does, based on how the PDF specification requires images to be serialized into PDF.
We know it's not ideal, and we classify the loss of image quality as a product issue. Therefore, it's difficult to really say what to do to get the best quality image.
Anecdotally, I have heard that customers have good results when the original image is a BMP.

matplotlib.pyplot.imsave backend

I'm working in Spyder with matplotlib.pyplot and want to save numpy arrays as images.
The documentation of imsave() says that the format to which I can save depends on the backend. So what exactly is the backend? I seem to be able to save .tiff images, but, for example, I want them saved as 8-bit TIFFs instead of RGB TIFFs. Any idea where I can change that?
If you are trying to save an array as a TIFF (with no axis markers etc.) from matplotlib, you might be better off using PIL.
write:
import numpy as np
import PIL.Image as Image

im = Image.new('L', (100, 100))  # 'L' = 8-bit grayscale
im.putdata(np.floor(np.random.rand(100, 100) * 256).astype('uint8').ravel())
im.save('test.tif')
The ravel() is important, putdata expects a sequence (ie, 1D) not an array.
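A shorter equivalent, for what it's worth: Image.fromarray accepts the 2D uint8 array directly, so no ravel() is needed:

import numpy as np
from PIL import Image

arr = np.floor(np.random.rand(100, 100) * 256).astype('uint8')
Image.fromarray(arr, mode='L').save('test.tif')  # same 8-bit grayscale TIFF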
read:
import matplotlib.pyplot as plt

im2 = Image.open('test.tif')
plt.figure()
plt.imshow(im2)
plt.show()
and the output file:
$ tiffinfo test.tif
TIFF Directory at offset 0x8 (8)
Image Width: 100 Image Length: 100
Bits/Sample: 8
Compression Scheme: None
Photometric Interpretation: min-is-black
Rows/Strip: 100
Planar Configuration: single image plane

HowTo put region of Pyglet image into PIL Image?

My app reads frames from video (using pyglet), extracts a 'strip' (i.e. a region: full width by 1 to 6 video lines), then pastes (appends) each strip into an image that can later be written out as a *.png file.
The problem is that pyglet GL images are stored in graphics memory, so they are very limited in size.
My current clunky workaround is to build up the to-be PNG image as small tiles (i.e. within the pyglet GL size limit); when a tile is full, write it out to a *.png file, read that back in as a PIL Image, then paste/append the tile image into the final PIL Image *.png file.
I reckon it should be possible to convert the pyglet GL strip to a PIL Image-friendly format and paste/append that straight into the final PIL Image, removing the need for the tile business, which greatly complicates things and hurts performance.
But I can't find how to get the strip data into a form that I can paste into a PIL Image.
I have seen many requests about converting the other way, but I've never run with the herd.
I know it's bad form to reply to one's own question, but I worked out an answer which may be useful to others, so here it is...
nextframe = source.get_next_video_frame()  # get first video frame
while nextframe is not None:  # get_next_video_frame() returns None at EOF
    # Extract the strip from this frame and convert it to a PIL-friendly form
    strip = nextframe.get_region(0, (vid_frame_h // 2) - (strip_h // 2), strip_w, strip_h)
    strip_image_data = strip.get_image_data()
    # A negative pitch asks pyglet for rows in top-to-bottom order, tightly packed
    strip_data = strip_image_data.get_data('RGB', -(strip_w * 3))
    strip_Image = Image.frombuffer('RGB', (strip_w, strip_h), strip_data, 'raw', 'RGB', 0, 1)
    # Paste the strip into travo_image and advance the fill pointer
    travo_image.paste(strip_Image, (0, travo_filled_to, strip_w, travo_filled_to + strip_h))
    travo_filled_to += strip_h
    nextframe = source.get_next_video_frame()  # get next video frame (None at EOF)
