Sometimes when I load an image, a lower-resolution version appears before the full image. This is not done with jQuery, as it also happens when the image is loaded stand-alone. I have no idea how they do it, but I guess it's something server-side.
My question is, how would I do this on my own server?
The images are stored in "progressive" or "interlaced" JPEG format, meaning that the low-res image data is stored before the high-res data (in layman's terms). You can encode any JPEG as interlaced. You can even do it on the server using imagemagick.
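For example, with ImageMagick (just a sketch; the file names are placeholders):

convert input.jpg -interlace Plane output.jpg

If you have the libjpeg tools installed, jpegtran -progressive input.jpg > output.jpg does the same conversion losslessly.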
Here's Jeff Atwood talking about this.
I'm trying to figure out the best way to "clean" images coming from an unauthorized source (app visitors) before opening them, similar to what WhatsApp does.
Scanning each image with an antivirus is probably not efficient at large scale, so I came to the assumption that rewriting each incoming image by recompressing it as JPEG could produce a clean image without malicious code inside it.
From what I've read so far, JPEG compression should destroy any hidden content and reorder the data structure of the image, which should result in a safe image.
What do you think? Am I on the right path to overcome this issue?
There is no code in a JPEG stream. In fact, I don't know of any image format that directs the decoder to execute code.
The worst thing I can think of would be a JPEG stream that, say, embedded malicious code in a thumbnail, COM, or APPn marker. Another application would then have to look for that image and load the code out of it.
Even that requires something else to already be on your system to execute the JPEG "code", and it would be a lot of trouble for something that could be accomplished much more easily.
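That said, if you still want the re-encoding step as a defensive measure, a minimal ImageMagick sketch (file names are placeholders) would be:

convert upload.jpg -strip -quality 85 sanitized.jpg

Decoding and re-encoding the pixels discards anything appended after the end-of-image marker, and -strip removes comments and profiles; the trade-off is that it is lossy.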
I'm building a web application (RoR) that manages images that come in raw image formats. I need to create thumbnail/web versions of these images to be displayed on the site. Currently, I'm using ImageMagick, which delegates to dcraw to produce the JPEG thumbnail. The problem I'm running into is that the thumbnail deviates from the look of the original: the image gets darker and the white balance is sometimes heavily shifted.
I'm assuming that the raw format's default settings can't be read by dcraw, and thus it's left guessing how to parameterize the raw conversion. I can play around with customizing these settings, but it seems that getting it right on one image causes others to be further off the mark.
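For example, calling dcraw directly with its white-balance options looks like this (-w uses the camera's recorded white balance, -c writes the decoded PPM to stdout so ImageMagick can resize it; file names are placeholders):

dcraw -c -w input.3fr | convert - -resize 800x800 thumb.jpg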
Is there a better way to do this in order to get a result that more closely mimics what I might see in a raw viewer like Photoshop, or even Mac OS X Preview? Given that Mac OS X supports a variety of digital camera raw formats, is there any way to utilize the OS's ability to render preview images (especially considering that that result is what is expected)?
The raw images that I'm using are 3FRs and FFFs (both from Hasselblad).
I can post samples if people are interested.
Thanks
Look at "sips" and "Resizing images using the command line" to get you started.
I use Abbyy FineReader for ScanSnap to OCR a couple of scanned PDF files. The software claims it retains the original PDF images. The PDF file sizes pre-OCR and post-OCR are almost identical, which is good.
After the software is done, all PDF images appear anti-aliased in Acrobat X. Page navigation is much slower than before, and when I zoom in/out, the images first go to what looks like the pre-anti-aliasing version before quickly changing to anti-aliased images.
Left: Scanned PDF / Right: after OCR with Abbyy
I would like to get the original images without anti-aliasing back. Interestingly, when I open a single page from the anti-aliased PDF in Photoshop, there is no anti-aliasing and the image looks like the left one.
My limited PDF programming experience leads me to believe that Abbyy likely sets some kind of anti-alias flag for each image during OCR processing. How do I un-set this flag?
Any pointers to useful ideas would be much appreciated.
Actually, the original file 2013_11_15_22_51_31.pdf contains a JPEG image, while the OCR'ed file 2013_11_15_22_51_31_OCR.pdf contains a JPEG2000 image.
Comparing them in third-party viewers, it becomes clear that the image in the OCR'ed file is not inherently anti-aliased. Furthermore, there is no evident flag in the PDF instructing PDF viewers to apply anti-aliasing to the JPEG2000 image. Thus, Adobe Reader seems to render JPEG and JPEG2000 images differently by default, applying anti-aliasing to the latter but not to the former.
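You can verify the change of image format yourself with poppler's pdfimages, which lists every embedded image together with its encoding (jpeg vs. jpx):

pdfimages -list 2013_11_15_22_51_31.pdf
pdfimages -list 2013_11_15_22_51_31_OCR.pdf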
Comparing both images in detail, though, it becomes clear that they are not identical: the image in the OCR'ed PDF is slightly rotated.
I assume Abbyy FineReader recognized that the original scanned image is not correctly oriented. Thus, it rotated it slightly to correct this orientation.
Thus, replacing the image in the OCR'ed version with the one from the original is not an option: due to the rotation, the OCR information would be partially off.
What you might want to try is to recode the JPEG2000 image to JPEG and replace the image in the OCR'ed version with this recoded one. This will mean some loss of quality but most likely you can get rid of the anti-aliasing this way.
Be aware, though, that the JPEG2000 image is slightly larger than the JPEG image to accommodate the rotation.
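One possible route (a sketch only; ImageMagick needs its JPEG2000 delegate, and the extracted file name depends on the image's position in the PDF) is to pull the image out with poppler's pdfimages, recode it, and put it back with whatever PDF editing tool you have at hand:

pdfimages -all 2013_11_15_22_51_31_OCR.pdf img
convert img-000.jp2 -quality 90 img-000.jpg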
PS: As @VadimR pointed out, there is indeed an /Interpolate true entry in the image dictionary of the OCR-ed version, which I missed when looking at the file. It does not seem to be the major issue slowing down the rendering, though.
There is an /Interpolate true entry in the image dictionary of the OCR-ed version, and that's what causes the 'anti-aliasing'. Whether that (and not JPEG2000 instead of JPEG compression) is also the cause of the slow-down is something you can check on large enough files.
To un-set this key, the best approach would be to turn it off while creating the file, and if that's not possible, to write and run a small program in a suitable language.
But since your file doesn't sport 'compressed objects' and the offending key is in plain view inside the file, in the spirit of 'job done quickly' you can simply process your file, e.g. like this (the substitution overwrites the 17-character '/Interpolate true' with 17 spaces, so all byte offsets in the file stay valid):
perl -M-encoding -0777pe "s!/Interpolate true!' 'x17!ge" <in.pdf >out.pdf
In a web application I am currently creating, users have to provide images that get stored server-side in a database. To minimize server load I am handling image resizing client-side, courtesy of the HTML5 canvas, and getting the user to pre-approve the quality of the resized image.
The issue I have run into is this: the file size of the resized image is big. If I resize the same image with Paint.NET I can get a perfectly decent, lightweight 8-bit PNG image. Even the 32-bit Paint.NET image is smaller than the one that turns up on the server via toDataURL. I tried playing around with the toDataURL quality parameter, but changing it has no effect whatsoever: exactly the same data size.
I should mention that I am testing with Chrome 20.0.1132.57 m and that the only browsers relevant to the app are the desktop versions of Chrome and Safari.
I know I could do some server-side image processing, but I want to avoid that if possible. Question: what, if anything, can I do to cut down on the image file size sent out from the browser?
Browsers may happily ignore any quality parameter given to toDataURL and the like; I don't believe honouring it is mandatory in the specification. More to the point, the quality argument only applies to lossy types such as image/jpeg; for the default image/png it is simply ignored, which is why changing it makes no difference to the data size.
The only way to control the quality exactly would be to:
Write your own PNG compressor in JS, or use an existing one such as https://github.com/imaya/CanvasTool.PngEncoder
Dump the <canvas> data to an ArrayBuffer
Pass this to a WebWorker
Let the WebWorker compress it using your PNG compressor library
I believe JPEG/PNG encoding and decoding solutions already exist in JavaScript.
Alternatively, you may try canvas.mozGetAsFile() / canvas.toBlob(), but I believe browsers still won't honour the quality parameters.
https://developer.mozilla.org/en/DOM/HTMLCanvasElement/
I'm using an image loader (DevIL) for image loading. I'm just wondering whether the image format (the uncompressed format in memory) loaded from files (.jpg, .png, .bmp, etc.) is determined by the image loading program itself, or is in some way contingent upon the actual image file.
All of the images I have looked at so far seem to be loaded in the RGBA / UNSIGNED_BYTE format. However, I am wondering if I can always rely on this. Is it conceivable that an image might actually be loaded in the RGBA / FLOAT format instead? (Note: I am hoping that the loaded image format will always be the same; I want to rely on it.)
I can't find any docs in DevIL that explain this point, so I'm hoping anyone experienced with imaging / image loading could give me an answer just based on their experience / common sense.
Thanks
I don't know DevIL, but nearly any imaging library is going to provide you with an image object that has some concept of pixel format. The pixel format tells you how the image is laid out in memory. Looking quickly at the docs, I see that IlTexImage has a property called Format, which can be one of IL_COLOUR_INDEX, IL_RGB, IL_RGBA, etc. The docs say: "The format of the image data. Formats accepted are listed here and are self-explanatory."
In other words, the in-memory format follows what is in the file rather than being fixed by the loader, so you can't rely on always getting RGBA / UNSIGNED_BYTE; if you need a guaranteed layout, you would presumably have to convert after loading (DevIL's ilConvertImage is meant for that).