Why does this JPG render differently with each refresh?

A little background: a coworker was creating some "glitch-art" using this link. He deleted some bytes from a JPEG image (roughly the process sketched below) and created this result:
http://jmelvnsn.com/prince_fielder.jpg
The thing that's blowing my mind here is that Chrome renders this image differently on each refresh. I don't understand how the image-rendering code can be non-deterministic. What's going on?
EDIT: I really wish Stack Overflow would stop redirecting my URL to their imgur URL.
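For reference, the corruption itself is trivial to reproduce. A minimal sketch in Python (the file names are hypothetical; any JPEG will do):

    # Copy a JPEG and drop a few bytes somewhere past the header, the same kind
    # of damage described above.
    with open("original.jpg", "rb") as f:
        data = bytearray(f.read())

    cut = len(data) // 2          # well past the JFIF/EXIF header
    del data[cut:cut + 5]         # delete a handful of entropy-coded bytes

    with open("glitched.jpg", "wb") as f:
        f.write(data)

The file still starts with a valid JPEG header, so decoders will try to render it and have to improvise once they hit the damaged data.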

Actually, it's interesting to know that the JPEG standard is not so much a standard about imaging techniques or imaging algorithms; it's more like a standard about a container.
As far as I know, as long as you respect the JPEG standard you can decode/encode a JPEG with any number of different techniques and algorithms. That's why it's hard to support JPEG/JPG: from a programmer's perspective a JPG can be a million things, and it's really hard to handle that kind of fragmentation, so often you are forced to simply jump on the train offered by some library and hope that your users won't run into trouble with it.
There is no single mandated way to encode or decode a JPEG image/file (including how a decoder should recover from the kind of damaged data in your example), so the apparently "weird" result offered by your browser is entirely normal.
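To see how much implementations differ, you can feed the same damaged file to two unrelated decoders and compare what they recover. A rough sketch, assuming Pillow and OpenCV are installed and using the hypothetical glitched.jpg from the question:

    import hashlib

    import cv2
    import numpy as np
    from PIL import Image, ImageFile

    ImageFile.LOAD_TRUNCATED_IMAGES = True  # let Pillow return whatever it salvaged

    def digest(pixels):
        return hashlib.sha1(pixels.tobytes()).hexdigest()[:12]

    try:
        pil_pixels = np.asarray(Image.open("glitched.jpg").convert("RGB"))
        print("Pillow:", digest(pil_pixels))
    except OSError as exc:
        print("Pillow refused the file:", exc)

    cv_pixels = cv2.imdecode(np.fromfile("glitched.jpg", dtype=np.uint8), cv2.IMREAD_COLOR)
    print("OpenCV:", "refused the file" if cv_pixels is None else digest(cv_pixels))

Each of these libraries is deterministic on its own, but they rarely agree with each other on a damaged file, which is the fragmentation described above; refresh-to-refresh differences inside one browser presumably come down to how that browser's own decoder happens to recover (for example, how the data arrives in chunks) on a given run.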

Related

Google Page Insights: did anybody ever optimize images with lossless compression by the amount Google suggests?

I ran my page through Google Page Insights and got the suggestion to optimize my page's images. There are many JPEG photos on it. Google Page Insights suggests that for some of them there could be savings of up to 50% (for some even over 60%) by using lossless compression. I am not sure which technology Google bases these calculations on. Has anybody had luck matching these numbers with real (lossless) compression results?
I have already done some research and tried out the "WP Smush.it" and "EWWW Image Optimizer" plugins, both with very similar results that are nowhere near what Google suggests should be possible. In fact, for JPEG images where Google Page Insights claimed around 53% file-size savings were possible with lossless compression, these tools only managed to save about 3%. I tried this with several of the images, so I would be very interested to hear whether somebody else has run into the same problem. How does Google Page Insights do these calculations? Is it just an educated guess, or are they using compression algorithms that we mere humans are not allowed to know about? ;)
Hi, I had the same problem. I managed to get within 3% of what Google reports with a Windows program named "RIOT". I'm still looking for a tool that reproduces Google Insights' results exactly, ideally a solution that doesn't involve Linux or being a programmer.
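One commonly tried lossless pass is re-encoding with jpegtran (optimized Huffman tables, progressive scan, metadata stripped); whether that matches Google's numbers varies. A sketch, assuming jpegtran from libjpeg/MozJPEG is on the PATH and with hypothetical file names:

    import os
    import subprocess

    src, dst = "photo.jpg", "photo.opt.jpg"
    subprocess.run(
        ["jpegtran",
         "-copy", "none",              # strip metadata (EXIF, embedded thumbnails)
         "-optimize", "-progressive",  # optimized Huffman tables, progressive scan
         "-outfile", dst, src],
        check=True,
    )
    print(os.path.getsize(src), "->", os.path.getsize(dst), "bytes")

Comparing its output size against Google's estimate at least tells you whether the remaining gap is metadata, Huffman coding, or something only a lossy re-encode would recover.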

Pseudocode (or code) for main compression algorithms

I'm really interested in image and video compression, but it's hard for me to find a primary source to start implementing the major algorithms from.
What I want is a source of information from which to begin implementing my own codec. I want to implement it from scratch (for example, for JPEG, write my own Huffman coding, discrete cosine transform, ...). All I need is a little step-by-step guide showing which steps are involved in each algorithm.
I'm interested mainly on image compression algorithms (by now, JPEG) and video compression algorithms (MPEG-4, M-JPEG, and maybe AVI and MP4).
Can anyone suggest an online source with a little more information than Wikipedia? (I checked it, but the information is not really comprehensive.)
Thank you so much :)
Start with JPEG. You'll need the JPEG standard. It will take a while to go through, but that's the only way to have a shot at writing something compatible. Even then, the standard won't help much with deciding how, and by how much, to quantize the coefficients; that requires experimentation with images.
Once you get that working, then get the H.264 standard and read that.
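If it helps to see the core of the JPEG pipeline in code, here is a deliberately naive sketch of the two steps mentioned above, the 8x8 DCT and the quantization that follows it (pure NumPy, no attempt at speed; the table is the widely cited luminance quantization table, which real encoders scale with a quality setting):

    import numpy as np

    N = 8

    def dct2(block):
        # Naive 2-D DCT-II of one 8x8 block.
        out = np.zeros((N, N))
        for u in range(N):
            for v in range(N):
                au = np.sqrt(1.0 / N) if u == 0 else np.sqrt(2.0 / N)
                av = np.sqrt(1.0 / N) if v == 0 else np.sqrt(2.0 / N)
                s = 0.0
                for x in range(N):
                    for y in range(N):
                        s += (block[x, y]
                              * np.cos((2 * x + 1) * u * np.pi / (2 * N))
                              * np.cos((2 * y + 1) * v * np.pi / (2 * N)))
                out[u, v] = au * av * s
        return out

    # Commonly used luminance quantization table (JPEG spec, Annex K).
    Q_LUMA = np.array([
        [16, 11, 10, 16,  24,  40,  51,  61],
        [12, 12, 14, 19,  26,  58,  60,  55],
        [14, 13, 16, 24,  40,  57,  69,  56],
        [14, 17, 22, 29,  51,  87,  80,  62],
        [18, 22, 37, 56,  68, 109, 103,  77],
        [24, 35, 55, 64,  81, 104, 113,  92],
        [49, 64, 78, 87, 103, 121, 120, 101],
        [72, 92, 95, 98, 112, 100, 103,  99],
    ])

    block = np.tile(np.arange(N, dtype=float) * 16.0, (N, 1)) - 64.0  # a smooth ramp
    quantized = np.round(dct2(block) / Q_LUMA).astype(int)
    print(quantized)  # energy concentrates in a few low-frequency coefficients

After this come zig-zag ordering of the coefficients, run-length coding of the zeros, and Huffman coding of what is left; the standard spells those out precisely, while the quantization table and quality scaling are where the experimentation mentioned above happens.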
The ImpulseAdventure site has a fantastic series of articles on the basics of JPEG encoding.
I'm working on an experimental JPEG encoder that's partly designed to be readable and easy to change (rather than obfuscated by performance optimizations).

Why worry about minifying JS and CSS when images are typically the largest sized HTTP request?

I think my question says it all. With many successful sites using lots of user-submitted images (e.g., Instagram), it just seems, in my perhaps naive opinion, that source-code minification might be missing the boat.
Regardless of the devices one is coding for (mobile/desktop/both), wouldn't developers' time be better spent worrying about the images they are serving up rather than their code sizes?
To optimize for mobile browsers' slower connections, I was thinking it would probably be best to have multiple image sizes and write code to serve up the smallest one if the user is on a phone.
Does that sound reasonable?
First, I don't know that many developers do "worry" about minification of scripts. Plenty of public-facing websites don't bother.
Second, minification is the removal of unnecessary whitespace, whereas decreasing the size of an image usually entails reducing its quality, so there is some difference.
Third, I believe that if it weren't so easy to add a minification step to a deployment process, it would be even less popular. True, it doesn't save much bandwidth, but if all it takes is a few minutes to configure a deployment script, why not?
100 KB for a medium-sized JS library is no lightweight download. You should optimize your site as well as you reasonably can. If minifying a script cuts it to half its size, and gzipping it saves another third on top of that, why wouldn't you?
I don't know how you do your minifying, but there are many tools that automate the process and will often bundle all of your JS into one package on the fly. This saves bandwidth and hassle.
My philosophy is always "use what you need". If it is possible to save bandwidth for you and your users without compromising anything, then do it. If you can compress your images way down and they still look good, then do it. Likewise, if you have some feature that your web application absolutely must have, and it takes quite a bit of space, use it anyway.
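If you want to put numbers on the minify-plus-gzip argument, it only takes a few lines to measure. A sketch with hypothetical file names, where app.min.js is whatever your build step's minifier produced:

    import gzip
    from pathlib import Path

    for name in ("app.js", "app.min.js"):
        raw = Path(name).read_bytes()
        packed = gzip.compress(raw, compresslevel=9)
        print(f"{name}: {len(raw):,} bytes raw, {len(packed):,} bytes gzipped")

Running this on your own bundles shows quickly whether minification is worth a step in your deployment script, and the same comparison works for images if you substitute the optimized files.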

Quick, multi-OS, command line conversion of JPEG-2000 to JPEG

I am working on a web script that handles image processing using ImageMagick. It takes relevant parameters, executes an ImageMagick command at the command line or shell depending on OS, and passes the raw image data back to the script. The language of the web script is obviously not pertinent.
Simple use cases include:
convert -resize 750 H:/221136.png - which just resizes the input image to a width of 750 and outputs the raw data to the console (the trailing dash). More complex use cases involve rotating, resizing, cropping/panning, and drawing.
The script works great and is quite fast for PNG, GIF, and JPEG inputs, even at fairly large resolutions (4000x5000). Unfortunately my input data also includes JPEG-2000. A 10-15 megabyte JPEG-2000 takes a truly insane amount of time for ImageMagick to process, on the order of 10-15 seconds. That is not suitable for live, on-the-fly conversion.
I know quick conversion of JPEG-2000 to JPEG for web output is possible, because a piece of enterprise software I work with does it more or less on the fly. I'm not sure which library they use; the DLL/.so they ship is DL80JP2KLib.dll/.so. Looking it up, it seems a company called DataLogic makes it, but they don't have any obviously relevant products on their site.
Ideally I'm looking for a solution (plug-in?) that would either enable ImageMagick to convert these high resolution JPEG-2000 images on-the-fly like it does with PNG, GIF, or JPEG... or a separate command utility that I can run in advance of ImageMagick to convert the JPEG-2000 to an intermediate format that ImageMagick can process quickly.
The servers that will run this script have 32 GB of RAM and beefy processors. Assume that conversion speed matters more than resource efficiency. Assume also that while I need some semblance of quality, a degree of lossiness is acceptable. Licensing requirements and/or price are not important, except that I need to be able to test it myself for speed on a few sample files before we buy. The ideal solution is also (relatively) OS-independent.
I tried an application from Kakadu Software and it's fairly quick, on the order of 3-4 seconds, but that's still not fast enough. If it's not possible to get below, say, one second, I will look at batch-converting files in advance.
I have uploaded a representative file (JPEG-2000, ~8MB) to MediaFire:
http://www.mediafire.com/?yxv0j6vdwx0k996
I found ExactImage to be much faster in the past.
http://www.exactcode.de/site/open_source/exactimage/
Mark Tyler (the original author of mtPaint) once split the excellent graphics-handling parts out into a separate library, mtPixel (since abandoned as a separate project, but included in mtCellEdit at its Google Code home).
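Another route worth timing before ruling anything out: decode through Pillow's OpenJPEG bindings and hand the result straight to a plain JPEG save. A sketch only, assuming Pillow was built with OpenJPEG support and using a hypothetical input path:

    import time

    from PIL import Image

    start = time.perf_counter()
    img = Image.open("221136.jp2")
    img.load()                                   # force the actual JPEG-2000 decode
    img.convert("RGB").save("221136.jpg", quality=85)
    print(f"converted in {time.perf_counter() - start:.2f}s")

Whether it beats ImageMagick depends entirely on the OpenJPEG build, so benchmark it on the MediaFire sample above; the standalone opj_decompress tool from the OpenJPEG project is another candidate for the same test.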

If I wanted to build a system that tracked changes in an image, where would I start?

Say I wanted to build a system that functions like git, but for images - where would I start?
For instance, say I wanted to just have 1 image (the original) stored on disk + the diff. When the second image needs to be viewed, I rebuild it based on the original + the diff (that way I don't store two images on disk at the same time).
Can I do that in Ruby and where would I start?
If anyone can provide a nice overview, I would appreciate it, or even some links on where to get started.
Thanks.
P.S. Assume that I have a solid grasp of Ruby (or can learn). Are there other languages I would need to know? If so, which would work best, assuming I want my solution to be OS-agnostic and to work seamlessly on at least Windows and Mac?
Take a look at Version Control for Graphics. I would start by looking at the source code of the projects mentioned there and learning from them. The issue is that some formats shift bytes around even after a small change to the image, which is less than ideal for a VCS: even though you might still have essentially the same image, the program sees a 90 percent change and stores useless data.
The first question that comes to my mind is: will the image's size change in the future (or will the image change in some other significant way)? If not, you could just track the colour of each pixel.
If the image is going to change size, you should plan for a more complex scheme that behaves differently.
Searching the internet I also found this library; it could be useful for manipulating images and/or getting information out of them.
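The core idea is language-agnostic, so here it is sketched in Python (in Ruby you would reach for a gem such as chunky_png or rmagick): keep the original on disk plus a per-pixel delta, and rebuild the second version on demand. It assumes both versions have the same dimensions; file names are hypothetical.

    import numpy as np
    from PIL import Image

    original = np.asarray(Image.open("v1.png").convert("RGB"), dtype=np.int16)
    edited = np.asarray(Image.open("v2.png").convert("RGB"), dtype=np.int16)

    diff = edited - original                       # mostly zeros if the edit was local
    np.savez_compressed("v2.diff.npz", diff=diff)  # the only extra thing kept on disk

    # Later: rebuild the second image from the original plus the stored diff.
    restored = original + np.load("v2.diff.npz")["diff"]
    assert np.array_equal(restored, edited)
    Image.fromarray(restored.astype(np.uint8)).save("v2_rebuilt.png")

Note this works on decoded pixels rather than on the encoded files, which sidesteps the byte-shifting problem mentioned in the first answer, at the cost that the round trip is exact only for formats you can decode and re-encode losslessly (PNG yes, JPEG no).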

Resources