Benchmark for using a base64 string instead of an image to reduce HTTP requests - performance

I am looking at replacing the source of my images, currently set to an image file in my CSS, with a base64 string. Instead of the browser needing to make several calls (one for the CSS file and then one for each image), base64 embedding means that all of the images are embedded within the CSS file itself.
So I am currently investigating introducing this. However, there is a known problem with this approach that I would like some advice on: in my tests, a base64-encoded image is somewhere around 150% of the size of the regular image file, which makes it unusable for large images. While I am not too concerned about larger images, I am not sure when I should and shouldn't use it.
Is there a benchmark I should use, as in: if the base64 version is more than 150% of the original size, I should not use it?
What are others' views on this, and what from your own experience may help with the decision of when to use it and when not to?
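For reference, embedding an image in CSS as described works by turning the file into a data URI. A minimal Python sketch (the function name, selector, and fake payload are illustrative assumptions):

```python
import base64

def css_data_uri(image_bytes, mime="image/png"):
    # Base64-encode the raw image bytes and wrap them in a CSS url() value.
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f'url("data:{mime};base64,{encoded}")'

# Example: embed a tiny fake payload in a CSS rule.
payload = b"\x89PNG\r\n\x1a\n fake image bytes"
print(".logo { background-image: %s; }" % css_data_uri(payload))
```

The resulting rule replaces the separate image request with inline data in the stylesheet itself.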

Base64 encoding always uses 4 output bytes for every 3 input bytes. It works by using essentially 6 bits of each output byte, mapped to characters that are safe to use. So you'll always see a consistent ~33% increase (the output is 133% of the input size) for anything you base64 encode, rounded up for the last chunk of 4 bytes. You can use gzip compression on your responses to gain some of this loss back.
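The 4-bytes-per-3-bytes expansion is easy to verify with a quick Python sketch:

```python
import base64

raw = bytes(range(256)) * 4          # 1024 bytes of sample data
encoded = base64.b64encode(raw)      # 4 output bytes per 3 input bytes

# 1024 / 3 = 341.33 chunks, rounded up to 342 chunks of 4 bytes = 1368 bytes
print(len(raw), len(encoded), len(encoded) / len(raw))
```

The ratio converges on 4/3 (about 1.33) regardless of the input content.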

This works in only a handful of browsers, so I would not recommend it, especially for mobile browsers.
Images get cached by the browser if you configure your web server properly, so images don't get downloaded over and over again; they come from the cache and are thus super fast. There are various easy performance configurations you can apply on your web server to make this work better than base64 encoding images into your CSS.
Take a look at this for some easy ways to boost website performance:
http://omaralzabir.com/making_best_use_of_cache_for_high_performance_website/

You are hopefully serving your HTML and CSS files gzipped. I tested this on a JPEG photo: I base64-encoded and gzipped it, and the result was pretty close to the original image file size. So no difference there.
If you're doing it right, you end up with fewer requests per page but approximately the same page size with base64 encoding.
The problem is with caching when you change something. Let's say you have 10 images embedded in a single CSS file. If you make any change to the CSS styles or to any single image, users need to download the whole CSS file, with all the embedded images, again. You really need to judge for yourself whether this works for your site.
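The gzip effect described above can be checked directly. A Python sketch, using random bytes as a stand-in for already-compressed JPEG data (the sizes are illustrative assumptions):

```python
import base64
import gzip
import os

raw = os.urandom(50_000)      # stands in for already-compressed image data
b64 = base64.b64encode(raw)   # ~33% larger than the raw bytes

# gzip squeezes the 64-symbol base64 alphabet back down to roughly 6 bits
# per byte, so the transferred size lands close to the original binary size.
gz = gzip.compress(b64, compresslevel=9)
print(len(raw), len(b64), len(gz))
```

The gzipped base64 ends up within a few percent of the original size, which matches the observation that the on-the-wire difference mostly disappears.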

Base64 encoding requires very close to 4/3 of the original number of bytes, so a fair amount less than 150%, more like 133%.
I can only suggest that you benchmark this yourself and see whether your particular needs are better satisfied with the more complex approach, or whether you're better served sticking with the norm.

Related

Image storage performance in React Native (base64 vs URI path)

I have an app to create reports with some data and images (min 1 image, max 6). These reports stay saved in my app until the user sends them to the API (which can be done on the same day the report is registered, or a week later).
But my question is: what's the proper way to store these images (I'm using Realm): saving the path (URI) or a base64 string? My current version keeps the base64 for these images (500-800 kB image size), and after my users send their reports to the API, I delete the base64 data.
I was developing a way to save the path to the image and then display it. But the URI returned by image-picker is temporary, so to do this I need to copy the file to another place and then save that path. Doing that, though, I end up (for around 2 or 3 days) with the images stored twice on the phone (using memory).
So before I develop all this, I was wondering: will copying the image to another path and saving the path be more performant than saving the base64 string (stored on the phone), or shouldn't it make much difference?
I try to avoid text-only answers; including code is best practice, but the question of storing images comes up frequently and it's not really covered in the documentation, so I thought it should be addressed at a high level.
Generally speaking, Realm is not a solution for storing blob-type data: images, PDFs, etc. There are a number of technical reasons for that, but most importantly, an image can go well beyond the capacity of a Realm field. Additionally, it can significantly impact performance (especially in a syncing use case).
If this is a local-only app, store the images on disk on the device and keep a reference to where they are stored (their path) in Realm. That will keep the app fast and responsive with a minimal footprint.
If this is a synced solution where you want to share images across devices or with other users, there are several cloud-based solutions to accommodate image storage; you can then store a URL to the image in Realm.
One option, part of the MongoDB family of products (which also includes MongoDB Realm), is GridFS. Another option, a solid product we've leveraged for years, is Firebase Cloud Storage.
Now that I've made those statements, I'll backtrack just a bit and refer you to the article Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps, which is a fantastic article about implementing Realm in a real-world application, and in particular about how to deal with images.
In that article, note that they do store the images in Realm for a short time. However, one thing left out of the article (which was revealed in a forum post) is that the images are compressed to ensure they don't exceed the Realm field size limit.
I am not totally on board with general use of that technique, but it works for that specific use case.
One more note: the image sizes mentioned in the question are pretty small (500-800 kB), which is a tiny amount of data that would really not have an impact, so storing them in Realm as a data object would work fine. The caveat is future expansion: if you decide later to store larger images, it would require a complete re-write of the code, so why not plan for that up front?
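The store-the-path approach can be sketched generically (Python is used here for brevity; the directory, file name scheme, and record shape are assumptions; in a React Native app the same flow would use the filesystem API plus a Realm object with a string path property):

```python
import os
import tempfile
import uuid

def persist_image(image_bytes, storage_dir):
    # Copy the (temporary) picker image into app-owned storage and
    # return the durable path to record in the database.
    os.makedirs(storage_dir, exist_ok=True)
    path = os.path.join(storage_dir, f"{uuid.uuid4().hex}.jpg")
    with open(path, "wb") as f:
        f.write(image_bytes)
    return path

# Hypothetical report record: only the path goes into the database.
storage = os.path.join(tempfile.gettempdir(), "report_images")
record = {"report_id": 1, "image_path": persist_image(b"fake jpeg bytes", storage)}
print(record["image_path"])
```

Once the report has been sent to the API, the file at `image_path` can be deleted, which avoids ever holding the bytes twice.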

Saving a PNG to a Redis Server

I'm trying to save a png generated by Canvas2Image to a Redis server and then display it again as an image.
I can't think of any way to do this and by searching Google I can't find any solution. Does anybody know how to do this?
This is for a website I'm making where anybody can draw on a canvas in real time.
Redis has a binary-safe protocol, and most standard instructions are fine with arbitrary binary data as both keys and values. There is no need to base64 (or otherwise) encode, as long as your library supports the binary-safe aspect. For example, with StackExchange.Redis (for .NET) you can pass a byte[] as the value to StringSet, and the result of StringGet can be cast to a byte[].
Then the only question becomes how to get the binary of the PNG, but that should just be standard IO.
It's possible to encode a PNG as a base64 string; Redis can then store it like any other string.
If you'd like users to be able to draw in real time on the same image, it might be more effective to maintain the image as an SVG and share the image via client to client web sockets.
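Both approaches above can be sketched in Python with the redis-py client (the key name and connection details are assumptions; the Redis calls are commented out so the snippet runs without a server):

```python
import base64

png_bytes = b"\x89PNG\r\n\x1a\n fake png payload"  # stand-in for Canvas2Image output

# Option 1: store raw bytes. redis-py is binary safe, so no encoding is needed.
# import redis
# r = redis.Redis()
# r.set("canvas:latest", png_bytes)
# assert r.get("canvas:latest") == png_bytes

# Option 2: base64-encode if the data must pass through a text-only layer
# (e.g. JSON over web sockets). The round trip is lossless:
as_text = base64.b64encode(png_bytes).decode("ascii")
restored = base64.b64decode(as_text)
print(restored == png_bytes)
```

Option 1 avoids the ~33% base64 size overhead, so prefer it when the client library allows raw bytes.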

Why does OpenURI treat files under 10kb in size as StringIO?

I fetch images with open-uri from a remote website and persist them on my local server within my Ruby on Rails application. Most of the images were shown without a problem, but some images just didn't show up.
After a very long debugging session I finally found out (thanks to this blog post) that the reason for this is that the class Buffer in the open-uri library treats files of less than 10 kB in size as IO objects instead of tempfiles.
I managed to get around this problem by following the answer from Micah Winkelspecht to this StackOverflow question, where I put the following code within a file in my initializers:
require 'open-uri'
# Don't allow downloaded files to be created as StringIO. Force a tempfile to be created.
OpenURI::Buffer.send :remove_const, 'StringMax' if OpenURI::Buffer.const_defined?('StringMax')
OpenURI::Buffer.const_set 'StringMax', 0
This works as expected so far, but I keep wondering why they put this code into the library in the first place. Does anybody know a specific reason why files under 10 kB in size get treated as StringIO?
Since the above code practically resets this behaviour globally for my entire application, I just want to make sure that I am not breaking anything else.
When doing network programming, you allocate a buffer of a reasonably large size and send and read units of data which will fit in the buffer. However, when dealing with files (or sometimes things called BLOBs), you cannot assume that the data will fit into your buffer, so you need special handling for these large streams of data.
(Sometimes the units of data which fit into the buffer are called packets. However, packets are really a layer-4 thing, like frames are at layer 2. Since this is happening at layer 7, they might better be called messages.)
For replies larger than 10 kB, the open-uri library sets up the extra overhead to write to a stream object. When the reply is under the StringMax size, it just includes the string in the message, since it knows it can fit in the buffer.
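Python's standard library uses the same pattern, which may make the design clearer: tempfile.SpooledTemporaryFile keeps small payloads in an in-memory buffer and only rolls over to a real file on disk once a size threshold is crossed, analogous to OpenURI's StringMax (the 10 kB threshold below mirrors OpenURI; `_rolled` is a CPython-internal attribute, used here only to observe the behavior):

```python
import tempfile

THRESHOLD = 10 * 1024  # mirror OpenURI's 10 kB StringMax

buf = tempfile.SpooledTemporaryFile(max_size=THRESHOLD)
buf.write(b"x" * 100)          # small write: stays in an in-memory buffer
small_in_memory = not buf._rolled

buf.write(b"y" * (20 * 1024))  # crosses the threshold: rolls over to a disk file
rolled_to_disk = buf._rolled

print(small_in_memory, rolled_to_disk)
```

The rationale is the same in both libraries: small responses avoid the cost of creating and cleaning up a temp file, while large ones avoid holding everything in memory.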

FPDF with lots of images, any way to speed it up?

I'm using FPDF with PHP and need to print an order manifest. This manifest will have up to 200-300 products with images. Creating it at this point is quite slow, and the images are stored on AmazonS3. Any idea if this could be sped up?
Right now, just with images of about 15x15 mm, it generates a file of about 16 MB and takes 3 1/2 to 4 minutes; without the images the file is only about 52 kB and comes up almost instantly.
Of course, it may just be downloading that many images about which there's not really much I can do.
I suggest you try img2pdf.
While this module offers far fewer options for interacting with PDFs compared with FPDF, if you are only interested in combining images into a PDF file, it is probably the best module you can use. It is fast, and it is easy to use.
Here is an example code:
import img2pdf

filename = "mypdf.pdf"
images = ["image1.jpg", "image2.jpg"]
with open(filename, "wb") as f:
    f.write(img2pdf.convert(images))
I used it to combine 400 images - it only took a second or so.
I found the extension I mentioned in my comment above:
http://fpdf.org/en/script/script76.php
This seems to reduce the time it takes a little for me; you may have better results, as your document is much larger than mine.

Fastest? ClientBundle vs plain URL images

Right now, a large application I'm working on downloads all small images separately, usually on demand: about 1000 images ranging from 20 bytes to 40 kB. I'm trying to figure out if there will be any client performance improvement from using a ClientBundle for the smaller, most-used ones.
I'm putting the 'many connections, high latency' issue aside for now and just concentrating on JavaScript/CSS/browser performance.
Some of the images are used directly within CSS. Are there any performance improvements from "spriting" them vs. using them as usual?
Some images are created as new Image(url). Is it better to leave them this way, move them into CSS and apply styles dynamically, or load them from a ClientBundle?
Some actions result in a setUrl on an image. I've seen that the same code can be done with ClientBundle, and it will probably set the data URI for that image. Will doing so improve performance, or is it faster the current way?
I'm specifically talking about runtime more than startup time, since this is an application which sees long usage times and all images will probably be cached in the first 10 minutes, so round-trip is not an issue (for now).
The short answer is: not really (for FF, Chrome, Safari, Opera), BUT sometimes for IE (<9)!
Let's look at what ClientBundle does.
ClientBundle packages every image into one ...bundle... so that all you need is one HTTP connection to get all of them, and it requires only one freshness lookup the next time you load your application (rather than n times, n being the number of your tiny images; really wasteful).
So it's clear that ClientBundle greatly improves your app's load time.
Runtime Performance
There may be times when one particular image fails to download or gets lost over the internet. If you make 1000 connections, the probability of something going wrong increases (however little). FF, Chrome, Safari and Opera simply show the image-not-found logo and move on. IE <9, however, will keep trying to fetch those particular images, using up one of the two connections it's allowed. That really impacts performance in IE.
Other than that, there will be some performance improvement if you keep loading new widgets asynchronously, so that they end up downloading images at a later stage.
Jai