Saving a PNG to a Redis Server

I'm trying to save a png generated by Canvas2Image to a Redis server and then display it again as an image.
I can't think of a way to do this, and searching Google hasn't turned up a solution. Does anybody know how to do this?
This is for a website I'm making where anybody can draw on a canvas in real time.

Redis has a binary safe protocol, and most standard instructions are fine with arbitrary binary data as both keys and values. There is no need to base-64 (or otherwise) encode, as long as your library supports the binary-safe aspect. For example, with StackExchange.Redis (for .NET) you can pass a byte[] as the value to StringSet, and the result of StringGet can be cast to a byte[].
Then the only question becomes: how to get the binary of the png; but that should just be standard IO.
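For illustration, here is a minimal sketch of that round trip in TypeScript/Node, assuming the ioredis client (any binary-safe client, including StackExchange.Redis, works the same way):

```typescript
// Minimal sketch: store and retrieve raw PNG bytes in Redis.
// Assumes the ioredis client; the key name is arbitrary.
import { readFile, writeFile } from "fs/promises";
import Redis from "ioredis";

async function roundTripPng(path: string): Promise<void> {
  const redis = new Redis(); // connects to localhost:6379 by default

  // Standard file IO gives us the raw binary of the PNG.
  const pngBytes = await readFile(path); // a Buffer; no base64 anywhere

  // SET/GET are binary safe, so the Buffer is stored as-is.
  await redis.set("drawing:latest", pngBytes);

  // getBuffer returns the value as a Buffer instead of a string.
  const stored = await redis.getBuffer("drawing:latest");
  if (stored) {
    await writeFile("roundtrip.png", stored); // byte-identical to the original
  }
  redis.disconnect();
}
```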

It's possible to encode a PNG as a base64 string, and Redis can then store that string like any other string.
If you'd like users to be able to draw on the same image in real time, it might be more effective to maintain the image as an SVG and share updates between clients via WebSockets.
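For what it's worth, the string Canvas2Image / canvas.toDataURL() hands you is a data URL, so there is a prefix to strip before storing just the base64 payload. A small sketch (TypeScript; purely illustrative helpers):

```typescript
// "data:image/png;base64,iVBORw0..." -> "iVBORw0..."
function dataUrlToBase64(dataUrl: string): string {
  const comma = dataUrl.indexOf(",");
  return dataUrl.slice(comma + 1); // drop the "data:image/png;base64," prefix
}

// To serve it as an image again, decode the stored string back to raw bytes.
function base64ToPngBytes(b64: string): Buffer {
  return Buffer.from(b64, "base64");
}
```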

Related

Is Gzip compressed binary data or uncompressed text safe to transmit over https, or should it be base 64 encoded as the final step before sending it?

My question is in the title; the rest of this post provides context to help you understand my confusion. Everything is sent over HTTPS.
My understanding of base 64 encoding is that it is a way of representing binary data as text, such that the text is safe to transmit across networks or the internet because it avoids anything that might be interpreted as a control code by the various possible protocols that might be involved at some point.
Given this understanding, I am confused why everything sent over the internet is not base 64 encoded. When is it safe not to base 64 encode something before sending it? I understand that not everything understands or expects to receive things in base 64, but my question is why doesn't everything expect and work with this if it is the only way to send data without the possibility it could be interpreted as control codes?
I am designing an Android app and server API such that the app can use the API to send data to the server. There are some potentially large SQLite database files the client will be sending to the server (I know this sounds strange; yes, it needs to send the entire database files). They are being gzipped prior to uploading. I know there is also a header that can be used to indicate this: Content-Encoding: gzip. Would it be safe to compress the data and send it with this header without base 64 encoding it? If not, why does such a header exist if it is not safe to use?

I mean, if you base 64 encode it first and then compress it, you undo the point of base 64 encoding, since it is no longer base 64 encoded at that point. If you compress it first and then base 64 encode it, that header would no longer be valid, as the body is not in the compressed format at that point. We actually don't want to use the header, because we want to save the files in a compressed state, and using the header will cause the server to decompress it prior to our API code running. I'm only asking this to further clarify why I am confused about whether it is safe to send gzip compressed data without base 64 encoding it.
My best guess is that it depends on whether what you are sending is binary data or not. If you are sending binary data, it should be base 64 encoded as the final step before uploading it; but if you are sending text data, you may not need to do this. However, it seems to me that this might still depend on the character encoding used. Perhaps some character encodings can result in sending data that could be interpreted as a control code? If this is true, which character encodings are safe to send without base 64 encoding them as the final step prior to sending? If I am correct about this, it implies you should only use that gzip header if you are sending compressed text that has not been base 64 encoded. Does compressing it create the possibility of something that could be interpreted as a control code?
I realize this was rather long, so I will repeat my primary questions (the title) here: Is either Gzip compressed binary data or uncompressed text safe to transmit, or should it be base 64 encoded as the final step before sending it? Okay, I lied; there is one more question involved in this. Would sending gzip compressed text always be safe without base 64 encoding it at the end, no matter which character encoding it had prior to compression?
My understanding of base 64 encoding is that it is a way of representing binary data as text,
Specifically, as text consisting of characters drawn from a 64-character set, plus a couple of additional characters serving special purposes.
such that the text is safe to transmit across networks or the internet because it avoids anything that might be interpreted as a control code by the various possible protocols that might be involved at some point.
That's a bit of an overstatement. For two endpoints to communicate with each other, they need to agree on one protocol. If another protocol becomes involved along the way, then it is the responsibility of the endpoints for that transmission to handle any needed encoding considerations for it.
What bytes and byte combinations can successfully be conveyed is a matter of the protocol in use, and there are plenty that handle binary data just fine.
At one time there was also an issue that some networks were not 8-bit clean, so that bytes with numeric values greater than 127 could not be conveyed across those networks, but that is not a practical concern today.
Given this understanding, I am confused why everything sent over the internet is not base 64 encoded.
Given that the understanding you expressed is seriously flawed, it is not surprising that you are confused.
When is it safe not to base 64 encode something before sending it?
It is not only safe but essential to avoid base 64 encoding when the recipient of the transmission expects something different. The two or more parties to a given transmission must agree about the protocol to be used. That establishes the acceptable parameters of the communication. Although Base 64 is an available option for part or all of a message, it is by no means the only one, nor is it necessarily the best one for binary data, much less for data that are textual to begin with.
I understand that not everything understands or expects to receive things in base 64, but my question is why doesn't everything expect and work with this if it is the only way to send data without the possibility it could be interpreted as control codes?
Because it is not by any means the only way to avoid data being misinterpreted.
They are being gzipped prior to uploading. I know there is also a header that can be used to indicate this: Content-Encoding: gzip. Would it be safe to compress the data and send it with this header without base 64 encoding it?
It would be expected to transfer such data without base-64 encoding it. HTTP(S) handles binary data just fine. The Content-Encoding header tells the recipient how to interpret the message body, and if it specifies a binary content coding (such as gzip) then binary data conforming to that coding is what the recipient will expect.
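As a sketch of what that looks like in practice (TypeScript/Node, using the built-in zlib and fetch; the endpoint URL is hypothetical):

```typescript
import { gzipSync } from "zlib";
import { readFileSync } from "fs";

// Sketch: upload a gzipped SQLite file as a raw binary body.
// There is no base64 step anywhere; HTTP(S) carries the bytes as-is.
async function uploadDatabase(path: string): Promise<void> {
  const gzipped = gzipSync(readFileSync(path));
  await fetch("https://api.example.com/upload", { // hypothetical endpoint
    method: "POST",
    headers: {
      "Content-Encoding": "gzip",                 // the body is gzip-compressed...
      "Content-Type": "application/octet-stream", // ...of opaque binary content
    },
    body: gzipped,
  });
}
```

If, as in the question, you want the server to keep the file compressed, you can omit Content-Encoding and instead declare the body itself as Content-Type: application/gzip, so that no middleware transparently decompresses it before your API code runs.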
My best guess is that it depends on whether what you are sending is binary data or not.
No. These days, for all practical intents and purposes, it depends only on what application-layer protocol you are using for the transmission. If it specifies that some or all of the message is to be base-64 encoded (according to a particular base-64 scheme, as there is more than one), then that's what the sender must do and how the receiver will interpret the message. If the protocol does not specify that, then the sender must not perform base-64 encoding. Some protocols afford the sender the option to make this choice, but those also provide a way for the sender to indicate inside the transmission what choice has been made.
Is either Gzip compressed binary data or uncompressed text safe to transmit, or should it be base 64 encoded as the final step before sending it?
Neither is inherently unsafe to transmit on today's networks. Whether data are base-64 encoded for transmission is a question of agreement between sender and receiver.
Okay, I lied; there is one more question involved in this. Would sending gzip compressed text always be safe without base 64 encoding it at the end, no matter which character encoding it had prior to compression?
The character encoding of the uncompressed text is not a factor in whether a gzipped version can be safely and successfully conveyed. But it probably matters for the receiver or anyone to whom they forward that data to understand the uncompressed text correctly. If you intend to accommodate multiple character encodings then you will want to provide a way to indicate which applies to each text.
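A small sketch of that point (TypeScript/Node): gzip operates on bytes and reproduces them exactly, whatever text encoding produced them; the receiver just has to be told which decoding to apply.

```typescript
import { gzipSync, gunzipSync } from "zlib";

// gzip round-trips bytes exactly, regardless of the text encoding used.
const text = "naïve café";
for (const encoding of ["utf8", "latin1"] as const) {
  const original = Buffer.from(text, encoding);
  const restored = gunzipSync(gzipSync(original));
  // The bytes come back identical; decoding with the right encoding
  // recovers the text, decoding with the wrong one garbles it.
  console.log(encoding, restored.equals(original), restored.toString(encoding));
}
```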

Images storage performance react native (base64 vs uri path)

I have an app to create reports with some data and images (min 1 image, max 6). These reports stay saved in my app until the user sends them to the API (which can be done the same day the report is registered, or a week later).
But my question is: what's the proper way to store these images (I'm using Realm)? Is it saving the path (URI) or a base64 string? My current version keeps the base64 for these images (500–800 KB image size), and then after my users send their reports to the API, I delete the base64 data.
I was developing a way to save the path to the image and then display it from there. But the URI returned by image-picker is temporary, so to do this I need to copy the file somewhere else and save that path. Doing that, though, I end up (for two or three days) with two copies of each image stored on the phone, using up storage.
So before I develop all this stuff, I was wondering: will it (copying the image to another path and saving the path) be more performant than saving the base64 string on the phone, or shouldn't it make much difference?
I try to avoid text-only answers, and including code is best practice, but the question about storing images comes up frequently and it's not really covered in the documentation, so I thought it should be addressed at a high level.
Generally speaking, Realm is not a solution for storing blob-type data: images, PDFs, etc. There are a number of technical reasons for that, but most importantly, an image can go well beyond the capacity of a Realm field. Additionally, it can significantly impact performance (especially in a syncing use case).
If this is a local-only app, store the images on disk on the device and keep a reference to where they are stored (their path) in Realm. That will enable the app to be fast and responsive with a minimal footprint.
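As a hedged sketch of that approach (assuming the react-native-fs package for the copy; the Realm object and field names here are hypothetical):

```typescript
import RNFS from "react-native-fs";
import Realm from "realm";

// Sketch: copy the picker's temporary image into permanent app storage,
// then persist only the permanent path in Realm.
async function persistImage(realm: Realm, tempUri: string): Promise<string> {
  const destPath = `${RNFS.DocumentDirectoryPath}/img_${Date.now()}.jpg`;
  await RNFS.copyFile(tempUri, destPath); // the temp file can now be discarded

  realm.write(() => {
    // "ReportImage" and its fields are hypothetical names for illustration.
    realm.create("ReportImage", { path: destPath, uploaded: false });
  });
  return destPath;
}
```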
If this is a synced solution where you want to share images across devices or with other users, there are several cloud-based solutions to accommodate image storage; you then store a URL to the image in Realm.
One option is part of the MongoDB family of products (which also includes MongoDB Realm) called GridFS. Another option, a solid product we've leveraged for years, is Firebase Cloud Storage.
Now that I've made those statements, I'll backtrack just a bit and refer you to Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps, a fantastic article about implementing Realm in a real-world application and, in particular, how to deal with images.
In that article, note they do store the images in Realm for a short time. However, one thing they left out of that (which was revealed in a forum post) is that the images are compressed to ensure they don't go above the Realm field size limit.
I am not totally on board with general use of that technique but it works for that specific use case.
One more note: the image sizes mentioned in the question are pretty small (500–800 KB), which is a tiny amount of data that would really not have an impact, so storing them in Realm as a data object would work fine. The caveat is future expansion: if you decide later to store larger images, it would require a complete re-write of the code, so why not plan for that up front?
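For that small-image case, a hedged sketch of what the schema could look like in Realm JS (names hypothetical); the optional 'data' type holds binary directly, while the path field leaves the door open for on-disk storage later:

```typescript
import Realm from "realm";

// Hypothetical schema: small images inline as binary, larger ones by path.
const ReportImageSchema: Realm.ObjectSchema = {
  name: "ReportImage",
  properties: {
    path: "string?",  // file path for images kept on disk
    bytes: "data?",   // optional inline binary for small, compressed images
    createdAt: "date",
  },
};
```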

Design of the Protobuf binary format: performance and varint

I need to design a binary format to save data from a scientific application. This data has to be encoded in a binary format that can't be easily read by any other application (it is a requirement from some of our clients). As a consequence, we decided to build our own binary format, its encoder, and its decoder.
We got some inspiration from many binary formats, including protobuf. One thing that puzzles me is the way protobuf encodes the length of embedded messages. According to https://developers.google.com/protocol-buffers/docs/encoding, the size of an embedded message is encoded at its very beginning as a varint.
But before we encode an embedded message, we don't yet know its size (think, for instance, of an embedded message that contains many integers encoded as varints). As a consequence, we need to encode the message entirely in memory before we write it to disk, so that we know its size.
Imagine that this message is huge. As a consequence, it is very difficult to encode it in an efficient way. We could encode this size as a full int and seek back to this part of the file once the embedded message is written, but we would lose the nice property of varints: you don't need to specify whether you have a 32-bit or a 64-bit integer. So, going back to Google's implementation using a varint:
Is there an implementation trick I am missing, or is this scheme likely to be inefficient for large messages?
Yes, the correct way to do this is to write the message first, at the back of the buffer, and then prepend the size. With proper buffer management, you can write the message in reverse.
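A minimal sketch of that trick (TypeScript, purely illustrative; this is not protobuf's actual implementation): because the buffer is filled from the end, the message's length is already known when the varint is written directly in front of it, with no copying or seeking.

```typescript
// Encode `value` as a varint ending just before `end`; return its start index.
function writeVarintBefore(buf: Uint8Array, end: number, value: number): number {
  const bytes: number[] = [];
  do {
    bytes.push(value & 0x7f);
    value >>>= 7;
  } while (value !== 0);
  // All bytes except the last carry a continuation bit.
  for (let i = 0; i < bytes.length - 1; i++) bytes[i] |= 0x80;
  const start = end - bytes.length;
  buf.set(bytes, start);
  return start;
}

// A payload already written backwards at buf[payloadStart .. payloadEnd)
// gets its length prefix in one call, landing immediately before it.
function prependLength(buf: Uint8Array, payloadStart: number, payloadEnd: number): number {
  return writeVarintBefore(buf, payloadStart, payloadEnd - payloadStart);
}
```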
That said, why write your own message format when you can just use protobuf? It would be better to use protobuf directly and encrypt the file. That would be easy for you to use, and still hard for other applications to read.
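A hedged sketch of that encrypt-the-container idea (Node's built-in crypto with AES-256-GCM; key management is deliberately out of scope, and `serialized` is whatever your generated protobuf code produces):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Sketch: make a serialized protobuf message unreadable to other
// applications by encrypting the whole byte blob.
function encryptMessage(serialized: Buffer, key: Buffer): Buffer {
  const iv = randomBytes(12); // 96-bit nonce, conventional for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const body = Buffer.concat([cipher.update(serialized), cipher.final()]);
  // Self-chosen file layout: [iv | 16-byte auth tag | ciphertext].
  return Buffer.concat([iv, cipher.getAuthTag(), body]);
}

function decryptMessage(blob: Buffer, key: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // authenticates as well as decrypts
  return Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
}
```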

QUdpSocket send and receive an image

I want to grab the PC screen. I use QPixmap::grab, and I get a QPixmap. Then I want to send this image using QUdpSocket. The image has already been converted to binary.
The demo at http://www.java2s.com/Code/Cpp/Qt/Udpserver.htm can send and receive an image, but it does so pixel by pixel; I want to send all of the binary data every 250 ms.
If you want to send the whole image in one go, you could try using QDataStream for serialization of a QByteArray.
The problem with this is that a UDP datagram has a limited size: a large datagram gets fragmented at the IP layer, and while that may work on your LAN, fragments are more likely to be dropped over the internet, losing the whole datagram. If you instead split the image across multiple datagrams, UDP doesn't provide ordering guarantees like TCP does, so without something like the QDataStream header the pieces could arrive in the wrong order. This is probably why your linked example only sends a single line at a time.
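For illustration, here is a hedged sketch of the usual workaround in TypeScript/Node rather than Qt (a Qt version would use QUdpSocket::writeDatagram the same way): split the image into datagrams comfortably under typical MTUs and tag each with a frame id and chunk index so the receiver can reassemble out-of-order pieces.

```typescript
import dgram from "dgram";

// Sketch: send one captured frame as many small UDP datagrams.
// 1200-byte chunks stay safely under common MTU limits.
const CHUNK = 1200;

function sendImage(image: Buffer, frameId: number, host: string, port: number): void {
  const sock = dgram.createSocket("udp4");
  const total = Math.ceil(image.length / CHUNK);
  for (let i = 0; i < total; i++) {
    const header = Buffer.alloc(12);
    header.writeUInt32BE(frameId, 0); // which 250 ms frame this belongs to
    header.writeUInt32BE(i, 4);       // chunk index within the frame
    header.writeUInt32BE(total, 8);   // chunk count, so the receiver knows when it's done
    const payload = image.subarray(i * CHUNK, (i + 1) * CHUNK);
    sock.send(Buffer.concat([header, payload]), port, host);
  }
  // Note: UDP gives no delivery guarantee; a lost chunk means a lost frame.
  // (Socket cleanup omitted for brevity.)
}
```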
You may want to read a comparison of TCP and UDP and evaluate which fits your needs better.

benchmark for using base64 string instead of image reducing http requests

I am looking at replacing the source of my images, currently set to an image file in my CSS, with a base64 string. Instead of the browser needing to make several calls, one for the CSS file and then one for each image, base64 embedding means that all of the images are embedded within the CSS file itself.
So I am currently investigating introducing this. However, I have an issue I would like some advice on, a known problem with this approach: in my tests, a base64-encoded image is somewhere around 150% of the size of a regular image, which makes it unusable for large images. While I am not too concerned about larger images, I am not sure when I should and shouldn't use it.
Is there a benchmark I should use, as in: if the base64 version is more than 150% of the original size, I should not use it, etc.?
What are others views on this and what from your own experiences may help with the decision of when to and not to use it?
Base64 encoding always uses 4 output bytes for every 3 input bytes. It works by using essentially 6 bits of each output byte, mapped to characters that are safe to use. So anything you base64 encode consistently grows to about 133% of its original size, rounded up for the last chunk of 4 bytes. You can use gzip compression of your responses to gain some of this loss back.
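A quick sketch that measures both the expansion and how much of it gzip claws back (TypeScript/Node; the file name is a placeholder and the exact numbers will vary with the image):

```typescript
import { gzipSync } from "zlib";
import { readFileSync } from "fs";

const image = readFileSync("photo.png"); // placeholder file name

const b64 = image.toString("base64");
console.log("original bytes:", image.length);
console.log("base64 bytes:  ", b64.length);           // ~133% of the original
console.log("gzipped base64:", gzipSync(b64).length); // recovers much of the overhead
```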
This works in only a handful of browsers, so I would not recommend it, especially for mobile browsers.
Images get cached by the browser if you configure your web server properly, so they don't get downloaded over and over again; they come from the cache and are thus super fast. There are various easy performance configurations you can apply on your web server to get this benefit without base64 encoding images into your CSS.
Take a look at this for some easy ways to boost website performance:
http://omaralzabir.com/making_best_use_of_cache_for_high_performance_website/
You are hopefully serving your HTML and CSS files gzipped. I tested this on a JPEG photo: I base64 encoded and gzipped it and the result was pretty close to the original image file size. So no difference there.
If you're doing it right, you end up with less requests per page but with approximately the same page size with base64 encoding.
The problem is with caching when you change something. Let's say you have 10 images embedded in a single CSS file. If you make any change to the CSS styles or to any single image, users need to download the whole CSS file, with all the embedded images, again. You really need to judge for yourself whether this works for your site.
Base64 encoding requires very close to 4/3 of the original number of bytes, so a fair amount less than 150%, more like 133%.
I can only suggest that you benchmark this yourself and see whether your particular needs are better satisfied with the more complex approach, or whether you're better served sticking with the norm.
