Loading PNG images but using them as COMPRESSED_RGBA_S3TC_DXT5_EXT in WebGL? - opengl-es

I'm trying to load images in WebGL, and then uploading them to the GPU. I'd like to use a compressed texture format, even though the original images are uncompressed/lossless.
To upload, this is what I'm doing:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureSource);
In the above code, textureSource is a loaded Image (say, "texture.png").
It all works well, but I'd like to load WEBGL_compressed_texture_s3tc formats (COMPRESSED_RGB_S3TC_DXT1_EXT) to store the image in a compressed fashion.
I make sure that the extension is available and enabled...
var ext = gl.getExtension("WEBGL_compressed_texture_s3tc");
var fmt = ext.COMPRESSED_RGBA_S3TC_DXT5_EXT;
console.log(fmt); // 33779
But then I can't use it as a format. Using texImage2D() doesn't work:
gl.texImage2D(gl.TEXTURE_2D, 0, fmt, fmt, gl.UNSIGNED_BYTE, textureSource);
// WebGL: INVALID_ENUM: texImage2D: invalid texture format
// [.WebGLRenderingContext]RENDER WARNING: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'
The expected method is compressedTexImage2D(), but that's not very helpful either:
gl.compressedTexImage2D(gl.TEXTURE_2D, 0, fmt, 256, 256, 0, texture.source);
// Uncaught TypeError: Failed to execute 'compressedTexImage2D' on 'WebGLRenderingContext': parameter 7 is not of type 'ArrayBufferView'.
This is obviously because compressedTexImage2D() expects a Uint8Array (an ArrayBufferView) with the actual DDS/DXT data, not a JavaScript Image like the one I'm passing.
The obvious solution is to upload files in their native DDS format, i.e. files that were compressed somewhere else. But that's what I'm trying to avoid: in my current workflow it makes more sense to keep the images in their original format rather than pre-compress them (or keep duplicates).
My question, then, is this: can I still load the original PNG images and have them uploaded to the GPU in a compressed format? In other words, can I compress textures to their DXT1/5 formats on the fly?
I'm a little bit constrained by video memory in what I'm doing, so any saving would be great. I managed to minimize the space used by the textures by using UNSIGNED_SHORT_4_4_4_4 and the other data types, which is a good start, but I'd like to try using the native compression too.
I haven't found much documentation on the topic, nor relevant code in other popular libraries (Three.js, Pixi, etc.), which leads me to believe that my request is super stupid, but I'd like to understand why. This page hints at licensing issues, which might be why WebGL doesn't offer a built-in way to compress the image, nor browser-side support for passing image objects.

can I still load the original PNG images and have them uploaded to the GPU in a compressed format? In other words, can I compress textures to their DXT1/5 formats on the fly?
As far as I am informed: NO.
I only work on desktop and embedded GL, but even there, there is no way to compress textures on the fly without dedicated code or a library.
(And, well, those DXT formats are not very good either if your textures are very detailed or have lots of different colors within small areas. Most likely you're better off just using smaller textures: DXT1 compresses to 1/8th and DXT5 to 1/4 of the original 32-bit size (the latter being about the same saving as halving the texture's resolution).)
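For scale, here is a quick sketch of that size math for a 1024x1024 texture (the format names are plain labels for this calculation, not GL enums):

```javascript
// Rough VRAM footprint per format: RGBA8 = 4 bytes/px, RGBA4444 = 2 bytes/px,
// DXT1 = 8 bytes per 4x4 block (0.5 byte/px), DXT5 = 16 bytes per 4x4 block (1 byte/px).
function textureBytes(width, height, format) {
  switch (format) {
    case "RGBA8":    return width * height * 4;
    case "RGBA4444": return width * height * 2; // UNSIGNED_SHORT_4_4_4_4
    case "DXT1":     return Math.ceil(width / 4) * Math.ceil(height / 4) * 8;
    case "DXT5":     return Math.ceil(width / 4) * Math.ceil(height / 4) * 16;
  }
}

console.log(textureBytes(1024, 1024, "RGBA8")); // 4194304 (4 MiB)
console.log(textureBytes(1024, 1024, "DXT1"));  // 524288  (1/8th)
console.log(textureBytes(1024, 1024, "DXT5"));  // 1048576 (1/4)
```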

Theoretically I think you could compress the PNGs to DXT with Javascript. I guess that'd be just mad though :)
Better to encode beforehand with native code. An option for supporting PNG content is to have an asset proxy that does the conversion on the fly on the server side (the partner company that hosts http://www.meshmoon.com/ does exactly that).
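For reference, here is a sketch of what a deliberately naive in-browser DXT1 block encoder could look like. The endpoint selection (per-channel min/max) is crude; real encoders like squish or crunch do far better, which is part of why doing this in JS is "mad". It only illustrates the 8-byte block layout:

```javascript
// Pack 8-bit RGB into the RGB565 format DXT1 endpoints use.
function rgb565(r, g, b) {
  return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
}

// Encode one 4x4 block of RGBA8 pixels (64 bytes) into an 8-byte DXT1 block:
// two RGB565 endpoint colors followed by sixteen 2-bit palette indices.
function encodeDxt1Block(rgba /* Uint8Array(64), 16 pixels row-major */) {
  // Crude endpoints: per-channel min and max over the block.
  var min = [255, 255, 255], max = [0, 0, 0];
  for (var p = 0; p < 16; p++) {
    for (var c = 0; c < 3; c++) {
      var v = rgba[p * 4 + c];
      if (v < min[c]) min[c] = v;
      if (v > max[c]) max[c] = v;
    }
  }
  // c0 must be >= c1 to select DXT1's opaque 4-color mode.
  if (rgb565(max[0], max[1], max[2]) < rgb565(min[0], min[1], min[2])) {
    var t = max; max = min; min = t;
  }
  var c0 = rgb565(max[0], max[1], max[2]);
  var c1 = rgb565(min[0], min[1], min[2]);
  // 4-entry palette: the endpoints plus two interpolated colors.
  var palette = [max, min,
    [0, 1, 2].map(function (i) { return Math.round((2 * max[i] + min[i]) / 3); }),
    [0, 1, 2].map(function (i) { return Math.round((max[i] + 2 * min[i]) / 3); })];
  var out = new Uint8Array(8);
  out[0] = c0 & 0xff; out[1] = c0 >> 8; // endpoints are little-endian uint16s
  out[2] = c1 & 0xff; out[3] = c1 >> 8;
  for (var p2 = 0; p2 < 16; p2++) {     // nearest palette entry per pixel
    var best = 0, bestErr = Infinity;
    for (var i2 = 0; i2 < 4; i2++) {
      var err = 0;
      for (var c2 = 0; c2 < 3; c2++) {
        var d = rgba[p2 * 4 + c2] - palette[i2][c2];
        err += d * d;
      }
      if (err < bestErr) { bestErr = err; best = i2; }
    }
    out[4 + (p2 >> 2)] |= best << ((p2 & 3) * 2); // 2 bits per pixel, LSB first
  }
  return out;
}
```

The blocks for a whole image would then be packed left-to-right, top-to-bottom into one Uint8Array and handed to gl.compressedTexImage2D(gl.TEXTURE_2D, 0, fmt, width, height, 0, data).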

Related

Texture Quality - World of Warcraft Vanilla

Textures. This is something that has been bothering me since I started messing with them. More specifically, their quality in World of Warcraft Vanilla.
I searched and researched it over and over again, tried it in so many ways but it's just never perfect.
The game only accepts the BLP and TGA file formats, and I never get them to look as good as PNG. I'm not sure if the issue is the converters I use (I have tried multiple ones) or if it's actually impossible.
Here's an example:
lua code
local Frame = CreateFrame("Frame")
Frame:SetPoint("CENTER", 0, 0)
Frame:SetWidth(82)
Frame:SetHeight(82)
local Texture = Frame:CreateTexture()
Texture:SetAllPoints()
-- the frame is 82px and the texture canvas is 128px: 82 / 128 = 0.640625
Texture:SetTexture("Interface\\AddOns\\MyAddOn\\image.tga") -- .blp .tga
Texture:SetTexCoord(0, 0.640625, 0, 0.640625)
image.png
image.blp (in-game look)
image.tga (in-game look)
PNG to BLP converter that I used in this case.
PNG to TGA converter that I used in this case.
image.blp file.
image.tga file.
Summing up: I want an in-game texture that is 100% (every pixel) identical to the original PNG image.

Convert image to Blob

I want to upload image data to a PHP script on the server. I have a URL for an image source (PNG; the image might be located on a different server). I load this into a JavaScript Image, draw it into a canvas and use the canvas.toBlob() method (or a polyfill, as it is not widely supported yet) to generate a blob holding the image data. This works fine, but I noticed that the resulting blob is much bigger than the original image data.
In contrast, if I use an HTML file input and let the user select an image on the client, the resulting blob has a size equal to the original image. Can I get image data from a canvas that is equal in size to the original image?
I guess the reason is that I lose the PNG compression (or any image compression) when using the canvas.toBlob() polyfill:
value: function (callback, type, quality) {
    var binStr = atob(this.toDataURL(type, quality).split(',')[1]),
        len = binStr.length,
        arr = new Uint8Array(len);

    for (var i = 0; i < len; i++) {
        arr[i] = binStr.charCodeAt(i);
    }

    callback(new Blob([arr], { type: type || 'image/png' }));
}
I am confused by so many conversion steps via image, canvas, blob - so maybe there is an alternative to get the image data from a given URL and finally append it to FormData to send it to the server?
The toDataURL method, when using the PNG format, only produces one of the many formats PNG supports: 8 bits per channel RGBA (32 bits per pixel), compressed. There are no options for any of the other variants, so you are forced to include redundant data when you save as PNG. PNG also has 24-bit and 8-bit (palettized) formats, and several compression options, though I am unsure which each browser uses.
In most cases it is best to send the original image. If you need to modify the image and do not use the alpha channel (no transparency) but still want high quality, send it as a JPEG with quality set to 1 (max).
You may also consider using a custom PNG encoder that gives you access to more of the PNG encoding options, or even try one of the many other formats available, or make up your own format, though you will be hard pushed to improve on JPEG and WebP.
You could also compress the data on the server when you store it; even JPEG and WebP have a little room for more compression. For transport you should not worry, as most data these days is compressed as it leaves the page, and most definitely compressed by the time it leaves the client's ISP.
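If the goal is just to forward the original file untouched, one alternative (sketched here; the '/upload.php' endpoint and the 'image' field name are made up) is to fetch the image as a Blob and append it to FormData directly, skipping the canvas and its re-encode entirely, so the blob keeps the original PNG compression:

```javascript
// Build a multipart form around an image Blob. Field name and filename
// are placeholders; adjust to whatever the server-side script expects.
function buildImageForm(blob, filename) {
  const form = new FormData();
  form.append("image", blob, filename);
  return form;
}

// Fetch the original image bytes (no canvas, no re-encoding) and upload them.
// Cross-origin image URLs need permissive CORS headers for this to work.
async function uploadOriginal(url) {
  const response = await fetch(url);
  const blob = await response.blob(); // original bytes, original size
  return fetch("/upload.php", {
    method: "POST",
    body: buildImageForm(blob, "image.png"),
  });
}
```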

OpenCV imwrite gives washed-out result for jpeg images

I am using OpenCV 3.0 and whenever I read an image and write it back the result is a washed-out image.
code:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.jpg",img);
Does anyone know what's causing this?
Original:
Result:
You can try increasing the compression quality parameter as shown in the OpenCV documentation of cv::imwrite:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
std::vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_JPEG_QUALITY);
compression_params.push_back(100);
cv::imwrite("dir/result.jpg",img, compression_params);
Without specifying the compression quality manually, a quality of 95% is applied.
But: 1. you don't know what JPEG compression quality your original image had (so you might actually increase the file size), and 2. it will (AFAIK) still introduce additional minor artifacts, because it is, after all, a lossy compression method.
UPDATE: your problem seems to be caused not by compression artifacts but by an image with an Adobe RGB 1998 color profile. OpenCV interprets the color values as they are, when it would need to map them into the standard RGB color space instead. Browsers and some image viewers apply the color profile correctly, while others don't (e.g. IrfanView). I used GIMP to verify: on opening, GIMP lets you decide how to interpret the color values, producing either your desired image or your "washed out" one.
OpenCV definitely doesn't care about such things, since it is not a photo-editing library, so the color profile is handled neither on reading nor on writing.
This is because you are saving the image as JPG; OpenCV compresses the image when doing so.
Try saving it as PNG or BMP instead and no difference will exist.
However, the IMPORTANT QUESTION: I am loading the image as JPG and saving it as JPG. So how is there a difference?!
Because there are many non-identical compression/decompression implementations for JPG.
if you want to get into some details see this question:
Reading jpg file in OpenCV vs C# Bitmap
EDIT:
You can see what I mean exactly here:
auto bmp(cv::imread("c:/Testing/stack.bmp"));
cv::imwrite("c:/Testing/stack_OpenCV.jpg", bmp);
auto jpg_opencv(cv::imread("c:/Testing/stack_OpenCV.jpg"));
auto jpg_mspaint(cv::imread("c:/Testing/stack_mspaint.jpg"));
cv::imwrite("c:/Testing/stack_mspaint_opencv.jpg", jpg_mspaint);
jpg_mspaint=(cv::imread("c:/Testing/stack_mspaint_opencv.jpg"));
cv::Mat jpg_diff;
cv::absdiff(jpg_mspaint, jpg_opencv, jpg_diff);
std::cout << cv::mean(jpg_diff);
The Result:
[0.576938, 0.466718, 0.495106, 0]
As @Micha commented:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.bmp",img);
I was always annoyed when mspaint.exe did the same to JPEG images. Especially for screenshots... it ruined them every time.

HTML5 Canvas toDataURL 8 bit?

In a web app I am currently creating, the user has to provide images that get stored server-side in a database. To minimize server load I am handling image resizing client-side, courtesy of the HTML5 Canvas, and getting the user to pre-approve the quality of the resized image.
The issue I have run into is this - the file size of the resized image is big. If I resize the same image with Paint.NET I can get a perfectly decent light weight 8 bit PNG image. Even the 32 bit Paint.NET image is smaller than the one that turns up on the server via toDataURL. I tried playing around with the toDataURL quality parameter but changing it has no effect whatsoever - exactly the same data size.
I should mention that I am testing with Chrome 20.0.1132.57 m and that the only browsers relevant to the app are the desktop versions of Chrome and Safari.
I know I could do some server side image processing but I want to avoid that if possible. Question - what, if anything can I do to cut down on the image file size sent out from the browser?
Browsers may happily ignore any quality parameter given to toDataURL and the like; I don't believe the specification makes honoring it mandatory.
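One way to check whether the quality argument did anything at all is to measure the encoded byte size straight from the data URL (a small helper, assuming base64-encoded data URLs):

```javascript
// Byte size of the encoded image inside a base64 data URL.
// Strips the "data:<mime>;base64," header and decodes the remainder.
function dataUrlBytes(dataUrl) {
  var base64 = dataUrl.split(",")[1];
  return atob(base64).length;
}

// e.g. compare canvas.toDataURL("image/jpeg", 0.5) against quality 0.9:
// the sizes differ for JPEG, but are typically identical for "image/png",
// where browsers ignore the quality argument.
```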
The only way to control the quality exactly would be to:
1. Write your own PNG compressor in JS, or use something you can steal from the internets: https://github.com/imaya/CanvasTool.PngEncoder
2. Dump the <canvas> data to an ArrayBuffer
3. Pass this to a WebWorker
4. Let the WebWorker compress it using your PNG compressor library
I believe there exist JPEG/PNG encoding and decoding solutions already.
Alternatively, you may try canvas.mozGetAsFile() / canvas.toBlob(), but I believe browsers still won't honor quality parameters.
https://developer.mozilla.org/en/DOM/HTMLCanvasElement/

Image format question

I'm using an image loader (DevIL). I'm just wondering if the image format (the uncompressed format in memory) loaded from files (.jpg, .png, .bmp, etc.) is determined by the image loading library itself, or is in some way contingent upon the actual image file.
All of the images I have looked at so far seem to be loaded in the RGBA / UNSIGNED_BYTE format. However, I am wondering if I can always rely on this. Is it conceivable that an image might actually be loaded in the RGBA / FLOAT format instead? (NOTE: I am hoping the loaded image format will always be the same; I want to rely on it.)
I can't find any docs in DevIL that explain this point, so I'm hoping anyone experienced with imaging / image loading can give me an answer based on their experience / common sense.
Thanks
I don't know DevIL, but nearly any imaging library is going to provide you with an image object that has some concept of Pixel Format. The pixel format tells you how the image is laid out in memory. Looking quickly at the docs, I see that IlTexImage has a property called Format which can be one of IL_COLOUR_INDEX, IL_RGB, IL_RGBA, etc. The docs say
The format of the image data. Formats accepted are listed here and
are self-explanatory
