Texture Quality - World of Warcraft Vanilla - image

Textures. This is something that has been bothering me since I started messing with them. More specifically, their quality in World of Warcraft Vanilla.
I have searched and researched this over and over again and tried it in so many ways, but it's just never perfect.
The game only accepts the BLP and TGA file formats, and I can never get them to look as good as PNG. I'm not sure if the issue is the converters I use (I have tried several) or if it's actually impossible.
Here's an example:
Lua code:
local Frame = CreateFrame("Frame")
Frame:SetPoint("CENTER", 0, 0)
Frame:SetWidth(82)
Frame:SetHeight(82)
local Texture = Frame:CreateTexture()
Texture:SetAllPoints()
-- The image file is 128x128 (power of two); show only the top-left 82x82 region: 82 / 128 = 0.640625
Texture:SetTexture("Interface\\AddOns\\MyAddOn\\image.tga") -- the game only loads .blp or .tga
Texture:SetTexCoord(0, 0.640625, 0, 0.640625)
image.png
image.blp (in-game look)
image.tga (in-game look)
PNG to BLP converter that I used in this case.
PNG to TGA converter that I used in this case.
image.blp file.
image.tga file.
In short: I want an in-game texture that looks 100% (pixel for pixel) like the original PNG image.

Related

OpenCV imwrite gives washed-out result for jpeg images

I am using OpenCV 3.0 and whenever I read an image and write it back the result is a washed-out image.
code:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.jpg",img);
Does anyone know what's causing this?
Original:
Result:
You can try increasing the compression quality parameter as shown in the OpenCV documentation of cv::imwrite:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
std::vector<int> compression_params;
compression_params.push_back(CV_IMWRITE_JPEG_QUALITY);
compression_params.push_back(100);
cv::imwrite("dir/result.jpg",img, compression_params);
Without specifying the compression quality manually, a quality of 95% is applied.
But: 1. you don't know what JPEG compression quality your original image had (so you might just increase the file size), and 2. it will (AFAIK) still introduce additional minor artifacts, because it is, after all, a lossy compression method.
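For reference, here is a rough sketch (not part of the original answer) of how you could measure that remaining loss yourself: re-encode the image in memory at quality 100 and compare the result against the decoded original. The file name is a placeholder.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Decode the JPEG once (placeholder path).
    cv::Mat img = cv::imread("dir/frogImage.jpg", cv::IMREAD_UNCHANGED);

    // Re-encode it in memory at quality 100, then decode it again.
    std::vector<int> params;
    params.push_back(cv::IMWRITE_JPEG_QUALITY);
    params.push_back(100);
    std::vector<uchar> buf;
    cv::imencode(".jpg", img, buf, params);
    cv::Mat roundtrip = cv::imdecode(buf, cv::IMREAD_UNCHANGED);

    // Mean absolute difference per channel; anything above zero is loss
    // introduced by the re-encode alone.
    cv::Mat diff;
    cv::absdiff(img, roundtrip, diff);
    std::cout << cv::mean(diff) << std::endl;
    return 0;
}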
UPDATE: your problem seems to be caused not by compression artifacts but by an image in the Adobe RGB (1998) color format. OpenCV interprets the color values as they are, whereas it would have to scale them to fit the "real" RGB color space. Browsers and some image viewers apply the color format correctly, while others don't (e.g. IrfanView). I used GIMP to verify: on opening, GIMP lets you decide how to interpret the color values according to the format, giving you either your desired image or your "washed out" one.
OpenCV definitely doesn't care about such things, since it's not a photo-editing library, so the color format is not handled on reading or on writing.
This is because you are saving the image as JPG. When doing this, OpenCV compresses the image.
Try saving it as PNG or BMP instead and there will be no difference.
However, the IMPORTANT QUESTION: I am loading the image as JPG and saving it as JPG. So how is there a difference?!
Yes, and this is because there are many non-identical compression/decompression implementations for JPG.
If you want to get into the details, see this question:
Reading jpg file in OpenCV vs C# Bitmap
EDIT:
You can see what I mean exactly here:
// Save a lossless original as JPG with OpenCV and load the result back.
auto bmp(cv::imread("c:/Testing/stack.bmp"));
cv::imwrite("c:/Testing/stack_OpenCV.jpg", bmp);
auto jpg_opencv(cv::imread("c:/Testing/stack_OpenCV.jpg"));

// Take the same image saved as JPG by mspaint, re-save it with OpenCV and reload it.
auto jpg_mspaint(cv::imread("c:/Testing/stack_mspaint.jpg"));
cv::imwrite("c:/Testing/stack_mspaint_opencv.jpg", jpg_mspaint);
jpg_mspaint = cv::imread("c:/Testing/stack_mspaint_opencv.jpg");

// The two results differ slightly: the encoders are not identical.
cv::Mat jpg_diff;
cv::absdiff(jpg_mspaint, jpg_opencv, jpg_diff);
std::cout << cv::mean(jpg_diff);
The Result:
[0.576938, 0.466718, 0.495106, 0]
As @Micha commented:
cv::Mat img = cv::imread("dir/frogImage.jpg",-1);
cv::imwrite("dir/result.bmp",img);
I was always annoyed when mspaint.exe did the same to JPEG images. Especially for screenshots... it ruined them every time.

Loading PNG images but using them as COMPRESSED_RGBA_S3TC_DXT5_EXT in WebGL?

I'm trying to load images in WebGL and then upload them to the GPU. I'd like to use a compressed texture format, even though the original images are uncompressed/lossless.
To upload, this is what I'm doing:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, textureSource);
In the above code, textureSource is a loaded image (say, "texture.png").
It all works well, but I'd like to load WEBGL_compressed_texture_s3tc formats (COMPRESSED_RGB_S3TC_DXT1_EXT) to store the image in a compressed fashion.
I make sure that the extension is available and enabled...
var ext = gl.getExtension("WEBGL_compressed_texture_s3tc");
var fmt = ext.COMPRESSED_RGBA_S3TC_DXT5_EXT;
console.log(fmt); // 33779
But then I can't use it as a format. Using texImage2D() doesn't work:
gl.texImage2D(gl.TEXTURE_2D, 0, fmt, fmt, gl.UNSIGNED_BYTE, textureSource);
// WebGL: INVALID_ENUM: texImage2D: invalid texture format
// [.WebGLRenderingContext]RENDER WARNING: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'
The expected method is compressedTexImage2D(), but that's not very helpful either:
gl.compressedTexImage2D(gl.TEXTURE_2D, 0, fmt, 256, 256, 0, texture.source);
// Uncaught TypeError: Failed to execute 'compressedTexImage2D' on 'WebGLRenderingContext': parameter 7 is not of type 'ArrayBufferView'.
This is obviously because compressedTexImage2D() expects a Uint8Array with the actual DDS/DXT data, not a JavaScript image like the one I'm passing.
The obvious solution is to upload files in their native DDS format - files that have been compressed somewhere else. But that's what I'm trying to avoid: in my current workflow, it makes more sense to keep the images in their original format rather than pre-compress them (or keep duplicates).
My question then is as such: can I still use the original PNG images, loading them, and having them upload to the GPU on their compressed format? In other words, can I compress textures to their DXT1/5 formats on the fly?
I'm a little bit constrained by video memory in what I'm doing, so any saving would be great. I managed to minimize the space used by the textures by using UNSIGNED_SHORT_4_4_4_4 and the other data types, which is a good start, but I'd like to try using the native compression too.
I haven't found much documentation on the topic, nor relevant code in other popular libraries (Three.js, Pixi, etc.), which leads me to believe that my request is super stupid, but I'd like to understand why. This page hints at licensing issues, which might be why WebGL doesn't provide a way to compress the texture itself, nor browser support for passing image objects to the compressed upload.
can I still use the original PNG images, loading them, and having them upload to the GPU on their compressed format? In other words, can I compress textures to their DXT1/5 formats on the fly?
As far as I am informed: NO.
I only work on desktop and embedded GL, but even there you have no chance of compressing textures on the fly without dedicated code or a library.
(And, well, those DXT formats are not very good either if your textures are very detailed or have lots of different colors within small blocks. Most likely you're better off just using smaller textures: DXT1 compresses to 1/8th and DXT5 to 1/4 of the original 32-bit RGBA size (for example, a 1024x1024 RGBA texture goes from 4 MB to 0.5 MB with DXT1 or 1 MB with DXT5), and 1/4 is about what you get by simply halving the resolution of the texture anyway.)
Theoretically I think you could compress the PNGs to DXT with Javascript. I guess that'd be just mad though :)
Better to encode beforehand with native code. An option for supporting PNG content is to have an asset proxy that does the conversion on the fly on the server side (our partner company that hosts http://www.meshmoon.com/ does exactly that).

Uncrush PNG image on Ubuntu?

An IPA uses pngcrush to compress its PNG images, but I want to uncrush a PNG image on Ubuntu.
Can anyone give me an idea?
The standard PNG utility pngcrush has been modified by Apple, which makes it produce technically invalid PNGs: a new chunk is inserted before the mandatory first chunk IHDR, RGB(A) order of pixel data is inverted, and RGB pixels get premultiplied with their alpha.
Hence, I'd rather call these PNGs "fried", rather than just "crushed".
Try my own pngdefry. The source code is written on a Mac OSX machine but it should be compilable for other OSes as well; it's pretty straightforward C code.
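For what it's worth, a quick way to check whether a given PNG is "fried" (this is not part of the original answer) is to look at the first chunk after the 8-byte PNG signature: in a valid PNG it must be IHDR, while Apple's variant inserts its own chunk (CgBI) there. A minimal sketch:
#include <cstdint>
#include <cstdio>
#include <cstring>

// Returns 1 if the file's first chunk is not the mandatory IHDR (i.e. the PNG
// looks "fried"), 0 for a normal PNG, -1 on error.
int png_is_fried(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return -1;

    // 8-byte PNG signature, then the first chunk: 4-byte length + 4-byte type.
    uint8_t buf[16];
    const size_t n = std::fread(buf, 1, sizeof buf, f);
    std::fclose(f);
    if (n != sizeof buf) return -1;

    static const uint8_t sig[8] = {137, 'P', 'N', 'G', '\r', '\n', 26, '\n'};
    if (std::memcmp(buf, sig, sizeof sig) != 0) return -1;  // not a PNG at all

    return std::memcmp(buf + 12, "IHDR", 4) == 0 ? 0 : 1;   // Apple's pngcrush puts CgBI here
}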

How to display JPEG image on microcontroller LCD?

I am currently developing some firmware on the STM3210E development board, which has an ARM Cortex-M3 processor. It is interfaced to a 240x320 LCD. After going through the demo firmware, I realised that images are encoded in 32-bit variables (correct me if I am wrong) stored in an array as shown below.
uint32_t STM32Banner[50] = {0x6461EB7A, 0x646443BC, 0x64669BFE, 0x6468F440, 0x646B4C82,
0x646DA4C4, 0x646FFD06, 0x64725548, 0x6474AD8A, 0x647705CC,
0x64795E0E, 0x647BB650, 0x647E0E92, 0x648066D4, 0x6482BF16,
0x64851758, 0x64876F9A, 0x6489C7DC, 0x648C201E, 0x648E7860,
0x6490D0A2, 0x649328E4, 0x64958126, 0x6497D968, 0x649A31AA,
0x649C89EC, 0x649EE22E, 0x64A13A70, 0x64A392B2, 0x64A5EAF4,
0x64A84336, 0x64AA9B78, 0x64ACF3BA, 0x64AF4BFC, 0x64B1A43E,
0x64B3FC80, 0x64B654C2, 0x64B8AD04, 0x64BB0546, 0x64BD5D88,
0x64BFB5CA, 0x64C20E0C, 0x64C4664E, 0x64C6BE90, 0x64C916D2,
0x64CB6F14, 0x64CDC756, 0x64D01F98, 0x64D277DA, 0x64D4D01C};
Could you please explain to me how to convert a JPEG/PNG/BMP image to this format (RGB565)?
You have two choices:
1. Write your own set of decoders.
2. Use available free decoders.
The first solution is only really viable for BMP (and perhaps GIF), which is quite a simple format compared to PNG and JPEG. Even so, writing a BMP decoder that handles all different versions and specialties of BMP gracefully takes quite a bit of work (I have tried it). Hacking together something that can extract the image data from the most common BMP formats is quite easy though.
The second solution is probably the way to go for the other formats. Most open-source decoders are available under LGPL or similar, so licensing shouldn't really be a problem. For JPEG images use libJPEG, for PNG use libPNG and for GIF use giflib.
Most of the decoders do not support decoding to RGB565, so you will have to write a converter from RGB888 to RGB565.
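The per-pixel conversion itself is just bit packing: 5 bits of red, 6 bits of green, 5 bits of blue. A minimal sketch (not from the original answer; your LCD's expected byte order may differ):
#include <cstdint>

// Pack one 8-bit-per-channel RGB pixel into a 16-bit RGB565 value:
// 5 bits of red, 6 bits of green, 5 bits of blue.
inline uint16_t rgb888_to_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}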
Use a program like GIMP to convert the image to an uncompressed BMP (which is what you normally get when you save as BMP).
A BMP has something like a 54-byte header and then goes straight into the data. Each line is pixels, either 3 bytes (RGB) or 4 bytes (RGBX) per pixel. Each row is aligned on a 4-byte boundary, so if you have 3 bytes per pixel and multiply that by the width in pixels, and the result is not a multiple of four (say 3 pixels wide * 3 bytes = 9 bytes, as a simple example), then there will be some padding at the end of each row. You know how wide the image is from opening the file in GIMP, and you probably want to use GIMP to resize the image to match your LCD screen anyway. The first bytes of data after the header are the pixel in the lower-left corner of the image, so you might need to flip the image on the y axis, or just start off this way and see what happens.
Knowing the size of your image (from opening it with GIMP), you can do a little math to see whether the size of the file matches what I am saying; if it is dramatically smaller, then some compression is going on and you need to save again with different BMP settings.
Once you have this figured out, write a simple program to extract the pixels from the BMP and save them in the format you desire. Even better, read the code and docs and understand how to program the LCD yourself, so that you can get from raw pixels to the LCD without having to go through their specific format/code.
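A rough sketch of that "simple program", assuming a plain 24-bit uncompressed BMP, a little-endian host, the usual bottom-up row order, and a placeholder file name (none of this comes from the original answer):
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <vector>

int main()
{
    std::FILE* f = std::fopen("banner.bmp", "rb");  // placeholder file name
    if (!f) return 1;

    uint8_t header[54];
    if (std::fread(header, 1, sizeof header, f) != sizeof header) { std::fclose(f); return 1; }

    uint32_t dataOffset; std::memcpy(&dataOffset, header + 10, 4);
    int32_t  width;      std::memcpy(&width,      header + 18, 4);
    int32_t  height;     std::memcpy(&height,     header + 22, 4);
    uint16_t bpp;        std::memcpy(&bpp,        header + 28, 2);
    if (bpp != 24 || width <= 0 || height <= 0) { std::fclose(f); return 1; }

    // Each BMP row is padded to a multiple of 4 bytes.
    const int rowSize = ((width * 3) + 3) & ~3;
    std::vector<uint8_t> row(rowSize);
    std::vector<uint16_t> pixels(width * height);

    std::fseek(f, (long)dataOffset, SEEK_SET);
    for (int y = 0; y < height; ++y) {
        if (std::fread(row.data(), 1, row.size(), f) != row.size()) break;
        const int dstY = height - 1 - y;            // rows are stored bottom-up
        for (int x = 0; x < width; ++x) {
            const uint8_t b = row[x * 3 + 0];       // pixels are stored as BGR
            const uint8_t g = row[x * 3 + 1];
            const uint8_t r = row[x * 3 + 2];
            pixels[dstY * width + x] =
                (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
        }
    }
    std::fclose(f);

    // pixels now holds the image top-down in RGB565, ready to be dumped as a
    // C array or written to the LCD.
    return 0;
}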

Compositing PNG onto JPEG fails (colors are all rendered as black) on only one particular JPEG file

I'm building an application which uses Ruby+RMagick to composite PNG images onto various JPEG backgrounds. Everything is working, but we have found one particular JPEG background for which the PNG is composited as a black spot. PNG transparency is respected; the shape of the "spot" is correct, but the colors are being lost and becoming black.
I have tried many JPEGs to try to find another which yields the same result, but (so far) have failed.
I suspect that it may have something to do with the bit depth or some other parameter of the JPEG file in question. I have been searching the Internet, looking for a tool which can analyze this JPEG and tell me all the parameters which might be relevant, but haven't found anything yet.
Has anything like this ever happened to you? What was the cause?
Based on your knowledge of the JPEG format, are there any other parameters which might be relevant?
Do you know of any tool which can analyze JPEG files, and tell me the bit depth and other parameters? Or if I open the JPEG in a hex editor, can you tell me how to find this information?
I still haven't found what is special about that one JPG which the composite operation doesn't work correctly on, but I worked around it using this code:
# Create a fresh canvas the size of the background, then composite the JPEG
# background and the PNG onto it, instead of compositing the PNG directly onto
# the decoded JPEG:
back = Magick::Image.from_blob(jpg_data).first
png = Magick::Image.from_blob(png_data).first
page1 = Magick::Image.new(back.columns, back.rows)
page1.composite!(back, 0, 0, Magick::OverCompositeOp)
page1.composite!(png, png_x, png_y, Magick::OverCompositeOp)
Rather than:
back = Magick::Image.from_blob(jpg_data).first
png = Magick::Image.from_blob(png_data).first
page1 = back.composite(png, png_x, png_y, Magick::OverCompositeOp)

Resources