Does Cobalt support handling JPEG pictures with GL_TEXTURE_EXTERNAL_OES the way it handles video?

To save memory and improve performance, I want to use a special texture format for JPEG pictures. The format is handled through GL_TEXTURE_EXTERNAL_OES, but the processing is otherwise the same as GL_TEXTURE_2D (only the glBindTexture target and the sampler declaration in the shader program differ).
I have this working in EGL hardware mode ('rasterizer_type': 'direct-gles'), but I run into problems in Skia hardware mode ('rasterizer_type': 'hardware'). Skia hardware mode does not support it directly and calls render_image_fallback_function_ (HardwareRasterizer::Impl::RenderTextureEGL) to handle the image the same way it handles 360 video. The displayed result is very different from what I see in EGL hardware mode, so it seems that path is really only meant for 360 video. Is there a way to make Skia hardware mode support this format directly, or do I have to add a new path in TexturedMeshRenderer that handles pictures separately from 360 video?
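For reference, here is a minimal sketch of the only two places where my external-OES path differs from the GL_TEXTURE_2D path: the bind target and the sampler declaration in the fragment shader. The helper name and shader source below are illustrative, not Cobalt code.

    // Minimal sketch, assuming OpenGL ES 2.0 with the OES_EGL_image_external
    // extension available. Everything else (uniforms, draw calls) is identical
    // to the GL_TEXTURE_2D path.
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>  // defines GL_TEXTURE_EXTERNAL_OES

    // Bind an externally backed image (e.g. the output of a hardware JPEG
    // decoder) instead of a regular 2D texture.
    void BindExternalTexture(GLuint texture_id) {
      glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture_id);
      glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }

    // Fragment shader: the only changes from the 2D version are the extension
    // directive and the sampler type.
    const char* kExternalFragmentShader = R"(
      #extension GL_OES_EGL_image_external : require
      precision mediump float;
      uniform samplerExternalOES u_texture;
      varying vec2 v_texcoord;
      void main() {
        gl_FragColor = texture2D(u_texture, v_texcoord);
      }
    )";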

Cobalt/Starboard supports letting the platform define custom (possibly accelerated) image decode functionality in starboard/image.h. Are you using this to set GL_TEXTURE_EXTERNAL_OES, or are you modifying common Cobalt code?
If you are modifying Cobalt code, you may want to search through https://cobalt.googlesource.com/cobalt/+/master/src/cobalt/renderer/rasterizer/skia/hardware_image.cc for references to "GL_TEXTURE_2D" and make sure that they still make sense after your changes. In particular, you may need to adjust HardwareFrontendImage::CanRenderInSkia().
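Purely for illustration, the kind of check you may need to revisit could look roughly like the sketch below. Only HardwareFrontendImage::CanRenderInSkia() is a real name here; the standalone function and its texture_target parameter are hypothetical and will not match hardware_image.cc. The point is just that any logic keyed on GL_TEXTURE_2D now has to make an explicit decision about GL_TEXTURE_EXTERNAL_OES.

    // Hypothetical sketch -- not actual Cobalt code. It stands in for the real
    // HardwareFrontendImage::CanRenderInSkia() in hardware_image.cc.
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    bool CanRenderInSkia(GLenum texture_target) {
      // If Skia can only wrap GL_TEXTURE_2D textures, an external-OES image must
      // either be rejected here (sending it down the fallback path mentioned in
      // the question) or be handled by a new picture-specific path.
      return texture_target == GL_TEXTURE_2D;
    }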

Related

In OpenGL ES what is an "external image"? Why do we need GL_OES_EGL_image_external?

I am reading through the spec for external images. It says:
This extension provides a mechanism for creating EGLImage texture targets
from EGLImages. This extension defines a new texture target,
TEXTURE_EXTERNAL_OES.
I have done my best, but I can't find out what an "external image" is. This extension, and many of the related extension specs, reference "EGLImages" and similar things, but I can't figure out what they are.
Why do I need this?
Typically to create an image I load a file from disk. I believe that is "external".
This question basically says it is an image not created by the graphics driver, but wouldn't that mean virtually all images ever created are EGLImages or "external images"? When using OpenGL I don't remember having to worry about whether my image was external or not.
Can somebody explain what an "external" image is, why it is needed (mainly I see this with respect to OpenGL ES), and why these extensions are needed? Frankly, I am not sure what an "EGLImage" is either, or why the distinction is made.
Thank you
This is a late answer.
An external image AKA external texture is typically used to supply frames from an image stream (e.g. camera preview, decoded video) as OpenGL textures. Such frames usually have special color encodings and memory layouts (e.g. multi-plane YUV). The extension mentioned above allows sampling such images as if they were regular OpenGL textures (with a few limitations).
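To make that concrete, a typical import path looks roughly like the sketch below: a driver- or OS-owned buffer is wrapped in an EGLImage (a client-API-agnostic handle to the underlying memory), and that EGLImage is then attached to a TEXTURE_EXTERNAL_OES texture. The buffer type shown (EGL_NATIVE_BUFFER_ANDROID) is just one common example, and on many platforms eglCreateImageKHR and glEGLImageTargetTexture2DOES have to be loaded via eglGetProcAddress rather than called directly as done here for brevity.

    // Sketch of importing a native buffer (camera frame, decoded video frame)
    // into GL as an external texture, assuming an EGL/GLES2 stack with the
    // EGL_KHR_image_base and OES_EGL_image_external extensions.
    #define EGL_EGLEXT_PROTOTYPES
    #define GL_GLEXT_PROTOTYPES
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    GLuint WrapNativeBufferAsExternalTexture(EGLDisplay display,
                                             EGLClientBuffer native_buffer) {
      // 1. Wrap the buffer in an EGLImage: no copy, just a handle that EGL
      //    client APIs can share.
      const EGLint attribs[] = {EGL_NONE};
      EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
                                            EGL_NATIVE_BUFFER_ANDROID,
                                            native_buffer, attribs);

      // 2. Create a texture with the external target and attach the EGLImage.
      GLuint texture = 0;
      glGenTextures(1, &texture);
      glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture);
      glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES,
                                   (GLeglImageOES)image);
      return texture;
    }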

GL_FEEDBACK mode doesn't work on specific Windows platforms

We've encountered a strange problem on newer laptops with built-in graphics cards.
In order to draw TrueType fonts, we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK).
Afterwards we parse the feedback buffer. This has worked for many years.
Now we have a problem with glyphs containing holes (only on platforms with built-in graphics cards):
wglUseFontOutlines works perfectly. If we just draw the returned display lists, everything is fine. However, the token stream generated with GL_FEEDBACK is corrupt. The debugger shows nothing unusual, all functions return success, and the parsing itself works fine too. It is really the binary data generated in GL_FEEDBACK mode that is wrong.
Has anyone else encountered this problem?
And is there an alternative way to obtain the outlines and fills of TrueType fonts on Windows?
I'm just guessing into the blue here: the GL_SELECT and GL_FEEDBACK rendering modes were usually not supported by widespread GPU driver OpenGL implementations. Only a handful of graphics cards from the previous century actually supported these rendering modes, so you would almost always fall back to a software implementation when using them.
However, given modern GPUs' vastly more flexible feedback mechanisms, the latest drivers could actually try to implement those rendering modes using GPU features (somewhat weird, because those modes have been removed from modern OpenGL profiles). Anyway, this could be the reason why you're experiencing these problems.
In order to draw TrueType fonts we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK). Afterwards we parse the feedback buffer.
That's a cool Rube Goldberg machine. Why not cut out the middleman and obtain the glyph outlines directly using the appropriate Windows GDI function (GetGlyphOutline), as sketched below? That is what wglUseFontOutlines uses internally anyway.
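A rough sketch of that GetGlyphOutline route follows; the helper is illustrative and error handling is minimal. With GGO_NATIVE the outline comes back as a sequence of TTPOLYGONHEADER/TTPOLYCURVE records (line segments and quadratic splines), which you can tessellate yourself, for example with the GLU tessellator, to get the filled interior.

    // Fetch a glyph outline straight from GDI, bypassing wglUseFontOutlines and
    // GL_FEEDBACK entirely. The font must already be selected into the HDC.
    #include <windows.h>
    #include <vector>

    std::vector<char> GetOutlineForGlyph(HDC hdc, wchar_t ch) {
      GLYPHMETRICS metrics = {};
      const MAT2 identity = {{0, 1}, {0, 0}, {0, 0}, {0, 1}};  // identity matrix

      // First call with a null buffer just reports the required buffer size.
      DWORD size =
          GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &metrics, 0, nullptr, &identity);
      if (size == GDI_ERROR || size == 0) return {};

      // Second call fills the buffer with TTPOLYGONHEADER/TTPOLYCURVE records.
      std::vector<char> buffer(size);
      if (GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &metrics, size, buffer.data(),
                           &identity) == GDI_ERROR) {
        return {};
      }
      return buffer;
    }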

three.js normal map rendering differently windows/mac

I have a shader I wrote that uses the normal map generated by 3ds Max. I get seamless results on Windows, but I've seen seams on Macs. Could this be related to the direction convention I used when creating my normal map (but then again, I believe I am running Chrome in OpenGL mode), or is it some kind of precision issue? Is there any way to debug this without a Mac?
Per gman's answer below, I've added
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
The link is here.
Is it possible it's a color space conversion issue?
By default, some browsers apply colorspace conversions when they load images. That's fine if you're just displaying an <img> tag, but not so good for normal maps.
To tell WebGL not to let the browser apply colorspace conversions, you call
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
The default is that colorspace conversions are allowed (and which conversion is applied is browser specific).
See the WebGL spec

Creating my own graphic file type/extension

When I install games on my computer, professional and amateur alike, I find that resources such as pictures have strange extensions, so I cannot open them.
As I cannot find these extensions on Google, I figured this was a method of protecting artwork so it cannot be stolen so easily.
I have a bunch of JPEG, PNG and bitmap files I would like to do this to, so people cannot copy them so easily when I distribute my game.
I use C++ and DirectX if that makes any difference.
Does anyone know how this is done? I know I can change a .txt extension to anything and my program will read it just the same, but will this work with pictures?
As I cannot find these extensions on Google, I figured this was a method of protecting artwork so it cannot be stolen so easily.
Creating your own extension is easy: just decide how you want to interpret your images, and create a converter that builds your files from existing images...
... But ... formats are chosen for the sake of the programmer and the art tools, not for protection. You can't ever really protect your art from being stolen, as at some point your code has to convert the graphics to a raw DDB (device-dependent bitmap) or DIB (device-independent bitmap) before rendering them to the screen or handing them to DirectX/OpenGL. Honestly, even commercial games on cartridges that don't follow standard formats are easily ripped; hackers even build level editors for proprietary game engines whose formats aren't known to the public.
I don't use PNGs and JPEGs in my own game code, for the simple reason that I was unable to use libpng or a JPEG decoder in my code, and I needed my graphics supplied as 8x8 tiles with 4/8-bit palettes (colour 0 is transparent) or 16-bit RGBA 5551, which can't be achieved with PNGs and JPEGs.
At most you can obscure your graphics by storing them in your own format, encrypting them, or even compressing them; that's about it. But beware: your code will have to decrypt/decode them, and the pictures will at some point sit in the thief's memory anyway.
So yes, you can easily change the file type, but that will not stop any user from (a) changing the filename, or (b) figuring out the file type by feeding it to a program that can recognize it (and as someone who has also done video editing, I can tell you that many programs will happily interpret any file and work out its real format). Nor will it stop (c), an unscrupulous hacker ripping your artwork. In fact, just look at what hackers did with Propellerhead's ReFill format: since they couldn't figure out how to read it, they created a program that used Propellerhead's own software to read it for them. Think about that. It really doesn't take much to use a debugger such as Vanjar Fukar's to trace your code while it loads images, identify your image-loading routine, and either copy it or invoke it directly (amongst a hundred other hacking tactics).
Usually programs don't care about extensions when reading files, so changing the extension to an unknown one shouldn't get you into trouble.
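As a concrete illustration of the "own format" idea above, here is a minimal sketch that wraps an existing PNG/JPEG in a tiny custom container (custom extension, magic bytes, trivial XOR scramble). The layout, magic value, and key are made up for this example, and as the answers stress, this only deters casual copying; it does not protect anything.

    // Pack/unpack ordinary image files into a made-up ".myimg" container.
    #include <cstdint>
    #include <fstream>
    #include <iterator>
    #include <string>
    #include <vector>

    static const char kMagic[4] = {'M', 'Y', 'I', '1'};  // arbitrary magic bytes
    static const uint8_t kKey = 0x5A;                     // arbitrary XOR key

    // Read an ordinary PNG/JPEG, scramble it, and write the custom container.
    bool PackImage(const std::string& src_path, const std::string& dst_path) {
      std::ifstream in(src_path, std::ios::binary);
      if (!in) return false;
      std::vector<char> data((std::istreambuf_iterator<char>(in)),
                             std::istreambuf_iterator<char>());
      for (char& byte : data) byte ^= kKey;  // scramble

      std::ofstream out(dst_path, std::ios::binary);
      if (!out) return false;
      out.write(kMagic, sizeof(kMagic));
      uint32_t size = static_cast<uint32_t>(data.size());
      out.write(reinterpret_cast<const char*>(&size), sizeof(size));
      out.write(data.data(), data.size());
      return static_cast<bool>(out);
    }

    // Recover the original PNG/JPEG bytes, ready to hand to your usual decoder.
    bool UnpackImage(const std::string& src_path, std::vector<char>* out_data) {
      std::ifstream in(src_path, std::ios::binary);
      char magic[4];
      uint32_t size = 0;
      if (!in.read(magic, sizeof(magic))) return false;
      if (std::string(magic, 4) != std::string(kMagic, 4)) return false;
      if (!in.read(reinterpret_cast<char*>(&size), sizeof(size))) return false;
      out_data->resize(size);
      if (!in.read(out_data->data(), size)) return false;
      for (char& byte : *out_data) byte ^= kKey;  // unscramble
      return true;
    }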

Setting up OpenGL/Cuda interop in Windows

I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in and modify, and then give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL work before but never used extensions, and that's where my problem is: I've been searching for two days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card supports it, being a GTX 470).
Some specific questions:
1. I installed the NVIDIA OpenGL SDK. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI - do I need to create a hidden window, or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO using the GL_RED_8UI format? Will both CUDA and GL be happy with that? I read the OpenGL interop section in the CUDA programming manual, and it said GL_RGBA_8UI was only usable from pixel shaders because it is an OpenGL 3.0 feature, but I didn't know whether that applied to a one-channel format. A one-channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to it? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together. Look at the SimpleGL example.
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
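For reference, the core of the PBO interop those samples demonstrate can be sketched roughly as below, using the CUDA runtime API. Buffer size and usage are placeholders, error handling is trimmed, and GLEW is used purely for convenience; this assumes a current GL context and that the CUDA device has been associated with it (cudaGLSetGLDevice in the CUDA 3.x era).

    // Create a GL pixel buffer object, register it with CUDA, and map it so a
    // kernel can read or write the same memory that GL later consumes.
    #include <GL/glew.h>
    #include <cuda_gl_interop.h>
    #include <cuda_runtime.h>

    // Create a PBO and register it with CUDA (call once at setup time).
    cudaGraphicsResource* CreateSharedPbo(GLuint* pbo, size_t num_bytes) {
      glGenBuffers(1, pbo);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo);
      glBufferData(GL_PIXEL_UNPACK_BUFFER, num_bytes, nullptr, GL_DYNAMIC_DRAW);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

      cudaGraphicsResource* resource = nullptr;
      cudaGraphicsGLRegisterBuffer(&resource, *pbo, cudaGraphicsMapFlagsNone);
      return resource;
    }

    // Map the buffer, get a device pointer for your kernel, then unmap so
    // OpenGL can use the PBO again (e.g. glTexSubImage2D sourcing from it).
    void RunCudaPass(cudaGraphicsResource* resource) {
      void* dev_ptr = nullptr;
      size_t mapped_bytes = 0;
      cudaGraphicsMapResources(1, &resource, 0);
      cudaGraphicsResourceGetMappedPointer(&dev_ptr, &mapped_bytes, resource);
      // ... launch a kernel that reads/writes dev_ptr here ...
      cudaGraphicsUnmapResources(1, &resource, 0);
    }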

Resources