three.js normal map rendering differently windows/mac - three.js

I have a shader I wrote that uses a normal map generated by 3ds Max. I get seamless results on Windows, but I've seen seams on Macs. Could this be related to the direction in which I generate my normal map (though I believe I'm running Chrome in OpenGL mode), or is it some kind of precision issue? Is there any way of debugging this without a Mac?
Per gman's answer below, I've added
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
The link is here.

Is it possible it's a color space conversion issue?
By default some browsers apply colorspace conversions when they load images. That's fine if you're just displaying an <img> tag, but not so good for normal maps.
To tell WebGL not to let the browser apply colorspace conversions, you call
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
The default is that colorspace conversions are allowed (and the exact conversion is browser-specific).
See the WebGL spec

Related

Can JPEG images be handled with GL_TEXTURE_EXTERNAL_OES the same way video is?

To save memory and improve performance, I want to use a special texture format to handle JPEG images. The format is handled via GL_TEXTURE_EXTERNAL_OES, but the process is otherwise the same as GL_TEXTURE_2D (the only differences are the glBindTexture target and the texture declaration in the shader program).
I have this working in EGL hardware mode ('rasterizer_type': 'direct-gles'), but I run into problems in Skia hardware mode ('rasterizer_type': 'hardware'). Skia hardware mode doesn't support it directly and instead calls render_image_fallback_function_ (HardwareRasterizer::Impl::RenderTextureEGL) to handle it the way 360 video is handled. The displayed result is very different from what EGL hardware mode shows; that code path seems intended only for 360 video. Is there a way to make Skia hardware mode support this format directly, or do I have to add a new path in TexturedMeshRenderer that handles images separately from 360 video?
Cobalt/Starboard lets the platform define custom (possibly accelerated) image decode functionality in starboard/image.h. Are you using this to set GL_TEXTURE_EXTERNAL_OES, or are you modifying common Cobalt code?
If you are modifying Cobalt code, you may want to search through https://cobalt.googlesource.com/cobalt/+/master/src/cobalt/renderer/rasterizer/skia/hardware_image.cc for references to "GL_TEXTURE_2D" and make sure that they still make sense after your changes. In particular, you may need to adjust HardwareFrontendImage::CanRenderInSkia().

Camera texture in Unity with multithreaded rendering

I'm trying to do pretty much what TangoARScreen does but with multithreaded rendering on in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Perhaps OnTangoCameraTextureAvailable isn't even called in the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV-to-RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer, rendering it with a custom shader, for instance? The documentation is pretty blank in these areas and every experiment means several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 Gankino release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.

GL_FEEDBACK mode doesn't work on specific Windows platforms

We've encountered a strange problem on newer laptops with built-in graphics cards.
In order to draw TrueType fonts we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK).
Afterwards we parse the feedback buffer. This has worked for many years.
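For reference, the capture path looks roughly like this (a minimal sketch; the buffer size is arbitrary and the token parsing that follows is only hinted at):

#include <GL/gl.h>
#include <cstring>
#include <vector>

// Draw the glyph display lists in feedback mode and return the raw token stream.
std::vector<GLfloat> captureGlyphFeedback(GLuint firstList, const char* text)
{
    std::vector<GLfloat> buffer(1 << 20);                 // feedback storage
    glFeedbackBuffer((GLsizei)buffer.size(), GL_2D, buffer.data());

    glRenderMode(GL_FEEDBACK);                            // start capturing
    glListBase(firstList);
    glCallLists((GLsizei)std::strlen(text), GL_UNSIGNED_BYTE, text);
    GLint written = glRenderMode(GL_RENDER);              // stop capturing, get value count

    buffer.resize(written > 0 ? written : 0);
    return buffer;                                        // GL_POLYGON_TOKEN entries followed by x/y pairs
}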
Now we have a problem with glyphs containing holes (only on platforms with built-in graphics cards):
wglUseFontOutlines works perfectly. If we just draw the returned display lists, everything is fine. However, the token stream generated with GL_FEEDBACK is corrupt. The debugger shows nothing unusual, all functions return with success, and the parsing itself works fine too. It is really the binary data generated by GL_FEEDBACK mode that is wrong.
Has anyone else encountered this problem?
And is there an alternative way to obtain outlines and fillings for TrueType fonts on Windows?
I'm just guessing here: the GL_SELECT and GL_FEEDBACK rendering modes were usually not hardware-accelerated by mainstream GPU OpenGL drivers. Only a handful of graphics cards from the previous century actually supported these rendering modes in hardware, so you would almost always fall back to a software implementation when using them.
However, given modern GPUs' vastly more flexible feedback mechanisms, the latest drivers could actually try to implement those rendering modes using GPU features (somewhat odd, because those modes have been removed from modern OpenGL profiles). Anyway, this could be the reason you're experiencing these problems.
In order to draw TrueType fonts we obtain the glyph outlines using wglUseFontOutlines and then draw them in glRenderMode(GL_FEEDBACK). Afterwards we parse the feedback buffer.
That's a cool Rube Goldberg machine. Why don't you simply cut out the middleman and obtain the glyph outlines directly using the appropriate Windows GDI function (GetGlyphOutline)? This is what wglUseFontOutlines uses internally anyway.
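A minimal sketch of that route, assuming an HDC with the desired font already selected (GGO_NATIVE returns the outline as TTPOLYGONHEADER/TTPOLYCURVE records that you then walk yourself):

#include <windows.h>
#include <vector>

// Fetch the native outline data for one character; returns an empty vector on failure.
std::vector<BYTE> getGlyphOutlineData(HDC hdc, UINT ch)
{
    GLYPHMETRICS gm = {};
    MAT2 identity = { {0,1}, {0,0}, {0,0}, {0,1} };       // identity transform

    DWORD size = GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, 0, nullptr, &identity);
    if (size == GDI_ERROR || size == 0)
        return {};

    std::vector<BYTE> data(size);
    if (GetGlyphOutlineW(hdc, ch, GGO_NATIVE, &gm, size, data.data(), &identity) == GDI_ERROR)
        return {};

    // Each TTPOLYGONHEADER starts a contour and is followed by TTPOLYCURVE
    // records (line segments and quadratic splines) until its cb bytes are consumed.
    return data;
}

Filling the resulting contours (including holes) is then a triangulation problem, for example via GLU tessellation, which is essentially the work wglUseFontOutlines otherwise does for you.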

Large images don't render in Chrome?

Very large images will not render in Google Chrome (although the scrollbars will still behave as if the image is present). The same images will often render just fine in other browsers.
Here are two sample images. If you're using Google Chrome, you won't see the long red bar:
Short Blue
http://i.stack.imgur.com/ApGfg.png
Long Red
http://i.stack.imgur.com/J2eRf.png
As you can see, the browser thinks the longer image is there, but it simply doesn't render. The image format doesn't seem to matter either: I've tried both PNGs and JPEGs. I've also tested this on two different machines running different operating systems (Windows and OSX). This is obviously a bug, but can anyone think of a workaround that would force Chrome to render large images?
Not that anyone cares or is even looking at this post, but I did find an odd workaround. The problem seems to be with the way Chrome handles zooming. If you set the zoom property to 98.6% or lower, or to 102.6% or higher, the image will render; any value between 98.6% and 102.6% causes the rendering to fail. Note that the zoom property is not officially defined in CSS, so some browsers may ignore it (which is a good thing in this case, since this is a browser-specific hack). As long as you don't mind the image being resized slightly, I suppose this may be the best fix.
In short, the following code produces the desired result, as shown here:
<img style="zoom:98.6%" src="http://i.stack.imgur.com/J2eRf.png">
Update:
Actually, this is a good opportunity to kill two birds with one stone. As screens move to higher resolutions (e.g. the Apple Retina display), web developers will want to start serving up images that are twice as large and then scaling them down by 50%, as suggested here. So, instead of using the zoom property as suggested above, you could simply double the size of the image and render it at half the size:
<img style="width:50%;height:50%;" src="http://i.stack.imgur.com/J2eRf.png">
Not only will this solve your rendering problem in Chrome, but it will make the image look nice and crisp on the next generation of high-resolution displays.

Setting up OpenGL/Cuda interop in Windows

I am writing a DLL that needs to do some work in CUDA 3.2 and some work in OpenGL. OpenGL will render some grayscale images that my CUDA code needs to read in and modify, and then give back to OpenGL as a texture. I believe I need to create PBOs to do that. I have done some basic OpenGL stuff before but never worked with extensions, and that's where my problem is: I've been searching for two days and so far haven't been able to find a working example, despite wading through pages and pages of code. None of the samples I've tried work (and I'm sure my video card supports it, being a GTX 470).
Some specific questions:
1. I installed the nvidia opengl sdk. Should I be using glew.h and wglew.h to access the extensions?
2. My DLL does not have any UI - do I need to create a hidden window or is there an easier way to create an off-screen rendering context?
3. Can I create a grayscale PBO by using GL_RED_8UI format? Will both cuda and gl be happy with that? I read the opengl interop section in the cuda programming manual and it said GL_RGBA_8UI was only usable by pixel shaders because it was an OpenGL 3.0 feature, but I didn't know if that applied to a 1-channel format. 1 channel float would also work for my purposes.
4. I thought this would be fairly easy to do - does it really require hundreds of lines of code?
Edit:
I have code to create an OpenGL context attached to an HBITMAP. Should I create a bitmap-rendering context and then try to attach a PBO to that? Or will that slow me down by also rendering to CPU memory? Is it better to create an invisible window and attach the PBO to that? Also, does the pixel format of my PBO have to match the window/bitmap? What about the dimensions?
Thanks,
Alex
There's actually an example of how to use OpenGL and CUDA together. Look at the SimpleGL example.
You may want to take a look at this example:
https://github.com/nvpro-samples/gl_cuda_interop_pingpong_st
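For the PBO part specifically, the host-side flow with the CUDA runtime graphics-interop API looks roughly like this (a sketch only; the single-channel GL_RED upload assumes an OpenGL 3.0+ context, and the kernel launch is left out):

#include <GL/glew.h>             // or whichever extension loader you use
#include <cuda_gl_interop.h>
#include <cuda_runtime.h>

// Assumes a current GL context and a CUDA device already initialized on this thread.
void processWithCuda(GLuint texture, int width, int height)
{
    const size_t bytes = size_t(width) * height;          // one byte per pixel (grayscale)

    // 1. Create a PBO big enough for the image.
    GLuint pbo = 0;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, bytes, nullptr, GL_DYNAMIC_DRAW);

    // 2. Register the PBO with CUDA and map it to get a device pointer.
    cudaGraphicsResource* resource = nullptr;
    cudaGraphicsGLRegisterBuffer(&resource, pbo, cudaGraphicsRegisterFlagsNone);
    cudaGraphicsMapResources(1, &resource, 0);

    void*  devPtr = nullptr;
    size_t mappedSize = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &mappedSize, resource);

    // ... launch your kernel on devPtr here ...

    // 3. Unmap so OpenGL may use the buffer again, then upload it into the texture.
    cudaGraphicsUnmapResources(1, &resource, 0);

    glBindTexture(GL_TEXTURE_2D, texture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RED, GL_UNSIGNED_BYTE, nullptr);    // source is the bound PBO

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    cudaGraphicsUnregisterResource(resource);
    glDeleteBuffers(1, &pbo);
}

Regarding question 2: an invisible window with a normal pixel format is the usual way to get an off-screen, hardware-accelerated context; a bitmap-rendering context typically falls back to the software renderer, which, as far as I know, CUDA cannot interop with.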
