Within a WebGL fragment shader I'm using a texture generated from an array of 32-bit values, but it yields errors when going above a resolution of 7000x7000px. This is far below the maximum texture resolution for my GPU (16384x16384px). gl.UNSIGNED_BYTE works without issue at higher resolutions, but not when changed to gl.FLOAT. Is this a known limitation when dealing with floats? Are there workarounds? Any input much appreciated.
My texture parameters:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.ALPHA, 8192, 8192, 0, gl.ALPHA, gl.FLOAT, Z_pixels)
7000 * 7000 texels * 32 bits per float * 4 channels ≈ 784 megabytes of memory. Perhaps that exceeds your graphics card's memory capacity?
MSDN (https://msdn.microsoft.com/en-us/library/dn302435(v=vs.85).aspx) says "[gl.FLOAT] creates 128 bit-per-pixel textures instead of 32 bit-per-pixel for the image," so it's possible that gl.ALPHA will still use 128 bits per pixel.
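For what it's worth, a quick way to sanity-check this on a given machine is to query the limits and do the arithmetic up front. A minimal sketch, assuming an existing WebGLRenderingContext named gl and the worst case suggested above (the driver padding an ALPHA/FLOAT texture out to 16 bytes per texel; that padding is a possibility, not a guarantee):
const floatExt = gl.getExtension('OES_texture_float'); // required before uploading gl.FLOAT data
const maxSize = gl.getParameter(gl.MAX_TEXTURE_SIZE);  // e.g. 16384

// Worst-case estimate: 4 channels x 4 bytes per channel per texel.
function estimateFloatTextureMB(width, height) {
  return (width * height * 4 * 4) / (1024 * 1024);
}

console.log('float textures supported:', !!floatExt,
            'max texture size:', maxSize,
            '8192x8192 worst case (MB):', estimateFloatTextureMB(8192, 8192).toFixed(0));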
From Arch Wiki:
For Xft.dpi, using integer multiples of 96 usually works best, e.g. 192 for 200% scaling.
I understand that 200%, 300%, ... scaling works best, because every pixel is replaced by an integer number of pixels and we never end up having to display 1.5 pixels.
But what if I don't have a 4K monitor and instead have, for example, a 2.5K (2560x1440) monitor, or a monitor with some non-standard resolution or aspect ratio? In that case doubling the scale factor is too much.
I have only two ideas:
Scale by 1.25, 1.5, or 1.75, so that 16x16 and 32x32 objects are still scaled to a whole number of pixels.
Scale using (vertical_pixels*horizontal_pixels)/(1920*1080)*96 as the DPI, so objects end up at a size similar to that on a normal display.
I created a 1024*1024 texture with
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, 1024, 1024, 0, nDataLen*4, pData1);
then updated its first 512*512 part like this:
glCompressedTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 512, 512, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, nDataLen, pData2);
This update generates GL error 1282 (invalid operation). If I update the whole 1024*1024 region everything is fine, so it seems that a PVRTC texture cannot be partially updated.
Is it possible to partially update a PVRTC texture, and if so, how?
Sounds to me like you can't on GLES2 (see the spec, section 3.7.3):
Calling CompressedTexSubImage2D will result in an INVALID_OPERATION error if xoffset or yoffset is not equal to zero, or if width and height do not match the width and height of the texture, respectively. The contents of any texel outside the region modified by the call are undefined. These restrictions may be relaxed for specific compressed internal formats whose images are easily modified
Makes glCompressedTexSubImage2D sound a bit useless to me, tbh, but I guess it's for updating individual mips or texture array levels.
Surprisingly, I copied a small PVRTC texture's data into a large one and it works just like glCompressedTexSubImage2D. But I'm not sure whether it's safe to use this solution in my engine.
Rightly or wrongly, the reason PVRTC1 does not have CompressedTexSubImage2D support is that unlike, say, ETC* or S3TC, the texture data is not compressed as independent 4x4 squares of texels which, in turn, get represented as either 64 or 128 bits of data depending on the format. With ETC*/S3TC any aligned 4x4 block of texels can be replaced without affecting any other region of the texture simply by just replacing its corresponding 64- or 128-bit data block.
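To make that concrete, here is a minimal sketch (in JavaScript, assuming a DXT1/S3TC-style layout of independent 8-byte 4x4 blocks; the helper names are made up) of why such block-based formats can support sub-rectangle replacement: the bytes of any aligned 4x4 block can be located and overwritten without touching any other texel.
// Hypothetical helper: byte offset of the 4x4 block containing texel (x, y)
// in DXT1-compressed data of the given width. Each block occupies 8 bytes
// and is completely independent of its neighbours.
function dxt1BlockOffset(x, y, width) {
  const blocksPerRow = Math.ceil(width / 4);
  return (Math.floor(y / 4) * blocksPerRow + Math.floor(x / 4)) * 8;
}

// Overwriting one block in the CPU-side copy leaves every other texel
// untouched, which is exactly the property PVRTC1's overlapping 64-bit units lack.
function replaceBlock(compressedData, newBlock /* Uint8Array of 8 bytes */, x, y, width) {
  compressedData.set(newBlock, dxt1BlockOffset(x, y, width));
}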
With PVRTC1, two aims were to avoid block artifacts and to take advantage of the fact that neighbouring areas are usually very similar and thus can share information. Although the compressed data is grouped into 64-bit units, these affect overlapping areas of texels. In the case of 4bpp they are ~7x7 and for 2bpp, 15x7.
As you later point out, you could copy the data yourself, but there may be a fuzzy boundary. For example, I took a 64x64 and a 32x32 texture (each compressed and decompressed with PVRTC1 at 4bpp) and did the equivalent of "TexSubImage" to paste the smaller one into the larger. In the result, the border of the smaller texture is smudged, because the colour information is shared across the boundary.
In practice it might not matter but since it doesn't strictly match the requirements of TexSubImage, it's not supported.
PVRTC2 has facilities to do better subimage replacement but is not exposed on at least one well-known platform.
< Unsubtle plug > BTW if you want some more info on texture compression, there is a thread on the Stack Exchange Computer Graphics site < /Unsubtle plug >
I have a 16-bit greyscale PNG with values in [0, 65535] that I am using as a texture in WebGL:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
In the shader I read just the R component and use this value to look up an RGBA value from a color table.
This works fine. However, I don't really understand what is happening here. The image has 2 bytes per pixel, yet I am telling WebGL that it has 4 components of 1 byte each. I would have thought this would result in reading two pixels instead of one.
The image has 2 bytes of precision. Since each component is specified as only 1 byte, I wonder: am I losing precision by doing it this way?
Ideally I want to send only one channel, R, in 16 bit to the GPU.
How do I do this?
There is no way to upload 16-bit textures to WebGL. You can instead upload two 8-bit channels (like R and G) and combine them in your shader:
vec4 color = texture2D(someSampler, someTexCoord);
float value = color.r + color.g * 256.0;
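A minimal sketch of the matching upload side, assuming the 16-bit samples are already available in a Uint16Array (the names samples, width, and height are placeholders): put the low byte in R and the high byte in G so the shader above reconstructs the value, scaled by 1/255.
// Split each 16-bit value into two 8-bit channels of an RGBA texture.
function packToRGBA(samples) {
  const packed = new Uint8Array(samples.length * 4);
  for (let i = 0; i < samples.length; ++i) {
    packed[i * 4]     = samples[i] & 0xff;  // low byte  -> color.r
    packed[i * 4 + 1] = samples[i] >> 8;    // high byte -> color.g
    // .b and .a are left at 0 and unused by the shader.
  }
  return packed;
}

gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, packToRGBA(samples));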
As for how gl.texImage2D works with images, gl.texImage2D takes the following parameters.
gl.texImage2D(target, level, internalFormat,
format, type, image/video/canvas);
WebGL will first take the image, video, or canvas and convert it to format, type, taking into account the UNPACK_PREMULTIPLY_ALPHA_WEBGL, UNPACK_FLIP_Y_WEBGL, and UNPACK_COLORSPACE_CONVERSION_WEBGL settings. It will then call glTexImage2D.
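For example, when uploading data textures you would typically switch those conversions off so the upload is as predictable as possible (these are just the relevant pixelStorei calls, not a guarantee of losslessness):
// Disable conversions that could alter raw data during upload.
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
gl.pixelStorei(gl.UNPACK_FLIP_Y_WEBGL, false);
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);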
Because WebGL supports no 16-bit integer formats, there's no way to upload 16-bit integer channels. It is possible to upload to floating-point or half-float textures if the user's machine supports those formats and you enable the OES_texture_float and/or OES_texture_half_float extensions, but when converting from an image there's no guarantee how lossy the conversion will be. Plus, AFAIK Safari is the only browser that can load 16-bit or floating-point images at all (it accepts 16- and 32-bit .TIF files in image tags, or did last time I checked about 4 years ago). Whether it uploads them losslessly in WebGL is, I believe, neither specified in the WebGL spec nor tested. Only 8-bit images are required to upload losslessly, and only if you set the UNPACK_ settings correctly.
Of course you can always upload the data yourself using an ArrayBuffer to get lossless half-float or float data. But otherwise you need to split the data into 8-bit channels.
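A minimal sketch of that route, assuming the OES_texture_float extension is available and the values are already in a Float32Array named data (the names and the LUMINANCE format choice are illustrative):
const ext = gl.getExtension('OES_texture_float');
if (ext) {
  // `data` holds width*height float values, e.g. the 16-bit samples
  // converted to floats on the CPU, uploaded without an image decode step.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE, width, height, 0,
                gl.LUMINANCE, gl.FLOAT, data);
  // LINEAR filtering of float textures additionally needs
  // OES_texture_float_linear; NEAREST always works.
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
}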
I've heard that a vertex shader can access the user's own buffer data (a texture buffer object) on OpenGL 3.x and above
(using the texelFetch method).
So recently I've tried to apply the TBO technique in an OpenGL ES 3.0 vertex shader on iOS 7, but I could not use a TBO because OpenGL ES 3.0 doesn't support it.
My vertex shader has to access that buffer and use its data, such as velocities, positions, and forces.
I want to use a similar TBO-like technique on OpenGL ES 3.0.
If I use a pixel buffer object, can I access it with texelFetch() in the shader?
How can I make this work?
Does anybody know a good way?
I don't believe you can sample directly from a Pixel Buffer Object.
One obvious option is to use a regular texture instead of a Texture Buffer Object. The maximum texture size of ES 3.0 compatible iOS devices is 4096 (source: https://developer.apple.com/library/iOS/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/OpenGLESPlatforms/OpenGLESPlatforms.html). There are a few sub-cases depending on how big your data is. With n being the number of texels:
If n is at most 4096, you can store it in a 2D texture that has size n x 1.
If n is more than 4096, you can store it in a 2D texture of size 4096 x ((n + 4095) / 4096).
In both cases, you can still use texelFetch() to get the data from the texture. In the first case, you sample (i, 0) to get value i. In the second case, you sample (i % 4096, i / 4096).
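As a sketch of the second case in GLSL ES 3.00 (the sampler and function names are placeholders; 4096 is the iOS texture-size limit quoted above):
// Fetch element `index` from a 4096-texel-wide data texture using integer
// texel coordinates; texelFetch does no filtering or normalisation.
uniform highp sampler2D u_dataTex;

vec4 fetchElement(int index) {
  ivec2 coord = ivec2(index % 4096, index / 4096);
  return texelFetch(u_dataTex, coord, 0);
}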
If you already have the data in a buffer, you can store it in the texture by binding the buffer as GL_PIXEL_UNPACK_BUFFER before calling glTexImage2D(), which will then source the data from the buffer.
Another option to consider is Uniform Buffer Objects. These allow you to bind the content of a buffer to a uniform block, which then gives you access to the values in the shader. Look up glBindBuffer(), glBindBufferBase(), and glBindBufferRange() with the GL_UNIFORM_BUFFER target for details. The maximum size of a uniform block in bytes is given by GL_MAX_UNIFORM_BLOCK_SIZE, and is 16,384 on iOS/A7 devices.
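For reference, the shader side of that might look like the sketch below (GLSL ES 3.00; the block name, member names, and array sizes are made up, and the std140 layout rules govern how the buffer bytes map onto the members):
// Backed by a buffer bound with glBindBufferBase(GL_UNIFORM_BUFFER, binding, buffer).
layout(std140) uniform ParticleData {
  vec4 positions[256];   // 256 * 16 bytes = 4096 bytes
  vec4 velocities[256];  // total stays well under the 16 KB block limit
};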
My application is dependent on reading depth information back from the framebuffer. I've implemented this with glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, &depth_data)
However, this runs unreasonably slowly; it brings my application from a smooth 30 fps down to a laggy 3 fps. If I read back other dimensions or other data, it runs at an acceptable level.
To give an overview:
No glReadPixels -> 30 frames per second
glReadPixels(0, 0, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth_data); -> 20 frames per second, acceptable
glReadPixels(0, 0, width, height, GL_RED, GL_FLOAT, &depth_data); -> 20 frames per second, acceptable
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, &depth_data); -> 3 frames per second, not acceptable
Why should the last one be so slow compared to the other calls? Is there any way to remedy it?
width x height is approximately 100 x 1000; the call gets increasingly slower as I increase the dimensions.
I've also tried using pixel buffer objects, but this has no significant effect on performance; it only delays the slowness until the glMapBuffer() call.
(I've tested this on a MacBook Air with nVidia 320M graphics on OS X 10.6; strangely enough, my old MacBook with Intel GMA X3100 got ~15 fps reading the depth buffer.)
UPDATE: leaving GLUT_MULTISAMPLE out of the glutInitDisplayMode options made a world of difference, bringing the application back to a smooth 20 fps. I don't know what the option does in the first place; can anyone explain?
If your main framebuffer is MSAA-enabled (GLUT_MULTISAMPLE is present), then two actual framebuffers are created: one with MSAA and one regular.
The first one is the one you render into. It contains front and back color surfaces, plus depth and stencil. The second one only has to contain the color produced by resolving the corresponding MSAA surface.
However, when you are trying to read depth using glReadPixels the driver is forced to resolve the MSAA-enabled depth surface too, which probably causes your slowdown.
What storage format did you choose for your depth buffer?
If it is not GLfloat, then you're asking GL to convert every single depth value in the depth buffer to float when reading it. (The same applies to your third case, with GL_RED: was your color buffer a float buffer?)
Whether it is GL_FLOAT or GL_UNSIGNED_BYTE, glReadPixels is still very slow. If you use a PBO to read RGB values, it is very fast.
When using a PBO to read RGB values, the CPU usage is 4%, but it increases to 50% when reading depth values. I've tried GL_FLOAT, GL_UNSIGNED_BYTE, GL_UNSIGNED_INT, and GL_UNSIGNED_INT_24_8, so I conclude that a PBO is of no help for reading depth values.