D3D11 WARNING: ID3D11DeviceContext::Draw: The size of the Constant Buffer at slot 0 of the Vertex Shader unit is too small (16 bytes provided, 64 bytes, at least, expected). This is OK, as out-of-bounds reads are defined to return 0. It is also possible the developer knows the missing data will not be used anyway. This is only a problem if the developer actually intended to bind a sufficiently large Constant Buffer for what the shader expects. [ EXECUTION WARNING #351: DEVICE_DRAW_CONSTANT_BUFFER_TOO_SMALL]
P.S. Could this be the reason why my DDS cube texture does not draw into the G-Buffer? It is just a warning, but I have been stuck on this the whole day. Any help would be appreciated.
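For context: the mismatch in the warning means the vertex shader expects at least 64 bytes of constants at slot 0 (for example a single float4x4), while the buffer bound there was created with a ByteWidth of only 16. Below is a minimal sketch of creating a buffer sized to match; the cbuffer layout and the names PerObject / CreatePerObjectCB are assumptions, not taken from the original code.

#define COBJMACROS
#include <d3d11.h>

/* HLSL side (an assumed layout that needs 64 bytes):
   cbuffer PerObject : register(b0) { float4x4 worldViewProj; };  */

static HRESULT CreatePerObjectCB(ID3D11Device *device, ID3D11Buffer **outBuffer)
{
    D3D11_BUFFER_DESC desc = {0};
    desc.ByteWidth      = 64;                         /* sizeof(float4x4); must be a multiple of 16 */
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    return ID3D11Device_CreateBuffer(device, &desc, NULL, outBuffer);
}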
For some post-rendering effects I need to read the depth-texture. This works fine as long as multi-sampling is turned off. But with multi-sampling I have trouble reading the texture.
When I try to use the multisample texture in the shader through a depth2d_ms argument, the compiler fails at run time with the error message "Internal Compiler Error".
I found that with OpenGL you'd first blit the multisample depth buffer to a resolved depth buffer in order to read the sampled values, but with Metal I get an assertion error stating that the sample counts of blit textures need to match, so there is no way to blit 4 samples down to 1.
So how would I read sampled or unsampled values from the depth buffer whilst using multi-sampling?
I don't know the answer for certain, but I suggest trying the solution below:
Set the storeAction of the MTLRenderPassDepthAttachmentDescriptor as follows:
depthAttachment.storeAction = MTLStoreActionMultisampleResolve;
and also set its resolveTexture to another texture:
depthAttachment.resolveTexture = targetResolvedDepthTexture;
Finally, read the contents of targetResolvedDepthTexture.
MSAA depth resolve is only supported in iOS GPU Family 3 v1 (A9 GPU on iOS 9).
Take a look at the Metal Feature Set Tables in the feature availability documentation:
https://developer.apple.com/library/ios/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/MetalFeatureSetTables/MetalFeatureSetTables.html#//apple_ref/doc/uid/TP40014221-CH13-DontLinkElementID_8
Within a WebGL fragment shader I'm using a texture generated from an array of 32-bit values, but it yields errors when going above a resolution of 7000x7000 px, which is far below the maximum texture resolution for my GPU (16384x16384 px). The unsigned variant works without issue at higher resolutions, but not when the type is changed to gl.FLOAT. Is this a known limitation when dealing with floats? Are there workarounds? Any input is much appreciated.
My texture parameters:
"gl.texImage2D(gl.TEXTURE_2D, 0, gl.ALPHA, 8192, 8192, 0, gl.ALPHA, gl.FLOAT, Z_pixels)"
7000 × 7000 pixels × 4 bytes per float × 4 channels ≈ 784 megabytes of memory. Perhaps that exceeds your graphics card's memory capacity?
As per MSDN (https://msdn.microsoft.com/en-us/library/dn302435(v=vs.85).aspx), "[gl.FLOAT] creates 128 bit-per-pixel textures instead of 32 bit-per-pixel for the image," so it is possible that gl.ALPHA will still use 128 bits per pixel.
I've heard that a vertex shader can access the user's own buffer data (a texture buffer object) in OpenGL 3.x and above
(using the texelFetch function).
So recently I tried to apply the TBO technique in an OpenGL ES 3.0 vertex shader on iOS 7, but I could not use a TBO because OpenGL ES 3.0 does not support it.
My vertex shader has to access the TBO and use its data, such as velocities, positions, and forces.
I want to use a similar technique to TBOs in OpenGL ES 3.0.
If I use a pixel buffer object, can I access it with texelFetch() in the shader?
How can I make this work?
Does anybody know a good way?
I don't believe you can sample directly from a Pixel Buffer Object.
One obvious option is to use a regular texture instead of a Texture Buffer Object. The maximum texture size of ES 3.0 compatible iOS devices is 4096 (source: https://developer.apple.com/library/iOS/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/OpenGLESPlatforms/OpenGLESPlatforms.html). There are a few sub-cases depending on how big your data is. With n being the number of texels:
If n is at most 4096, you can store it in a 2D texture that has size n x 1.
If n is more than 4096, you can store it in a 2D texture of size 4096 x ((n + 4095) / 4096).
In both cases, you can still use texelFetch() to get the data from the texture. In the first case, you sample (i, 0) to get value i. In the second case, you sample (i % 4096, i / 4096).
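For illustration, here is a minimal sketch of that approach on iOS with OpenGL ES 3.0, using a single-channel GL_R32F texture and indexing by gl_VertexID; the names kMaxWidth, u_data and MakeDataTexture are made up for the example:

#include <OpenGLES/ES3/gl.h>

enum { kMaxWidth = 4096 };   /* maximum texture width on ES 3.0 iOS devices */

static const char *kVertexShaderSrc =
    "#version 300 es                                           \n"
    "uniform highp sampler2D u_data;  /* packed float data */  \n"
    "in vec4 a_position;                                       \n"
    "void main() {                                             \n"
    "    int i = gl_VertexID;             /* linear index */   \n"
    "    ivec2 tc = ivec2(i % 4096, i / 4096);                 \n"
    "    float v = texelFetch(u_data, tc, 0).r;                \n"
    "    gl_Position = a_position + vec4(0.0, v, 0.0, 0.0);    \n"
    "}                                                         \n";

/* Packs n floats into a 2D texture; `values` must hold width * height floats
   (pad the tail when n is not a multiple of kMaxWidth). */
static GLuint MakeDataTexture(const GLfloat *values, GLsizei n)
{
    GLsizei width  = n < kMaxWidth ? n : kMaxWidth;
    GLsizei height = (n + kMaxWidth - 1) / kMaxWidth;

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* texelFetch does no filtering, but NEAREST keeps the texture complete without mipmaps. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
                 GL_RED, GL_FLOAT, values);
    return tex;
}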
If you already have the data in a buffer, you can store it in the texture by binding the buffer as GL_PIXEL_UNPACK_BUFFER before calling glTexImage2D(), which will then source the data from the buffer.
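A sketch of that upload path; dataBuffer stands in for whatever buffer already holds the float data:

#include <OpenGLES/ES3/gl.h>

static void UploadFromBuffer(GLuint tex, GLuint dataBuffer, GLsizei width, GLsizei height)
{
    /* With a buffer bound to GL_PIXEL_UNPACK_BUFFER, the last argument of
       glTexImage2D() is a byte offset into that buffer, not a client pointer. */
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, dataBuffer);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0,
                 GL_RED, GL_FLOAT, (const GLvoid *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}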
Another option to consider is Uniform Buffer Objects. These allow you to bind the contents of a buffer to a uniform block, which then gives you access to the values in the shader. Look up glBindBuffer(), glBindBufferBase(), and glBindBufferRange() with the GL_UNIFORM_BUFFER target for details. The maximum size of a uniform block in bytes is given by GL_MAX_UNIFORM_BLOCK_SIZE, and is 16,384 on iOS/A7 devices.
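A sketch of the uniform-block route; the block name "ParticleData", the array size, and binding point 0 are example choices:

#include <OpenGLES/ES3/gl.h>

/* GLSL side (assumed layout, 256 * 16 = 4096 bytes, well under the 16,384-byte limit):
   layout(std140) uniform ParticleData { vec4 values[256]; };                      */
static void BindParticleUBO(GLuint program, GLuint dataBuffer)
{
    GLuint blockIndex = glGetUniformBlockIndex(program, "ParticleData");
    glUniformBlockBinding(program, blockIndex, 0);       /* block  -> binding point 0 */
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, dataBuffer);  /* buffer -> binding point 0 */
    /* Or bind only part of the buffer:
       glBindBufferRange(GL_UNIFORM_BUFFER, 0, dataBuffer, offsetBytes, sizeBytes); */
}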
In order to build shaders for Windows Store apps (and Windows Phone 8) against shader model 4_0_level_9_3, you need to use the vs_4_0_level_9_3 and ps_4_0_level_9_3 profiles. While this all sounds fine using the HLSL syntax designed for DirectX 10 and up, I'm unable to use the VPOS semantic from DirectX 9 or SV_POSITION from DirectX 10 and up in a pixel shader. So what do I do, besides adding yet another semantic for outputting the vertex position in clip space?
PS: Some shaders on 4_0_level_9_3 spit out an "internal error: blob content mismatch between level9 and d3d10 shader", which I have no idea what is about. I suspected some inconsistency with the driver (I use an NVIDIA GTX 560 Ti), but I see that it goes away if you just compile your shaders with release flags (like optimization level 3 and avoiding flow control).
Your best bet is, as you say, to pass these values as secondary semantics (i.e. pass both a "POSITION" and a "SV_POSITION" value). Note that if you place SV_POSITION at the end of the output declaration for the vertex shader, you may omit it from the input declaration for the pixel shader.
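A minimal sketch of that arrangement; here the readable copy of the position goes out under a TEXCOORD semantic, the entry-point and struct names are made up, and the source is kept in a C string so it can be fed to D3DCompile() with the vs_4_0_level_9_3 / ps_4_0_level_9_3 targets:

#include <d3dcompiler.h>
#pragma comment(lib, "d3dcompiler.lib")

static const char g_hlsl[] =
    "struct VSOut {                                                      \n"
    "    float4 clipPos : TEXCOORD0;    /* readable copy for the PS */   \n"
    "    float4 pos     : SV_POSITION;  /* last member of the output */  \n"
    "};                                                                  \n"
    "VSOut VSMain(float4 p : POSITION) {                                 \n"
    "    VSOut o;                                                        \n"
    "    o.pos     = p;        /* normally mul(p, worldViewProj) */      \n"
    "    o.clipPos = o.pos;                                              \n"
    "    return o;                                                       \n"
    "}                                                                   \n"
    "/* SV_POSITION is omitted here because it was last in VSOut. */     \n"
    "struct PSIn { float4 clipPos : TEXCOORD0; };                        \n"
    "float4 PSMain(PSIn i) : SV_TARGET {                                 \n"
    "    return float4(i.clipPos.xy, 0.0, 1.0);                          \n"
    "}                                                                   \n";

static ID3DBlob *CompileStage(const char *entryPoint, const char *target)
{
    ID3DBlob *code = NULL;
    ID3DBlob *errors = NULL;   /* holds the compiler's error text on failure */
    D3DCompile(g_hlsl, sizeof(g_hlsl) - 1, NULL, NULL, NULL, entryPoint, target,
               D3DCOMPILE_OPTIMIZATION_LEVEL3, 0, &code, &errors);
    return code;
}

/* Usage: CompileStage("VSMain", "vs_4_0_level_9_3") and CompileStage("PSMain", "ps_4_0_level_9_3"). */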
Regarding the internal error, this is typically due to the declaration of a texture or other shader input that gets optimized out in one pass but not in the other. Disabling optimization typically works around the issue, but you should also be able to fix it by removing input declarations that are unused (including those that only become unused through dead-code elimination), and by avoiding convoluted code that reduces to a no-op.
Is there a maximum size for vertex buffer objects bound to GL_ARRAY_BUFFER or GL_ELEMENT_ARRAY_BUFFER?
Originally, I was drawing a mesh composed of 16 submeshes. For each submesh, I created a vertex buffer and during the rendering phase, I called glDrawElements.
This worked fine on the iOS simulator, but when I tried to render to my device, the screen flashes constantly and the meshes aren't displayed.
I then did some reading and found that you shouldn't call glDrawElements too many times during a render phase.
I tried to combine all of my submeshes into one vertex buffer.
The buffer bound to GL_ARRAY_BUFFER contains 3969 vertices, where each vertex contains 20 floats. So the size of this buffer is 317520 bytes.
The indices bound to GL_ELEMENT_ARRAY_BUFFER are 16425 shorts. The size of this buffer is therefore 32850 bytes.
On the OpenGL wiki, it says that "1MB to 4MB is a nice size according to one nVidia document" for a Vertex Buffer Object.
I printed out the result of glGetError after binding each buffer object, and calling glDrawElements, and I don't see any errors.
However, my meshes aren't correctly displayed. It seems as though only the first mesh gets correctly drawn. Is there anything fishy in the way I've implemented this? I didn't want to make this question too long so if there's any extra information you need to answer this question let me know. If there's nothing in theory that seems wrong, perhaps I've just made a mistake in implementing it.
There is a maximum size, in the sense that the GPU can always issue a GL_OUT_OF_MEMORY error. But other than that, no.
See this:
http://www.sunsetlakesoftware.com/2008/08/05/lessons-molecules-opengl-es
There are some natural limits when using smaller data types, the obvious one being that unsigned short indices can only address 65,536 vertices.
But more importantly there is some additional help in the link, which is a very good tutorial, and includes some anecdotal evidence that shorts up to the natural functional limit work.
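For what it's worth, here is a minimal sketch (the struct and function names are made up) of drawing submeshes that were concatenated into one vertex buffer and one 16-bit index buffer. Each submesh's indices must already be rebased by that submesh's first vertex in the combined vertex buffer; otherwise only the first submesh will reference the right vertices.

#include <stdint.h>
#include <OpenGLES/ES2/gl.h>

typedef struct {
    GLsizei   indexCount;   /* number of GLushort indices in this submesh         */
    uintptr_t indexOffset;  /* byte offset of its first index in the index buffer */
} Submesh;

static void DrawCombinedMesh(GLuint vbo, GLuint ibo,
                             const Submesh *submeshes, int submeshCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    /* ... glVertexAttribPointer / glEnableVertexAttribArray calls for the
           20-float vertex layout go here ... */
    for (int i = 0; i < submeshCount; ++i) {
        glDrawElements(GL_TRIANGLES, submeshes[i].indexCount, GL_UNSIGNED_SHORT,
                       (const GLvoid *)submeshes[i].indexOffset);
    }
}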
I know it is very late to answer this question; however, I hope the answer helps someone!
The OpenGL Graphics System specification (Version 4.5, Core Profile, May 28, 2015) states:
"There is no limit to the number of vertices that may be specified, other than the size of the vertex arrays." Please see page 322.
Also, as Nicol Bolas mentions here:
https://stackoverflow.com/a/7369392/4228827
Cheers,
Naif