What's needed to support non-square matrices in GLSL shader? - macos

I'm trying to use non-square matrices in my GLSL shader but when I compile I get a syntax error.
My shader code containing:
uniform mat4 my_mat;
compiles just fine.
But if I change it to:
uniform mat4x3 my_mat;
I get
ERROR: 0:5: 'mat4x3' : syntax error syntax error
I get a similar error for
uniform mat4x4 my_mat;
If I print my GL_VERSION and GL_SHADING_LANGUAGE_VERSION I get:
GL_VERSION: 2.1 NVIDIA-1.6.36
GL_SHADING_LANGUAGE_VERSION: 1.20
I'm compiling and running my OpenGL code on a Mac OS X 10.6 MacBook Pro. According to this NVidia document and others, GLSL 1.20 and GL 2.1 should include support for non-square matrices and this syntax. Is there another catch? Or another way to troubleshoot why I get syntax errors?

If I place
#version 120
at the top of my shader code, the problem goes away. According to the same document linked in the question, shader source without a #version directive compiles "as before", which I guess means it is treated as GLSL 1.10 and doesn't get the new 1.20 matrix types.
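For illustration, a minimal vertex shader along these lines compiles once the version directive is present (the body is only a sketch):
#version 120
uniform mat4x3 my_mat;               // non-square matrices require GLSL 1.20
void main() {
    vec3 v = my_mat * gl_Vertex;     // mat4x3 (4 columns, 3 rows) times a vec4 gives a vec3
    gl_Position = vec4(v, 1.0);
}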

Related

Is there any GLES #define in C?

I'm working with a codebase that supports OpenGL and OpenGL ES.
Is there any GLES define I can use to write conditional code in C? Something like:
#include <GLES/gl.h>
#ifdef __GLES
//
#else
//
#endif
In GLSL shaders
For conditional compilation of GLSL shaders, there is GL_ES. The GLSL ES 1.00 specification states in section 3.4 "Preprocessor":
The following predefined macros are available
__LINE__
__FILE__
__VERSION__
GL_ES
GL_ES will be defined and set to 1. This is not true for the non-ES OpenGL Shading Language, so it can
be used to do a compile time test to see whether a shader is running on ES.
So you can check for GL_ES in shaders.
In host/C code
If you want to know whether a particular C file is being compiled for a particular version of OpenGL ES, you can use the defines from GLES/gl.h, GLES2/gl2.h, GLES3/gl3.h, GLES3/gl31.h or GLES3/gl32.h, which are GL_VERSION_ES_CM_1_0, GL_ES_VERSION_2_0, GL_ES_VERSION_3_0, GL_ES_VERSION_3_1 and GL_ES_VERSION_3_2, respectively. Starting with GLES2/gl2.h, each header also defines the macros of all previous versions back to GLES 2.0, so when you include GLES3/gl31.h you will find GL_ES_VERSION_2_0, GL_ES_VERSION_3_0 and GL_ES_VERSION_3_1 defined.
However, it is your application that decides whether to include the desktop OpenGL header files or the OpenGL ES header files, and you could use that same condition to conditionally compile code that depends on targeting one or the other. So you don't really need any predefined defines, because you make that decision yourself earlier in the program when choosing which header to include.
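A short sketch of both halves of that decision; MY_APP_USE_GLES is a made-up project-level switch standing in for however you already select the headers:
/* hypothetical build switch that picks the header set */
#ifdef MY_APP_USE_GLES
#include <GLES2/gl2.h>
#else
#include <GL/gl.h>
#endif

const char *gl_flavor(void)
{
#ifdef GL_ES_VERSION_2_0
    return "OpenGL ES 2.0 or later";   /* defined by GLES2/gl2.h and newer headers */
#else
    return "desktop OpenGL";
#endif
}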

Conjugate gradient with incomplete Cholesky preconditioner returns errors with the Eigen library

I'm using the Eigen library to solve Ax=b. The default preconditioner didn't perform well in terms of time, so I want to try some other preconditioners, such as the incomplete Cholesky preconditioner. Here is my code:
Eigen::ConjugateGradient<Eigen::MatrixXd, Eigen::Lower | Eigen::Upper, Eigen::IncompleteCholesky<double> > cg;
I just added the incomplete Cholesky preconditioner to my code, but when compiling with Visual Studio 2019 on Windows 10 I got the following error:
"twistedBy": is not the member of "Eigen::SelfAdjointView<const Derived,1>" D:\eigen_test\eigen\Eigen\src\IterativeLinearSolvers\IncompleteCholesky.h 202
(I'm using Chinese version of vs2019 so I translated the error message into English)
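A hedged sketch: IncompleteCholesky lives in Eigen's sparse iterative-solver module, so the dense Eigen::MatrixXd instantiation is a plausible cause of the twistedBy error. With a sparse system matrix the same declaration compiles, assuming A can be stored as an Eigen::SparseMatrix<double>:
#include <Eigen/Sparse>
#include <Eigen/IterativeLinearSolvers>

int main()
{
    Eigen::SparseMatrix<double> A;   // fill the self-adjoint system matrix, e.g. via setFromTriplets
    Eigen::VectorXd b, x;

    Eigen::ConjugateGradient<Eigen::SparseMatrix<double>,
                             Eigen::Lower | Eigen::Upper,
                             Eigen::IncompleteCholesky<double> > cg;
    cg.compute(A);
    x = cg.solve(b);
    return 0;
}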

Can't get the needed attribute from fragment shader in WebGL

There is a well-known function in the WebGL API (and in OpenGL too), getAttribLocation: http://msdn.microsoft.com/en-us/library/ie/dn302408(v=vs.85).aspx
In my project I'm trying to get the needed attribute using this function, but I'm getting -1.
That's OK when there is no such attribute in the shader, but in my program there is:
See it? I've dumped the existing members of the shaders to the console, and vertexColor exists.
I don't know how I can get -1 when the dump of the shader I'm fetching attributes from shows that such an attribute exists.
Fetching the other attributes works fine; only this one fails (it was declared the same way as the others and I didn't delete anything):
As you can see, for vertexPosition and textureCoordinatesAttribute it returns the attribute's location number.
I can't explain what's wrong or why it occurs. Please help with a piece of advice.
I can't provide source code because I'm developing a large library, which is currently 3k+ lines of code. I can only tell you that the screenshots show how the shaders are created, and the function that fetches attributes simply iterates over a collection of input objects with an attribute list and calls getAttribLocation from WebGL for each attribute. The screenshots show the real dump, so you can see I'm telling the truth about the attributes existing in the shader. You can also see that fetching two of them gives a correct result while one fails. I can't see what's wrong; logically it should not return -1, since the attribute exists and the calling syntax is correct, so this issue is rather mysterious to me right now.
If an attribute is not used, the GLSL compiler is allowed to optimize it out. The WebGL API is designed so you can ignore that: passing -1 to any of the functions that take an attribute location is basically a no-op.
Similarly, the GLSL compiler can optimize away unused uniforms. gl.getUniformLocation returns null for unknown uniforms, and passing null to any of the functions that take a uniform location is a no-op.
It's a good thing it works this way, because when debugging it's common to comment out parts of a shader. Because unknown attribute and uniform locations just become no-ops, everything keeps working with no errors.
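To illustrate with the attribute names from your screenshots, a vertex shader like this sketch would show exactly that behaviour: vertexColor never contributes to the output, so the compiler may strip it and getAttribLocation then returns -1 for it (the varying name is only illustrative):
attribute vec3 vertexPosition;
attribute vec2 textureCoordinatesAttribute;
attribute vec4 vertexColor;                   // declared but never read below
varying vec2 vTextureCoord;
void main() {
    vTextureCoord = textureCoordinatesAttribute;
    gl_Position = vec4(vertexPosition, 1.0);  // vertexColor is unused, so it may be optimized out
}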

Are built-in variables such as gl_Normal and gl_Vertex supported by GLSL in OpenGL ES 2.0?

I am new to OpenGL ES 2.0 and GLSL, and I want to use shaders to process images. When I coded in Xcode, I used built-in variables such as gl_Normal and gl_Vertex directly and did not declare them at the beginning of my shaders. In the end I got an error message:
Use of undeclared identifier gl_Normal. Why?
In OpenGL ES 2, and following in its footsteps OpenGL 3 core, there are no longer predefined shader input variables. OpenGL 4 even did away with the predefined shader outputs.
Instead you're expected to define your own inputs and outputs. Each input or output variable is assigned a so-called location, either implicitly by OpenGL, retrievable with glGetAttribLocation, or explicitly by the programmer with a location layout qualifier or the glBindAttribLocation function. Outputs are similarly assigned via fragment data locations.
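A minimal GLSL ES 1.00 vertex shader with user-defined inputs might look like this sketch (the attribute and uniform names are arbitrary; their locations are bound with glBindAttribLocation or queried with glGetAttribLocation):
attribute vec4 aPosition;    // takes the role of gl_Vertex
attribute vec3 aNormal;      // takes the role of gl_Normal
uniform mat4 uModelViewProjection;
varying vec3 vNormal;
void main() {
    vNormal = aNormal;
    gl_Position = uModelViewProjection * aPosition;
}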

Is it possible to tell if the OpenGL version is OpenGL ES within the shader code?

Is there any way to tell, within the source code of a shader, that the shader is being compiled for OpenGL ES? I want to define the version using the #version preprocessor directive as 100 for OpenGL ES (so that the shader compiles for OpenGL ES 2.0), but as 110 for OpenGL 2.1.
Is the best way to do this to place the #version in a separate string that is fed in at the application level, or is there a way to do this within the shader?
Another useful, related thing to be able to do would be to say something like
#if version == 100 compile this code, else compile this code. Is this possible within GLSL?
Thanks.
Prepending the #version from the main program, as PeterT suggested in the comment above, is the only way that will work. Being able to do this (and being able to define constants without having something like a -D compiler switch available) is the main intent behind glShaderSource taking an array of pointers rather than a simple char*.
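A sketch of that approach in the host code (set_shader_source, shader and shader_body are illustrative names; it assumes the appropriate GL header for your target is already included so the GL_ES_VERSION_2_0 check works):
/* Assumes whichever GL header your build targets (e.g. <GLES2/gl2.h>) is already included. */
static void set_shader_source(GLuint shader, const GLchar *shader_body)
{
#ifdef GL_ES_VERSION_2_0
    const GLchar *version_line = "#version 100\n";   /* GLSL ES 1.00 */
#else
    const GLchar *version_line = "#version 110\n";   /* desktop GLSL 1.10 */
#endif
    const GLchar *sources[2] = { version_line, shader_body };
    /* the two strings are concatenated by the shader compiler */
    glShaderSource(shader, 2, sources, NULL);
}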
The GLSL specification (chapter 3.3) requires that #version be the first thing in a shader source, except for whitespace and comments.
Thus, no such thing as
#ifdef foo
#version 123
#endif
is valid, and no such thing will compile (unless the shader compiler is overly permissive, i.e. broken).
About your second question: conditional compilation certainly works, and using it in the way you intend is fine.
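Since __VERSION__ is one of the predefined macros, you can branch on it directly in the preprocessor, for example:
#if __VERSION__ == 100
// compiled as GLSL ES 1.00
precision mediump float;
#else
// compiled as desktop GLSL 1.10
#endif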
This is also related information:
http://blog.beuc.net/posts/OpenGL_ES_2.0_using_Android_NDK/
You can, for example:
#ifdef GL_ES
precision mediump float;
#endif
OpenGL ES 2.0 implementations are required to have a GL_ES macro predefined in the shaders.
