How does one go about setting the GLSL version on Mac? Is this even possible? I'm running a fragment shader and would like to create an array of vec3s, but the shader compiler is producing an error indicating that I need to use a higher GLSL version. The specific error is
'array of 3-component vector of float' : array type not supported here in glsl < 120
Thanks for the help.
Although I have no Mac experience, you can specify the lowest required version of your shader (which is 1.10 by default, I think) by using something like
#version 120 //shader requires version 1.20
as the first line of your shader. But of course the specified version also has to be supported by your hardware and driver, which you can check with glGetString(GL_SHADING_LANGUAGE_VERSION).
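For example, a minimal check in plain C might look like this (it assumes a current GL context and whatever GL header your platform uses; the printf is just illustrative):

#include <stdio.h>

/* Assumes a GL context is current and a GL header is already included. */
void print_glsl_version(void)
{
    const char *gl_version   = (const char *)glGetString(GL_VERSION);
    const char *glsl_version = (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION);
    printf("GL: %s, GLSL: %s\n", gl_version, glsl_version);
}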
EDIT: I confirmed this with a look at the GLSL spec, which also says that all shaders linked together should target the same version. I'm quite sure I have successfully violated this once, but that may be down to my forgiving NVIDIA driver. So if it still complains when linking, add the same #version directive to the vertex shader, too.
For a given version of Godot, you can deterministically generate OpenSimplexNoise using a seed (documentation).
However, in the documentation for RandomNumberGenerator there is the following:
Note: The underlying algorithm is an implementation detail. As a result, it should not be depended upon for reproducible random streams across Godot versions.
Some workarounds for the above issue are described here; the short answer is to write a custom portable RNG.
Is there any way to insert a custom RNG for OpenSimplexNoise to manage determinism? Is there another solution?
The Godot developers are warning you that they might decide to change RandomNumberGenerator in a future version (and some changes have happened already).
And no, you can't insert a custom random number generator for OpenSimplexNoise.
Anyway, OpenSimplexNoise does not use RandomNumberGenerator or rand. Instead it pulls in this library as a git submodule: smcameron/open-simplex-noise-in-c.
Does OpenSimplexNoise change? Rarely. However, there is a breaking change in OpenSimplexNoise for Godot 4 and 3.4: the axes have been swapped (this is a fix).
So that leads us to adding a custom noise solution, which could be a port of OpenSimplex noise to C# or GDScript.
See open-simplex-noise.c from the Godot source and digitalshadow/OpenSimplexNoise.cs (which is a port to C#; there are some other ports linked in the comments there, too).
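For illustration only, here is the general shape such a self-contained, seed-based noise can take, written as plain C value noise rather than OpenSimplex (every name and constant here is just an example, not Godot API), the point being that the output depends only on the seed and the coordinates, so it stays stable across engine versions:

#include <stdint.h>

/* Deterministic integer hash of (seed, x, y); the constants are arbitrary mixers. */
static uint32_t hash2d(uint32_t seed, int32_t x, int32_t y)
{
    uint32_t h = seed;
    h ^= (uint32_t)x * 0x9E3779B1u;
    h ^= (uint32_t)y * 0x85EBCA77u;
    h ^= h >> 16; h *= 0x7FEB352Du;
    h ^= h >> 15; h *= 0x846CA68Bu;
    h ^= h >> 16;
    return h;
}

/* Pseudo-random value in [0, 1) at integer lattice point (x, y). */
static float lattice_value(uint32_t seed, int32_t x, int32_t y)
{
    return (float)hash2d(seed, x, y) / 4294967296.0f;
}

/* Bilinearly interpolated value noise at an arbitrary point (assumes x, y >= 0). */
float value_noise(uint32_t seed, float x, float y)
{
    int32_t xi = (int32_t)x, yi = (int32_t)y;
    float tx = x - (float)xi, ty = y - (float)yi;
    float a = lattice_value(seed, xi,     yi);
    float b = lattice_value(seed, xi + 1, yi);
    float c = lattice_value(seed, xi,     yi + 1);
    float d = lattice_value(seed, xi + 1, yi + 1);
    float top    = a + (b - a) * tx;
    float bottom = c + (d - c) * tx;
    return top + (bottom - top) * ty;
}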
And for the texture, there are a couple options:
You can create a script that draws the image (I suggest using lockbits) and then set it.
Or you can extend Viewport (which entails adding a Viewport to the scene and attaching a script). Then you take advantage of ViewportTexture, and you can use shaders to create the texture you want. See BastiaanOlij/godot-noise-texture-asset for reference.
I'm calling glTextureStorage2D to generate a framebuffer in my engine. I'm using Google's ANGLE on Windows, and libglfw3-dev & libgles2-mesa-dev on Ubuntu running on the same machine.
Creating 8-bit RGBA textures is fine on both platforms, but higher bit-depth formats such as GL_RGBA32F, GL_RGBA16F, GL_RGB10_A2, GL_RGBA16I & GL_R11F_G11F_B10F silently fail on Ubuntu and, on inspection (using RenderDoc), appear to fall back to a standard RGBA texture.
I'm interested both in ascertaining whether a given texture format is available on a given platform, and in why the set of libraries I'm using doesn't seem to support these formats when the machine is clearly capable of supporting them. I'm aware that glGetInternalformativ exists on more up-to-date GL implementations, but that's unlikely to be available on the lower-spec machines that I'd like to test on.
I tried installing 'libgles3-mesa-dev', but that package doesn't exist; besides, the GLES3 headers are all there and everything runs, it just silently fails to create the texture formats I'm after.
Any hints as to why this seems to be the case would be appreciated.
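For what it's worth, a common fallback when glGetInternalformativ isn't available is to allocate a small texture in the format under test, attach it to a throwaway framebuffer, and check glGetError and glCheckFramebufferStatus. A rough sketch in C (assuming an ES 3.0 context and <GLES3/gl3.h>; format_is_renderable is just an example name):

#include <GLES3/gl3.h>

/* Returns nonzero if the sized internal format can be allocated and rendered to. */
int format_is_renderable(GLenum internal_format, GLenum format, GLenum type)
{
    GLuint tex = 0, fbo = 0;
    GLenum status;

    while (glGetError() != GL_NO_ERROR) {}  /* clear any stale errors */

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, 4, 4, 0, format, type, NULL);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteFramebuffers(1, &fbo);
    glDeleteTextures(1, &tex);

    return glGetError() == GL_NO_ERROR && status == GL_FRAMEBUFFER_COMPLETE;
}

/* e.g. format_is_renderable(GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT) */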
I'm running Octave 3.8.1 on an i3 notebook with 2GB of DDR3 RAM, on Ubuntu 14.04 dual-booting with Windows 7.
I'm having a really hard time saving the plots I use in my seismology research. They are quite simple, and still I wait almost 5 minutes to save a single plot; the plot itself is built within seconds, but the saving...
Is it purely a problem with my notebook performance?
When I run a program for the first time I get the following warnings about shadowed functions; does one of them have anything to do with it?
warning: function /home/michel/octave/specfun-1.1.0/x86_64-pc-linux-gnu-api-v49+/ellipj.oct shadows a built-in function
warning: function /home/michel/octave/specfun-1.1.0/erfcinv.m shadows a built-in function
warning: function /home/michel/octave/specfun-1.1.0/ellipke.m shadows a core library function
warning: function /home/michel/octave/specfun-1.1.0/expint.m shadows a core library function
Also, this started happening when I upgraded from a very old version of Octave (2.8, if I'm not mistaken). It seems that the old one used to rely on Linux's default plotting facilities, while the new one (3.8.1) uses its own; is that correct? Plots used to take a little longer on this notebook than on the lab PC, but nowhere near 5+ minutes each.
Is there anything I can do, like upgrading something within Octave or "unshadowing" the functions mentioned above?
Thanks a lot for the help.
Shadowed functions are just a name clash, which is explained for example here: Warnings after octave installation
As for the low performance, Octave's renderer doesn't seem to be well optimized for writing plots with a huge number of points. For example, the following:
tic; x=1:10000; plot(sin(x)); saveas(gcf,'2123.png'); toc;
will put Octave into a coma for quite a while, even though the plot itself is made in an instant. If your data is of a comparable size, consider making it sparser before putting it on the graph.
There's no default Linux plot maker; there's gnuplot. You may try your luck with it by invoking
graphics_toolkit gnuplot
before plotting. (For me it didn't do much good, though. graphics_toolkit fltk will bring back Octave's usual plotter.)
If the slowness you refer to is in saving three-dimensional plots (like mesh), the only workaround I've found on a system similar to yours is to use Alt+PrtScr.
Alternatively, you could try obtaining Octave 4.0, which has been released by now. Its changelogs mention yet another graphics toolkit.
I have a program object which can be rendered successfully.
But sometimes at runtime in my application, when I modify and recompile its vertex & fragment shader sources and re-link it with glLinkProgram(), I see that rendering with the program no longer works.
Note that the shaders and program were re-compiled/re-linked successfully.
I just checked their status with
glGetShaderiv(fsId, GL_COMPILE_STATUS, &compileStatus);
and glGetProgramiv(progId, GL_LINK_STATUS, &linkStatus);
the result is compileStatus = linkStatus = 1
I'm wondering whether we can re-link a program object in OpenGL ES 2.0 or not.
My GPU info:
GL_RENDERER: PowerVR SGX 530
GL_VENDOR: Imagination Technologies
GL_VERSION: OpenGL ES 2.0
Can you? By the OpenGL ES specification, yes. Should you? No.
The general rule when doing anything in OpenGL, even ES versions, is this: don't do anything unless you know it's commonly done. The farther off the beaten path you go, the more likely you are to encounter driver bugs.
In general, the usage pattern for programs is to link them, then use them a bunch, then delete them when you're closing the application. You should stick to that. If you need a new program, you create a new program.
Re-linking is going to trash all your uniform state anyway, so it's not like you're preserving anything by re-linking an old program instead of creating a new one. Indeed, creating a new program is better: if the new link fails, you still have the old program, whereas if you re-link an existing program and the link fails, the old data is destroyed.
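As a rough sketch of that pattern in C (compile_shader is a hypothetical helper that compiles a single stage and returns its handle; everything else is standard ES 2.0 API):

/* Build a replacement program and only swap it in if linking succeeds,
   so a failed rebuild never destroys the program currently in use. */
GLuint rebuild_program(GLuint old_program, const char *vs_src, const char *fs_src)
{
    GLuint vs = compile_shader(GL_VERTEX_SHADER, vs_src);    /* hypothetical helper */
    GLuint fs = compile_shader(GL_FRAGMENT_SHADER, fs_src);  /* hypothetical helper */

    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    glDeleteShader(vs);  /* flagged for deletion once the program goes away */
    glDeleteShader(fs);

    GLint linked = GL_FALSE;
    glGetProgramiv(prog, GL_LINK_STATUS, &linked);
    if (!linked) {
        glDeleteProgram(prog);       /* keep using old_program */
        return old_program;
    }

    glDeleteProgram(old_program);    /* retire the old program only after success */
    return prog;
}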
I've been using OpenGL for quite a while now, but I always get confused by its state management system. In particular, the issue I struggle with is understanding exactly which object or target a particular piece of state is stored against.
Eg 1: assigning a texture parameter. Are those parameters stored with the texture itself, or with the texture unit? Will binding the texture to a different texture unit move those parameter settings?
Eg 2: glVertexAttribPointer - what exactly is that associated with - is it the active shader program, the bound data buffer, the ES context itself? If I bind a different vertex buffer object, do I need to call glVertexAttribPointer again?
So I'm not asking for answers to the above questions - I'm asking whether those answers are written down somewhere, so I don't need to do the whole trial-and-error thing every time I use something new.
Those answers are written in the OpenGL ES 2.0 specification (PDF link). Every function states what state it affects, and there's a big series of tables at the end that specify which state is part of which objects, or just part of the global context.