Convert sampler2D into samplerCube - opengl-es

Is there a way, in a fragment shader that is given a sampler2D, to convert it into a samplerCube? I want the cube to have the sampler2D texture on all six sides. The application cannot be changed to pass a samplerCube to the shader, but I need one in my fragment shader.

You can write your own function that converts a 3D cube-map direction into a 2D texture coordinate using vector math ...
This might help:
legacy opengl - rendering cube map layout, understanding glTexCoord3f parameters
The function should do this:
normalize the input direction
detect which face the direction is hitting
by taking the dot product of the input direction with the direction pointing to each cube-map face center; the maximum identifies the hit face
project the direction onto the hit face
by taking the dot product with the basis vectors of that face you get your 2D coordinates; beware that each face has its own orientation ...
Then just use that to access the sampler2D with a 3D direction as the texture coordinate, something like this:
uniform sampler2D txr;
...

vec2 mycubemap(vec3 dir)
{
    vec2 tex;
    dir=normalize(dir);
    ...
    return tex;
}

void main()
{
    ...
    ???=texture2D(txr,mycubemap(???));
    ...
}
When I put it all together, with some optimizations, I got this:
vec2 mycubemap(vec3 t3)
{
    vec2 t2;
    t3=normalize(t3)/sqrt(2.0);
    vec3 q3=abs(t3);                        // the dominant axis selects the face
    if ((q3.x>=q3.y)&&(q3.x>=q3.z))         // +X / -X faces
    {
        t2.x=0.5-t3.z/t3.x;
        t2.y=0.5-t3.y/q3.x;
    }
    else if ((q3.y>=q3.x)&&(q3.y>=q3.z))    // +Y / -Y faces
    {
        t2.x=0.5+t3.x/q3.y;
        t2.y=0.5+t3.z/t3.y;
    }
    else                                    // +Z / -Z faces
    {
        t2.x=0.5+t3.x/t3.z;
        t2.y=0.5-t3.y/q3.z;
    }
    return t2;
}
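For completeness, a minimal usage sketch; the varying name dir is an assumption here (it stands in for whatever per-fragment direction, normal, or reflection vector your shader already has):

// usage sketch: 'dir' is an assumed varying carrying the 3D lookup direction
uniform sampler2D txr;
varying vec3 dir;

void main()
{
    gl_FragColor = texture2D(txr, mycubemap(dir));
}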
Using it with the layout rendering from the linked QA I got this output:
Using this as the texture instead of a cube map:
Note the lines at the square boundaries; some additional edge-case handling might be needed, but I am too lazy to analyze it further ...

Related

How do you increase the space between pixels in a fragment shader?

I'm currently working on a shader for a very mundane effect I'd like to achieve. It's a little bit hard to explain, but the basic gist is that I'm trying to "pull apart" the pixels of a pixel art image.
You can see my current progress, however minor, at this jsfiddle:
https://jsfiddle.net/roomyrooms/Lh46z2gw/85/
I can distort the image easily, of course, and make it stretch the further away from the center it is. But this distorts and warps it smoothly, and all the pixels remain connected (whether they're squished/smeared/etc.)
I would like to get an effect where the space between the pixels is stretched rather than the pixels themselves stretching. Sort of like if you were to swipe sand across a table. The grains of sand stay the same size, they just get further apart.
I welcome any ideas! Thanks. Here's what I've got code-wise so far:
var fragmentShader = `
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform highp vec4 inputSize;
uniform float time;

vec2 mapCoord( vec2 coord )
{
    coord *= inputSize.xy;
    coord += inputSize.zw;
    return coord;
}

vec2 unmapCoord( vec2 coord )
{
    coord -= inputSize.zw;
    coord /= inputSize.xy;
    return coord;
}

void main(void)
{
    vec2 coord = mapCoord(vTextureCoord);
    float dist = distance(coord.x, inputSize.x/2.);
    coord.x += dist/4.;
    coord = unmapCoord(coord);
    gl_FragColor = texture2D(uSampler, coord);
}`
EDIT: Added an illustration of the effect I'm trying to achieve here:
I can get something along these lines:
With modulo, but it's discarding half of the image in the process.
You can:
discard some fragments (that is slow on some mobile devices)
use a stencil mask to draw only where you want
draw transparent pixels (alpha=0) for the ones you do not want
or, lastly, draw an array of points or squares and move them around.
As far as I know, the fragment shader runs on every pixel in your triangle, and you can only tell it what color to set that pixel to. In your example you're already duplicating columns of pixels, so you can discard some of them, hopefully without losing any of the source image's pixels: stretch the coord 2x, then discard every other column. A fuller sketch follows the snippet below.
vec2 coord = mapCoord(vTextureCoord);
if(coord.x < 100.0 && floor(coord.x/2.0)==floor((coord.x+1.0)/2.0))
discard;
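A minimal sketch of that idea, reusing the uniforms and mapCoord/unmapCoord helpers from the question; the 2x factor and the even/odd column test are illustrative assumptions, not a definitive implementation:

precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform highp vec4 inputSize;

vec2 mapCoord( vec2 coord )   { coord *= inputSize.xy; coord += inputSize.zw; return coord; }
vec2 unmapCoord( vec2 coord ) { coord -= inputSize.zw; coord /= inputSize.xy; return coord; }

void main(void)
{
    vec2 coord = mapCoord(vTextureCoord);
    // knock out every other output column so a gap opens up between the survivors
    if (mod(floor(coord.x), 2.0) >= 1.0)
        discard;
    // the surviving columns sample the source at half the rate, so the image
    // spreads to twice its width while each source column keeps its size
    coord.x /= 2.0;
    coord = unmapCoord(coord);
    gl_FragColor = texture2D(uSampler, coord);
}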

OpenGL - trouble passing ALL data into shader at once

I'm trying to display textures on quads (2 triangles) using OpenGL 3.3.
Drawing a texture on a single quad works great; however, when I have one texture (a sprite atlas) but use 2 quads (objects) to display different parts of the atlas, then in the draw loop they end up switching back and forth (one disappears, then appears again, etc.) at their individual translated locations.
The way I'm drawing this is not the standard DrawElements call per quad (or object); instead I package all the quads, UVs, translations, etc. and send them up to the shader as one big chunk (as "in" variables). Vertex shader:
#version 330 core

// Input vertex data, different for all executions of this shader.
in vec3 vertexPosition_modelspace;
in vec3 vertexColor;
in vec2 vertexUV;
in vec3 translation;
in vec4 rotation;
in vec3 scale;

// Output data; will be interpolated for each fragment.
out vec2 UV;
out vec3 fragmentColor;

// Values that stay constant for the whole mesh.
uniform mat4 MVP;
...

void main(){
    mat4 Model = mat4(1.0);
    mat4 t = translationMatrix(translation);
    mat4 s = scaleMatrix(scale);
    mat4 r = rotationMatrix(vec3(rotation), rotation[3]);
    Model *= t * r * s;
    gl_Position = MVP * Model * vec4(vertexPosition_modelspace,1); //* MVP;
    // The color of each vertex will be interpolated
    // to produce the color of each fragment
    fragmentColor = vertexColor;
    // UV of the vertex. No special space for this one.
    UV = vertexUV;
}
Is the vertex shader working as I think it should with a large chunk of data, drawing each segment that was passed up individually? It does not seem like it is. Is my train of thought correct on this?
For completeness this is my fragment shader:
#version 330 core

// Interpolated values from the vertex shaders
in vec3 fragmentColor;
in vec2 UV;

// Output data
out vec4 color;

// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;

void main()
{
    // Output color = color of the texture at the specified UV
    color = texture2D( myTextureSampler, UV ).rgba;
}
A request for more information was made, so here is how I bind this data and pass it up to the vertex shader. The following code is just the part I use for my translations; I have more for color, rotation, scale, UV, etc.:
gl.BindBuffer(gl.ARRAY_BUFFER, tvbo)
gl.BufferData(gl.ARRAY_BUFFER, len(data.Translations)*4, gl.Ptr(data.Translations), gl.DYNAMIC_DRAW)
tAttrib := uint32(gl.GetAttribLocation(program, gl.Str("translation\x00")))
gl.EnableVertexAttribArray(tAttrib)
gl.VertexAttribPointer(tAttrib, 3, gl.FLOAT, false, 0, nil)
...
gl.DrawElements(gl.TRIANGLES, int32(len(elements)), gl.UNSIGNED_INT, nil)
You have just a single sampler2D, which means you have just a single texture at your disposal, regardless of how many of them you bind.
If you really need to pass the data as a single block, then you should add a sampler for each texture you have. I am not sure how many objects/textures you have, but you are limited by the hardware limit on texture units with this way of passing data.
You also need to add another value to your data telling which primitive uses which texture unit, and inside the fragment shader select the right texture sampler ...
You should add stuff like this:
// vertex
in int usedtexture;
flat out int txr;   // integer varyings must use flat interpolation
void main()
{
    txr=usedtexture;
}

// fragment
uniform sampler2D myTextureSampler0;
uniform sampler2D myTextureSampler1;
uniform sampler2D myTextureSampler2;
uniform sampler2D myTextureSampler3;
in vec2 UV;
flat in int txr;
out vec4 color;
void main()
{
         if (txr==0) color = texture2D( myTextureSampler0, UV ).rgba;
    else if (txr==1) color = texture2D( myTextureSampler1, UV ).rgba;
    else if (txr==2) color = texture2D( myTextureSampler2, UV ).rgba;
    else if (txr==3) color = texture2D( myTextureSampler3, UV ).rgba;
    else color=vec4(0.0,0.0,0.0,0.0);
}
This way of passing is not good for these reasons:
the number of used textures is limited by the hardware texture-unit limit
if your rendering needs additional textures like normal/shininess/light maps, then you need more than 1 texture per object type, and your limit is suddenly divided by 2, 3, 4, ...
you need if/switch statements inside the fragment shader, which can slow things down considerably
yes, you can do it branchless, but then you would need to access all the textures all the time, increasing the heat stress on the GPU without reason ...
This kind of passing is suitable when all the textures are inside a single image (the texture atlas you mentioned), which can be faster this way and is reasonable for scenes with a small number of object types (or materials) but a large object count ...
Since I needed more input on this matter, I linked this page on reddit and someone was able to help me with a single response! Anyway, the reddit link is here:
https://www.reddit.com/r/opengl/comments/3gyvlt/opengl_passing_all_scene_data_into_shader_each/
The issue of seeing the two textures/quads flickering after passing all the vertices as one data structure to the vertex shader was that my element indices were off. I needed to determine the correct indices for each set of vertices of my 2-triangle (quad) objects. I simply had to do something like this:
vertexInfo.Elements = append(vertexInfo.Elements, uint32(idx*4), uint32(idx*4+1), uint32(idx*4+2), uint32(idx*4), uint32(idx*4+2), uint32(idx*4+3))

Draw GL_TRIANGLE_STRIP based on centre point and size

I am rendering TRIANGLE_STRIPS in OpenGL ES 2.0. I was wondering, would it be possible to modify the vertex shader so that instead of feeding it 4 texture vertices, you give it only one vertex that represents the centre of the TRIANGLE_STRIP, plus parameters for the texture width and height?
Assuming my texture vertex is:
GLfloat textureVertices[] = {
x, y
};
Can the vertex shader be modified to work with a texSize uniform that represents the width/height of the TRIANGLE_STRIP?
attribute highp vec4 position;
attribute lowp vec4 inputPointCoordinate;

uniform mat4 MVP;
uniform lowp vec4 vertexColor;
uniform float texSize;

varying lowp vec2 textureCoordinate;
varying lowp vec4 color;

void main()
{
    gl_Position = MVP*position;
    textureCoordinate = inputPointCoordinate.xy;
    color = vertexColor;
}
No, at least not in the vertex shader alone. You need to feed the vertex shader the different corner points with different attribute values, so that the fragment shader receives an interpolated coordinate.
What you actually can do is pass a centre into the vertex shader, which is then multiplied by the same matrix as the vertex coordinates. Besides that, you need some kind of radius (or a texture-dimensions vector), which will probably have to be scaled if the matrix contains scaling as well. Then you pass both of these values to the fragment shader (as varyings). In the fragment shader you compute the texture coordinates from those two parameters and the fragment position.
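A rough GLSL ES sketch of that approach; the attribute/uniform names (center, viewport, halfSizePx) are assumptions, the quad size is taken in pixels, and any scaling coming from the matrix is ignored:

// vertex shader: project the strip's centre with the same MVP as the vertices
attribute highp vec4 position;
attribute highp vec4 center;        // centre of the strip, same space as position
uniform mat4 MVP;
uniform vec2 viewport;              // viewport size in pixels
varying vec2 centerScreen;          // centre in window coordinates

void main()
{
    gl_Position = MVP * position;
    vec4 c = MVP * center;
    centerScreen = (c.xy / c.w * 0.5 + 0.5) * viewport;   // NDC -> window coords
}

// fragment shader: rebuild the texture coordinate from the fragment position
precision mediump float;
uniform sampler2D tex;
uniform vec2 halfSizePx;            // half width/height of the strip in pixels
varying vec2 centerScreen;

void main()
{
    vec2 uv = (gl_FragCoord.xy - centerScreen) / (2.0 * halfSizePx) + 0.5;
    gl_FragColor = texture2D(tex, uv);
}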
A similar procedure is used to draw a very nice circle or sphere using only 2 triangles (a square), but I do not suggest you do this, as you will only lose performance and it is quite a lot of work ...

SceneKit painting on texture with texture coordinates

I have a Collada model that I load into SceneKit. When I perform a hittest on the model I am able to retrieve the texture coordinates of the model that was hit.
With these texture coordinates I should be able to replace texture coordinates with a color.
So this way I should be able to draw on the model
Correct me if I am wrong so far.
I have read a lot of articles so far, but I just can't get my shaders right.
(Though I did get some funky effects ;-)
My vertex shader :
precision highp float;

attribute vec4 position;
attribute vec2 textureCoordinate;
attribute vec2 aTexureCoordForColor; // coordinates from the hit test

uniform mat4 modelViewProjection;

varying vec2 aTexureCoordForColorVarying; // passing to the fragment shader here
varying vec2 texCoord;

void main(void) {
    // Pass along to the fragment shader
    texCoord = textureCoordinate;
    aTexureCoordForColorVarying = aTexureCoordForColor; // assigning here
    // output the projected position
    gl_Position = modelViewProjection * position;
}
My fragment shader:
precision highp float;

uniform sampler2D yourTexture;
uniform vec2 uResolution;
uniform int uTexureCoordsCount;

varying vec2 texCoord;
varying vec2 aTexureCoordForColorVarying;

void main(void) {
    // ??????????? no idea anymore what to do here
    gl_FragColor = texture2D(yourTexture, texCoord);
}
If you need more code please let me know.
First, shaders aren't the only way to draw onto an object's material. One other option that might work well for you is to use a SpriteKit scene as the material's contents — see this answer for some help with that.
Second, if you stick to the shader route, you don't need to rewrite the whole shader program just to paint on top of the existing texture. (If you do, you lose things that SceneKit's program provides for you, like lighting and bump mapping. No sense reinventing those wheels unless you really want to.) Instead, use a shader modifier, a little snippet of GLSL that gets inserted into the SceneKit shader program. The SCNShadable reference explains how to use those.
Third, I'm not sure you're providing the texture coordinates to your shader in the best way. You want every fragment to get the same texcoord value for the clicked point, so there's little point to passing it into GL as an attribute and interpolating it between the vertex and fragment stages. Just pass it as a uniform, and set that uniform on your material with key-value coding. (See the SCNShadable reference again for info on binding shader parameters with KVC.)
Finally, to get at the main point of your question... :)
To change the output color of the fragment shader (or shader modifier) at or near a particular set of texture coordinates, just compare your passed-in click coordinates to the current set of texcoords that'd be used for the regular texture lookup. Here's an example that does that, going the shader modifier route:
uniform vec2 clickTexcoord;
// set this from ObjC/Swift code with setValue:forKey:
// and an NSValue with CGPoint data

uniform float radius = 0.01;
// change this to determine how large an area to highlight

uniform vec3 paintColor = vec3(0.0, 1.0, 0.0);
// nice and green; you can change this with KVC, too

#pragma body

if (distance(_surface.diffuseTexcoord, clickTexcoord) < radius) {
    _surface.diffuse.rgb = paintColor;
}
Use this example as a SCNShaderModifierEntryPointSurface shader modifier and lighting/shading will still be applied to the result. If you want your paint to override the lighting, use a SCNShaderModifierEntryPointFragment shader modifier instead, and in the GLSL snippet set _output.color.rgb instead of _surface.diffuse.rgb.
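For the lighting-overriding variant, a sketch of how the same snippet might look as a SCNShaderModifierEntryPointFragment modifier (same assumed uniforms and KVC bindings as above):

uniform vec2 clickTexcoord;
uniform float radius = 0.01;
uniform vec3 paintColor = vec3(0.0, 1.0, 0.0);

#pragma body

// paint over the final shaded color, ignoring lighting
if (distance(_surface.diffuseTexcoord, clickTexcoord) < radius) {
    _output.color.rgb = paintColor;
}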

GLSL: gl_FragCoord issues

I am experimenting with GLSL for OpenGL ES 2.0. I have a quad and a texture I am rendering. I can successfully do it this way:
//VERTEX SHADER
attribute highp vec4 vertex;
attribute mediump vec2 coord0;

uniform mediump mat4 worldViewProjection;

varying mediump vec2 tc0;

void main()
{
    // Transforming The Vertex
    gl_Position = worldViewProjection * vertex;
    // Passing The Texture Coordinate Of Texture Unit 0 To The Fragment Shader
    tc0 = vec2(coord0);
}

//FRAGMENT SHADER
varying mediump vec2 tc0;

uniform sampler2D my_color_texture;

void main()
{
    gl_FragColor = texture2D(my_color_texture, tc0);
}
So far so good. However, I'd like to do some pixel-based filtering, e.g. a median filter. So I'd like to work in pixel coordinates rather than in normalized ones (tc0) and then convert the result back to normalized coords. Therefore I'd like to use gl_FragCoord instead of a uv attribute (tc0). But I don't know how to go back to normalized coords, because I don't know the range of gl_FragCoord. Any idea how I could get it? I have gotten this far, using a fixed value for the 'normalization', though it's not working perfectly, as it causes stretching and tiling (but at least it shows something):
//FRAGMENT SHADER
varying mediump vec2 tc0;
uniform sampler2D my_color_texture;

void main()
{
    gl_FragColor = texture2D(my_color_texture, vec2(gl_FragCoord) / vec2(256, 256));
}
So, the simple question is: what should I use in place of vec2(256, 256) so that I get the same result as if I were using the uv coords?
Thanks!
gl_FragCoord is in screen (window) coordinates, so to get normalized coords you need to divide by the viewport width and height. You can use a uniform variable to pass that information to the shader, since there is no built-in variable for it.
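For example, a minimal sketch using an assumed uniform name (viewportSize), set from the application to the viewport dimensions in pixels:

uniform sampler2D my_color_texture;
uniform mediump vec2 viewportSize;   // assumed uniform: viewport width/height in pixels

void main()
{
    // gl_FragCoord.xy is in window (pixel) coordinates; dividing by the viewport
    // size gives [0,1] coords, matching tc0 when the quad fills the viewport
    gl_FragColor = texture2D(my_color_texture, gl_FragCoord.xy / viewportSize);
}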
You can also sample the texture with un-normalized coordinates by:
sampling with texture() from a GL_TEXTURE_RECTANGLE texture
sampling with texelFetch() from a regular texture or texture buffer
Note that neither option is available in OpenGL ES 2.0.
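As an illustration of the texelFetch() route, a sketch that requires GLSL ES 3.00 (or desktop GLSL 1.30+), so it does not apply to the ES 2.0 setup above; it also assumes the texture has the same size as the framebuffer:

#version 300 es
precision mediump float;
uniform sampler2D my_color_texture;
out vec4 fragColor;

void main()
{
    ivec2 texel = ivec2(gl_FragCoord.xy);            // window position as a texel index
    fragColor = texelFetch(my_color_texture, texel, 0);
}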
