I'm trying to rotate a texture in a fragment shader, instead of using the vertex shader and matrix transformations.
The rotation has its pivot at the center.
The algorithm works fine when rendering to a quad with a square shape, but when the quad has a rectangular shape the rendered result gets messed up.
Can anyone spot the problem?
Thank you
varying vec2 v_texcoord;
uniform sampler2D u_texture;
uniform float u_angle;
void main()
{
vec2 coord = v_texcoord;
float sin_factor = sin(u_angle);
float cos_factor = cos(u_angle);
coord = (coord - 0.5) * mat2(cos_factor, sin_factor, -sin_factor, cos_factor);
coord += 0.5;
gl_FragColor = texture2D(u_texture, coord);
}
The following line of code, which was suggested in another answer:
coord = vec2(coord.x - (0.5 * Resolution.x / Resolution.y), coord.y - 0.5) * mat2(cos_factor, sin_factor, -sin_factor, cos_factor);
is not quite right.
There are some bracketing errors.
The correct version would be:
coord = vec2((coord.x - 0.5) * (Resolution.x / Resolution.y), coord.y - 0.5) * mat2(cos_factor, sin_factor, -sin_factor, cos_factor);
I haven't tried it out myself, but my guess is that because you are using the texture coordinates in a rectangular space, the rotation causes distortion unless you apply a correction factor.
You'll need to pass in a uniform with the width and height of your texture. With this, you can apply the aspect ratio to correct the distortion.
coord = (coord - 0.5) * mat2(cos_factor, sin_factor, -sin_factor, cos_factor);
may become something like:
coord = vec2(coord.x - (0.5 * Resolution.x / Resolution.y), coord.y - 0.5) * mat2(cos_factor, sin_factor, -sin_factor, cos_factor);
Like I said though, I haven't tried it out, but I have had to do this in the past for similar shaders. You might need to invert Resolution.x / Resolution.y.
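Putting it together, a rough, untested sketch of the whole fragment shader with the aspect correction folded in might look like this (u_resolution is a hypothetical uniform holding the quad's width and height; the idea is to stretch the coordinates into a square space around the center, rotate there, then undo the stretch so the sampled image itself isn't distorted):
varying vec2 v_texcoord;
uniform sampler2D u_texture;
uniform float u_angle;
uniform vec2 u_resolution; // hypothetical uniform: width and height of the quad (or texture)
void main()
{
    float aspect = u_resolution.x / u_resolution.y;
    float sin_factor = sin(u_angle);
    float cos_factor = cos(u_angle);
    // move the pivot to the center and stretch x into a square space
    vec2 coord = (v_texcoord - 0.5) * vec2(aspect, 1.0);
    // rotate in the square space
    coord = coord * mat2(cos_factor, sin_factor, -sin_factor, cos_factor);
    // undo the stretch and move the pivot back
    coord = coord / vec2(aspect, 1.0) + 0.5;
    gl_FragColor = texture2D(u_texture, coord);
}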
Related
So, I have these two functions in GLSL: one that splits a texture into its RGB channels and then displaces them individually, and another that just blurs a texture. I want to combine them, but I want to be able to blur only the channel I'm displacing. For instance, I might want to blur the red channel in the rgbShift function.
The problem is that the red channel is a single float, while the blur function expects a full sampler2D image so it can apply UVs and so on. I guess I need a way to blur just a single float? I'm not very experienced with GLSL and I've been trying to figure this out for a few days now. I'll be very thankful for any pointers or suggestions at all.
The GLSL functions can be viewed below.
vec4 blur5(sampler2D image, vec2 uv, vec2 resolution, vec2 direction) {
vec4 color = vec4(0.0);
vec2 offset = (vec2(1.3333333333333333) * direction) / resolution;
color += texture2D(image, uv) * 0.29411764705882354;
color += texture2D(image, uv + offset) * 0.35294117647058826;
color += texture2D(image, uv - offset) * 0.35294117647058826;
return color;
}
vec3 rgbShift(sampler2D textureimage, vec2 uv, float offset) {
float displace = sin(PI*vUv.y) * offset;
float r = texture2D(textureimage, uv + displace).r;
float g = texture2D(textureimage, uv).g;
float b = texture2D(textureimage, uv + -displace).b;
return vec3(r, g, b);
}
Here's me thinking out loud:
I guess I want to do something like this:
vec4 blurredTexture = blur5(textureImage);
float red = texture2D(blurredTexture, uv + displace).r;
Or this:
float redChannel = texture2D(blurredTexture, uv + displace).r;
vec4 blurredRedChannel = blur5(redChannel );
But neither will work because I can't figure out how to convert the types. I either need to convert the blurred vec4 into a sampler2D for the rgbShift function, or the red-channel float into a sampler2D for the blur function. Is it even possible to convert a value into a sampler2D one way or another?
Maybe I need some other solution where I don't need to convert to a sampler2D at all.
Is it even possible to convert a value into a sampler2D one way or another?
Sort of. You would need to write that value to a temporary texture, then bind that texture and run a second pass that samples from it. That's probably overkill for the simple filtering you're trying to do.
Maybe I need some other solution where I don't need to convert to a sampler2D at all.
A simpler solution is to combine those two functions into one:
vec3 shiftAndBlur(sampler2D image, vec2 uv, float offset, vec2 resolution, vec2 direction) {
vec2 blurOffset = (vec2(1.3333333333333333) * direction) / resolution; // renamed so it doesn't clash with the "offset" parameter
float displace = sin(PI*vUv.y) * offset;
float r = texture2D(image, uv + displace).r * 0.29411764705882354
+ texture2D(image, uv + displace + blurOffset).r * 0.35294117647058826
+ texture2D(image, uv + displace - blurOffset).r * 0.35294117647058826;
float g = texture2D(image, uv).g;
float b = texture2D(image, uv - displace).b;
return vec3(r,g,b);
}
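Calling it would look something like this (the uniform names are just placeholders for whatever you already pass in; vec2(1.0, 0.0) blurs horizontally, vec2(0.0, 1.0) vertically):
gl_FragColor = vec4(shiftAndBlur(u_texture, vUv, u_offset, u_resolution, vec2(1.0, 0.0)), 1.0);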
I am generating a point cloud representing a rock using Three.js, but I am facing a problem with visualizing its structure clearly. In the second screenshot below I would like to be able to denote the topography of the rock, such as the corner of the structure (shown better in the third screenshot), in a more explicit way, since I want to be able to maneuver around the rock and select different points. I have rocks that are more sparse (the structure is harder to see because the points are very far apart) and more dense (the structure is harder to see from afar because the points are all mashed together, like the first screenshot, but even when closer to the rock), and finding a generalized way to approach this problem has been difficult.
I posted about this problem before here, thinking that representing the 'depth' of the rock into the screen would suffice, but after attempting the proposed solution I still could not find a nice way to represent the topography better. Is there a way to add a light source that my shaders can pick up on? I want to see whether I can color the points differently based on their orientation to the source. Using different software, a friend was able to produce the image below - is there a way to simulate this in Three.js?
For context, I am using Points with a BufferGeometry and ShaderMaterial. Below is the shader code I currently have:
Vertex:
precision mediump float;
varying vec3 vColor;
attribute float alpha;
varying float vAlpha;
uniform float scale;
void main() {
vAlpha = alpha;
vColor = color;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
#ifdef USE_SIZEATTENUATION
//bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );
//if ( isPerspective ) gl_PointSize *= ( scale / -mvPosition.z );
#endif
gl_PointSize = 2.0;
gl_Position = projectionMatrix * mvPosition;
}
and
Fragment:
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
precision mediump float;
varying vec3 vColor;
varying float vAlpha;
uniform vec2 u_depthRange;
float LinearizeDepth(float depth, float near, float far)
{
float z = depth * 2.0 - 1.0; // Back to NDC
return (2.0 * near * far / (far + near - z * (far - near)) - near) / (far-near);
}
void main() {
float r = 0.0, delta = 0.0, alpha = 1.0;
vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
r = dot(cxy, cxy);
float lineardepth = LinearizeDepth(gl_FragCoord.z, u_depthRange[0], u_depthRange[1]);
if (r > 1.0) {
discard;
}
// Reset back to 1.0 instead of using the linear depth computed above
gl_FragColor = vec4(vColor, 1.0);
}
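To make the idea concrete, this is roughly what I imagine (just a sketch; aNormal and uLightDirection are hypothetical, and I would need to precompute a normal per point and pass a normalized light direction as a uniform):
precision mediump float;
attribute vec3 aNormal;       // hypothetical per-point normal
uniform vec3 uLightDirection; // hypothetical normalized light direction
varying vec3 vColor;
varying float vLight;
void main() {
    vColor = color;
    vLight = max(dot(normalize(aNormal), uLightDirection), 0.0); // Lambert term
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
with the fragment shader then scaling the color by that term:
precision mediump float;
varying vec3 vColor;
varying float vLight;
void main() {
    vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
    if (dot(cxy, cxy) > 1.0) discard;                        // keep the round points
    gl_FragColor = vec4(vColor * (0.3 + 0.7 * vLight), 1.0); // darker when facing away from the light
}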
Thank you so much for your help!
I am trying to tweak this ShaderToy example for vertices, to create 'sparks' out of them. I have tried playing with gl_PointCoord and gl_FragCoord without any results. Maybe someone here could help me?
I need an effect similar to this animated gif:
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
#define M_PI 3.1415926535897932384626433832795
float rand(vec2 co)
{
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
void main( ) {
float size = 30.0;
float prob = 0.95;
vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
float color = 0.0;
float starValue = rand(pos);
if (starValue > prob)
{
vec2 center = size * pos + vec2(size, size) * 0.5;
float t = 0.9 + sin(time + (starValue - prob) / (1.0 - prob) * 45.0);
color = 1.0 - distance(gl_FragCoord.xy, center) / (0.5 * size);
color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
}
else if (rand(gl_FragCoord.xy / resolution.xy) > 0.996)
{
float r = rand(gl_FragCoord.xy);
color = r * ( 0.25 * sin(time * (r * 5.0) + 720.0 * r) + 0.75);
}
gl_FragColor = vec4(vec3(color), 1.0);
}
As I understand it, I have to play with vec2 pos, setting it to a vertex position.
You don't need to play with pos. Since the vertex shader runs only once per vertex, there is no way to do per-pixel processing there using pos. However, you can do per-pixel processing using gl_PointCoord.
I can think of only two ways to change the scale of a texture:
gl_PointSize in the vertex shader (OpenGL ES)
In the fragment shader, you can change the texture UV value, for example:
vec4 color = texture(texture0, ((gl_PointCoord-0.5) * factor) + vec2(0.5));
If you don't want to use any texture but only do per-pixel processing in the fragment shader,
you can set the UV like ((gl_PointCoord - 0.5) * factor) + vec2(0.5)
instead of the uv that is normally computed as fragCoord.xy / iResolution.xy in Shadertoy.
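As a rough, untested sketch of that second option applied to your spark shader, you can drive the original per-pixel star logic from gl_PointCoord instead of gl_FragCoord (pointSize here is a hypothetical uniform carrying the same value you write to gl_PointSize):
precision mediump float;
uniform float time;
uniform float pointSize; // hypothetical uniform: same value as gl_PointSize
void main() {
    // coordinates in pixels within this point sprite, centered on the sprite
    vec2 p = (gl_PointCoord - 0.5) * pointSize;
    float t = 0.9 + sin(time);
    float color = 1.0 - length(p) / (0.5 * pointSize);
    // cross-shaped flare, as in the star branch of the original shader
    color = color * t / max(abs(p.y), 0.5) * t / max(abs(p.x), 0.5);
    gl_FragColor = vec4(vec3(clamp(color, 0.0, 1.0)), 1.0);
}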
I have a program that works great when my POT DataTextures are 1:1 (width:height) in their texel dimensions, however when they are 2:1 or 1:2 in texel dimensions it appears that the texels are being incorrectly read and applied. I'm using continuous indexes (1,2,3,4,5...) to access the texels using the two functions below.
I'm wondering if there is something wrong with how I am accessing the texel data, or perhaps if my use of a Float32Array for the integer indexes needs to be switched to a Uint8Array or something else? Thanks in advance!
This function finds the uv for textures that have one texel per particle cloud in my visualization:
float texelSizeX = 1.0 / uPerCloudBufferWidth;
float texelSizeY = 1.0 / uPerCloudBufferHeight;
vec2 perMotifUV = vec2(
mod(cellIndex, uPerCloudBufferWidth)*texelSizeX,
floor(cellIndex / uPerCloudBufferHeight)*texelSizeY );
perMotifUV += vec2(0.5*texelSizeX, 0.5*texelSizeY);
This function finds the uv for textures that contain one texel for each particle contained in all of the clouds:
float pTexelSizeX = 1.0 / uPerParticleBufferWidth;
float pTexelSizeY = 1.0 / uPerParticleBufferHeight;
vec2 perParticleUV = vec2(
mod(aParticleIndex, uPerParticleBufferWidth)*pTexelSizeX,
floor(aParticleIndex / uPerParticleBufferHeight)*pTexelSizeY );
perParticleUV += vec2(0.5*pTexelSizeX, 0.5*pTexelSizeY);
Shouldn't this
vec2 perMotifUV = vec2(
mod(cellIndex, uPerCloudBufferWidth)*texelSizeX,
floor(cellIndex / uPerCloudBufferHeight)*texelSizeY );
be this?
vec2 perMotifUV = vec2(
mod(cellIndex, uPerCloudBufferWidth)*texelSizeX,
floor(cellIndex / uPerCloudBufferWidth)*texelSizeY ); // <=- use width
And the same for the other function? Divide by the width, not the height.
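Applied to the second function, the same fix would be (untested, but the only change is the divisor):
float pTexelSizeX = 1.0 / uPerParticleBufferWidth;
float pTexelSizeY = 1.0 / uPerParticleBufferHeight;
vec2 perParticleUV = vec2(
mod(aParticleIndex, uPerParticleBufferWidth)*pTexelSizeX,
floor(aParticleIndex / uPerParticleBufferWidth)*pTexelSizeY ); // <=- use width here too
perParticleUV += vec2(0.5*pTexelSizeX, 0.5*pTexelSizeY);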
I have tried to improve the quality of my volume ray casting algorithm. I set a smaller ray casting step (the quality is better), but it causes a problem. It is shown in the pictures below (black areas where they shouldn't be).
I am using an RGB cube to get the direction of the ray in the volume.
I think I have the same algorithm as here: volume rendering (using glsl) with ray casting algorithm
Does anybody have an idea where the problem could be? I need to resolve this because the deadline for my diploma thesis is too close :( I really don't know why it doesn't work :(
EDIT:
I can't show all my code here (it could be a problem if I publish it before handing it in at school). But here is the key code for stepping through the volume:
// All variables needed for the rays
vec3 rayDirection = texture2D(backFaceCube, texCoo).xyz - varcolor.xyz;
float lenRay = length(rayDirection);
vec3 normDir = normalize(rayDirection);
float d = qualitySteps; // qualitySteps is the step size defined by the user -> e.g. 0.01, 0.001, 0.0001
vec3 step = normDir * d;
float lenStep = length(step);
float accumulatedLength = 0.0;
and then in the loop:
posInCube.xyz += step;
accumulatedLength += lenStep;
...
...
...
if(accumulatedLength >= lenRay || accumulatedColor.a > 1.0 ) {
break;
}
EDIT 2: (sorry, it was too long for a comment)
Yes, the texture is noisy... I have tried deleting the alpha condition, if(accumulatedColor.a > 1.0), but the result is the same.
I think there is some direct correlation between the length of the ray and the size of the step. I have tried many combinations and found these things:
If the step is big, I am able to go through the whole volume, but if it is small, I am really not able to go through the volume (maybe). If the step is extremely big, I can see a mirrored object (it could be caused by texture repeating if I step outside the texture on the GPU). If the step is too small, I am only able to map a small part of the texture -> it seems that the ray is too short, but in reality it isn't. The questions are why the mapping of 3D coordinates to the 2D texture is wrong and why it depends on the step size.
Can you please supply the code for your fragment shader?
Are you traversing the whole vector from the front to the back position? Here's an example shader (the code might contain some errors since I just wrote it off the top of my head; unfortunately, I can't test it on my computer at the moment):
in vec2 texCoord;
out vec4 outColor;
uniform float stepSize;
uniform int numSteps;
uniform sampler2D frontTexture;
uniform sampler2D backTexture;
uniform sampler3D volumeTexture;
uniform sampler1D transferTexture; // Density to RGB
void main()
{
vec4 color = vec4(0.0);
vec3 startPosition = texture(frontTexture, texCoord).xyz;
vec3 endPosition = texture(backTexture, texCoord).xyz;
vec3 delta = normalize(endPosition - startPosition) * stepSize; // march from the front face toward the back face
vec3 position = startPosition;
for (int i = 0; i < numSteps; ++i)
{
float density = texture(volumeTexture, position).r;
vec4 voxelColor = texture(transferTexture, density);
// Sampling distance correction (applied to the sample's alpha)
voxelColor.a = 1.0 - pow((1.0 - voxelColor.a), stepSize * 500.0);
// Front to back blending (no shading done)
color.rgb = color.rgb + (1.0 - color.a) * voxelColor.a * voxelColor.rgb;
color.a = color.a + (1.0 - color.a) * voxelColor.a;
if (color.a >= 1.0)
{
break;
}
// Advance
position += delta;
if (position.x > 1.0 || position.y > 1.0 || position.z > 1.0)
{
break;
}
}
outColor = color;
}