Blending issues with simple particles - opengl-es

I draw the particles in my game as a capsule (Two GL_POINTS, two GL_TRIANGLES). Everything is nicely batched so that I draw the triangles first, then the points second (two draw calls total).
My problem is that in OpenGL ES you have to round GL_POINTS yourself, and I have been doing it like this in the fragment shader:
precision highp float;
varying float outColor; // particle hue in degrees, supplied by the vertex shader
// standard HSV-to-RGB conversion
vec3 hsv2rgb(vec3 c)
{
vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}
void main()
{
// remap gl_PointCoord from [0,1] to [-1,1] and zero the alpha outside the unit circle
vec2 circCoord = 2.0 * gl_PointCoord - 1.0;
gl_FragColor = vec4( hsv2rgb( vec3(outColor / 360.0, 1.0, 1.0) ) , step(dot(circCoord, circCoord), 1.0) );
}
The problem is that I also need depth testing: because each particle is drawn in two separate draw calls, its z ordering comes out wrong without a depth buffer.
Now that I have the depth buffer enabled, it doesn't mix well with the rounded points: instead of rounded particles I get a black area around each one. Any ideas?
Some extra notes:
I am on iOS OpenGL ES (tile-based deferred rendering, I believe).
Each particle is initially defined as two points: its current location and its location in the previous frame. These two points are later drawn with GL_POINTS. The rectangle part is then built from two triangles, placed using a vector perpendicular to the vector between the two points.
Also the particles are already sorted in front to back order. Technically their z position is arbitrary, I just need them to be intact.

Also the particles are already sorted in front to back order. Technically their z position is arbitrary, I just need them to be intact.
I suspect that's your problem.
Points are square. You can fiddle the blending to make them appear round, but the geometry (and hence the depth value it writes) is still square. Things behind the point are failing the depth test against the corners, which lie outside the coloured round region.
The only fix without changing your algorithm completely is either to use a triangle mesh rather than a point (so the actual geometry is round), or to discard fragments in the fragment shader for point pixels which fall outside the round region you want to keep.
Note that using discard in shaders can be relatively expensive, so check the performance of that approach ...
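For the discard route, here is a minimal sketch of how the question's fragment shader could be adapted (it reuses the outColor varying and hsv2rgb helper from the question; once the corner fragments are discarded, the blend-based alpha is no longer needed):
precision highp float;
varying float outColor;

// same HSV-to-RGB helper as in the question
vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

void main()
{
    vec2 circCoord = 2.0 * gl_PointCoord - 1.0;
    // corner fragments write neither colour nor depth, so geometry behind
    // the point is no longer rejected by the square's corners
    if (dot(circCoord, circCoord) > 1.0) {
        discard;
    }
    gl_FragColor = vec4(hsv2rgb(vec3(outColor / 360.0, 1.0, 1.0)), 1.0);
}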

Read vertex positions as pixels in Three.js

In a scenario where vertices are displaced in the vertex shader, how to retrieve their transformed positions in WebGL / Three.js?
Other questions here suggest writing the positions to a texture and then reading the pixels back, but the resulting values don't seem to be correct.
In the example below the position is passed to the fragment shader without any transformations:
// vertex shader
varying vec4 vOut;
void main() {
gl_Position = vec4(position, 1.0);
vOut = vec4(position, 1.0);
}
// fragment shader
varying vec4 vOut;
void main() {
gl_FragColor = vOut;
}
Then reading the output texture, I would expect pixel[0].r to be identical to positions[0].x, but that is not the case.
Here is a jsfiddle showing the problem:
https://jsfiddle.net/brunoimbrizi/m0z8v25d/2/
What am I missing?
Solved. Quite a few things were wrong with the jsfiddle mentioned in the question.
width * height should be equal to the vertex count. A PlaneBufferGeometry with 4 by 4 segments results in 25 vertices. 3 by 3 results in 16. Always (w + 1) * (h + 1).
The positions in the vertex shader need a nudge of 1.0 / width.
The vertex shader needs to know about width and height, they can be passed in as uniforms.
Each vertex needs an attribute with its index so it can be correctly mapped.
Each position should be one pixel in the resulting texture.
The resulting texture should be drawn as gl.POINTS with gl_PointSize = 1.0.
Working jsfiddle: https://jsfiddle.net/brunoimbrizi/m0z8v25d/13/
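For illustration, a minimal pair of GLSL shaders following the recipe above (a sketch, not the fiddle's exact code; a_index, u_width and u_height are assumed names for the per-vertex index attribute and the render-target size, the target should be a float texture whose pixel count equals the vertex count, and the attributes are declared explicitly as in a raw WebGL setup):
// vertex shader: one gl.POINTS vertex per stored position
attribute vec3 position;   // the (displaced) position you want to read back
attribute float a_index;   // 0 .. vertexCount-1, supplied as an extra attribute
uniform float u_width;     // render-target width  (u_width * u_height == vertex count)
uniform float u_height;    // render-target height
varying vec4 vOut;
void main() {
    // map the linear index to the centre of one pixel of the target
    float col = mod(a_index, u_width);
    float row = floor(a_index / u_width);
    vec2 pixel = (vec2(col, row) + 0.5) / vec2(u_width, u_height);
    gl_Position = vec4(pixel * 2.0 - 1.0, 0.0, 1.0); // that pixel in clip space
    gl_PointSize = 1.0;                              // exactly one pixel per vertex
    vOut = vec4(position, 1.0);                      // the value to write out
}
// fragment shader: write the carried position as the pixel colour
precision highp float;
varying vec4 vOut;
void main() {
    gl_FragColor = vOut;
}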
You're not writing the vertices out correctly.
https://jsfiddle.net/ogawzpxL/
First off, you're clipping the geometry, so your vertices actually end up outside the view, and you see the middle of the quad without any vertices.
You can use the uv attribute to render the entire quad in the view.
gl_Position = vec4( uv * 2. - 1. , 0. ,1.);
Everything in the buffer represents some point on the quad. What seems tricky is that when you render, the pixel will sample right next to your vertex. In the fiddle I've applied an offset to the world-space position by the amount it would be in pixel space, and it didn't really work.
The reason why it seems to work with points is that this is all probably wrong :) If you want to transform only the vertices, then you need to store them properly in the texture. You can use points for this, but ideally they wouldn't be spaced out so much. In your scenario, they would fill only the first couple of rows of the texture (since it's much larger than it needs to be).
You might start running into problems as soon as you try to apply this to something other than PlaneGeometry. In which case this problem has to be broken down.

openGL reverse image texturing logic

I'm about to project an image into a cylindrical panorama. But first I need to get the pixel (or the color from the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
This way I'll be able to change the shape of the image from a polygon to whatever I want.
But I cannot find anything about this method (get the pixel first, then do the math and get the new position for the pixel).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders — from geometry to pixels — rather than backwards — from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your vertex shader; it's a simple ray intersection test, with attributes therefore being only the vertex location and vertex normal, and the texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
    (planarDistanceToPerimeter/planarLengthOfNormal)*normal;
// get intersection as if ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
    (linearDistanceToEdge/linearLengthOfNormal)*normal;
// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
                                   endIntersection,
                                   step(linearDistanceToEdge,
                                        planarDistanceToPerimeter));
// ... do something to map cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
    coordinateFromCylindricalPosition(cylindricalIntersection);
With a common implementation of coordinateFromCylindricalPosition possibly being simply return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959, cylindricalIntersection.z * 0.5);.
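Expanded, and remapped into the [0, 1] range that textures usually expect, that helper might look something like this (a sketch, not the answer's exact code):
vec2 coordinateFromCylindricalPosition(vec3 p)
{
    // angle around the cylinder axis becomes U, height along the axis becomes V
    float u = atan(p.y, p.x) / 6.28318530717959 + 0.5; // [-pi, pi] -> [0, 1]
    float v = p.z * 0.5 + 0.5;                         // [-1, 1]   -> [0, 1]
    return vec2(u, v);
}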

GLSL Shader: FFT-Data as Circle Radius

I'm trying to create a shader that converts FFT data (passed as a texture) to a bar graph and then maps it onto a circle in the center of the screen. Here is an image of what I'm trying to achieve: link to image
I experimented a bit with Shadertoy and came up with this shader: link to shadertoy
With all the complex shaders I saw on Shadertoy, I thought this should be doable with maths somehow.
Can anybody here give me a hint on how to do it?
It’s very doable — you just have to think about the ranges you’re sampling in. In your Shadertoy example, you have the following:
float r = length(uv);
float t = atan(uv.y, uv.x);
fragColor = vec4(texture2D(iChannel0, vec2(r, 0.1)));
So r is going to vary roughly from 0…1 (extending past 1 in the corners), and t, the angle of the uv vector, is going to cover a full revolution (atan returns values from −π to π).
Currently, you’re sampling your texture at (r, 0.1)—in other words, every pixel of your output will come from the V position 10% down your source texture and varying across it. The angle you’re calculating for t isn’t being used at all. What you want is for changes in the angle (t) to move across your texture in the U direction, and for changes in the distance-from-center (r) to move across the texture in the V direction. In other words, this:
float r = length(uv);
float t = atan(uv.y, uv.x) / 6.283; // normalize it to a [0,1] range - 6.283 = 2*pi
fragColor = vec4(texture2D(iChannel0, vec2(t, r)));
For the source texture you provided above, you may find your image appearing “inside out”, in which case you can subtract r from 1.0 to flip it.
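For reference, here is a minimal Shadertoy-style wrapper around that sampling, as a sketch: it assumes the usual mainImage entry point with iResolution and iChannel0, keeps the snippet's texture2D call (newer Shadertoy builds spell it texture instead), and centres uv so the circle sits in the middle of the screen.
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // centre the coordinates so r = 0 in the middle of the screen
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
    float r = length(uv);
    float t = atan(uv.y, uv.x) / 6.283 + 0.5; // angle remapped to [0, 1]
    fragColor = vec4(texture2D(iChannel0, vec2(t, r)));
}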

GLSL shader for texture 'smoke' effect

I've looked around and haven't found anything relevant. I'm trying to create a shader that gives a texture a smoke-effect animation like here:
I'm not asking for a complete/full solution (although that would be awesome), but any pointers towards where I can get started to achieve this effect would help. Would I need the vertices for the drawing, or is this possible if I only have the texture?
Modelling smoke with a fluid simulation isn't simple and can be very slow for a detailed simulation. Using noise to add finer details can be a fair bit faster. If this is the direction you want to head in, this answer has some good links to the little grasshopper. If you have a texture, use it to initialize the smoke density (or spawn particles for that matter) and run the simulation. If you start with vector data, and want the animation to trail along the curve as in your example it gets more complex. Perhaps draw the curve over the top of the smoke simulation, gradually drawing less of it and drawing the erased bits as density into the simulation. Spawning particles along its length and using "noise based particles" as linked above sounds like a good alternative too.
Still, it sounds like you're after something a bit simpler. I've created a short demo on shadertoy, just using perlin noise for animated turbulence on a texture. It doesn't require any intermediate texture storage or state information other than a global time.
https://www.shadertoy.com/view/Mtf3R7
The idea started with trying to create streaks of smoke that blur and grow with time. Start with a curve, sum/average colour along it and then make it longer to make the smoke appear to move. Rather than add points to the curve over time to make it longer, the curve has a fixed number of points and their distance increases with time.
To create a random curve, perlin noise is sampled recursively, providing offsets to each point in turn.
Using mipmapping, the samples towards the end of the curve can cover a larger area and make the smoke appear to blur into nothing, just as your image does. However, since this is a gather operation the end of the smoke curve is actually the start (hence the steps-i below).
//p = uv coord, o = random offset for per-pixel noise variation, t = time
vec3 smoke(vec2 p, vec2 o, float t)
{
const int steps = 10;
vec3 col = vec3(0.0);
for (int i = 1; i < steps; ++i)
{
//step along a random path that grows in size with time
p += perlin(p + o) * t * 0.002;
p.y -= t * 0.003; //drift upwards
//sample colour at each point, using mipmaps for blur
col += texCol(p, float(steps-i) * t * 0.3);
}
return col.xyz / float(steps);
}
As always with these effects, you can spend hours playing with constants getting it to look that tiny bit better. I've used a linearly changing value for the mipmap bias as the second argument to texCol(), which I'm sure could be improved. Also averaging a number of smoke() calls with varying o will give a smoother result.
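As a sketch of that last point (not part of the Shadertoy demo), averaging a few smoke() calls with different offsets could look like the following; the constant offsets are arbitrary, and a per-pixel hash would look less regular:
vec3 smoothedSmoke(vec2 p, float t)
{
    // blend several passes of the smoke() function above with different o values
    vec3 col = vec3(0.0);
    col += smoke(p, vec2(0.17, 0.62), t);
    col += smoke(p, vec2(0.83, 0.29), t);
    col += smoke(p, vec2(0.41, 0.95), t);
    col += smoke(p, vec2(0.58, 0.08), t);
    return col * 0.25;
}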
[EDIT] If you want the smoke to animate along a curve with this method, I'd use a second texture that stores a "time offset" to delay the simulation for certain pixels. Then draw the curve with a gradient along it, so the end of the curve takes a little while to start animating. Since it's a gather operation, you should draw much fatter lines into this time-offset texture, as it's the pixels around them which will gather colour. Unfortunately this breaks when parts of the curve are too close together or intersect.
In the example pictured it appears as if they have the vertices. Possibly the "drawing" of the flower shape was recorded and then played back continuously. Then the effect hits the vertices based on a time offset from when they were drawn. The effect there appears to mostly be a motion blur.
So to replicate this effect you would need the vertices. See how the top of the flower starts to disappear before the bottom? If you look closely you'll see that actually the blur effect timing follows the path of the flower around counter clockwise. Even on the first frame of your gif you can see that the end of the flower shape is a brighter yellow than the beginning.
The angle of the motion blur also appears to change over time from being more left oriented to being more up oriented.
And the brightness of the segment is also changing over time starting with the yellowish color and ending either black or transparent.
What I can't tell from this is if the effect is additive, meaning that they're applying the effect to the whole frame and then to the results of that effect each frame, or if it's being recreated each frame. If recreated each frame you'd be able to do the effect in reverse and have the image appear.
If you are wanting this effect on a bitmapped texture instead of a line object that's also doable, although the approach would be different.
Let's start with the line object and assume you have the vertices. The way I would approach it is that I would add a percentage of decay as an attribute to the vertex data. Then each frame that you render you'd first update the decay percentage based on the time for that vertex. Stagger them slightly.
Then the shader would draw the line segment using a motion blur shader where the amount of motion blur, the angle of the blur, and the color of the segment are controlled by a varying variable that is assigned by the decay attribute. I haven't tested this shader. Treat it like pseudocode. But I'd approach it this way... Vertex Shader:
uniform mat4 u_modelViewProjectionMatrix;
uniform float maxBlurSizeConstant; // experiment with value and it will be based on the scale of the render
attribute vec3 a_vertexPosition;
attribute vec2 a_vertexTexCoord0;
attribute float a_decay;
varying float v_decay;
varying vec2 v_fragmentTexCoord0;
varying vec2 v_texCoord1;
varying vec2 v_texCoord2;
varying vec2 v_texCoord3;
varying vec2 v_texCoord4;
varying vec2 v_texCoordM1;
varying vec2 v_texCoordM2;
varying vec2 v_texCoordM3;
varying vec2 v_texCoordM4;
void main()
{
gl_Position = u_modelViewProjectionMatrix * vec4(a_vertexPosition,1.0);
v_decay = a_decay;
float angle = 2.8 - a_decay * 0.8; // just an example of angles
vec2 tOffset = vec2(cos(angle),sin(angle)) * maxBlurSizeConstant * a_decay;
v_fragmentTexCoord0 = a_vertexTexCoord0;
v_texCoordM1 = a_vertexTexCoord0 - tOffset;
v_texCoordM2 = a_vertexTexCoord0 - 2.0 * tOffset;
v_texCoordM3 = a_vertexTexCoord0 - 3.0 * tOffset;
v_texCoordM4 = a_vertexTexCoord0 - 4.0 * tOffset;
v_texCoord1 = a_vertexTexCoord0 + tOffset;
v_texCoord2 = a_vertexTexCoord0 + 2.0 * tOffset;
v_texCoord3 = a_vertexTexCoord0 + 3.0 * tOffset;
v_texCoord4 = a_vertexTexCoord0 + 4.0 * tOffset;
}
Fragment Shader:
uniform sampler2D u_textureSampler;
varying float v_decay;
varying vec2 v_fragmentTexCoord0;
varying vec2 v_texCoord1;
varying vec2 v_texCoord2;
varying vec2 v_texCoord3;
varying vec2 v_texCoord4;
varying vec2 v_texCoordM1;
varying vec2 v_texCoordM2;
varying vec2 v_texCoordM3;
varying vec2 v_texCoordM4;
void main()
{
lowp vec4 fragmentColor = texture2D(u_textureSampler, v_fragmentTexCoord0) * 0.18;
fragmentColor += texture2D(u_textureSampler, v_texCoordM1) * 0.15;
fragmentColor += texture2D(u_textureSampler, v_texCoordM2) * 0.12;
fragmentColor += texture2D(u_textureSampler, v_texCoordM3) * 0.09;
fragmentColor += texture2D(u_textureSampler, v_texCoordM4) * 0.05;
fragmentColor += texture2D(u_textureSampler, v_texCoord1) * 0.15;
fragmentColor += texture2D(u_textureSampler, v_texCoord2) * 0.12;
fragmentColor += texture2D(u_textureSampler, v_texCoord3) * 0.09;
fragmentColor += texture2D(u_textureSampler, v_texCoord4) * 0.05;
gl_FragColor = vec4(fragmentColor.rgb, fragmentColor.a * v_decay);
}
Of course the trick is in varying the decay amount per vertex based on a slight offset in time.
If you want to do the same with a sprite you're going to do something very similar except that the difference between the decay per vertex would have to be played with to get right as there are only 4 vertices.
SORRY - EDIT
Sorry... The above shader blurs the incoming texture. It doesn't necessarily blur the color of the line being drawn. That might or might not be what you want to do. But again, without knowing more about what you are actually trying to accomplish, it's difficult to give you a perfect answer. I get the feeling you'd rather do this on a sprite than on a line-vertex-based object. So no, you can't copy and paste this shader into your code as is. But it shows the concept of how you'd do what you're looking to do, especially if you're doing it on a texture instead of a vertex-based line.
Also, the above shader isn't complete. For example, it doesn't expand the geometry to allow the blur to extend beyond the bounds of the texture, and it grabs texture info from outside the area the sprite occupies in the sprite sheet. To fix this you'd have to start with a bounding box larger than the sprite and shrink the sprite in the vertex shader to the right size, and you'd have to avoid grabbing texels from the sprite sheet beyond the bounds of the sprite. There are ways of doing this without having to include a bunch of white space around the sprite in the sprite sheet.
Update
On second look it might be particle based. If it is they again have all the vertices but as particle locations. I sort of prefer the idea that it's line segments because I don't see any gaps. So if it is particles there are a lot and they're tightly placed. The particles are still decaying cascading from the top petal around to the last. Even if it's line segments you could treat the vertices as particles to apply the wind and gravity.
As for how the smoke effect works check out this helper app by 71 squared: https://71squared.com/particledesigner
The way it works is that you buy the Mac app to use to design and save your particle. Then you go to their github and get the iOS code. But this particular code creates a particle emitter. Doing a shape out of particles would be different code. But the evolution of the particles is the same.
OpenGL ES suggests that your target platform may not have the computational power to do a real smoke simulation (and if it does, it would consume quite a bit of power, which is undesirable on a device like a phone).
However, your target device will definitively have the power to create a fake texture-space effect which looks good enough to be convincing.
First look at the animation you posted. The flower is blurring and fading, there is a sideway motion to the left ("wind") and an upwards motion of the smoke. Thus, what is primarily needed is ping-ponging between two textures, sampling for each fragment at the fragment's location offset by a vector pointing downwards and right (you only have gather available, not scatter).
There is no texelFetchOffset or similar function in ES 2.0, so you'll have to use plain old texture2D and do the vector add yourself, but that shouldn't be a lot of trouble. Note that since you need to use texture2D anyway, you won't need to worry about gl_FragCoord either. Have the interpolator give you the correct texture coordinate (simply set the texcoords of the quad's vertices to 0 on one end and 1 on the other).
To get the blur effect, randomize the offset vector (e.g. by adding another random vector with a much smaller magnitude, so the "overall direction" remains the same). To get the fade effect, either multiply the alpha by an attenuation factor (such as 0.95) or do the same with the color (which will give you "black" rather than "transparent", but depending on whether or not you want premultiplied alpha, that may be the correct thing).
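Here is a minimal fragment-shader sketch of that ping-pong pass (all names are assumptions: u_prev is the texture rendered last frame, u_wind is a small down-and-right offset in texture space, and u_jitter is a small random vector updated on the CPU each frame, whose accumulated random walk provides the blur):
precision mediump float;
varying vec2 v_texCoord;    // 0..1 across the full-screen quad
uniform sampler2D u_prev;   // previous frame's result (the other half of the ping-pong)
uniform vec2 u_wind;        // e.g. vec2(0.002, -0.002): gather from below-right
uniform vec2 u_jitter;      // small random offset, changed every frame

void main()
{
    // gather: each pixel pulls its colour from where the smoke came from
    vec2 src = v_texCoord + u_wind + u_jitter;
    vec4 smoke = texture2D(u_prev, src);
    // fade: attenuate a little every frame so the smoke dissolves over time
    gl_FragColor = vec4(smoke.rgb, smoke.a * 0.95);
}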
Alternatively you could implement the blur and fade effect by generating mipmaps first (gradually fade them to transparent), and using the optional bias value in texture2D, slightly increasing the bias as time progresses. That will be, yet lower quality (possibly with visible box artefacts), but it allows you to preprocess much of the calculation ahead of time and has a much more cache-friendly access pattern.

Direct3D9 Calculating view space point light position

I am working on my own deferred rendering engine. I am rendering the scene to a g-buffer containing diffuse color, view-space normals and depth (for now). I have implemented a directional light for the second rendering stage and it works great. Now I want to render a point light, which is a bit harder.
I need the point light position for the shader in view space, because I only have depth in the g-buffer and I can't afford a matrix multiplication in every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position doesn't represent the view-space position at all. Its x,y coordinates are off the charts; they are often way out of the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y-axis is inverted. Only if the camera is at (0,0,0) does the light sit at (0,0) in the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView; // the view transform matrix
D3DXMATRIX matProjection; // the projection transform matrix
D3DXMatrixLookAtLH(&matView,
&D3DXVECTOR3 (x,y,z), // the camera position
&D3DXVECTOR3 (xt,yt,zt), // the look-at position
&D3DXVECTOR3 (0.0f, 0.0f, 1.0f)); // the up direction
D3DXMatrixPerspectiveFovLH(&matProjection,
fov, // the horizontal field of view
asp, // aspect ratio
znear, // the near view-plane
zfar); // the far view-plane
D3DXMATRIX vysl=matView*matProjection;
eff->SetMatrix("worldViewProj",&vysl); //vertices are transformed ok in the shader
//render g-buffer
D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0,0,0);
D3DXVec3Transform(&lpos,&lpos2,&vysl); //transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight",&lpos); //but there is already a mess in lpos at this time
//render the fullscreen quad with wrong lighting
Not that relevant shader code, but still, I see the light position this way (passing IN.texture is just me being lazy):
float dist=length(float2(IN.texture0*2-1)-float2(poslight.xy));
OUT.col=tex2D(Sdiff,IN.texture0)/dist;
I have tried transforming the light only by matView, without the projection, but the problem is the same. If I transform the light in the shader, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How are the vertices treated differently?
Can you please take a look at the code and tell me where the mistake is? It seems to me it should work ok, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view position; here is a code snippet (from Andrew Lauritzen's deferred lighting example).
tP is the projection transform, positionScreen is the -1..1 screen coordinate and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen,
float viewSpaceZ)
{
float2 screenSpaceRay = float2(positionScreen.x / tP._11,
positionScreen.y / tP._22);
float3 positionView;
positionView.z = viewSpaceZ;
positionView.xy = screenSpaceRay.xy * positionView.z;
return positionView;
}
The result of this transform, D3DXVec3Transform(&lpos,&lpos2,&vysl);, is a vector in homogeneous space (i.e. a projected vector, but not yet divided by w). But in your shader you use its xy components without respecting this (w). This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolator, where the hardware eventually divides xyz by the interpolated w.
